Reliability Assessment of Reconfigurable Flight Control Systems Using Sure and Assist
NASA Technical Reports Server (NTRS)
Wu, N. Eva
1992-01-01
This paper presents a reliability assessment of Reconfigurable Flight Control Systems using Semi-Markov Unreliability Range Evaluator (SURE) and Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST).
NASA Technical Reports Server (NTRS)
Johnson, Sally C.; Boerschlein, David P.
1995-01-01
Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all the states and transitions in a complex system model can be devastatingly tedious and error prone. The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) computer program allows the user to describe the semi-Markov model in a high-level language. Instead of listing the individual model states, the user specifies the rules governing the behavior of the system, and these are used to generate the model automatically. A few statements in the abstract language can describe a very large, complex model. Because no assumptions are made about the system being modeled, ASSIST can be used to generate models describing the behavior of any system. The ASSIST program and its input language are described and illustrated by examples.
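The rule-based generation that ASSIST performs can be sketched in miniature. The following Python sketch (an illustration of the idea, not ASSIST's actual implementation) expands three behavior rules for a hypothetical 4-processor system with 2 cold spares into an explicit state space; all names and rates are assumptions for illustration:

```python
# Sketch of rule-based model generation in the spirit of ASSIST (illustrative
# Python, not the actual tool). A state is a vector (working, spares); rules
# are (condition, destination, rate) triples expanded breadth-first.
from collections import deque

LAMBDA = 1e-4  # assumed processor failure rate (per hour)
MU = 1e-2      # assumed reconfiguration (spare swap-in) rate

START = (4, 2)  # 4 working processors, 2 cold spares

RULES = [
    # a working processor fails
    (lambda s: s[0] > 0, lambda s: (s[0] - 1, s[1]), lambda s: s[0] * LAMBDA),
    # a spare is switched in to replace a failed processor
    (lambda s: s[1] > 0 and s[0] < 4, lambda s: (s[0] + 1, s[1] - 1), lambda s: MU),
]

def death(s):
    return s[0] < 2  # system fails with fewer than 2 working processors

def generate(start, rules, death):
    """Breadth-first expansion of the rules into explicit states/transitions."""
    states, transitions = {start}, []
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if death(s):
            continue  # death states are absorbing: no outgoing transitions
        for cond, dest, rate in rules:
            if cond(s):
                d = dest(s)
                transitions.append((s, d, rate(s)))
                if d not in states:
                    states.add(d)
                    queue.append(d)
    return states, transitions

states, transitions = generate(START, RULES, death)
print(len(states), len(transitions))  # two short rules expand into a full model
```

A few rule statements thus stand in for every individual state and transition, which is the leverage the abstract language provides.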
ASSIST internals reference manual
NASA Technical Reports Server (NTRS)
Johnson, Sally C.; Boerschlein, David P.
1994-01-01
The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) program was developed at NASA LaRC in order to analyze the reliability of virtually any fault-tolerant system. A user manual was developed to detail its use. Certain technical specifics are of no concern to the end user, yet are of importance to those who must maintain and/or verify the correctness of the tool. This document takes a detailed look into these technical issues.
NASA Technical Reports Server (NTRS)
Johnson, Sally C.; Boerschlein, David P.
1994-01-01
Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in the model of a complex system can be devastatingly tedious and error-prone. Even with tools such as the Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST), the user must describe a system by specifying the rules governing the behavior of the system in order to generate the model. With the Table Oriented Translator to the ASSIST Language (TOTAL), the user can specify the components of a typical system and their attributes in the form of a table. The conditions that lead to system failure are also listed in a tabular form. The user can also abstractly specify dependencies with causes and effects. The level of information required is appropriate for system designers with little or no background in the details of reliability calculations. A menu-driven interface guides the user through the system description process, and the program updates the tables as new information is entered. The TOTAL program automatically generates an ASSIST input description to match the system description.
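The table-to-rules translation that TOTAL performs can be sketched as follows. This Python sketch is an illustration of the idea only, not the TOTAL program; the component names, rates, failure conditions, and the simplified ASSIST-like output syntax are all assumptions:

```python
# Illustrative sketch of TOTAL's idea (not the actual program): turn a
# component table and a table of failure conditions into an ASSIST-like
# description. Names, rates, and output syntax are hypothetical.
components = {
    "proc":   {"count": 3, "rate": 1.0e-4},
    "sensor": {"count": 2, "rate": 5.0e-5},
}
failure_conditions = ["proc < 2", "sensor < 1"]  # system-failure table

def to_assist(components, failure_conditions):
    space = ", ".join(f"N{n.upper()}: 0..{c['count']}" for n, c in components.items())
    lines = [
        f"SPACE = ({space});",
        f"START = ({', '.join(str(c['count']) for c in components.values())});",
    ]
    for cond in failure_conditions:  # e.g. "proc < 2" -> "DEATHIF NPROC < 2;"
        name, op, val = cond.split()
        lines.append(f"DEATHIF N{name.upper()} {op} {val};")
    for n, c in components.items():  # one failure-transition rule per component
        v = f"N{n.upper()}"
        lines.append(f"IF {v} > 0 TRANTO {v} = {v} - 1 BY {v} * {c['rate']};")
    return "\n".join(lines)

print(to_assist(components, failure_conditions))
```

The user edits only the two tables; the rule statements are generated mechanically, which is the level of abstraction appropriate for a designer without a reliability-modeling background.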
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. 
The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. 
An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. 
The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. 
An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of these tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems analyzed must be extended beyond the electronic digital processors the tools were originally intended to serve. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis, which quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper closes with remarks on issues related to the improvement of coverage and the optimization of redundancy level.
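The dominant role of failure coverage can be seen in a toy duplex model. This sketch uses assumed parameters and a simplified three-state model, not the SRFCS model from the paper: a fault is handled with probability c (the coverage), degrading the system to simplex, while an unhandled fault fails the system outright.

```python
# Toy duplex model for coverage sensitivity (illustrative sketch with assumed
# parameters; not the system or numbers from the paper). States: 2 working,
# 1 working, failed. Euler integration of the Markov model's ODEs.
LAM = 1e-4          # assumed per-hour failure rate
T, DT = 10.0, 0.01  # mission time (hours) and integration step

def unreliability(c):
    p2, p1, pf = 1.0, 0.0, 0.0
    for _ in range(int(T / DT)):
        d2 = -2 * LAM * p2                       # leave "2 working"
        d1 = 2 * LAM * c * p2 - LAM * p1         # covered fault -> simplex
        df = 2 * LAM * (1 - c) * p2 + LAM * p1   # uncovered fault or 2nd fault
        p2 += d2 * DT
        p1 += d1 * DT
        pf += df * DT
    return pf

# Sensitivity sweep: unreliability is dominated by the (1 - c) term.
for c in (0.9, 0.99, 0.999):
    print(c, unreliability(c))
```

Because the uncovered-fault path enters the failure state after a single fault, each added "nine" of coverage cuts the unreliability nearly tenfold, which is why coverage dominates the redundancy level in such analyses.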
Advanced techniques in reliability model representation and solution
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
The explicit form of the rate function for semi-Markov processes and its contractions
NASA Astrophysics Data System (ADS)
Sughiyama, Yuki; Kobayashi, Tetsuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. By applying the contraction principle of large deviation theory to this explicit form, we also show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
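For reference, the level 2.5 rate function being extended has, for a Markov jump process with transition rates $w_{xy}$, a standard textbook form (stated here as background, not taken from this paper): for an empirical occupation $p$ and empirical flow $C$,

```latex
I_{2.5}(p, C) \;=\; \sum_{x \neq y}
\left[\, C_{xy} \ln \frac{C_{xy}}{p_x\, w_{xy}} \;-\; C_{xy} \;+\; p_x\, w_{xy} \,\right],
```

which vanishes exactly when the observed flows match the typical ones, $C_{xy} = p_x w_{xy}$. The paper's contribution is the analogous explicit form when exponential waiting times are replaced by general sojourn-time distributions.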
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness of fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimony and computational complexity. © The Author(s) 2015.
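The difference the sojourn-time dependence makes can be illustrated with a minimal simulation (made-up numbers, not the article's heart-failure model). Both sojourn distributions below have the same mean stay in the present health state, but only the exponential one satisfies the Markov assumption of a constant exit risk regardless of time already spent in the state:

```python
import math
import random

random.seed(0)

N = 20000
MEAN = 5.0  # assumed mean years spent in the present health state

# Markov assumption: exponential sojourn times (memoryless exit risk).
exp_stays = [random.expovariate(1 / MEAN) for _ in range(N)]

# Semi-Markov alternative: Weibull sojourns (shape 2), with exit risk
# rising with time since entry into the state.
shape = 2.0
scale = MEAN / math.gamma(1 + 1 / shape)  # chosen to match the mean
weib_stays = [random.weibullvariate(scale, shape) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean(exp_stays), mean(weib_stays))  # both close to 5
print(var(exp_stays), var(weib_stays))    # exponential stays far more dispersed
```

With identical means, the two models nevertheless imply very different stay-length distributions, which is exactly the kind of goodness-of-fit gap that drove the cost and survival differences reported in the article.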
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
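The "scaled Taylor" idea behind a STEM-style solver can be sketched in a few lines. This is an illustration of scaling-and-squaring with a truncated Taylor series, not the actual STEM code: scale the generator down, sum the series where it converges quickly, then square back up.

```python
# Illustrative scaling-and-squaring Taylor matrix exponential (the idea behind
# a STEM-style solver, not the actual STEM code). For a Markov model with
# generator Q, the transition probabilities over time t are exp(Q t).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=16, squarings=10):
    n = len(A)
    s = 2.0 ** squarings
    As = [[a / s for a in row] for row in A]  # scale down: exp(A) = exp(A/s)^s
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):  # Taylor series of exp(A/s): sum of (A/s)^k / k!
        term = [[x / k for x in row] for row in mat_mul(term, As)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):  # square back up
        result = mat_mul(result, result)
    return result

# 2-state model: a single component failing at rate 1 (absorbing failure state),
# evaluated over one unit of time (t folded into Q).
Q = [[-1.0, 1.0],
     [0.0, 0.0]]
P = expm(Q)
print(P[0])  # ~ [e^-1, 1 - e^-1]
```

The scaling step is what tames the numerical stiffness mentioned above: the series is summed only for a matrix of tiny norm, where truncation error is negligible.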
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
Indexed semi-Markov process for wind speed modeling.
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
The increasing interest in renewable energy leads scientific research toward better ways of recovering the available energy. In particular, the maximum energy recoverable from wind is equal to 59.3% of that available (the Betz law), attained at a specific pitch angle and when the ratio between the output and input wind speeds equals 1/3. The pitch angle is the angle formed between the airfoil of the wind-turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed, invariant airfoil geometry, so they operate at an efficiency below 59.3%. New-generation wind turbines instead have a system that varies the pitch angle by rotating the blades. This system enables the turbine to recover the maximum energy at different wind speeds, working at the Betz limit over a range of speed ratios. A powerful pitch-angle control system allows the turbine to recover energy more effectively in transient regimes. A good stochastic model of wind speed is therefore needed, both to support the optimization of turbine design and to help the control system predict the wind speed so that the blades can be positioned quickly and correctly. The availability of synthetic wind speed data is a powerful instrument for verifying wind-turbine structures or estimating the energy recoverable at a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order and a second-order Markov chain. A similar study, restricted to the first-order Markov chain, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data.
An attempt to jointly model wind speed and direction is presented in [3], using two models: a first-order Markov chain with varying numbers of states, and a Weibull distribution. All of these models use Markov chains to generate synthetic wind speed time series, but the search for a better model remains open. Approaching this issue, we apply new models that generalize Markov models; more precisely, we apply semi-Markov models to generate synthetic wind speed time series. In previous work we proposed several semi-Markov models and showed their ability to reproduce the autocorrelation structure of wind speed data; the autocorrelation was higher than that of the Markov model, but still too small compared with the empirical one. To overcome this low autocorrelation, in this paper we propose an indexed semi-Markov model. More precisely, we assume that wind speed is described by a discrete-time homogeneous semi-Markov process, and we introduce a memory index that takes into account periods of different wind activity. With this model the statistical characteristics of wind speed are faithfully reproduced. Wind is a very unstable phenomenon characterized by sequences of lulls and sustained speeds, and a good wind generator must be able to reproduce such sequences. To check the validity of the proposed semi-Markov model, the persistence of the synthetic winds was computed and averaged. The model is used to generate synthetic wind speed time series by means of Monte Carlo simulation, and the time-lagged autocorrelation is used to compare statistical properties of the proposed model with those of real data and with a time series generated by a simple Markov chain. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M.
Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802.
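The first-order Markov chain generators compared in [1] and [2] can be sketched compactly. Everything below is a hypothetical illustration, not data or code from those papers: the discretization into three speed states, the toy observed series, and all numerical values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_transition_matrix(series, n_states):
    """Estimate a first-order transition probability matrix from a
    discretized wind speed series (states 0..n_states-1)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1
    # Normalize rows; uniform fallback for states never visited.
    rows = counts.sum(axis=1, keepdims=True)
    return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / n_states)

def synthesize(P, n_steps, start=0):
    """Generate a synthetic state sequence from transition matrix P."""
    states = [start]
    for _ in range(n_steps - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

# Hypothetical discretized wind record (3 states: calm/moderate/strong).
observed = np.array([0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 0, 1, 1, 0])
P = fit_transition_matrix(observed, n_states=3)
synthetic = synthesize(P, n_steps=1000)
```

The synthetic series can then be compared with the observed one through its autocorrelation or spectral density, as done in [2].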
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cottam, Joseph A.; Blaha, Leslie M.
Systems have biases. Their interfaces naturally guide a user toward specific patterns of action. For example, modern word processors and spreadsheets are both capable of wrapping words, checking spelling, storing tables, and calculating formulas. You could write a paper in a spreadsheet or do simple business modeling in a word processor. However, their interfaces naturally communicate which function they are designed for. Visual analytic interfaces also have biases. In this paper, we outline why simple Markov models are a plausible tool for investigating that bias and how they might be applied. We also discuss some anticipated difficulties in such modeling and touch briefly on what some Markov model extensions might provide.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
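The exact-versus-computed comparison performed in this validation can be mimicked on a toy example. The single-transition model, failure rate, and mission time below are invented for illustration; the sketch checks a Monte Carlo estimate of a death-state probability against its closed-form value, rather than against SURE's bounds.

```python
import math
import random

random.seed(42)

LAM = 1e-2   # hypothetical constant failure rate (per hour)
T = 10.0     # hypothetical mission time (hours)

# Exact death-state probability for a single exponential transition
# from the working state to an absorbing death state.
exact = 1.0 - math.exp(-LAM * T)

# Monte Carlo estimate: sample the failure time, count missions that
# reach the death state before the mission ends.
n = 200_000
deaths = sum(1 for _ in range(n) if random.expovariate(LAM) <= T)
estimate = deaths / n
```

A validation in the spirit of the one above would then check that the analytic value falls between the lower and upper bounds reported by the tool.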
Modelisation de l'historique d'operation de groupes turbine-alternateur (Modeling of the operating history of turbine-generator units)
NASA Astrophysics Data System (ADS)
Szczota, Mickael
Because of their ageing fleet, utility managers are increasingly in need of tools that can help them plan maintenance operations efficiently. Hydro-Quebec started a project that aims to predict the degradation of their hydroelectric runners and to use that information to classify the generating units. That classification will help identify which generating units are most at risk of undergoing a major failure. Cracks linked to the fatigue phenomenon are a predominant degradation mode, and the loading sequence applied to the runner is a parameter affecting crack growth. The aim of this thesis is therefore to create a generator able to produce synthetic loading sequences that are statistically equivalent to the observed history; the simulated sequences will be used as input to a life assessment model. First, we describe how the generating units are operated by Hydro-Quebec and analyse the available data; the analysis shows that the data are non-stationary. We then review modeling and validation methods. In the following chapter, particular attention is given to a precise description of the validation and comparison procedure. We then present a comparison of three kinds of model: discrete-time Markov chains, discrete-time semi-Markov chains, and the moving block bootstrap. For the first two models, we describe how to account for the non-stationarity. Finally, we show that the Markov chain is not suited to our case and that semi-Markov chains perform better when they incorporate the non-stationarity. The final choice between semi-Markov chains and the moving block bootstrap depends on the user, but with a long-term vision we recommend semi-Markov chains for their flexibility. Keywords: stochastic models, model validation, reliability, semi-Markov chains, Markov chains, bootstrap
Modeling of dialogue regimes of distance robot control
NASA Astrophysics Data System (ADS)
Larkin, E. V.; Privalov, A. N.
2017-02-01
The process of distance control of mobile robots is investigated, and a Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the subjects involved (a human operator, a dialogue computer, and an onboard computer) may be simulated using the theory of semi-Markov processes. From the semi-Markov process of general form, a Markov process was obtained that includes only the states of transaction generation. It is shown that a real transaction flow is the result of «concurrency» among the states of the Markov process. An iterative procedure for evaluating transaction flow parameters, which takes the effect of «concurrency» into account, is proposed.
Numerical research of the optimal control problem in the semi-Markov inventory model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorshenin, Andrey K.; Belousov, Vasily V.; Shnourkoff, Peter V.
2015-03-10
This paper is devoted to the numerical simulation of a stochastic inventory-management system using a controlled semi-Markov process. Results from special-purpose software for studying the system and finding the optimal control are presented.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with potentially a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted: typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
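The event tree evaluation described above reduces to propagating nodal probabilities down to the end states. A minimal sketch follows; the two-level tree (detection, then isolation) and all probabilities are hypothetical, not a NASA model.

```python
# A minimal event-tree evaluator: each node branches with fixed
# probabilities determined by a per-node reliability study (in practice,
# a fault tree analysis). The tree below is a hypothetical example.

def end_state_probs(tree, p=1.0, path=()):
    """Recursively accumulate the probability of each end state."""
    if isinstance(tree, str):          # leaf: an end-state label
        return {path + (tree,): p}
    out = {}
    for label, prob, subtree in tree:  # (branch label, probability, child)
        out.update(end_state_probs(subtree, p * prob, path + (label,)))
    return out

tree = [
    ("detected", 0.99, [
        ("isolated", 0.95, "safe"),
        ("not-isolated", 0.05, "degraded"),
    ]),
    ("undetected", 0.01, "failure"),
]

probs = end_state_probs(tree)
total = sum(probs.values())  # end-state probabilities must sum to 1
```

The time-dependent extension discussed in the abstract would replace the fixed branch probabilities with functions of mission time.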
Markov and semi-Markov processes as a failure rate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabski, Franciszek
2016-06-08
In this paper the reliability function is defined by a stochastic failure rate process with nonnegative, right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space, and a corresponding theorem is presented. The resulting linear systems of equations for the appropriate Laplace transforms allow the reliability functions to be found for the alternating, Poisson, and Furry-Yule failure rate processes.
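The reliability function defined by a stochastic failure rate can be written R(t) = E[exp(-integral of lambda(s) over [0, t])]. The sketch below estimates this by Monte Carlo for a hypothetical two-level alternating rate process; the rate levels, switching rate, and horizon are invented, and the paper's own derivation is analytic (via Laplace transforms), not simulation.

```python
import math
import random

random.seed(1)

# Hypothetical alternating failure-rate process: lambda(t) switches
# between a low and a high level, with exponential sojourn times.
LOW, HIGH = 0.001, 0.01   # failure-rate levels (per hour)
MU = 0.1                  # switching rate between the two levels
T = 100.0                 # time horizon (hours)

def integrated_rate(t_end):
    """Integrate lambda(s) over [0, t_end] along one sampled trajectory."""
    t, level, acc = 0.0, LOW, 0.0
    while t < t_end:
        stay = random.expovariate(MU)
        seg = min(stay, t_end - t)
        acc += level * seg
        t += seg
        level = HIGH if level == LOW else LOW
    return acc

# R(T) = E[ exp(-integrated failure rate) ], Monte Carlo estimate.
n = 20_000
R = sum(math.exp(-integrated_rate(T)) for _ in range(n)) / n
```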
First and second order semi-Markov chains for wind speed modeling
NASA Astrophysics Data System (ADS)
Prattico, F.; Petroni, F.; D'Amico, G.
2012-04-01
The increasing interest in renewable energy leads scientific research to seek better ways to recover as much of the available energy as possible. In particular, the maximum energy recoverable from wind equals 59.3% of the energy available (the Betz limit), attained at a specific pitch angle and when the ratio between the output and input wind speeds equals 1/3. The pitch angle is the angle formed between the airfoil of the wind turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed, invariant airfoil geometry, so they work at an efficiency lower than 59.3%. New-generation wind turbines, instead, have a system that varies the pitch angle by rotating the blades. This system enables the turbines to recover the maximum energy at different wind speeds, working at the Betz limit over different speed ratios. A powerful pitch-angle control system allows the wind turbine to recover energy more effectively in transient regimes. A good stochastic model for wind speed is therefore needed both to optimize turbine design and to help the control system predict the wind speed so that the blades can be positioned quickly and correctly. The availability of synthetic wind speed data is a powerful instrument for verifying wind turbine structures or estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [2] presents a comparison between a first-order and a second-order Markov chain. A similar work, restricted to the first-order Markov chain, is conducted in [3], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data.
An attempt to jointly model wind speed and direction is presented in [1], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. To address this issue, we applied new models that generalize Markov models; more precisely, we applied semi-Markov models to generate synthetic wind speed time series. Semi-Markov processes (SMP) are a wide class of stochastic processes that generalize both Markov chains and renewal processes. Their main advantage is the freedom to use any type of waiting-time distribution for the time to a transition from one state to another. This greater flexibility has a price: more data are needed to estimate the larger number of model parameters. Data availability is not an issue in wind speed studies, so semi-Markov models can be used in a statistically efficient way. In this work we present three different semi-Markov chain models. The first is a first-order SMP in which the transition probabilities between two speed states (at times Tn-1 and Tn) depend on the initial state (the state at Tn-1), the final state (the state at Tn), and the waiting time (t = Tn - Tn-1). The second is a second-order SMP in which the transition probabilities also depend on the state the wind speed occupied before the initial state (the state at Tn-2). The last is again a second-order SMP in which the transition probabilities depend on the three states at Tn-2, Tn-1, and Tn and on the waiting times t_1 = Tn-1 - Tn-2 and t_2 = Tn - Tn-1. 
The three models are used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare the statistical properties of the proposed models with those of real data and with a time series generated through a simple Markov chain. [1] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [2] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [3] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418.
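The first of the three models can be sketched as a generator: an embedded Markov chain picks the next wind-speed state, while a freely chosen waiting-time distribution gives the sojourn before the jump. The three states, the embedded transition matrix, and the per-state Weibull waiting-time parameters below are hypothetical choices for illustration, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Embedded transition matrix of a first-order semi-Markov chain
# (diagonal zero: a transition always changes the state).
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.2, 0.8, 0.0]])
SHAPE = [1.2, 1.5, 2.0]   # hypothetical Weibull shape per state
SCALE = [3.0, 2.0, 1.5]   # hypothetical Weibull scale per state

def simulate(n_jumps, state=0):
    """Return (visited states, sojourn times) of a semi-Markov path."""
    states = [state]
    times = [SCALE[state] * rng.weibull(SHAPE[state])]
    for _ in range(n_jumps - 1):
        state = rng.choice(3, p=P[state])
        states.append(state)
        times.append(SCALE[state] * rng.weibull(SHAPE[state]))
    return np.array(states), np.array(times)

states, sojourns = simulate(500)
```

A plain Markov chain is recovered as the special case where every sojourn distribution is geometric (in discrete time) or exponential (in continuous time).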
A reward semi-Markov process with memory for wind speed modeling
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
The increasing interest in renewable energy leads scientific research to seek better ways to recover as much of the available energy as possible. In particular, the maximum energy recoverable from wind equals 59.3% of the energy available (the Betz limit), attained at a specific pitch angle and when the ratio between the output and input wind speeds equals 1/3. The pitch angle is the angle formed between the airfoil of the wind turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed, invariant airfoil geometry, so they work at an efficiency lower than 59.3%. New-generation wind turbines, instead, have a system that varies the pitch angle by rotating the blades. This system enables the turbines to recover the maximum energy at different wind speeds, working at the Betz limit over different speed ratios. A powerful pitch-angle control system allows the wind turbine to recover energy more effectively in transient regimes. A good stochastic model for wind speed is therefore needed both to optimize turbine design and to help the control system predict the wind speed so that the blades can be positioned quickly and correctly. The availability of synthetic wind speed data is a powerful instrument for verifying wind turbine structures or estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order and a second-order Markov chain. A similar work, restricted to the first-order Markov chain, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data.
An attempt to jointly model wind speed and direction is presented in [3], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. To address this issue, we applied new models that generalize Markov models; more precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. To address this issue we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability, expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and a backtesting procedure is used to compare the first- and second-order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains
Meyer, Denny; Forbes, Don; Clarke, Stephen R.
2006-01-01
Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated. Key Points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando
This paper deals with two-person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown to one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount-optimal pair of strategies.
Semi-Markov Approach to the Shipping Safety Modelling
NASA Astrophysics Data System (ADS)
Guze, Sambor; Smolarek, Leszek
2012-02-01
In this paper, a navigational safety model of a ship in an open sea area is studied under conditions of incomplete information. Moreover, the structure of semi-Markov processes is used to analyse stochastic ship safety according to the navigator's subjective acceptance of risk. In addition, the navigator's behaviour can be analysed using numerical simulation to estimate the probability of collision in the safety model.
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
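For contrast with the continuous-time, semi-Markov case treated in the paper, the entropy rate of a finite discrete-time Markov chain has a simple closed form, h = -sum_i pi_i sum_j P_ij log P_ij, where pi is the stationary distribution. A sketch with a hypothetical two-state chain:

```python
import numpy as np

# Entropy rate of a finite discrete-time Markov chain:
#   h = -sum_i pi_i * sum_j P_ij * log2(P_ij)
# The two-state chain below is hypothetical; the paper treats the
# harder continuous-time, hidden semi-Markov case.

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

h = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
         for i in range(2) for j in range(2) if P[i, j] > 0)
```

For this chain pi = (5/6, 1/6), so h mixes the row entropies H(0.9, 0.1) and H(0.5, 0.5) with those weights.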
Estimation in a semi-Markov transformation model
Dabrowska, Dorota M.
2012-01-01
Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe the evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors developed extensions of the proportional hazard model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583
Surgical gesture segmentation and recognition.
Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René
2013-01-01
Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1986-01-01
Semi-Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. The ASSIST program allows the user to describe the semi-Markov model in a high-level language. Instead of specifying the individual states of the model, the user specifies the rules governing the behavior of the system, and these are used by ASSIST to automatically generate the model. The ASSIST program is described and illustrated by examples.
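The rule-based expansion that ASSIST performs can be illustrated in miniature, in spirit only and not in ASSIST's actual input syntax. Below, a system state is a (working, failed) processor count, and two hypothetical rules (a failure transition and a majority-loss death condition) generate the full state space by exhaustive search.

```python
from collections import deque

# Miniature rule-based model generation in the spirit of ASSIST:
# instead of listing states, we give rules that produce transitions,
# and the model is generated automatically by breadth-first expansion.
# The 4-processor majority-voting system and its rules are hypothetical.

N = 4  # initial number of working processors

def rules(state):
    """Yield (next_state, rate_label) pairs for a state (working, failed)."""
    working, failed = state
    if working <= failed:                 # majority lost: absorbing death state
        return
    if working > 0:                       # one more processor fails
        yield (working - 1, failed + 1), f"{working}*lambda"

def generate(start):
    """Expand the reachable state space and transition list from the rules."""
    states, transitions = {start}, []
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for nxt, rate in rules(s):
            transitions.append((s, nxt, rate))
            if nxt not in states:
                states.add(nxt)
                queue.append(nxt)
    return states, transitions

states, transitions = generate((N, 0))
```

Here three states and two transitions are produced from two rules; in a realistic reconfigurable system the same idea yields thousands of states from a handful of statements, which is exactly the economy ASSIST provides.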
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
This paper addresses problems connected with controlling the exploitation (operation and maintenance) process implemented in complex exploitation systems for technical objects. It presents a method for controlling the availability of technical objects (means of transport) based on a mathematical model of the exploitation process formulated as a semi-Markov decision process. The method consists of preparing the decision model of the exploitation process (a semi-Markov model) and then selecting the best control strategy (the optimal strategy) from among the possible decision variants, according to the adopted criterion (or criteria) for evaluating the exploitation system. In this method, determining the optimal availability-control strategy means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to find the optimal control strategy. The method is illustrated with the exploitation process of means of transport in a real municipal bus transport system. The model of the exploitation process was prepared on the basis of results collected in that real transport system, taking into account that the process constitutes a homogeneous semi-Markov process.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
Markov chains and semi-Markov models in time-to-event analysis.
Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J
2013-10-25
A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
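A minimal sketch of the kind of Markov chain model described above: with an absorbing death state, survival probabilities at each cycle fall out of iterating the transition matrix, and a competing transient state (illness) is accommodated naturally. The three states and the transition matrix are hypothetical, not taken from the article.

```python
import numpy as np

# Discrete-time Markov chain for time-to-event analysis with a competing
# risk: states are healthy (0), ill (1), dead (2, absorbing).
# The transition matrix is a hypothetical example.

P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.75, 0.15],
              [0.00, 0.00, 1.00]])

def survival_curve(P, start, n_cycles, death=2):
    """P(not yet dead) at each cycle, starting from a given state."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    curve = []
    for _ in range(n_cycles):
        dist = dist @ P       # propagate the state distribution one cycle
        curve.append(1.0 - dist[death])
    return np.array(curve)

S = survival_curve(P, start=0, n_cycles=20)
```

Because the death state is absorbing, the curve is non-increasing, mirroring a Kaplan-Meier-style survival function while also tracking occupancy of the intermediate illness state.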
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. 
The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The programs are: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle; please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported to Sun computers running SunOS. The VMS version (LAR-13789) is written in PASCAL, C, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR-14921) is written in ANSI C and PASCAL; an ANSI-compliant C compiler is required to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992.
DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
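The kind of bound SURE computes can be cross-checked by direct simulation of a small semi-Markov model. The sketch below is not SURE and does not use its input language; it is a hypothetical triplex-with-reconfiguration model with invented rates, where slow exponential fault arrivals coexist with a fast, non-exponential recovery (exactly the time-scale separation the abstract describes):

```python
import random
random.seed(1)

LAMBDA = 1e-2    # per-hour fault arrival rate per unit (exaggerated for the demo)
MU_REC = 3600.0  # recoveries per hour: fast compared with fault arrivals
T = 10.0         # mission time in hours

def mission_fails():
    """One trajectory of a hypothetical triplex with reconfiguration."""
    t, active = 0.0, 3
    while True:
        t += random.expovariate(active * LAMBDA)      # next fault (slow)
        if t > T:
            return False
        if active == 1:
            return True                               # last unit has failed
        # Recovery time is uniform, i.e. non-exponential: a semi-Markov model.
        recovery = random.uniform(0.0, 2.0 / MU_REC)
        if random.expovariate((active - 1) * LAMBDA) < recovery:
            return True                               # near-coincident fault
        t += recovery
        active -= 1                                   # reconfigure out the bad unit

N = 200_000
p_fail = sum(mission_fails() for _ in range(N)) / N
print(f"estimated P(system failure within {T} h) = {p_fail:.2e}")
```

A Monte Carlo estimate like this should fall between the algebraic upper and lower bounds SURE would report for the corresponding model.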
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Wei, Shaoceng; Kryscio, Richard J.
2015-01-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001
Wei, Shaoceng; Kryscio, Richard J
2016-12-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.
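A semi-Markov process of the kind described above is straightforward to sample forward: draw the next state from the embedded transition matrix, then draw a Weibull waiting time (exponential from the baseline state). A sketch with hypothetical states and made-up parameters, not the fitted values from the Nun Study:

```python
import random
random.seed(2)

# States: 0 = intact (baseline), 1 = MCI, 2 = dementia, 3 = dead (absorbing).
# Hypothetical embedded-chain transition probabilities.
P = {0: [(1, 0.7), (3, 0.3)],
     1: [(0, 0.2), (2, 0.6), (3, 0.2)],
     2: [(3, 1.0)]}
WEIBULL = {1: (4.0, 1.5), 2: (3.0, 2.0)}  # (scale, shape) per transient state
BASELINE_RATE = 0.1                       # exponential waiting in baseline state

def choose(pairs):
    """Sample a next state from (state, probability) pairs."""
    u, acc = random.random(), 0.0
    for state, p in pairs:
        acc += p
        if u < acc:
            return state
    return pairs[-1][0]

def sample_path():
    """One subject's trajectory: (state, entry time) until death."""
    state, t, path = 0, 0.0, [(0, 0.0)]
    while state != 3:
        if state == 0:
            t += random.expovariate(BASELINE_RATE)
        else:
            scale, shape = WEIBULL[state]
            t += random.weibullvariate(scale, shape)
        state = choose(P[state])
        path.append((state, t))
    return path

path = sample_path()
print(f"{len(path) - 1} transitions, death at t = {path[-1][1]:.1f}")
```

Simulating many such paths and discarding what happens between scheduled assessments reproduces the interval-censored panel data the likelihood machinery must handle.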
ERIC Educational Resources Information Center
Wollmer, Richard D.; Bond, Nicholas A.
Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchical learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…
Semi-Markov Models for Degradation-Based Reliability
2010-01-01
…standard analysis techniques for Markov processes can be employed (cf. Whitt (1984), Altiok (1985), Perros (1994), and Osogami and Harchol-Balter…). We want to approximate X by a PH random variable, say Y, with c.d.f. Ĥ. Marie (1980), Altiok (1985), Johnson (1993), Perros (1994), and Osogami and… provides a minimal representation when matching only two moments. By considering the guidance provided by Marie (1980), Whitt (1984), Altiok (1985), Perros…
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
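The generative assumption behind an HSMM differs from an HMM in one place: each state persists for an explicitly sampled duration instead of leaving at a geometric rate. A stripped-down generative sketch, with simple Gaussian emissions standing in for the Gaussian processes of GP-HSMM and all parameters invented:

```python
import random
random.seed(6)

# Hypothetical 2-class HSMM: each "unit action" has its own
# duration distribution and emission statistics, (mean, sd) pairs.
STATES = {
    0: {"dur": (20, 5), "emit": (0.0, 0.3)},
    1: {"dur": (8, 2),  "emit": (2.0, 0.3)},
}
TRANS = {0: 1, 1: 0}    # deterministic alternation, for simplicity

def generate(n_segments):
    """Emit a time series as a concatenation of fixed-class segments."""
    series, labels, state = [], [], 0
    for _ in range(n_segments):
        d_mean, d_sd = STATES[state]["dur"]
        dur = max(1, round(random.gauss(d_mean, d_sd)))   # explicit duration
        e_mean, e_sd = STATES[state]["emit"]
        series += [random.gauss(e_mean, e_sd) for _ in range(dur)]
        labels += [state] * dur
        state = TRANS[state]
    return series, labels

series, labels = generate(6)
print(len(series), "samples,", len(set(labels)), "segment classes")
```

Unsupervised segmentation, as in the paper, amounts to inverting this generative story: given only `series`, recover the segment boundaries and classes in `labels`.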
Identifying ontogenetic, environmental and individual components of forest tree growth
Chaubert-Pereira, Florence; Caraglio, Yves; Lavergne, Christian; Guédon, Yann
2009-01-01
Background and Aims: This study aimed to identify and characterize the ontogenetic, environmental and individual components of forest tree growth. In the proposed approach, the tree growth data typically correspond to the retrospective measurement of annual shoot characteristics (e.g. length) along the trunk. Methods: Dedicated statistical models (semi-Markov switching linear mixed models) were applied to data sets of Corsican pine and sessile oak. In the semi-Markov switching linear mixed models estimated from these data sets, the underlying semi-Markov chain represents both the succession of growth phases and their lengths, while the linear mixed models represent both the influence of climatic factors and the inter-individual heterogeneity within each growth phase. Key Results: On the basis of these integrative statistical models, it is shown that growth phases are not only defined by average growth level but also by growth fluctuation amplitudes in response to climatic factors and inter-individual heterogeneity, and that the individual tree status within the population may change between phases. Species plasticity affected the response to climatic factors, while tree origin, sampling strategy and silvicultural interventions impacted inter-individual heterogeneity. Conclusions: The transposition of the proposed integrative statistical modelling approach to cambial growth in relation to climatic factors and the study of the relationship between apical growth and cambial growth constitute the next steps in this research. PMID:19684021
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54 pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator for the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator for the survivor function is proposed for the semi-Markov model under the dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example of lung cancer clinical trial and simulation results are reported of the mean squared errors of estimators under a proportional hazards and two different nonproportional hazards models.
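For orientation, the baseline that such adjusted estimators generalize is the standard Kaplan-Meier product-limit estimator, which assumes independent censoring. A minimal sketch on hypothetical data:

```python
# Hypothetical (time, event) pairs; event = 1 is death, 0 is censoring.
data = [(2, 1), (3, 0), (4, 1), (4, 1), (5, 0), (7, 1), (9, 0)]

def kaplan_meier(data):
    """Product-limit estimate of S(t) at each distinct event time."""
    s, curve = 1.0, []
    for t in sorted({u for u, e in data if e == 1}):
        at_risk = sum(1 for u, _ in data if u >= t)
        deaths = sum(1 for u, e in data if u == t and e == 1)
        s *= 1.0 - deaths / at_risk
        curve.append((t, s))
    return curve

curve = kaplan_meier(data)
for t, s in curve:
    print(f"S({t}) = {s:.3f}")   # S(2)=0.857, S(4)=0.514, S(7)=0.257
```

When censoring is informative, as in the two-stage design above, each censored observation can no longer simply leave the risk set; the adjusted estimator uses the follow-up subsample to correct for that.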
Liu, An-An; Li, Kang; Kanade, Takeo
2012-02-01
We propose a semi-Markov model trained in a max-margin learning framework for mitosis event segmentation in large-scale time-lapse phase contrast microscopy image sequences of stem cell populations. Our method consists of three steps. First, we apply a constrained optimization based microscopy image segmentation method that exploits phase contrast optics to extract candidate subsequences in the input image sequence that contains mitosis events. Then, we apply a max-margin hidden conditional random field (MM-HCRF) classifier learned from human-annotated mitotic and nonmitotic sequences to classify each candidate subsequence as a mitosis or not. Finally, a max-margin semi-Markov model (MM-SMM) trained on manually-segmented mitotic sequences is utilized to reinforce the mitosis classification results, and to further segment each mitosis into four predefined temporal stages. The proposed method outperforms the event-detection CRF model recently reported by Huh as well as several other competing methods in very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. For mitosis detection, an overall precision of 95.8% and a recall of 88.1% were achieved. For mitosis segmentation, the mean and standard deviation for the localization errors of the start and end points of all mitosis stages were well below 1 and 2 frames, respectively. In particular, an overall temporal location error of 0.73 ± 1.29 frames was achieved for locating daughter cell birth events.
Maritime Threat Detection using Plan Recognition
2012-11-01
…logic with a probabilistic interpretation to represent expert domain knowledge [13]. We used Alchemy [14] to implement MLN-BR. It interfaces with the… [13] Domingos, P., & Lowd, D. (2009). Markov logic: An interface layer for AI. Morgan & Claypool. [14] Alchemy (2011). Alchemy, open source AI. [http…
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
Relaxation, Structure and Properties of Semi-coherent Interfaces
Shao, Shuai; Wang, Jian
2015-11-05
Materials containing a high density of interfaces are promising candidates for future energy technologies, because interfaces acting as sources, sinks, and barriers for defects can improve the mechanical and irradiation properties of materials. Semi-coherent interfaces, which occur widely in various materials, are composed of a network of misfit dislocations and coherent regions separated by the misfit dislocations. In this article, we review the relaxation mechanisms, structure, and properties of (111) semi-coherent interfaces in face-centered cubic structures.
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with the evaluation of the optimal condition-monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system, and the CBM interval that maximizes system availability is determined using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
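The two-stage SMP computation has a compact form: first solve for the stationary distribution of the embedded Markov chain, then weight each state by its mean sojourn time. A sketch for a hypothetical three-state machine (all numbers invented; the paper's actual model is larger):

```python
# Embedded-chain transition matrix for a hypothetical 3-state SMP:
# 0 = operating, 1 = degraded (condition monitored), 2 = under repair.
P = [
    [0.0, 1.0, 0.0],   # operating -> degraded
    [0.6, 0.0, 0.4],   # degraded  -> maintained in time, or sent to repair
    [1.0, 0.0, 0.0],   # repair    -> operating
]
m = [500.0, 100.0, 24.0]   # mean sojourn times (hours) in each state
UP = {0, 1}                # states in which the system is available

def stationary(P, iters=10_000):
    """Power iteration for the embedded chain's stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
denom = sum(pi[i] * m[i] for i in range(len(m)))
availability = sum(pi[i] * m[i] for i in UP) / denom
print(f"steady-state availability = {availability:.4f}")
```

The CBM optimization then amounts to re-evaluating this quantity as the monitoring interval changes the transition probabilities and sojourn times, with a genetic algorithm searching the interval.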
Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot
NASA Technical Reports Server (NTRS)
Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory
2002-01-01
Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure so having a dialogue interface to the plan construction tools can aid rapid completion of the process. But this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration will describe work in progress toward building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface, in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.
NASA Astrophysics Data System (ADS)
Power, Sarah D.; Falk, Tiago H.; Chau, Tom
2010-04-01
Near-infrared spectroscopy (NIRS) has recently been investigated as a non-invasive brain-computer interface (BCI). In particular, previous research has shown that NIRS signals recorded from the motor cortex during left- and right-hand imagery can be distinguished, providing a basis for a two-choice NIRS-BCI. In this study, we investigated the feasibility of an alternative two-choice NIRS-BCI paradigm based on the classification of prefrontal activity due to two cognitive tasks, specifically mental arithmetic and music imagery. Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations (International 10-20 System) while ten able-bodied adults performed mental arithmetic and music imagery within a synchronous shape-matching paradigm. With the 18 filtered AC signals, we created task- and subject-specific maximum likelihood classifiers using hidden Markov models. Mental arithmetic and music imagery were classified with an average accuracy of 77.2% ± 7.0 across participants, with all participants significantly exceeding chance accuracies. The results suggest the potential of a two-choice NIRS-BCI based on cognitive rather than motor tasks.
On the stochastic dissemination of faults in an admissible network
NASA Technical Reports Server (NTRS)
Kyrala, A.
1987-01-01
The dynamic distribution of faults in a general type network is discussed. The starting point is a uniquely branched network in which each pair of nodes is connected by a single branch. Mathematical expressions for the uniquely branched network transition matrix are derived to show that sufficient stationarity exists to ensure the validity of the use of the Markov Chain model to analyze networks. In addition the conditions for the use of Semi-Markov models are discussed. General mathematical expressions are derived in an examination of branch redundancy techniques commonly used to increase reliability.
NASA Astrophysics Data System (ADS)
Baek, Sangkyu; Choi, Bong Dae
We investigate the power consumption of a mobile station with the power saving class of type 1 in IEEE 802.16e. We model the stochastic behavior of a mobile station during both the sleep mode and awake mode periods, with both downlink and uplink traffic. Our method for analyzing the power saving class of type 1 is to construct the embedded Markov chain and the semi-Markov process generated by the embedded Markov chain. To see the effect of the sleep mode, we obtain the average power consumption of a mobile station and the mean queueing delay of a message. Numerical results show that a larger sleep window makes the power consumption of a mobile station smaller and the queueing delay of a downlink message longer.
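The qualitative trade-off reported above, larger sleep windows lowering power but lengthening downlink delay, can be reproduced with a simple simulation of doubling sleep windows. This is a simplified sketch with invented power and traffic parameters, not the embedded-chain analysis of the paper:

```python
import random
random.seed(3)

def simulate(t_min, t_max, rate=0.02, n_msgs=20_000,
             p_sleep=1.0, p_listen=10.0):
    """Type-1 power saving: sleep windows double from t_min up to t_max
    until a downlink message arrives; the message then waits until the
    end of the current window (its queueing delay). Units are arbitrary."""
    energy = total_time = total_delay = 0.0
    for _ in range(n_msgs):
        arrival = random.expovariate(rate)        # next downlink message
        t, win = 0.0, t_min
        while t + win < arrival:                  # sleep through empty windows
            energy += win * p_sleep + p_listen    # window + listening interval
            t += win
            win = min(2 * win, t_max)
        energy += win * p_sleep                   # window containing the arrival
        total_delay += (t + win) - arrival
        total_time += t + win
    return energy / total_time, total_delay / n_msgs

for t0 in (2, 16):
    power, delay = simulate(t0, 1024)
    print(f"initial window {t0:2d}: avg power {power:.2f}, avg delay {delay:.1f}")
```

A larger initial window means fewer listening intervals per unit time (lower power) but a coarser wake-up grid (longer delay), which matches the paper's numerical findings.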
Griffin, William A.; Li, Xun
2016-01-01
Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitate that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a model and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.
Rojas, Mario; Ponce, Pedro; Molina, Arturo
2016-08-01
This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head movement, and speech recognition controls, was designed to facilitate mobility for severely disabled people. Thirteen tasks common to all wheelchair users were attempted five times each, controlling the wheelchair with the virtual joystick and the hands-free interfaces, in different areas and by disabled and non-disabled people. Even though the prototype has an intelligent navigation control based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scores showed that the head movement control and the virtual joystick have similar capabilities, 92.3% and 100%, respectively. However, the 54.6% capacity score obtained for the speech control interface indicates that navigation assistance is needed to accomplish some of the goals. Furthermore, the evaluation time indicates which skills require more user training with the interface, and suggests specifications to improve the overall performance of the wheelchair.
Feynman-Kac formula for stochastic hybrid systems.
Bressloff, Paul C
2017-01-01
We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
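The occupation-time example can be illustrated by direct simulation of a one-dimensional velocity jump process: a particle moves at speed V, reversing direction at rate BETA, and we estimate the fraction of time it spends on the positive half-line. This is a Monte Carlo sketch with invented parameters, not the Feynman-Kac calculation itself:

```python
import random
random.seed(4)

V, BETA, T = 1.0, 1.0, 50.0   # speed, switching rate, time horizon

def occupation_fraction():
    """Fraction of [0, T] that a 1-D velocity jump process spends in x > 0."""
    x = 0.0
    v = V if random.random() < 0.5 else -V   # symmetric initial direction
    t = occ = 0.0
    while t < T:
        dt = min(random.expovariate(BETA), T - t)
        # Within one flight the motion is linear; count the time with x > 0.
        if v > 0:
            occ += dt if x >= 0 else max(0.0, dt - (-x) / V)
        else:
            occ += min(dt, x / V) if x > 0 else 0.0
        x += v * dt
        t += dt
        v = -v                               # direction reversal (the jump)
    return occ / T

runs = 2000
est = sum(occupation_fraction() for _ in range(runs)) / runs
print(f"mean fraction of time in x > 0 ≈ {est:.2f}")
```

With a symmetric start the mean occupation fraction is 1/2; the Feynman-Kac machinery in the paper yields the full distribution of this functional, not just its mean.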
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Lam, Adam; Brantley, Patrick; Malvagi, Fausto; Palmer, Todd; Zoia, Andrea
2018-01-01
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore-Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
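As a concrete illustration of the on-the-fly interface sampling that CLS performs, here is a minimal one-dimensional (rod) sketch. The cross sections and mean chord lengths are hypothetical, not benchmark values, and a purely absorbing medium is assumed so that the tally is simply the uncollided transmission.

```python
import random

def cls_transmission(sigma=(1.0, 0.1), mean_chord=(0.5, 2.0),
                     slab_len=5.0, n_hist=5000, seed=7):
    """Chord Length Sampling sketch for a 1D binary Markov mixture.
    Material interfaces are sampled on the fly from exponential chord
    lengths; the medium is assumed purely absorbing, so the estimate
    is the probability of crossing the rod uncollided.  Cross sections
    and mean chord lengths are illustrative, not benchmark values."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x = 0.0
        mat = rng.random() < 0.5           # start in a random material
        while True:
            i = int(mat)
            d_coll = rng.expovariate(sigma[i])             # distance to collision
            d_face = rng.expovariate(1.0 / mean_chord[i])  # distance to next interface
            step = min(d_coll, d_face)
            if x + step >= slab_len:       # escaped through the far face
                transmitted += 1
                break
            if d_coll <= d_face:           # collided (absorbed) inside the rod
                break
            x += d_face                    # crossed an interface: switch material
            mat = not mat
    return transmitted / n_hist
```

Because each interface is drawn afresh from the chord-length distribution, the spatial correlations of a quenched realization are lost, which is exactly the approximation whose accuracy the paper quantifies.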
NASA Astrophysics Data System (ADS)
Ullah Manzoor, Habib; Manzoor, Tareq; Hussain, Masroor; Manzoor, Sanaullah; Nazar, Kashif
2018-04-01
Surface electromagnetic waves are solutions of Maxwell's frequency-domain equations at the interface of two dissimilar materials. In this article, two canonical boundary-value problems are formulated to analyze the multiplicity of electromagnetic surface waves at the interface between two dissimilar materials in the visible region of light. In the first problem, the interface between two semi-infinite rugate filters having symmetric refractive index profiles is considered; in the second, to enhance the multiplicity of surface electromagnetic waves, a homogeneous dielectric slab of 400 nm is included between the two semi-infinite symmetric rugate filters. Numerical results show that multiple Bloch surface waves of different phase speeds, polarization states, degrees of localization, and field profiles propagate at the interface between two semi-infinite rugate filters. The two interfaces created when a homogeneous dielectric layer is placed between two semi-infinite rugate filters further increase the multiplicity of electromagnetic surface waves.
Electronic structure and relative stability of the coherent and semi-coherent HfO2/III-V interfaces
NASA Astrophysics Data System (ADS)
Lahti, A.; Levämäki, H.; Mäkelä, J.; Tuominen, M.; Yasir, M.; Dahl, J.; Kuzmin, M.; Laukkanen, P.; Kokko, K.; Punkkinen, M. P. J.
2018-01-01
III-V semiconductors are prominent alternatives to silicon in metal-oxide-semiconductor devices. Hafnium dioxide (HfO2) is a promising oxide with a high dielectric constant to replace silicon dioxide (SiO2). The potential of oxide/III-V semiconductor interfaces is diminished by a high density of defects, which leads to Fermi-level pinning. The character of the harmful defects has been intensively debated. It is very important to understand the thermodynamics and atomic structures of the interfaces in order to interpret experiments and design methods to reduce the defect density. Various realistic models of HfO2/III-V(100) interfaces that are free of gap defect states are presented. Relative energies of several coherent and semi-coherent oxide/III-V semiconductor interfaces are determined for the first time. The coherent and semi-coherent interfaces represent the main interface types, based on the Ga-O bridges and As (P) dimers, respectively.
Device Control Using Gestures Sensed from EMG
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.
2003-01-01
In this paper we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling electromyogram (EMG) data from a participant's forearm. These data are then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.
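The recognize-while-performing setup described above amounts to scoring an observation sequence against one trained HMM per gesture and picking the most likely. Below is a minimal sketch of that idea with the scaled forward algorithm; the two "pre-trained" gesture models, the binary feature alphabet, and the gesture names are hypothetical stand-ins for real EMG-derived features.

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence under a hidden Markov model."""
    alpha = [p * emit[s][obs[0]] for s, p in enumerate(start)]
    ll = 0.0
    for o in obs[1:]:
        c = sum(alpha)                      # scale to avoid underflow
        ll += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[s] * trans[s][j] for s in range(len(alpha))) * emit[j][o]
                 for j in range(len(alpha))]
    return ll + math.log(sum(alpha))

def classify_gesture(obs, models):
    """Pick the gesture whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Two hypothetical pre-trained 2-state gesture models over a binary
# feature alphabet {0, 1}: (initial probs, transition matrix, emissions).
gesture_models = {
    "wave":  ([0.6, 0.4], [[0.7, 0.3], [0.3, 0.7]], [[0.8, 0.2], [0.6, 0.4]]),
    "point": ([0.6, 0.4], [[0.7, 0.3], [0.3, 0.7]], [[0.2, 0.8], [0.4, 0.6]]),
}
```

A sequence dominated by symbol 0 is classified as "wave", since that model emits 0 with higher probability in both states.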
A general graphical user interface for automatic reliability modeling
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1991-01-01
Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
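Once such a GUI has generated a Markov reliability model, solving it means integrating the chain's state-probability equations. As a self-contained example (a generic triple-modular-redundant system, not the PMS structures of the paper, with an illustrative failure rate), the following sketch integrates a three-state chain with forward Euler; the result matches the closed form 3e^{-2λt} - 2e^{-3λt}.

```python
def tmr_reliability(lam=1e-3, t=100.0, dt=0.01):
    """Reliability of a triple-modular-redundant system modeled as a
    3-state Markov chain (3 good -> 2 good -> system failed),
    integrated with forward Euler.  Closed form for comparison:
    3*exp(-2*lam*t) - 2*exp(-3*lam*t).  The failure rate `lam`
    is illustrative."""
    p3, p2 = 1.0, 0.0          # state probabilities; failure prob is implicit
    for _ in range(int(t / dt)):
        d3 = -3.0 * lam * p3                   # all three units working
        d2 = 3.0 * lam * p3 - 2.0 * lam * p2   # exactly one unit has failed
        p3 += d3 * dt
        p2 += d2 * dt
    return p3 + p2             # system works while at least 2 of 3 work
```

Tools like the one described automate exactly this step: the user draws the structure, and the Markov model and its solution are produced behind the scenes.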
A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management.
Hocraffer, Amy; Nam, Chang S
2017-01-01
A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general, allowing user- and mission-specific customization of user interfaces, and raising the swarm's level of autonomy to reduce operator cognitive workload, are beneficial and improve situation awareness (SA). It is clear that more research is needed in this rapidly evolving field. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas
2010-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.
Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego
2017-01-01
Our daily interaction with the world is replete with situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
Myokit: A simple interface to cardiac cellular electrophysiology.
Clerx, Michael; Collins, Pieter; de Lange, Enno; Volders, Paul G A
2016-01-01
Myokit is a new powerful and versatile software tool for modeling and simulation of cardiac cellular electrophysiology. Myokit consists of an easy-to-read modeling language, a graphical user interface, single and multi-cell simulation engines and a library of advanced analysis tools accessible through a Python interface. Models can be loaded from Myokit's native file format or imported from CellML. Model export is provided to C, MATLAB, CellML, CUDA and OpenCL. Patch-clamp data can be imported and used to estimate model parameters. In this paper, we review existing tools to simulate the cardiac cellular action potential to find that current tools do not cater specifically to model development and that there is a gap between easy-to-use but limited software and powerful tools that require strong programming skills from their users. We then describe Myokit's capabilities, focusing on its model description language, simulation engines and import/export facilities in detail. Using three examples, we show how Myokit can be used for clinically relevant investigations, multi-model testing and parameter estimation in Markov models, all with minimal programming effort from the user. This way, Myokit bridges a gap between performance, versatility and user-friendliness. Copyright © 2015 Elsevier Ltd. All rights reserved.
pytc: Open-Source Python Software for Global Analyses of Isothermal Titration Calorimetry Data.
Duvvuri, Hiranmayi; Wheeler, Lucas C; Harms, Michael J
2018-05-08
Here we describe pytc, an open-source Python package for global fits of thermodynamic models to multiple isothermal titration calorimetry experiments. Key features include simplicity, the ability to implement new thermodynamic models, a robust maximum likelihood fitter, a fast Bayesian Markov-Chain Monte Carlo sampler, rigorous implementation, extensive documentation, and full cross-platform compatibility. pytc fitting can be done using an application program interface or via a graphical user interface. It is available for download at https://github.com/harmslab/pytc .
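The Bayesian Markov-chain Monte Carlo fitting mentioned above rests on a sampler of the random-walk Metropolis family. The sketch below is a generic, minimal version of that idea (it is not pytc's sampler, and the target, step size, and chain length are all illustrative), shown here sampling a standard normal log-posterior.

```python
import math
import random

def metropolis(logpost, x0, n=5000, step=1.0, seed=3):
    """Minimal random-walk Metropolis sampler: propose a Gaussian
    perturbation, accept with probability min(1, exp(delta log-posterior)).
    A sketch of the MCMC family used for Bayesian parameter estimation,
    not any particular package's implementation."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        lp_prop = logpost(proposal)
        # accept with probability min(1, exp(lp_prop - lp))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        chain.append(x)
    return chain
```

Run on `lambda x: -0.5 * x * x`, the chain's sample mean and variance approach 0 and 1, the moments of the standard normal target.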
Eternal non-Markovianity: from random unitary to Markov chain realisations.
Megier, Nina; Chruściński, Dariusz; Piilo, Jyrki; Strunz, Walter T
2017-07-25
The theoretical description of quantum dynamics in an intriguing way does not necessarily imply the underlying dynamics is indeed intriguing. Here we show how a known very interesting master equation with an always negative decay rate [eternal non-Markovianity (ENM)] arises from simple stochastic Schrödinger dynamics (random unitary dynamics). Equivalently, it may be seen as arising from a mixture of Markov (semi-group) open system dynamics. Both these approaches lead to a more general family of CPT maps, characterized by a point within a parameter triangle. Our results show how ENM quantum dynamics can be realised easily in the laboratory. Moreover, we find a quantum time-continuously measured (quantum trajectory) realisation of the dynamics of the ENM master equation based on unitary transformations and projective measurements in an extended Hilbert space, guided by a classical Markov process. Furthermore, a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) representation of the dynamics in an extended Hilbert space can be found, with a remarkable property: there is no dynamics in the ancilla state. Finally, analogous constructions for two qubits extend these results from non-CP-divisible to non-P-divisible dynamics.
Predictive Interfaces for Long-Distance Tele-Operations
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas
2005-01-01
We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. First, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Second, we develop compensation for the time delay that exists when sending telemetry data from a remote operation point to robots located at low Earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time delay is introduced into the system, the rate of tele-operation slows down to mimic a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time delays.
To this end, we are developing an interface which will allow for us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
Multiple imputation for estimating the risk of developing dementia and its impact on survival.
Yu, Binbing; Saczynski, Jane S; Launer, Lenore
2010-10-01
Dementia, Alzheimer's disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. In this article we use multiple imputation to fit multi-state models for survival data with interval censoring and left truncation. This approach allows semi-Markov models in which survival after dementia depends on onset age. Such models can be used to estimate the cumulative risk of developing dementia in the presence of the competing risk of dementia-free death. Simulations are carried out to examine the performance of the proposed method. Data from the Honolulu Asia Aging Study are analyzed to estimate the age-specific and cumulative risks of dementia and to examine the effect of major risk factors on dementia onset and death.
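The illness-death structure with a competing risk, and the semi-Markov feature that post-onset survival depends on onset age, can be made concrete with a small simulation. All hazard rates below are hypothetical placeholders, not estimates from the Honolulu Asia Aging Study; with constant hazards the cumulative risk of dementia is simply rate_dem / (rate_dem + rate_death).

```python
import random

def dementia_risk(rate_dem=0.02, rate_death=0.03, n=20000, seed=11):
    """Monte Carlo sketch of the illness-death model: time to dementia
    onset competes with dementia-free death.  Survival after onset is
    drawn with a hazard that grows with onset age, which is the
    semi-Markov feature.  All rates are hypothetical.
    Returns (cumulative dementia risk, mean post-onset survival)."""
    rng = random.Random(seed)
    cases, post_onset = 0, 0.0
    for _ in range(n):
        t_dem = rng.expovariate(rate_dem)      # latent time to dementia onset
        t_death = rng.expovariate(rate_death)  # latent time to dementia-free death
        if t_dem < t_death:                    # dementia occurs first
            cases += 1
            # post-onset hazard depends on the onset age t_dem (semi-Markov)
            post_onset += rng.expovariate(0.1 + 0.001 * t_dem)
    return cases / n, post_onset / cases
```

With the rates above, the simulated cumulative risk is close to 0.02 / 0.05 = 0.4, the competing-risks value the multi-state model estimates from censored, truncated data.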
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
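The voxel-level Markov idea, that surface voxels flip state with probabilities depending on their neighbors, can be sketched on a 2D grid. The flip probabilities below are illustrative placeholders, not the maximum-likelihood estimates fitted in the paper, and a 2D grid stands in for the 3D voxel geometry.

```python
import random

def evolve_tumor_surface(grid, p_shrink=0.6, p_grow=0.1, steps=5, seed=4):
    """Voxel-level Markov sketch of tumor geometry change on a 2D grid
    (1 = tumor, 0 = non-tumor).  A tumor voxel with a non-tumor
    neighbor (a surface voxel) shrinks with probability p_shrink; a
    non-tumor voxel adjacent to tumor grows with probability p_grow.
    Probabilities are illustrative, not MLE-fitted values."""
    rng = random.Random(seed)
    n = len(grid)
    for _ in range(steps):
        nxt = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                # count tumor voxels among the 8 neighbors
                nb = sum(grid[x][y]
                         for x in (i - 1, i, i + 1)
                         for y in (j - 1, j, j + 1)
                         if 0 <= x < n and 0 <= y < n and (x, y) != (i, j))
                if grid[i][j] and nb < 8:          # surface tumor voxel
                    if rng.random() < p_shrink:
                        nxt[i][j] = 0
                elif not grid[i][j] and nb > 0:    # non-tumor voxel next to tumor
                    if rng.random() < p_grow:
                        nxt[i][j] = 1
        grid = nxt
    return grid
```

With p_shrink = 1 and p_grow = 0 the dynamics peel the tumor layer by layer, so a small block vanishes in a few steps, a degenerate case useful for checking the update rule.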
Revisiting Temporal Markov Chains for Continuum modeling of Transport in Porous Media
NASA Astrophysics Data System (ADS)
Delgoshaie, A. H.; Jenny, P.; Tchelepi, H.
2017-12-01
The transport of fluids in porous media is dominated by flow-field heterogeneity resulting from the underlying permeability field. Due to the high uncertainty in the permeability field, many realizations of the reference geological model are used to describe the statistics of the transport phenomena in a Monte Carlo (MC) framework. There has been strong interest in working with stochastic formulations of the transport that are different from the standard MC approach. Several stochastic models based on a velocity process for tracer particle trajectories have been proposed. Previous studies have shown that for high variances of the log-conductivity, the stochastic models need to account for correlations between consecutive velocity transitions to predict dispersion accurately. The correlated velocity models proposed in the literature can be divided into two general classes of temporal and spatial Markov models. Temporal Markov models have been applied successfully to tracer transport in both the longitudinal and transverse directions. These temporal models are Stochastic Differential Equations (SDEs) with very specific drift and diffusion terms tailored for a specific permeability correlation structure. The drift and diffusion functions devised for a certain setup would not necessarily be suitable for a different scenario, (e.g., a different permeability correlation structure). The spatial Markov models are simple discrete Markov chains that do not require case specific assumptions. However, transverse spreading of contaminant plumes has not been successfully modeled with the available correlated spatial models. Here, we propose a temporal discrete Markov chain to model both the longitudinal and transverse dispersion in a two-dimensional domain. We demonstrate that these temporal Markov models are valid for different correlation structures without modification. 
Similar to the temporal SDEs, the proposed model respects the limited asymptotic transverse spreading of the plume in two-dimensional problems.
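A temporal discrete Markov chain over velocity classes, of the kind proposed above, is straightforward to simulate. The transition matrix and velocity classes below are purely illustrative (they are not calibrated to any permeability field); the sketch tracks longitudinal particle positions and returns the plume's mean and variance.

```python
import random

def simulate_plume(n_particles=2000, n_steps=200, seed=5):
    """Temporal Markov-chain sketch: each particle's velocity class
    (slow / medium / fast) evolves as a discrete Markov chain in time,
    mimicking correlated velocity transitions in a heterogeneous
    porous medium.  Transition matrix and velocities are illustrative."""
    P = [[0.80, 0.15, 0.05],   # row s: transition probs out of class s
         [0.10, 0.80, 0.10],
         [0.05, 0.15, 0.80]]
    v = [0.1, 1.0, 3.0]        # velocity of each class
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        s, x = 1, 0.0
        for _ in range(n_steps):
            x += v[s]
            r, c = rng.random(), 0.0
            for j, p in enumerate(P[s]):    # sample next velocity class
                c += p
                if r < c:
                    s = j
                    break
        positions.append(x)
    mean = sum(positions) / n_particles
    var = sum((p - mean) ** 2 for p in positions) / n_particles
    return mean, var
```

The growth of the plume variance with time is the dispersion such chain models aim to reproduce; changing the transition matrix changes the velocity correlation without altering the model structure.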
TaggerOne: joint named entity recognition and normalization with semi-Markov Models
Leaman, Robert; Lu, Zhiyong
2016-01-01
Motivation: Text mining is increasingly used to manage the accelerating pace of the biomedical literature. Many text mining applications depend on accurate named entity recognition (NER) and normalization (grounding). While high performing machine learning methods trainable for many entity types exist for NER, normalization methods are usually specialized to a single entity type. NER and normalization systems are also typically used in a serial pipeline, causing cascading errors and limiting the ability of the NER system to directly exploit the lexical information provided by the normalization. Methods: We propose the first machine learning model for joint NER and normalization during both training and prediction. The model is trainable for arbitrary entity types and consists of a semi-Markov structured linear classifier, with a rich feature approach for NER and supervised semantic indexing for normalization. We also introduce TaggerOne, a Java implementation of our model as a general toolkit for joint NER and normalization. TaggerOne is not specific to any entity type, requiring only annotated training data and a corresponding lexicon, and has been optimized for high throughput. Results: We validated TaggerOne with multiple gold-standard corpora containing both mention- and concept-level annotations. Benchmarking results show that TaggerOne achieves high performance on diseases (NCBI Disease corpus, NER f-score: 0.829, normalization f-score: 0.807) and chemicals (BioCreative 5 CDR corpus, NER f-score: 0.914, normalization f-score 0.895). These results compare favorably to the previous state of the art, notwithstanding the greater flexibility of the model. We conclude that jointly modeling NER and normalization greatly improves performance. 
Availability and Implementation: The TaggerOne source code and an online demonstration are available at: http://www.ncbi.nlm.nih.gov/bionlp/taggerone Contact: zhiyong.lu@nih.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27283952
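The semi-Markov structured classifier at the heart of the model scores whole segments rather than single tokens, so segmentation and labeling are chosen jointly. The sketch below is a toy segment-level Viterbi with a hypothetical lexicon scorer standing in for TaggerOne's rich NER/normalization features; all names and scores are illustrative.

```python
def semi_markov_segment(tokens, score, max_len=3):
    """Segment-level (semi-Markov) Viterbi: jointly chooses a
    segmentation of the token sequence and a label for each segment.
    `score(segment, label)` stands in for rich feature scores."""
    n = len(tokens)
    labels = ("O", "ENT")
    best = [0.0] + [float("-inf")] * n
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for k in range(1, min(max_len, i) + 1):   # candidate segment length
            seg = tuple(tokens[i - k:i])
            for lab in labels:
                s = best[i - k] + score(seg, lab)
                if s > best[i]:
                    best[i], back[i] = s, (i - k, lab)
    out, i = [], n
    while i > 0:                                   # recover the best path
        j, lab = back[i]
        out.append((tokens[j:i], lab))
        i = j
    return list(reversed(out))

def toy_score(seg, lab):
    """Hypothetical scorer: known entity mentions get a bonus."""
    lexicon = {("breast", "cancer"), ("ovarian", "cancer")}
    if lab == "ENT":
        return 2.0 if seg in lexicon else -2.0
    return 0.5 if len(seg) == 1 else -1.0
```

On `["patients", "with", "breast", "cancer"]` this labels "breast cancer" as a single ENT segment, showing how lexicon (normalization) evidence can directly steer the segmentation, the coupling the joint model exploits.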
NASA Astrophysics Data System (ADS)
Qi, Hui; Zhang, Xi-meng
2017-10-01
With the aid of the Green function method and the image method, the problem of scattering of SH-waves by a semi-cylindrical salient near a vertical interface in a bi-material half-space is considered in order to obtain its steady-state response. Firstly, by means of the image method, the Green function, i.e., the fundamental solution of the displacement field, is constructed so as to satisfy the stress-free condition on the horizontal boundary in a right-angle space that includes a semi-cylindrical salient and bears a harmonic out-of-plane line source force at any point on the vertical boundary. Secondly, the bi-material is separated into two parts along the vertical interface; unknown anti-plane forces are then applied on the vertical interface, and, according to the continuity condition, Fredholm integral equations of the first kind are established to determine the unknown anti-plane forces by "the conjunction method"; the integral equations are then reduced to linear algebraic equations by effective truncation. Finally, the dynamic stress concentration factor (DSCF) around the edge of the semi-cylindrical salient is calculated, and the influences on the DSCF of the incident wave number, the incident angle, the effect of the interface, different combinations of material parameters, etc., are discussed.
Shi, Xu; Barnes, Robert O; Chen, Li; Shajahan-Haq, Ayesha N; Hilakivi-Clarke, Leena; Clarke, Robert; Wang, Yue; Xuan, Jianhua
2015-07-15
Identification of protein interaction subnetworks is an important step to help us understand complex molecular mechanisms in cancer. In this paper, we develop a BMRF-Net package, implemented in Java and C++, to identify protein interaction subnetworks based on a bagging Markov random field (BMRF) framework. By integrating gene expression data and protein-protein interaction data, this software tool can be used to identify biologically meaningful subnetworks. A user-friendly graphical user interface is developed as a Cytoscape plugin for the BMRF-Net software to handle the input/output interface. The detailed structure of the identified networks can be visualized conveniently in Cytoscape. The BMRF-Net package has been applied to breast cancer data to identify significant subnetworks related to breast cancer recurrence. The BMRF-Net package is available at http://sourceforge.net/projects/bmrfcjava/. The package is tested under Ubuntu 12.04 (64-bit), Java 7, glibc 2.15 and Cytoscape 3.1.0. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the Hybrid Automated Reliability Predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Testing the Adequacy of a Semi-Markov Process
2015-09-17
emphasis on covariate detection. … BMI Model Analysis: According to the Centers for Disease Control and Prevention (CDC) website [24], childhood obesity … identified an increasing trend over a span of merely four years. Previous studies have drawn correlations between childhood obesity and various childhood … started in recent years targeting poor dieting and lack of exercise, two of the primary causal factors in childhood obesity according to the CDC.
Hettle, Robert; Posnett, John; Borrill, John
2015-01-01
The aim of this paper is to describe a four health-state, semi-Markov model structure with health states defined by initiation of subsequent treatment, designed to make best possible use of the data available from a phase 2 clinical trial. The approach is illustrated using data from a sub-group of patients enrolled in a phase 2 clinical trial of olaparib maintenance therapy in patients with platinum-sensitive relapsed ovarian cancer and a BRCA mutation (NCT00753545). A semi-Markov model was developed with four health states: progression-free survival (PFS), first subsequent treatment (FST), second subsequent treatment (SST), and death. Transition probabilities were estimated by fitting survival curves to trial data for time from randomization to FST, time from FST to SST, and time from SST to death. Survival projections generated by the model are broadly consistent with the outcomes observed in the clinical trial. However, limitations of the trial data (small sample size, immaturity of the PFS and overall survival [OS] end-points, and treatment switching) create uncertainty in estimates of survival. The model framework offers a promising approach to evaluating cost-effectiveness of a maintenance therapy for patients with cancer, which may be generalizable to other chronic diseases.
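As a rough illustration of the four-state structure, the following sketch simulates patient trajectories through PFS, FST, SST and death with hypothetical Weibull sojourn times; the parameters are invented for illustration, not the trial's fitted values.

```python
import random
import math

random.seed(7)

def weibull(shape, scale):
    """Sample a Weibull(shape, scale) sojourn time (months)."""
    u = 1.0 - random.random()  # avoid log(0)
    return scale * (-math.log(u)) ** (1.0 / shape)

def simulate_patient():
    """PFS -> FST -> SST -> death; returns total survival time in months."""
    t = 0.0
    t += weibull(1.3, 12.0)  # randomization to first subsequent treatment (hypothetical)
    t += weibull(1.1, 8.0)   # FST to SST (hypothetical)
    t += weibull(1.0, 6.0)   # SST to death (hypothetical)
    return t

times = [simulate_patient() for _ in range(5000)]
mean_os = sum(times) / len(times)
print(round(mean_os, 1))
```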
NASA Astrophysics Data System (ADS)
Chen, Junhua
2013-03-01
To cope with a large amount of data in current sensed environments, decision aid tools should provide their understanding of situations in a time-efficient manner, so there is an increasing need for real-time network security situation awareness and threat assessment. In this study, a state transition model of network vulnerabilities based on a semi-Markov process is first proposed. Once events are triggered by an attacker's action or a system response, the current states of the vulnerabilities are known. We then calculate the transition probabilities of a vulnerability from its current state to the security failure state. Furthermore, to improve the accuracy of our algorithms, we adjust the exploit probabilities according to the attacker's skill level. In light of the preconditions and post-conditions of vulnerabilities in the network, an attack graph is built to visualize the security situation in real time. Subsequently, we predict attack paths, recognize attack intentions and estimate the impact through analysis of the attack graph. These help administrators gain insight into intrusion steps, determine the security state and assess the threat. Finally, testing in a network shows that this method is reasonable and feasible, and can take on a tremendous analysis burden to facilitate administrators' work.
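The probability of reaching a security failure state from the current state can be computed with the standard absorbing-chain machinery. A minimal sketch with a hypothetical four-state vulnerability chain (the states and probabilities are invented, not the paper's model):

```python
import numpy as np

# Hypothetical vulnerability states: 0 = discovered, 1 = exploited (transient);
# 2 = patched, 3 = security failure (both absorbing).
P = np.array([
    [0.0, 0.6, 0.4, 0.0],   # discovered -> exploited or patched
    [0.0, 0.0, 0.3, 0.7],   # exploited  -> patched or failure
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

Q = P[:2, :2]                       # transient-to-transient block
R = P[:2, 2:]                       # transient-to-absorbing block
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
B = N @ R                           # B[i, j] = P(absorbed in j | start in i)

print(B[0])  # absorption probabilities from the "discovered" state
```

Adjusting the exploit probability (here the 0.7 entry) for attacker skill simply changes `P` before the same calculation.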
A flowgraph model for bladder carcinoma
2014-01-01
Background: Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease still remains poorly understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process. However, this approach is insufficient for a more complete model of the evolution of the disease. The semi-Markov framework allows a more realistic approach, but calculations frequently become intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results: We have built a successful model for a simple but representative case. Conclusion: The flowgraph approach is suitable for modeling superficial bladder cancer. PMID:25080066
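Flowgraph models combine branch transmittances p·M(s), where M is the moment generating function of the branch waiting time: series branches multiply, parallel branches add, and moments of the first-passage time follow by differentiating the total transmittance at s = 0. A minimal sketch with hypothetical exponential stage times, not the paper's fitted model:

```python
# Each branch carries a transmittance p * M(s); here all branch
# probabilities are 1, so only the MGFs matter. The mean first-passage
# time is T'(0) when the total transmittance satisfies T(0) = 1.

def exp_mgf(rate):
    """MGF of an exponential waiting time: M(s) = rate / (rate - s)."""
    return lambda s: rate / (rate - s)

def series(*branches):
    """Series combination: transmittances multiply."""
    def T(s):
        out = 1.0
        for b in branches:
            out *= b(s)
        return out
    return T

# Hypothetical two-stage disease path with mean stage times 2 and 3.
T = series(exp_mgf(0.5), exp_mgf(1.0 / 3.0))

h = 1e-6
mean_passage = (T(h) - T(-h)) / (2 * h)  # numerical T'(0)
print(round(mean_passage, 3))
```

For a series path the mean passage time is just the sum of the stage means, which makes the numerical derivative easy to check.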
ADM guidance-Ceramics: all-ceramic multilayer interfaces in dentistry.
Lohbauer, Ulrich; Scherrer, Susanne S; Della Bona, Alvaro; Tholey, Michael; van Noort, Richard; Vichi, Alessandro; Kelly, J Robert; Cesar, Paulo F
2017-06-01
This guidance document describes the specific issues involved in dental multilayer ceramic systems. The material interactions with regard to specific thermal and mechanical properties are reviewed and the characteristics of dental tooth-shaped processing parameters (sintering, geometry, thickness ratio, etc.) are discussed. Several techniques for the measurement of bond quality and residual stresses are presented with a detailed discussion of advantages and disadvantages. In essence no single technique is able to describe adequately the all-ceramic interface. Invasive or semi-invasive methods have been shown to distort the information regarding the residual stress state while non-invasive methods are limited due to resolution, field of focus or working depth. This guidance document has endeavored to provide a scientific basis for future research aimed at characterizing the ceramic interface of dental restorations. Along with the methodological discussion it is seeking to provide an introduction and guidance to relatively inexperienced researchers.
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
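The chord length sampling idea, generating material interfaces on the fly along the particle's path, can be sketched in one dimension. The toy below is not the paper's 3D CLS or PBS code, and all cross sections and chord lengths are invented; it scores only uncollided transmission through a slab of a binary Markov mixture.

```python
import random

random.seed(1)

# Hypothetical 1D binary Markov mixture: mean chord lengths and total
# cross sections of the two materials; every collision counted as absorption.
mean_chord = [1.0, 2.0]
sigma_t = [1.5, 0.1]
slab = 5.0

def transmitted():
    """One particle, straight-ahead flight, chords sampled on the fly."""
    x, mat = 0.0, random.randint(0, 1)
    interface = x + random.expovariate(1.0 / mean_chord[mat])
    while True:
        flight = random.expovariate(sigma_t[mat])
        if x + flight >= slab and interface >= slab:
            return True                   # escapes through the far face
        if x + flight < interface:
            return False                  # collides before the next interface
        # Cross into the next chord; the exponential flight is memoryless,
        # so it is resampled in the new material.
        x, mat = interface, 1 - mat
        interface = x + random.expovariate(1.0 / mean_chord[mat])

n = 20000
T = sum(transmitted() for _ in range(n)) / n
print(round(T, 3))
```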
Relaxation mechanisms, structure and properties of semi-coherent interfaces
Shao, Shuai; Wang, Jian
2015-10-15
In this work, using the Cu–Ni (111) semi-coherent interface as a model system, we combine atomistic simulations and defect theory to reveal the relaxation mechanisms, structure, and properties of semi-coherent interfaces. By calculating the generalized stacking fault energy (GSFE) profile of the interface, two stable structures and a high-energy structure are located. During the relaxation, the regions that possess the stable structures expand and develop into coherent regions; the regions with the high-energy structure shrink into the intersections of misfit dislocations (nodes). This process reduces the interface excess potential energy but increases the core energy of the misfit dislocations and nodes. The core width is dependent on the GSFE of the interface. The high-energy structure relaxes by relative rotation and dilatation between the crystals. The relative rotation is responsible for the spiral pattern at nodes. The relative dilatation is responsible for the creation of free volume at nodes, which facilitates the nodes' structural transformation. Several node structures have been observed and analyzed. In conclusion, the various structures have significant impact on plastic deformation in terms of lattice dislocation nucleation, as well as on point defect formation energies.
One-Handed Thumb Use on Smart Phones by Semi-literate and Illiterate Users in India
NASA Astrophysics Data System (ADS)
Katre, Dinesh
There is a tremendous potential for developing mobile-based productivity tools and occupation-specific applications for semi-literate and illiterate users in India. One-handed thumb use on the touchscreen of a smart phone is considered a more effective alternative to the use of a stylus or index finger, as it frees the other hand to support the occupational activity. In this context, usability research and experimental tests are conducted to understand the role of fine motor control, the usability of the thumb as the interaction apparatus, and the ergonomic needs of users. The paper also touches upon cultural, racial and anthropometric aspects, which need due consideration while designing the mobile interface. Design recommendations are presented to enhance the effectiveness of one-handed thumb use on smart phones, especially for the benefit of semi-literate and illiterate users.
Fabrication of Ohmic contact on semi-insulating 4H-SiC substrate by laser thermal annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Yue; Lu, Wu-yue; Wang, Tao
The Ni contact layer was deposited on a semi-insulating 4H-SiC substrate by magnetron sputtering. The as-deposited samples were treated by rapid thermal annealing (RTA) and KrF excimer laser thermal annealing (LTA), respectively. The RTA-annealed sample is rectifying while the LTA sample is Ohmic. The specific contact resistance (ρ_c) is 1.97 × 10⁻³ Ω·cm², as determined by the circular transmission line model. High-resolution transmission electron microscopy morphologies and selected-area electron diffraction patterns demonstrate that a 3C-SiC transition zone is formed in the near-interface region of the SiC after the as-deposited sample is treated by LTA, which is responsible for the Ohmic contact formation in the semi-insulating 4H-SiC.
PySeqLab: an open source Python package for sequence labeling and segmentation.
Allam, Ahmed; Krauthammer, Michael
2017-11-01
Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models, from (i) first-order to higher-order linear-chain CRFs, and from (ii) first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters, such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online.
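Viterbi decoding, the inference step named above, can be illustrated on a toy first-order chain. The states, probabilities and observations below are invented for illustration and are not PySeqLab's API:

```python
import math

# Minimal first-order Viterbi decoder over a toy 2-state model.
states = ["A", "B"]
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}

def viterbi(obs):
    """Return the most probable state sequence for the observations."""
    V = [{s: math.log(start[s] * emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            ptr[s] = prev
            col[s] = V[-1][prev] + math.log(trans[prev][s] * emit[s][o])
        V.append(col)
        back.append(ptr)
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["x", "x", "y"]))
```

Semi-CRF decoding generalizes this recursion from single positions to variable-length segments, but the max-over-predecessors structure is the same.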
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.
Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S
2018-01-01
In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.
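The partitioned survival approach mentioned above reads state membership directly off survival curves rather than deriving it from transition probabilities. A minimal sketch with hypothetical Weibull PFS and OS curves (the parameters are invented):

```python
import math

def pfs(t):
    """Progression-free survival curve (hypothetical Weibull)."""
    return math.exp(-((t / 10.0) ** 1.2))

def os(t):
    """Overall survival curve (hypothetical Weibull, heavier tail)."""
    return math.exp(-((t / 24.0) ** 1.1))

def membership(t):
    """Partition the cohort at time t using the two curves directly."""
    s_pfs, s_os = pfs(t), os(t)
    s_pfs = min(s_pfs, s_os)  # enforce PFS <= OS where the curves cross
    return {"progression_free": s_pfs,
            "progressed": s_os - s_pfs,
            "dead": 1.0 - s_os}

m = membership(12.0)
print({k: round(v, 3) for k, v in m.items()})
```

Note the contrast with a state-transition model: here the "progressed" share is a residual between two independently fitted curves, which is one source of the structural concerns the authors raise.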
NASA Technical Reports Server (NTRS)
Trivedi, K. S.; Geist, R. M.
1981-01-01
The CARE 3 reliability model for aircraft avionics and control systems is described by utilizing a number of examples which frequently use state-of-the-art mathematical modeling techniques as a basis for their exposition. Behavioral decomposition followed by aggregation was used in an attempt to deal with reliability models with a large number of states. A comprehensive set of models of the fault-handling processes in a typical fault-tolerant system was used. These models were semi-Markov in nature, thus removing the usual restriction of exponential holding times within the coverage model. The aggregate model is a non-homogeneous Markov chain, thus allowing the times to failure to possess Weibull-like distributions. Because of the departures from traditional models, the solution method employed is that of Kolmogorov integral equations, which are evaluated numerically.
Motion of kinesin in a viscoelastic medium
NASA Astrophysics Data System (ADS)
Knoops, Gert; Vanderzande, Carlo
2018-05-01
Kinesin is a molecular motor that transports cargo along microtubules. The results of many in vitro experiments on kinesin-1 are described by kinetic models in which one transition corresponds to the forward motion and subsequent binding of the tethered motor head. We argue that in a viscoelastic medium like the cytosol of a cell this step is not Markov and has to be described by a nonexponential waiting time distribution. We introduce a semi-Markov kinetic model for kinesin that takes this effect into account. We calculate, for arbitrary waiting time distributions, the moment generating function of the number of steps made, and determine from this the average velocity and the diffusion constant of the motor. We illustrate our results for the case of a waiting time distribution that is Weibull. We find that for realistic parameter values, viscoelasticity decreases the velocity and the diffusion constant, but increases the randomness (or Fano factor).
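For a renewal (semi-Markov) stepper with i.i.d. waiting times τ and step size d, standard renewal theory gives v = d/⟨τ⟩, D = d²·Var(τ)/(2⟨τ⟩³), and randomness r = Var(τ)/⟨τ⟩². The sketch below evaluates these for a Weibull waiting time; the 8 nm step is the standard kinesin value, but the Weibull shape and scale are illustrative, not fitted parameters from the paper.

```python
import math

d = 8e-9  # kinesin step size, 8 nm

def weibull_moments(shape, scale):
    """Mean and variance of a Weibull(shape, scale) waiting time."""
    m1 = scale * math.gamma(1.0 + 1.0 / shape)
    m2 = scale ** 2 * math.gamma(1.0 + 2.0 / shape)
    return m1, m2 - m1 ** 2

mean_tau, var_tau = weibull_moments(0.7, 0.01)  # hypothetical parameters

v = d / mean_tau                                # mean velocity
D = d ** 2 * var_tau / (2.0 * mean_tau ** 3)    # diffusion constant
r = var_tau / mean_tau ** 2                     # randomness (Fano factor)
print(round(r, 3))
```

A shape parameter below 1 (heavier tail than exponential, as viscoelasticity suggests) pushes the randomness above the Markov value r = 1.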
El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar
2014-11-01
Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
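On the product space (rate class × character state), the generator of a Markov-modulated Markov chain can be assembled from Kronecker products. A toy construction, with matrices invented for illustration rather than taken from the paper:

```python
import numpy as np

# The character evolves with base generator Q scaled by a class-specific
# rate, while the rate class switches with generator M. On the product
# space: G = diag(rates) (x) Q + M (x) I, with (x) the Kronecker product.
Q = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])          # toy 2-state substitution generator
M = np.array([[-0.1,  0.1],
              [ 0.2, -0.2]])          # toy 2-class rate-switching generator
rates = np.array([0.5, 2.0])          # hypothetical slow/fast rate classes

G = np.kron(np.diag(rates), Q) + np.kron(M, np.eye(Q.shape[0]))
print(np.allclose(G.sum(axis=1), 0.0))  # valid generator: rows sum to zero
```

Likelihood computation then needs the matrix exponential of `G` over each branch length; the paper's contribution is a fast diagonalization of exactly this kind of generator when the number of classes and states is large.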
Photoreflectance from GaAs and GaAs/GaAs interfaces
NASA Astrophysics Data System (ADS)
Sydor, Michael; Angelo, James; Wilson, Jerome J.; Mitchel, W. C.; Yen, M. Y.
1989-10-01
Photoreflectance from semi-insulating GaAs, and GaAs/GaAs interfaces, is discussed in terms of its behavior with temperature, doping, epilayer thickness, and laser intensity. Semi-insulating substrates show an exciton-related band-edge signal below 200 K and an impurity-related photoreflectance above 400 K. At intermediate temperatures the band-edge signal from thin GaAs epilayers contains a contribution from the epilayer-substrate interface. The interface effect depends on the epilayer's thickness, doping, and carrier mobility. The effect broadens the band-edge photoreflectance by 5-10 meV, and artificially lowers the estimates for the critical-point energy, ECP, obtained through the customary third-derivative functional fit to the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shutthanandan, Vaithiyalingam; Choudhury, Samrat; Manandhar, Sandeep
To understand how variations in interface properties such as misfit-dislocation density and local chemistry affect radiation-induced defect absorption and recombination, we have explored a model system of CrxV1-x alloy epitaxial films deposited on MgO single crystals. By controlling film composition, the lattice mismatch with MgO was adjusted so that the misfit-dislocation density varies at the interface. These interfaces were exposed to irradiation, and in situ results show that the film with a semi-coherent interface (Cr) withstands irradiation, while the V film, which has a similar semi-coherent interface, showed the largest damage. Theoretical calculations indicate that, unlike at metal/metal interfaces, the misfit dislocation density does not dominate radiation damage tolerance at metal/oxide interfaces. Rather, the stoichiometry, and the precise location of the misfit dislocations relative to the interface, drives defect behavior. Together, these results demonstrate the sensitivity of defect recombination to interfacial chemistry and provide new avenues for engineering radiation-tolerant nanomaterials.
Glide dislocation nucleation from dislocation nodes at semi-coherent {111} Cu–Ni interfaces
Shao, Shuai; Wang, Jian; Beyerlein, Irene J.; ...
2015-07-23
Using atomistic simulations and dislocation theory on a model system of semi-coherent {1 1 1} interfaces, we show that misfit dislocation nodes adopt multiple atomic arrangements corresponding to the creation and redistribution of excess volume at the nodes. We identified four distinctive node structures: volume-smeared nodes with (i) spiral or (ii) straight dislocation patterns, and volume-condensed nodes with (iii) triangular or (iv) hexagonal dislocation patterns. Volume-smeared nodes contain interfacial dislocations lying in the Cu–Ni interface, but volume-condensed nodes contain two sets of interfacial dislocations in the two adjacent interfaces and jogs across the atomic layer between the two adjacent interfaces. Finally, under biaxial tension/compression applied parallel to the interface, we show that the nucleation of lattice dislocations is preferred at the nodes and is correlated with the reduction of excess volume at the nodes.
AQBE — QBE Style Queries for Archetyped Data
NASA Astrophysics Data System (ADS)
Sachdeva, Shelly; Yaginuma, Daigo; Chu, Wanming; Bhalla, Subhash
Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture for repository services for patient health records. These also require query interfaces at multiple levels, including one suited to semi-skilled users. In this regard, this study examines a high-level user interface for querying the new form of standardized Electronic Health Records system. It proposes a step-by-step graphical query interface that allows semi-skilled users to write queries. Its aim is to decrease user effort and communication ambiguities, and to increase user friendliness.
Multi-level Operational C2 Holonic Reference Architecture Modeling for MHQ with MOC
2009-06-01
… is defined as the task success probability, based on the asset allocation and task execution activities at the tactical level … on outcomes of asset-task allocation at the tactical level. We employ a semi-Markov decision process (SMDP) approach to decide on missions to be … (AGA) graph for addressing the mission monitoring/planning issues related to task sequencing and asset allocation at the OLC-TLC layer (coordination …
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
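The panel-data setting can be mimicked by simulating a semi-Markov path and recording only the states seen at scheduled visits. A toy sketch loosely modeled on the four dementia states; all sojourn distributions and transition weights are invented:

```python
import math
import random

random.seed(3)

# States: 0 = intact, 1 = impaired, 2 = dementia, 3 = death (absorbing).
def sojourn(state):
    """Weibull sojourn time in a transient state (hypothetical parameters)."""
    shape, scale = [(1.5, 4.0), (1.2, 3.0), (1.0, 2.0)][state]
    u = 1.0 - random.random()  # avoid log(0)
    return scale * (-math.log(u)) ** (1.0 / shape)

def next_state(state):
    """Transitions, including a back transition from impaired to intact."""
    if state == 0:
        return 1
    if state == 1:
        return random.choices([0, 2], weights=[0.3, 0.7])[0]
    return 3

def panel(visits):
    """Record the state occupied at each visit time (intermittent observation)."""
    state, path = 0, []
    jump = sojourn(0)  # time of the next transition
    for v in visits:
        while state != 3 and jump <= v:
            state = next_state(state)
            if state != 3:
                jump += sojourn(state)
        path.append(state)
    return path

print(panel([2.0, 4.0, 6.0, 8.0, 10.0]))
```

The estimation problem the paper addresses is the inverse of this simulation: recovering the sojourn and transition parameters from such panel sequences, where the intractable likelihood motivates the stochastic EM approach.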
Revisiting the Al/Al₂O₃ interface: coherent interfaces and misfit accommodation.
Pilania, Ghanshyam; Thijsse, Barend J; Hoagland, Richard G; Lazić, Ivan; Valone, Steven M; Liu, Xiang-Yang
2014-03-27
We study the coherent and semi-coherent Al/α-Al2O3 interfaces using molecular dynamics simulations with a mixed, metallic-ionic atomistic model. For the coherent interfaces, both Al-terminated and O-terminated nonstoichiometric interfaces have been studied and their relative stability has been established. To understand the misfit accommodation at the semi-coherent interface, a 1-dimensional (1D) misfit dislocation model and a 2-dimensional (2D) dislocation network model have been studied. For the latter case, our analysis reveals an interface dislocation structure with a network of three sets of parallel dislocations, each with pure-edge character, giving rise to a pattern of coherent and stacking-fault-like regions at the interface. Structural relaxation at elevated temperatures leads to a further change of the dislocation pattern, which can be understood in terms of a competition between the stacking fault energy and the dislocation interaction energy at the interface. Our results are expected to serve as an input for the subsequent dislocation dynamics models to understand and predict the macroscopic mechanical behavior of Al/α-Al2O3 composite heterostructures.
Hidden Semi-Markov Models and Their Application
NASA Astrophysics Data System (ADS)
Beyreuther, M.; Wassermann, J.
2008-12-01
In the framework of detection and classification of seismic signals there are several different approaches. Our choice for a more robust detection and classification algorithm is to adopt Hidden Markov Models (HMM), a technique showing major success in speech recognition. HMM provide a powerful tool to describe highly variable time series based on a doubly stochastic model and therefore allow for a broader class description than, e.g., template-based pattern matching techniques. Being a fully probabilistic model, HMM directly provide a confidence measure of an estimated classification. Furthermore, and in contrast to classic artificial neural networks or support vector machines, HMM incorporate the time dependence explicitly in the models, thus providing an adequate representation of the seismic signal. Like the majority of detection algorithms, HMM are not based on the time- and amplitude-dependent seismogram itself but on features estimated from the seismogram which characterize the different classes. Features, or in other words characteristic functions, are e.g. the sonogram bands, instantaneous frequency, instantaneous bandwidth or centroid time. In this study we apply continuous Hidden Semi-Markov Models (HSMM), an extension of continuous HMM. The duration probability of an HMM is an exponentially decaying function of time, which is not a realistic representation of the duration of an earthquake. In contrast, HSMM use Gaussians as duration probabilities, which results in a more adequate model. The HSMM detection and classification system is running online as an EARTHWORM module at the Bavarian Earthquake Service. Here the signals that are to be classified simply differ in epicentral distance. This makes it possible to easily decide whether a classification is correct or wrong and thus allows a better evaluation of the advantages and disadvantages of the proposed algorithm.
The evaluation is based on several months of continuous data, and the results are additionally compared to the previously published discrete HMM, continuous HMM and a classic STA/LTA.
Architecture for distributed design and fabrication
NASA Astrophysics Data System (ADS)
McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.
1997-01-01
We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading-edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.
Electronic properties of in-plane phase engineered 1T'/2H/1T' MoS2
NASA Astrophysics Data System (ADS)
Thakur, Rajesh; Sharma, Munish; Ahluwalia, P. K.; Sharma, Raman
2018-04-01
We present first-principles studies of semi-infinite phase-engineered MoS2 along the zigzag direction. The semiconducting (2H) and semi-metallic (1T') phases are known to be stable in thin-film MoS2. We describe the electronic and structural properties of the infinite 1T'/2H/1T' array. It is found that the 1T' phase induces semi-metallic character in the 2H phase beyond the interface, but only the Mo atoms in the 2H domain contribute to the semi-metallic nature, while the S atoms retain a semiconducting state. The 1T'/2H/1T' system can act as a typical n-p-n structure. A high hole concentration at the Mo layer of the interface also provides further positive potential barriers.
Analysis and design of a second-order digital phase-locked loop
NASA Technical Reports Server (NTRS)
Blasche, P. R.
1979-01-01
A specific second-order digital phase-locked loop (DPLL) was modeled as a first-order Markov chain with alternatives. From the matrix of transition probabilities of the Markov chain, the steady-state phase error of the DPLL was determined. In a similar manner the loop's response was calculated for a fading input. Additionally, a hardware DPLL was constructed and tested to provide a comparison to the results obtained from the Markov chain model. In all cases tested, good agreement was found between the theoretical predictions and the experimental data.
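The steady-state phase-error computation from a matrix of transition probabilities is a stationary-distribution calculation: solve πP = π with Σπ = 1. A sketch with a hypothetical three-bin phase-error chain (the transition probabilities are invented, not the DPLL's actual matrix):

```python
import numpy as np

# Toy 3-bin phase-error Markov chain (hypothetical transition probabilities).
P = np.array([
    [0.8, 0.2, 0.0],
    [0.1, 0.8, 0.1],
    [0.0, 0.3, 0.7],
])

# Stack the balance equations pi (P - I) = 0 with the normalization
# sum(pi) = 1 and solve the (consistent) overdetermined system.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 3))
```

The steady-state phase error is then the expectation of the phase value of each bin under `pi`.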
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007, 8:145) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically to the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).
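The effect of an insufficient iteration count can be illustrated with a minimal random-walk Metropolis sampler on a standard normal target (a toy stand-in, not the authors' non-homogeneous HMM sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(n_iter, step=1.0):
    """Random-walk Metropolis sampler targeting a standard normal density."""
    x, chain = 0.0, []
    for _ in range(n_iter):
        prop = x + step * rng.normal()
        # Accept with probability min(1, pi(prop) / pi(x)) for pi = N(0, 1).
        if np.log(rng.random()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain.append(x)
    return np.array(chain)

short = metropolis(100)      # severely insufficient: estimates remain noisy
long_ = metropolis(50_000)   # long run: sample variance approaches the true 1.0
print(short.var(), long_.var())
```

Even for this trivially simple target, a 100-iteration run gives unreliable moment estimates; for a structured model like an HMM the required chain length is far larger.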
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
Modeling Czochralski growth of oxide crystals for piezoelectric and optical applications
NASA Astrophysics Data System (ADS)
Stelian, C.; Duffar, T.
2018-05-01
Numerical modeling is applied to investigate the impact of crystal and crucible rotation on the flow pattern and crystal-melt interface shape in Czochralski growth of oxide semi-transparent crystals used for piezoelectric and optical applications. Two cases are simulated in the present work: the growth of piezoelectric langatate (LGT) crystals of 3 cm in diameter in an inductive furnace, and the growth of sapphire crystals of 10 cm in diameter in a resistive configuration. The numerical results indicate that the interface shape depends essentially on the internal radiative heat exchanges in the semi-transparent crystals. Computations performed by applying crystal/crucible rotation show that the interface can be flattened during LGT growth, while flat-interface growth of large diameter sapphire crystals may not be possible.
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow us to understand the half-life ranges that the detector can probe and to evaluate the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
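A minimal Hamiltonian Monte Carlo sampler, sketched here for a standard normal target, illustrates the leapfrog-plus-Metropolis-correction structure that Stan implements in far more sophisticated form; the step size and path length below are arbitrary choices for this toy case:

```python
import numpy as np

rng = np.random.default_rng(1)

def hmc(n_iter, eps=0.2, n_leap=10):
    """Minimal HMC for a standard normal target: U(q) = q^2 / 2, grad U = q."""
    q, chain = 0.0, []
    for _ in range(n_iter):
        p0 = rng.normal()                    # resample momentum
        q_new, p = q, p0 - 0.5 * eps * q     # leapfrog: initial half step
        for i in range(n_leap):
            q_new = q_new + eps * p
            if i < n_leap - 1:
                p = p - eps * q_new
        p = p - 0.5 * eps * q_new            # final half step
        # Metropolis correction for the integrator's energy error.
        h0 = 0.5 * q**2 + 0.5 * p0**2
        h1 = 0.5 * q_new**2 + 0.5 * p**2
        if np.log(rng.random()) < h0 - h1:
            q = q_new
        chain.append(q)
    return np.array(chain)

draws = hmc(20_000)
print(draws.mean(), draws.var())  # should approach 0 and 1
```

The gradient-guided trajectories are what give HMC its lower autocorrelation relative to random-walk Metropolis-Hastings on high-dimensional posteriors.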
TRU waste lead organization -- WIPP Project Office Interface Management semi-annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, J.V.; Gorton, J.M.
1985-05-01
The Charter establishing the Interface Control Board and the administrative organization to manage the interface of the TRU Waste Lead Organization and the WIPP Project Office also requires preparation of a summary report "describing significant interface activities." This report includes a discussion of Interface Working Group (IWG) recommendations and resolutions "considered and implemented" over the reporting period October 1984 to March 1985.
Revisiting the Al/Al2O3 Interface: Coherent Interfaces and Misfit Accommodation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilania, Ghanshyam; Thijsse, Barend J.; Hoagland, Richard G.
2014-03-27
We report the coherent and semi-coherent Al/α-Al2O3 interfaces using molecular dynamics simulations with a mixed, metallic-ionic atomistic model. For the coherent interfaces, both Al-terminated and O-terminated nonstoichiometric interfaces have been studied and their relative stability has been established. To understand the misfit accommodation at the semi-coherent interface, a one-dimensional (1D) misfit dislocation model and a two-dimensional (2D) dislocation network model have been studied. For the latter case, our analysis reveals an interface dislocation structure with a network of three sets of parallel dislocations, each with pure-edge character, giving rise to a pattern of coherent and stacking-fault-like regions at the interface. Structural relaxation at elevated temperatures leads to a further change of the dislocation pattern, which can be understood in terms of a competition between the stacking fault energy and the dislocation interaction energy at the interface. In conclusion, our results are expected to serve as an input for subsequent dislocation dynamics models to understand and predict the macroscopic mechanical behavior of Al/α-Al2O3 composite heterostructures.
Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.
2007-01-01
The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
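The distinction between local and semi-global alignment comes down to which end gaps are free in the dynamic program. A toy score-only version, with an illustrative match/mismatch/gap scheme rather than GLOBAL's statistical model, might look like:

```python
# Score-only semi-global alignment: the full domain must be aligned, but it
# may land anywhere in the protein, so end gaps in the protein cost nothing.
def semi_global_score(domain, protein, match=1, mismatch=-1, gap=-2):
    m, n = len(domain), len(protein)
    # dp[i][j]: best score of domain[:i] against protein ending at position j;
    # row 0 is all zeros because leading protein residues may be skipped freely.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + gap        # gaps inside the domain are penalized
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if domain[i - 1] == protein[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # (mis)match
                           dp[i - 1][j] + gap,     # domain residue vs gap
                           dp[i][j - 1] + gap)     # protein residue vs gap
    # Free trailing protein residues: best score anywhere in the last row.
    return max(dp[m])

print(semi_global_score("HEAG", "MKHEAGAWGHEE"))  # exact hit of all 4 residues -> 4
```

A fully local (Smith-Waterman-style) recurrence would instead also allow dropping domain residues for free, which is exactly what semi-global alignment forbids.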
Pseudo-transient heat transfer in vertical Bridgman crystal growth of semi-transparent materials
NASA Astrophysics Data System (ADS)
Barvinschi, F.; Nicoara, I.; Santailler, J. L.; Duffar, T.
1998-11-01
The temperature distribution and the solid-liquid interface shape during semi-transparent crystal growth have been studied by modelling a vertical Bridgman technique, using a pseudo-transient approximation in an ideal configuration. The heat transfer equation and the boundary conditions have been solved by the finite-element method. It has been pointed out that the optical absorption coefficients of the liquid and solid phases have a major effect on the thermal field, especially on the shape and location of the crystallization interface.
Batch Proving and Proof Scripting in PVS
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.
2007-01-01
The batch execution modes of PVS are powerful, but highly technical, features of the system that are mostly accessible to expert users. This paper presents a PVS tool, called ProofLite, that extends the theorem prover interface with a batch proving utility and a proof scripting notation. ProofLite enables a semi-literate proving style where specification and proof scripts reside in the same file. The goal of ProofLite is to provide batch proving and proof scripting capabilities to regular, non-expert, users of PVS.
Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models.
Martindale, Christine F; Hoenig, Florian; Strohrmann, Christina; Eskofier, Bjoern M
2017-10-13
Cyclic signals are an intrinsic part of daily life, such as human motion and heart activity. The detailed analysis of them is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost due to only limited annotations available under laboratory conditions or requiring manual segmentation of the data under less restricted conditions. This paper presents a smart annotation method that reduces this cost of labeling for sensor-based data, which is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning of sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually-annotated reference. The resulting model was also used to simultaneously segment and classify continuous, 'in the wild' data, demonstrating the applicability of using hHMM, trained on limited data sections, to label a complete dataset. This technique achieved comparable results to its fully-supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training of segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics such as those employed to analyze the progress of movement disorders or analysis of running technique.
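The segmentation machinery underneath a hierarchical HMM is ultimately Viterbi decoding; a minimal flat-HMM version with toy two-phase parameters (hypothetical, not the paper's gait model) is:

```python
import numpy as np

# Minimal Viterbi decoder: the flat building block that hierarchical HMMs
# stack. The two states are toy stand-ins for cycle phases (e.g. stance/swing).
def viterbi(obs, pi, A, B):
    """obs: observation indices; pi: initial, A: transition, B: emission."""
    n_states, T = A.shape[0], len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)   # score of each s -> s' move
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack the best state sequence.
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two phases emitting two observation classes (all parameters invented).
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 1, 0], pi, A, B))  # -> [0, 0, 1, 1, 1, 0]
```

The hierarchical variant replaces each state with a sub-HMM, which is what lets the model segment and classify whole cycles simultaneously.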
Wissel, Tobias; Pfeiffer, Tim; Frysch, Robert; Knight, Robert T.; Chang, Edward F.; Hinrichs, Hermann; Rieger, Jochem W.; Rose, Georg
2013-01-01
Objective: Support Vector Machines (SVM) have developed into a gold standard for accurate classification in Brain-Computer Interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of Hidden Markov Models (HMM) for online BCIs and discuss strategies to improve their performance. Approach: We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from the electrocorticograms of four subjects performing a finger-tapping experiment. The classifier decisions are based on a subset of low-frequency time-domain and high-gamma-oscillation features. Main results: We show that decoding optimization between the two approaches is due to the way features are extracted and selected, and is less dependent on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high-gamma cortical response providing the most important decoding information for both techniques. Significance: We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online brain-computer interfaces. PMID:24045504
Riederer, Markus; Daiss, Andreas; Gilbert, Norbert; Köhle, Harald
2002-08-01
The behaviour of (semi-)volatile organic compounds at the interface between the leaf surface and the atmosphere was investigated by finite-element numerical simulation. Three model systems with increasing complexity and closeness to the real situation were studied. The three-dimensional model systems were translated into appropriate grid structures and diffusive and convective transport in the leaf/atmosphere interface was simulated. Fenpropimorph (cis-4-[3-(4-tert-butylphenyl)-2-methylpropyl]-2,6-dimethylmorpholine) and Kresoxim-methyl ((E)-methyl-2-methoxyimino-2-[2-(o-tolyloxy-methyl)phenyl] acetate) were used as model compounds. The simulation showed that under still and convective conditions the vapours emitted by a point source rapidly form stationary envelopes around the leaves. Vapour concentrations within these unstirred layers depend on the vapour pressure of the compound in question and on its affinity to the lipoid surface layers of the leaf (cuticular waxes, cutin). The rules deduced from the numerical simulation of organic vapour behaviour in the leaf/atmosphere interface are expected to help in assessing how (semi-)volatile plant products (e.g. hormones, pheromones, secondary metabolites) and xenobiotics (e.g. pesticides, pollutants) perform on plant surfaces.
PointCom: semi-autonomous UGV control with intuitive interface
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham
2008-04-01
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.
Hollenbeak, Christopher S
2005-10-15
While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
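The Box-Cox family and its profile log-likelihood, the quantity compared across values of the transformation parameter, can be sketched as follows; the synthetic "cost" data and parameter values are illustrative only:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 is the log model, lam = -1 the transform
    the study found most appropriate for CABG costs."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    """Normal profile log-likelihood of the transformed data (up to a
    constant), including the Jacobian term (lam - 1) * sum(log y) that
    makes values comparable across lam."""
    z = boxcox(y, lam)
    return -0.5 * len(y) * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(3)
costs = 1.0 / rng.gamma(shape=8.0, scale=0.02, size=2000)  # skewed synthetic costs
for lam in (-1.0, 0.0, 0.5, 1.0):
    print(lam, round(profile_loglik(costs, lam), 1))
```

Maximizing this profile likelihood over lam is the classical counterpart to the Bayes-factor comparison of transformations used in the study.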
Water management can reinforce plant competition in salt-affected semi-arid wetlands
NASA Astrophysics Data System (ADS)
Coletti, Janaine Z.; Vogwill, Ryan; Hipsey, Matthew R.
2017-09-01
The diversity of vegetation in semi-arid, ephemeral wetlands is determined by niche availability and species competition, both of which are influenced by changes in water availability and salinity. Here, we hypothesise that ignoring physiological differences and competition between species when managing wetland hydrologic regimes can lead to a decrease in vegetation diversity, even when the overall wetland carrying capacity is improved. Using an ecohydrological model capable of resolving water-vegetation-salt feedbacks, we investigate why water surface and groundwater management interventions to combat vegetation decline have been more beneficial to Casuarina obesa than to Melaleuca strobophylla, the co-dominant tree species in Lake Toolibin, a salt-affected wetland in Western Australia. The simulations reveal that in trying to reduce the negative effect of salinity, the management interventions have created an environment favouring C. obesa by intensifying the climate-induced trend that the wetland has been experiencing of lower water availability and higher root-zone salinity. By testing alternative scenarios, we show that interventions that improve M. strobophylla biomass are possible by promoting hydrologic conditions that are less specific to the niche requirements of C. obesa. Modelling uncertainties were explored via a Markov Chain Monte Carlo (MCMC) algorithm. Overall, the study demonstrates the importance of including species differentiation and competition in ecohydrological models that form the basis for wetland management.
Multiphase Interface Tracking with Fast Semi-Lagrangian Contouring.
Li, Xiaosheng; He, Xiaowei; Liu, Xuehui; Zhang, Jian J; Liu, Baoquan; Wu, Enhua
2016-08-01
We propose a semi-Lagrangian method for multiphase interface tracking. In contrast to previous methods, our method maintains an explicit polygonal mesh, which is reconstructed from an unsigned distance function and an indicator function, to track the interface of arbitrary number of phases. The surface mesh is reconstructed at each step using an efficient multiphase polygonization procedure with precomputed stencils while the distance and indicator function are updated with an accurate semi-Lagrangian path tracing from the meshes of the last step. Furthermore, we provide an adaptive data structure, multiphase distance tree, to accelerate the updating of both the distance function and the indicator function. In addition, the adaptive structure also enables us to contour the distance tree accurately with simple bisection techniques. The major advantage of our method is that it can easily handle topological changes without ambiguities and preserve both the sharp features and the volume well. We will evaluate its efficiency, accuracy and robustness in the results part with several examples.
Variable context Markov chains for HIV protease cleavage site prediction.
Oğul, Hasan
2009-06-01
Deciphering the knowledge of HIV protease specificity and developing computational tools for detecting its cleavage sites in protein polypeptide chain are very desirable for designing efficient and specific chemical inhibitors to prevent acquired immunodeficiency syndrome. In this study, we developed a generative model based on a generalization of variable order Markov chains (VOMC) for peptide sequences and adapted the model for prediction of their cleavability by certain proteases. The new method, called variable context Markov chains (VCMC), attempts to identify the context equivalence based on the evolutionary similarities between individual amino acids. It was applied for HIV-1 protease cleavage site prediction problem and shown to outperform existing methods in terms of prediction accuracy on a common dataset. In general, the method is a promising tool for prediction of cleavage sites of all proteases and encouraged to be used for any kind of peptide classification problem as well.
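The backoff idea behind variable-order Markov prediction can be sketched with plain context counting; note that VCMC's context equivalence based on amino-acid similarity is deliberately omitted from this toy version:

```python
from collections import defaultdict

# Sketch of a variable-order Markov predictor with simple backoff: predict
# from the longest previously seen context, falling back to shorter ones.
class VOMC:
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for i, sym in enumerate(seq):
            for k in range(self.max_order + 1):
                if i - k >= 0:
                    self.counts[seq[i - k:i]][sym] += 1  # context -> next symbol

    def prob(self, context, sym):
        # Back off from the longest matching context to the empty one.
        for k in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            if ctx in self.counts:
                total = sum(self.counts[ctx].values())
                if total:
                    return self.counts[ctx][sym] / total
        return 0.0

m = VOMC(max_order=2)
m.train("ARNDARNDARND")
print(m.prob("AR", "N"))  # "AR" was always followed by "N" -> 1.0
```

A cleavage-site predictor would train one such model per class (cleavable vs. non-cleavable octamers) and classify by likelihood ratio.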
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
NASA Astrophysics Data System (ADS)
Liu, Zhen-Fei; Egger, David A.; Refaely-Abramson, Sivan; Kronik, Leeor; Neaton, Jeffrey B.
2017-03-01
The alignment of the frontier orbital energies of an adsorbed molecule with the substrate Fermi level at metal-organic interfaces is a fundamental observable of significant practical importance in nanoscience and beyond. Typical density functional theory calculations, especially those using local and semi-local functionals, often underestimate level alignment, leading to inaccurate electronic structure and charge transport properties. In this work, we develop a new fully self-consistent predictive scheme to accurately compute level alignment at certain classes of complex heterogeneous molecule-metal interfaces based on optimally tuned range-separated hybrid functionals. Starting from a highly accurate description of the gas-phase electronic structure, our method by construction captures important nonlocal surface polarization effects via tuning of the long-range screened exchange in a range-separated hybrid in a non-empirical and system-specific manner. We implement this functional in a plane-wave code and apply it to several physisorbed and chemisorbed molecule-metal interface systems. Our results are in quantitative agreement with experiments, for both the level alignment and work function changes. Our approach constitutes a new practical scheme for accurate and efficient calculations of the electronic structure of molecule-metal interfaces.
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
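The kind of Markov chain HARP ultimately solves can be illustrated with a tiny continuous-time reliability model; the two-unit redundant system and rates below are hypothetical, and the chain is simple enough to check against its closed-form unreliability:

```python
import numpy as np

# Tiny Markov reliability model of the kind HARP solves: a dual-redundant
# unit with per-unit failure rate lam. States: 0 = both good, 1 = one
# failed, 2 = system failed (absorbing). All rates are hypothetical.
lam = 1e-3  # failures per hour
Q = np.array([
    [-2 * lam,  2 * lam,  0.0],
    [0.0,      -lam,      lam],
    [0.0,       0.0,      0.0],
])

def unreliability(t, steps=10_000):
    """Integrate dp/dt = p Q with RK4; return P(system failed by time t)."""
    p = np.array([1.0, 0.0, 0.0])
    h = t / steps
    for _ in range(steps):
        k1 = p @ Q
        k2 = (p + 0.5 * h * k1) @ Q
        k3 = (p + 0.5 * h * k2) @ Q
        k4 = (p + h * k3) @ Q
        p = p + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return p[2]

t = 1000.0  # hours
print(unreliability(t))               # numerical result
print((1.0 - np.exp(-lam * t)) ** 2)  # analytic check for this simple chain
```

Sequence dependencies and coverage, which HARP handles via fault-tree conversion, would add states and transitions but not change the solution machinery.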
Nonparametric model validations for hidden Markov models with applications in financial econometrics
Zhao, Zhibiao
2011-01-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
1982-12-31
interfaces which are of importance in such semiconductor devices as MOSFETs, CCD devices, and photovoltaic devices. ... interfaces is interesting for the study of electrolytic cells. Our photoemission study reveals for the first time how the electronic structure of water
Positive zeta potential of a negatively charged semi-permeable plasma membrane
NASA Astrophysics Data System (ADS)
Sinha, Shayandev; Jing, Haoyuan; Das, Siddhartha
2017-08-01
The negative charge of the plasma membrane (PM) severely affects the nature of moieties that may enter or leave the cells and controls a large number of ion-interaction-mediated intracellular and extracellular events. In this letter, we report our discovery of a most fascinating scenario, where one interface (e.g., the membrane-cytosol interface) of the negatively charged PM shows a positive surface (or ζ) potential, while the other interface (e.g., the membrane-electrolyte interface) still shows a negative ζ potential. Therefore, we encounter a completely unexpected situation where an interface (e.g., the membrane-cytosol interface) that has a negative surface charge density demonstrates a positive ζ potential. We establish that the attainment of such a property by the membrane can be ascribed to an interplay of the nature of the membrane semi-permeability and the electrostatics of the electric double layer established on either side of the charged membrane. We anticipate that such a membrane property can lead to capabilities of the cell (in terms of accepting or releasing certain kinds of moieties as well as regulating cellular signaling) that were hitherto inconceivable.
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
Takagaki, Naohisa; Kurose, Ryoichi; Kimura, Atsushi; Komori, Satoru
2016-11-14
The mass transfer across a sheared gas-liquid interface strongly depends on the Schmidt number. Here we investigate the relationship between the mass transfer coefficient on the liquid side, kL, and the Schmidt number, Sc, in the wide range of 0.7 ≤ Sc ≤ 1000. We apply a three-dimensional semi-direct numerical simulation (SEMI-DNS), in which the mass transfer is solved based on an approximated deconvolution model (ADM) scheme, to wind-driven turbulence with mass transfer across a sheared wind-driven wavy gas-liquid interface. In order to capture the deforming gas-liquid interface, an arbitrary Lagrangian-Eulerian (ALE) method is employed. Our results show that, similar to the case for flat gas-liquid interfaces, kL for the wind-driven wavy gas-liquid interface is generally proportional to Sc−0.5, and can be roughly estimated by the surface divergence model. This trend is endorsed by the fact that the mass transfer across the gas-liquid interface is controlled mainly by streamwise vortices on the liquid side even for wind-driven turbulence under conditions of low wind velocities without wave breaking. PMID:27841325
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in an informal manner and illustrated by example.
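The rule-based idea can be sketched in miniature (a Python illustration only, not the abstract language's actual syntax): transition rules plus a breadth-first exploration enumerate the reachable state space automatically. The triad-of-processors example and the failure rate below are hypothetical.

```python
from collections import deque

def generate_model(start, rules):
    """Enumerate all states and transitions reachable from `start`.

    `rules` is a list of (guard, step, rate) triples -- a stand-in for
    rule-style model specification, not the actual language described above.
    """
    states, transitions = {start}, []
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        for guard, step, rate in rules:
            if guard(s):          # rule applies in state s
                t = step(s)       # successor state
                transitions.append((s, t, rate))
                if t not in states:
                    states.add(t)
                    frontier.append(t)
    return states, transitions

# Hypothetical triad of processors; state = (working, failed), failure rate 1e-4.
rules = [(lambda s: s[0] > 0, lambda s: (s[0] - 1, s[1] + 1), 1e-4)]
states, transitions = generate_model((3, 0), rules)
```

One rule thus expands into four states and three transitions, which is the economy the abstract-specification approach aims for.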
A high-fidelity weather time series generator using the Markov Chain process on a piecewise level
NASA Astrophysics Data System (ADS)
Hersvik, K.; Endrerud, O.-E. V.
2017-12-01
A method is developed for generating a set of unique weather time series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The generated time series need to share the same statistical qualities as the original series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including the durations and frequencies of such windows, and to seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov process is used: small pieces of random-length time series are joined together, rather than individual weather states from single time steps, which is the common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth, in all aspects of the validation performed.
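The piecewise joining idea can be sketched as follows; the parameter names, piece lengths, and seam-matching rule are illustrative assumptions, not the paper's exact scheme.

```python
import random

def piecewise_markov_series(series, n_out, min_len=3, max_len=6, seed=0):
    """Stitch random-length pieces of the original series so that each new
    piece starts in the state where the previous piece ended."""
    rng = random.Random(seed)
    i = rng.randrange(len(series) - max_len)
    out = []
    first = True
    while len(out) < n_out:
        piece = series[i:i + rng.randint(min_len, max_len)]
        out.extend(piece if first else piece[1:])  # drop duplicated seam state
        first = False
        # jump to a random position in the history with the same end state
        candidates = [j for j in range(len(series) - max_len)
                      if series[j] == piece[-1]]
        i = rng.choice(candidates)
    return out[:n_out]
```

Because pieces rather than single states are joined, short-range correlations inside each piece (and hence weather-window durations) are inherited directly from the original record.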
Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng
2010-01-01
This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, motivated by the many uncertainty characteristics in the incidence course. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized self-correlation coefficients as weights, based on the special characteristic of infectious disease incidence being a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to make the long-term benefit of the decision optimal. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632
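The weighted-Markov recipe, as commonly described, combines k-step predictions with autocorrelation-based weights; the sketch below follows that generic recipe, and details such as `max_lag` are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def weighted_markov_predict(seq, n_states, max_lag=3):
    """One-step-ahead state probabilities from a weighted Markov chain.

    Weights are the normalized absolute autocorrelations of the state
    sequence at lags 1..max_lag (a common weighted-Markov choice)."""
    x = np.asarray(seq, dtype=float)
    # one-step transition matrix estimated by counting
    P = np.zeros((n_states, n_states))
    for a, b in zip(seq, seq[1:]):
        P[a, b] += 1
    P = P / P.sum(axis=1, keepdims=True)
    # autocorrelation-based weights
    ac = np.array([abs(np.corrcoef(x[:-k], x[k:])[0, 1])
                   for k in range(1, max_lag + 1)])
    w = ac / ac.sum()
    # combine the k-step predictions made from the last max_lag observations
    probs = np.zeros(n_states)
    for k in range(1, max_lag + 1):
        probs += w[k - 1] * np.linalg.matrix_power(P, k)[seq[-k]]
    return probs
```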
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can be beneficial, while in other cases an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
Convergence of the Graph Allen-Cahn Scheme
NASA Astrophysics Data System (ADS)
Luo, Xiyang; Bertozzi, Andrea L.
2017-05-01
The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest-neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling.
We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is always rapidly mixing for two general classes of positively biased {p_xy}. More significantly, we also prove that the general conjecture is false by exhibiting values for the p_xy, with 1/2 ≤ p_xy ≤ 1 for all x < y, but for which the transposition chain will require exponential time to converge. Finally, we consider a model of colloids, which are binary mixtures of molecules with one type of molecule suspended in another. It is believed that at low density typical configurations will be well-mixed throughout, while at high density they will separate into clusters. This clustering has proved elusive to verify, since all local sampling algorithms are known to be inefficient at high density, and in fact a new nonlocal algorithm was recently shown to require exponential time in some cases. We characterize the high and low density phases for a general family of discrete interfering binary mixtures by showing that they exhibit a "clustering property" at high density and not at low density. The clustering property states that there will be a region that has very high area, very small perimeter, and high density of one type of molecule. Special cases of interfering binary mixtures include the Ising model at fixed magnetization and independent sets.
Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H
2018-05-17
This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save limited communication resources. Second, a novel communication framework is employed by the logarithmic quantizer, which quantifies and reduces the data transmission rate in the network and thereby improves the communication efficiency of the network. Third, a stabilization criterion is derived, based on a sufficient condition which guarantees a prescribed H∞ performance level in the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Robertson, Colin; Sawford, Kate; Gunawardana, Walimunige S. N.; Nelson, Trisalyn A.; Nathoo, Farouk; Stephen, Craig
2011-01-01
Surveillance systems tracking health patterns in animals have potential for early warning of infectious disease in humans, yet many challenges remain before this can be realized. Specifically, there remains the challenge of detecting early warning signals for diseases that are not known or are not part of routine surveillance for named diseases. This paper reports on the development of a hidden Markov model for analysis of frontline veterinary sentinel surveillance data from Sri Lanka. Field veterinarians collected data on syndromes and diagnoses using mobile phones. A model for submission patterns accounts for both sentinel-related and disease-related variability. Models for commonly reported cattle diagnoses were estimated separately. Region-specific weekly average prevalence was estimated for each diagnosis and partitioned into normal and abnormal periods. Visualization of state probabilities was used to indicate areas and times of unusual disease prevalence. The analysis suggests that hidden Markov modelling is a useful approach for surveillance datasets from novel populations and/or with little historical baseline. PMID:21949763
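The state-probability visualization described above rests on filtered HMM state probabilities. A minimal sketch of the standard forward filter for a two-state (normal/abnormal) model follows; the observation log-likelihoods, transition matrix, and initial distribution are invented for illustration.

```python
import numpy as np

def forward_filter(obs_loglik, T, pi):
    """Filtered HMM state probabilities P(state_t | observations up to t).

    obs_loglik[t, s] is the log-likelihood of week t's data under state s
    (e.g. s=0 normal, s=1 abnormal prevalence); T is the state transition
    matrix and pi the initial distribution."""
    n, k = obs_loglik.shape
    alpha = np.zeros((n, k))
    a = pi * np.exp(obs_loglik[0])
    alpha[0] = a / a.sum()                      # normalize at each step
    for t in range(1, n):
        a = (alpha[t - 1] @ T) * np.exp(obs_loglik[t])
        alpha[t] = a / a.sum()
    return alpha
```

Plotting `alpha[:, 1]` over time gives exactly the kind of abnormal-period indicator the paper visualizes by region.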
NASA Technical Reports Server (NTRS)
Bassani, J. L.; Erdogan, F.
1979-01-01
The antiplane shear problem for two bonded dissimilar half planes containing a semi-infinite crack or two arbitrarily located collinear cracks is considered. For the semi-infinite crack the problem is solved for a concentrated wedge load and the stress intensity factor and the angular distribution of stresses are calculated. For finite cracks the problem is reduced to a pair of integral equations. Numerical results are obtained for cracks fully imbedded in a homogeneous medium, one crack tip touching the interface, and a crack crossing the interface for various crack angles.
NASA Astrophysics Data System (ADS)
Pindra, Nadjime; Lazarus, Véronique; Leblond, Jean-Baptiste
One studies the evolution in time of the deformation of the front of a semi-infinite 3D interface crack propagating quasistatically in an infinite heterogeneous elastic body. The fracture properties are assumed to be lower on the interface than in the materials so that crack propagation is channelled along the interface, and to vary randomly within the crack plane. The work is based on earlier formulae which provide the first-order change of the stress intensity factors along the front of a semi-infinite interface crack arising from some small but otherwise arbitrary in-plane perturbation of this front. The main object of study is the long-time behavior of various statistical measures of the deformation of the crack front. Special attention is paid to the influences of the mismatch of elastic properties, the type of propagation law (fatigue or brittle fracture) and the stable or unstable character of 2D crack propagation (depending on the loading) upon the development of this deformation.
Bi-material plane with interface crack for the model of semi-linear material
NASA Astrophysics Data System (ADS)
Domanskaya, T. O.; Malkov, V. M.; Malkova, Yu. V.
2018-05-01
The singular plane problems of nonlinear elasticity (plane strain and plane stress) are considered for a bi-material infinite plane with an interface crack. The plane is formed of two half-planes. The mechanical properties of the half-planes are described by the model of semi-linear material. Using this model of harmonic material has allowed us to apply the theory of complex functions and to obtain exact analytical global solutions of some nonlinear problems. Among them, the problem of a bi-material plane with stress and strain jumps at an interface is considered. As an application of the problem of jumps, the problem of an interface crack is solved. The values of nominal (Piola) and Cauchy stresses and displacements are found. Based on the global solutions, asymptotic expansions are constructed for stresses and displacements in a vicinity of the crack tip. As an example, the case of a free crack in a bi-material plane subjected to constant stresses at infinity is studied. As a special case, the analytical solution of the problem of a crack in a homogeneous plane is obtained from the problem for a bi-material plane with an interface crack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Shuai; Wang, Jian
In this work, using the Cu–Ni (111) semi-coherent interface as a model system, we combine atomistic simulations and defect theory to reveal the relaxation mechanisms, structure, and properties of semi-coherent interfaces. By calculating the generalized stacking fault energy (GSFE) profile of the interface, two stable structures and a high-energy structure are located. During the relaxation, the regions that possess the stable structures expand and develop into coherent regions; the regions with high-energy structure shrink into the intersection of misfit dislocations (nodes). This process reduces the interface excess potential energy but increases the core energy of the misfit dislocations and nodes. The core width is dependent on the GSFE of the interface. The high-energy structure relaxes by relative rotation and dilatation between the crystals. The relative rotation is responsible for the spiral pattern at nodes. The relative dilatation is responsible for the creation of free volume at nodes, which facilitates the nodes' structural transformation. Several node structures have been observed and analyzed. In conclusion, the various structures have significant impact on the plastic deformation in terms of lattice dislocation nucleation, as well as the point defect formation energies.
Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao
2018-01-01
A semi-analytical extraction method for interface and bulk density of states (DOS) is proposed, using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson's equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results of the 2D device simulator ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs, since it extracts the interface and bulk DOS simultaneously and quickly. PMID:29534492
Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus
2014-01-01
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property, stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way into numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher-order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher-order Markov chain models, which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher-order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work. PMID:25013937
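The order-selection question above can be illustrated with standard likelihood-based tools. The AIC sketch below is a generic recipe rather than one of the paper's specific inference methods, and the alternating toy sequence is invented for illustration.

```python
import math
from collections import Counter

def log_likelihood(seq, order):
    """Maximum-likelihood log-likelihood of a k-th order Markov chain fit."""
    ctx, full = Counter(), Counter()
    for i in range(order, len(seq)):
        h = tuple(seq[i - order:i])       # length-k context
        ctx[h] += 1
        full[h + (seq[i],)] += 1
    return sum(n * math.log(n / ctx[hs[:-1]]) for hs, n in full.items())

def aic(seq, order, n_states):
    """AIC trades fit against the parameter blow-up of higher-order chains."""
    k = n_states ** order * (n_states - 1)   # free transition parameters
    return 2 * k - 2 * log_likelihood(seq, order)
```

On a strictly alternating sequence, the first-order model fits perfectly while the memoryless (order-0) model does not, so AIC prefers order 1; on genuinely memoryless data the penalty term makes order 0 win, which is the trade-off the paper quantifies.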
a Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods
NASA Astrophysics Data System (ADS)
Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.
2017-09-01
Drought is one of the most powerful natural disasters and affects many aspects of the environment. Most of the time this phenomenon is severest in arid and semi-arid areas. Monitoring and predicting the severity of drought can be useful in managing the natural disasters it causes. Many indices have been used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX were created for the period from winter 2000 to summer 2015 for the east region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation (SAX) and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into qualitative data based on the state of drought (5 groups) using the SAX algorithm; then the probability matrix for the future state was created using the hidden Markov chain. The fall drought severity was predicted by fusing the probability matrix with the state of drought severity in summer 2015. The prediction is based on the likelihood of each drought state: severe drought, middle drought, normal, middle wet, and severe wet. The analysis and experimental results show that the output of the proposed algorithm is acceptable and that the algorithm is appropriate and efficient for predicting drought from remote sensing data.
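A minimal sketch of the SAX-plus-Markov idea: the five-state discretization below uses the usual equiprobable Gaussian breakpoints of the SAX recipe, and the prediction step is a plain transition-matrix argmax; the paper's exact fusion step may differ.

```python
import numpy as np

def sax_symbols(x, n_sym=5):
    """Discretize a series into SAX-style symbols (here 5 drought states)
    using equiprobable breakpoints of the standard normal distribution."""
    z = (x - x.mean()) / x.std()
    bps = [-0.84, -0.25, 0.25, 0.84]   # splits N(0,1) into 5 equal bins
    return np.searchsorted(bps, z)

def predict_next_state(symbols, n_sym=5):
    """Most probable next state from the estimated transition matrix."""
    T = np.zeros((n_sym, n_sym))
    for a, b in zip(symbols, symbols[1:]):
        T[a, b] += 1
    return int(T[symbols[-1]].argmax())
```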
Mitchell, Christina M.; Beals, Janette; Whitesell, Nancy Rumbaugh
2008-01-01
Objective: We explored patterns of alcohol use among American Indian youths as well as concurrent predictors and developmental outcomes 6 years later. Method: This study used six semi-annual waves of data collected across 3 years from 861 American Indian youths, ages 14-20 initially, from two western tribes. Using a latent Markov model, we examined patterns of change in latent states of adolescent alcohol use in the past 6 months, combining these states of alcohol use into three latent statuses that described patterns of change across the 3 years: abstainers, inconsistent drinkers, and consistent drinkers. We then explored how the latent statuses differed, both initially and in young adulthood (ages 20-26). Results: Both alcohol use and nonuse were quite stable across time, although we also found evidence of change. Despite some rather troubling drinking patterns as teens, especially among consistent drinkers, most of the youths had achieved important tasks of young adulthood. But patterns of use during adolescence were related to greater levels of substance use in young adulthood. Conclusions: Latent Markov modeling provided a useful categorization of alcohol use that more finely differentiated those youths who would otherwise have been considered inconsistent drinkers. Findings also suggest that broad-based interventions during adolescence may not be the most important ones; instead, programs targeting later alcohol and other drug use may be a more strategic use of often limited resources. PMID:18781241
Dependability and performability analysis
NASA Technical Reports Server (NTRS)
Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.
1993-01-01
Several practical issues regarding the specification and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMC's) are compared with (continuous-time) Markov reward models (MRM's), and generalized stochastic Petri nets (GSPN's) are compared with stochastic reward nets (SRN's). It is shown that reward-based models can lead to more concise model specifications and to the solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues are identified: largeness, stiffness, and non-exponentiality; a variety of approaches to deal with them are discussed, including some of the latest research efforts.
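A building block for such dependability models is the transient solution of a CTMC. The sketch below uses uniformization, a standard technique; the two-state repairable component (failure rate 1, repair rate 2) is a hypothetical example, not taken from the paper.

```python
import numpy as np

def ctmc_transient(Q, p0, t, tol=1e-12):
    """Transient state probabilities of a CTMC with generator Q at time t,
    computed by uniformization (Poisson-weighted DTMC powers)."""
    Q = np.asarray(Q, dtype=float)
    q = max(-Q[i, i] for i in range(len(Q)))   # uniformization rate
    P = np.eye(len(Q)) + Q / q                  # subordinated DTMC
    probs = np.zeros(len(Q))
    term = np.asarray(p0, dtype=float)
    weight = np.exp(-q * t)                     # Poisson(q*t) pmf at k=0
    k = 0
    # accumulate Poisson-weighted terms until the tail is negligible
    while weight > tol or k < q * t:
        probs += weight * term
        term = term @ P
        k += 1
        weight *= q * t / k
    return probs

# Hypothetical two-state repairable component: failure rate 1, repair rate 2.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
p = ctmc_transient(Q, [1.0, 0.0], 20.0)
```

Weighting the state probabilities by per-state reward rates then yields the performability measures the paper discusses; stiffness shows up here as a very large `q*t`, which inflates the number of terms summed.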
Nanoparticle-Seeding Approach to Buried (Semi) Metal Film Growth
2014-05-20
semimetals that can be grown epitaxially on zinc-blende III-V substrates, with thermodynamically stable interfaces. However, the rotational symmetry mismatch between the III-V and ErAs
Spatial distribution of GRBs and large scale structure of the Universe
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Rácz, István I.; Balázs, Lajos G.; Tóth, L. Viktor; Horváth, István
We studied the space distribution of starburst galaxies from the Millennium XXL database at z = 0.82. We examined the starburst distribution in the classical Millennium I (De Lucia et al. (2006)) using a semi-analytical model for the genesis of the galaxies. We simulated a starburst galaxy sample with a Markov Chain Monte Carlo method. The connection between the homogeneous large-scale structure and the distribution of starburst groups (Kofman and Shandarin 1998, Suhhonenko et al. (2011), Liivamägi et al. (2012), Park et al. (2012), Horvath et al. (2014), Horvath et al. (2015)) was also checked on a defined scale.
NASA Astrophysics Data System (ADS)
Shan, Zhendong; Ling, Daosheng
2018-02-01
This article develops an analytical solution for the transient wave propagation of a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined. The Scholte wave is the wave that propagates along the interface between the fluid and solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each item of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, the interface and the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.
On the mobility of carriers at semi-coherent oxide heterointerfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dholabhai, Pratik P.; Martinez, Enrique Saez; Brown, Nicholas Taylor
2017-08-17
In the quest to develop new materials with enhanced ionic conductivity for battery and fuel cell applications, nano-structured oxides have attracted attention. Experimental reports indicate that oxide heterointerfaces can lead to enhanced ionic conductivity, but these same reports cannot elucidate the origin of this enhancement, often vaguely referring to pipe diffusion at misfit dislocations as a potential explanation. However, this highlights the need to understand the role of misfit dislocation structure at semi-coherent oxide heterointerfaces in modifying carrier mobilities. Here, we use atomistic and kinetic Monte Carlo (KMC) simulations to develop a model of oxygen vacancy migration at SrTiO3/MgO interfaces, chosen because the misfit dislocation structure can be modified by changing the termination chemistry. We use atomistic simulations to determine the energetics of oxygen vacancies at both SrO and TiO2 terminated interfaces, which are then used as the basis of the KMC simulations. While this model is approximate (as revealed by select nudged elastic band calculations), it highlights the role of the misfit dislocation structure in modifying the oxygen vacancy dynamics. We find that oxygen vacancy mobility is significantly reduced at either interface, with slight differences at each interface due to the differing misfit dislocation structure. Here, we conclude that if such semi-coherent oxide heterointerfaces induce enhanced ionic conductivity, it is not a consequence of higher carrier mobility.
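The rejection-free kinetic Monte Carlo scheme used for the vacancy dynamics can be sketched on a toy one-dimensional lattice. The barrier landscape and rates below are purely illustrative, not the SrTiO3/MgO energetics from the paper.

```python
import math
import random

def kmc_hop(barriers, n_hops, kT=0.025, nu=1e13, seed=1):
    """Rejection-free kinetic Monte Carlo: a single vacancy hops on a
    one-dimensional ring of sites. Each hop rate follows an Arrhenius
    law set by the barrier of the target site, and time advances by an
    exponential waiting time drawn from the total escape rate."""
    rng = random.Random(seed)
    n = len(barriers)
    pos, t = 0, 0.0
    for _ in range(n_hops):
        rates = [nu * math.exp(-barriers[(pos - 1) % n] / kT),   # hop left
                 nu * math.exp(-barriers[(pos + 1) % n] / kT)]   # hop right
        total = rates[0] + rates[1]
        t += -math.log(1.0 - rng.random()) / total               # waiting time
        pos = (pos - 1) % n if rng.random() * total < rates[0] else (pos + 1) % n
    return pos, t

# Hypothetical landscape: a ring of 0.6 eV "bulk" sites with one
# 1.0 eV "dislocation-core" site (values illustrative only).
barriers = [0.6] * 9 + [1.0]
pos, t = kmc_hop(barriers, n_hops=1000)
```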
NASA Astrophysics Data System (ADS)
Feng, Qinggao; Zhan, Hongbin
2015-02-01
A mathematical model is developed for groundwater flow to a partially penetrating pumping well of finite diameter in an anisotropic leaky confined aquifer. The model accounts for the joint effects of aquitard storage, aquifer anisotropy, and wellbore storage by treating aquitard leakage as a boundary condition at the aquitard-aquifer interface rather than as a volumetric source/sink term in the governing equation, an approach that has not been developed before. A new semi-analytical solution for the model is obtained by the Laplace transform in conjunction with separation of variables. Specific attention is paid to the flow across the aquitard-aquifer interface, which is of concern when the aquitard and aquifer have different pore water chemistry. Moreover, Laplace-domain and steady-state solutions are obtained to calculate the rate and volume of (total) leakage through the aquitard-aquifer interface due to pumping in a partially penetrating well, which is also useful for engineers managing water resources. Sensitivity analyses illustrate that the drawdown is most sensitive to well partial penetration. It is appreciably sensitive to the aquifer anisotropy ratio over the entire pumping period, moderately sensitive to the aquitard/aquifer specific storage ratio at intermediate times only, and moderately sensitive to the aquitard/aquifer vertical hydraulic conductivity ratio and the aquitard/aquifer thickness ratio, with identical influence at late times.
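Laplace-domain solutions of this kind are usually brought back to the time domain numerically. One common choice is the Gaver-Stehfest algorithm, sketched below and checked against a transform pair with a known inverse; the paper does not state which inversion scheme it uses, so this is an assumption.

```python
from math import factorial, log

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s)
    at time t: f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t).
    N must be even; values of 10-14 are typical."""
    h = N // 2
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            v += (j ** h * factorial(2 * j)
                  / (factorial(h - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        total += (-1) ** (k + h) * v * F(k * log(2.0) / t)
    return total * log(2.0) / t

# Check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
approx = stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Stehfest works well for smooth, non-oscillatory time behavior such as drawdown curves; other inversion schemes are preferable for oscillatory solutions.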
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, like merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. 
In a synthetic test image some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals which is highly efficient and less error prone.
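Measuring interfacial area directly on the segmented voxel image, as advocated above, can be reduced in its simplest form to counting exposed voxel faces between two phase labels. The sketch below is a crude voxel-face count, not the full Minkowski-functional machinery; the synthetic image and labels are illustrative.

```python
import numpy as np

def interfacial_area(labels, a, b, voxel=1.0):
    """Interfacial area between phases `a` and `b` measured directly on
    a segmented voxel image: count faces where a voxel of phase `a`
    touches a voxel of phase `b` along each axis, then scale by the
    face area. No isosurface generation or smoothing is required."""
    faces = 0
    for axis in range(labels.ndim):
        lo = labels.take(np.arange(labels.shape[axis] - 1), axis=axis)
        hi = labels.take(np.arange(1, labels.shape[axis]), axis=axis)
        faces += np.sum((lo == a) & (hi == b)) + np.sum((lo == b) & (hi == a))
    return faces * voxel ** 2

# Tiny synthetic image: phase 1 (e.g. water) fills half the cube,
# phase 2 (e.g. air) the rest, giving one flat 4x4 interface.
img = np.ones((4, 4, 4), dtype=int)
img[2:, :, :] = 2
area = interfacial_area(img, 1, 2)   # 16 exposed faces of unit area
```

Voxel-face counts overestimate the area of smooth curved interfaces; Minkowski-functional estimators correct for this while keeping the same direct-on-voxels workflow.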
CGDSNPdb: a database resource for error-checked and imputed mouse SNPs.
Hutchins, Lucie N; Ding, Yueming; Szatkiewicz, Jin P; Von Smith, Randy; Yang, Hyuna; de Villena, Fernando Pardo-Manuel; Churchill, Gary A; Graber, Joel H
2010-07-06
The Center for Genome Dynamics Single Nucleotide Polymorphism Database (CGDSNPdb) is an open-source value-added database with more than nine million mouse single nucleotide polymorphisms (SNPs), drawn from multiple sources, with genotypes assigned to multiple inbred strains of laboratory mice. All SNPs are checked for accuracy and annotated for properties specific to the SNP as well as those implied by changes to overlapping protein-coding genes. CGDSNPdb serves as the primary interface to two unique data sets, the 'imputed genotype resource' in which a Hidden Markov Model was used to assess local haplotypes and the most probable base assignment at several million genomic loci in tens of strains of mice, and the Affymetrix Mouse Diversity Genotyping Array, a high density microarray with over 600,000 SNPs and over 900,000 invariant genomic probes. CGDSNPdb is accessible online through either a web-based query tool or a MySQL public login. Database URL: http://cgd.jax.org/cgdsnpdb/
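The hidden Markov imputation described above rests on decoding the most probable local-haplotype path given observed SNP alleles. A minimal Viterbi decoder over two hypothetical founder haplotypes is sketched below; all states, probabilities and observations are illustrative, not CGDSNPdb's actual model.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most probable hidden-state path for an HMM (Viterbi algorithm).
    All probabilities are in log space to avoid underflow."""
    path = {s: [s] for s in states}
    score = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        new_score, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: score[p] + log_trans[p][s])
            new_score[s] = score[prev] + log_trans[prev][s] + log_emit[s][o]
            new_path[s] = path[prev] + [s]
        score, path = new_score, new_path
    return path[max(states, key=score.get)]

# Two hypothetical founder haplotypes ('A', 'B') emitting SNP alleles 0/1.
LN = math.log
states = ['A', 'B']
log_start = {'A': LN(0.5), 'B': LN(0.5)}
log_trans = {'A': {'A': LN(0.95), 'B': LN(0.05)},
             'B': {'A': LN(0.05), 'B': LN(0.95)}}
log_emit = {'A': {0: LN(0.9), 1: LN(0.1)},
            'B': {0: LN(0.1), 1: LN(0.9)}}
best_path = viterbi([0, 0, 1, 1, 1], states, log_start, log_trans, log_emit)
```

Given the run of 1 alleles, the decoder infers a single switch from haplotype A to haplotype B.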
Multi-Affinity for Growing Rough Interfaces of Bacterial Colonies
NASA Astrophysics Data System (ADS)
Kobayashi, N.; Ozawa, T.; Saito, K.; Yamazaki, Y.; Matsuyama, T.; Matsushita, M.
We have examined whether rough interfaces of bacterial colonies are multi-affine. We have used the bacterial species Bacillus subtilis, which has been found to exhibit a variety of colony patterns when both the nutrient concentration and the solidity of the agar medium are varied. We have found that the colony interface on a nutrient-rich, solid agar medium is multi-affine. On the other hand, the colony interface on a nutrient-rich, semi-solid agar medium is self-affine.
2014-01-01
Background Logos are commonly used in molecular biology to provide a compact graphical representation of the conservation pattern of a set of sequences. They render the information contained in sequence alignments or profile hidden Markov models by drawing a stack of letters for each position, where the height of the stack corresponds to the conservation at that position, and the height of each letter within a stack depends on the frequency of that letter at that position. Results We present a new tool and web server, called Skylign, which provides a unified framework for creating logos for both sequence alignments and profile hidden Markov models. In addition to static image files, Skylign creates a novel interactive logo plot for inclusion in web pages. These interactive logos enable scrolling, zooming, and inspection of underlying values. Skylign can avoid sampling bias in sequence alignments by down-weighting redundant sequences and by combining observed counts with informed priors. It also simplifies the representation of gap parameters, and can optionally scale letter heights based on alternate calculations of the conservation of a position. Conclusion Skylign is available as a website, a scriptable web service with a RESTful interface, and as a software package for download. Skylign’s interactive logos are easily incorporated into a web page with just a few lines of HTML markup. Skylign may be found at http://skylign.org. PMID:24410852
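The letter-stack construction described above can be sketched directly: the stack height is the column's information content, and each letter takes a share proportional to its frequency. Skylign's redundancy down-weighting and informed priors are omitted in this minimal version.

```python
import math

def logo_heights(column, alphabet_size=4):
    """Letter heights for one logo column: stack height equals the
    information content (log2 of alphabet size minus the Shannon
    entropy of the column), and each letter's height is its frequency
    times the stack height. Small-sample corrections are omitted."""
    total = sum(column.values())
    freqs = {c: n / total for c, n in column.items()}
    entropy = -sum(f * math.log2(f) for f in freqs.values() if f > 0)
    stack = math.log2(alphabet_size) - entropy
    return {c: f * stack for c, f in freqs.items()}

# Perfectly conserved DNA column: the full 2-bit stack goes to one letter.
h1 = logo_heights({'A': 10})
# Evenly split column: 1 bit of information shared between two letters.
h2 = logo_heights({'A': 5, 'C': 5})
```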
Semi-automated high-efficiency reflectivity chamber for vacuum UV measurements
NASA Astrophysics Data System (ADS)
Wiley, James; Fleming, Brian; Renninger, Nicholas; Egan, Arika
2017-08-01
This paper presents the design and theory of operation for a semi-automated reflectivity chamber for ultraviolet optimized optics. A graphical user interface designed in LabVIEW controls the stages, interfaces with the detector system, takes semi-autonomous measurements, and monitors the system in case of error. Samples and an optical photodiode sit on an optics plate mounted to a rotation stage in the middle of the vacuum chamber. The optics plate rotates the samples and diode between an incident and reflected position to measure the absolute reflectivity of the samples at wavelengths limited by the monochromator operational bandpass of 70 nm to 550 nm. A collimating parabolic mirror on a fine steering tip-tilt motor enables beam steering for detector peak-ups. This chamber is designed to take measurements rapidly and with minimal oversight, increasing lab efficiency for high cadence and high accuracy vacuum UV reflectivity measurements.
RadVel: The Radial Velocity Modeling Toolkit
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-04-01
RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
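The Keplerian model that RadVel fits can be sketched independently of the package. The function below is not RadVel's code; it is a textbook single-planet RV model with conventional parameter names (period P, semi-amplitude K, eccentricity e, argument of periastron omega, time of periastron tp).

```python
import math

def rv_keplerian(t, P, K, e, omega, tp):
    """Stellar radial velocity due to one Keplerian companion:
    RV(t) = K [cos(nu + omega) + e cos(omega)], where the true anomaly
    nu is obtained by solving Kepler's equation with Newton's method."""
    M = 2.0 * math.pi * (t - tp) / P                     # mean anomaly
    E = M
    for _ in range(50):                                  # Kepler's equation
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))
    return K * (math.cos(nu + omega) + e * math.cos(omega))

# Circular-orbit sanity checks: RV is +K at t = tp and -K half a period later.
v1 = rv_keplerian(t=0.0, P=10.0, K=5.0, e=0.0, omega=0.0, tp=0.0)
v2 = rv_keplerian(t=5.0, P=10.0, K=5.0, e=0.0, omega=0.0, tp=0.0)
```

A fitting code wraps a model like this in a likelihood and samples the parameter posterior with MCMC, which is what RadVel automates.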
Hamilton, Liberty S; Chang, David L; Lee, Morgan B; Chang, Edward F
2017-01-01
In this article, we introduce img_pipe, our open source python package for preprocessing of imaging data for use in intracranial electrocorticography (ECoG) and intracranial stereo-EEG analyses. The process of electrode localization, labeling, and warping for use in ECoG currently varies widely across laboratories, and it is usually performed with custom, lab-specific code. This python package aims to provide a standardized interface for these procedures, as well as code to plot and display results on 3D cortical surface meshes. It gives the user an easy interface to create anatomically labeled electrodes that can also be warped to an atlas brain, starting with only a preoperative T1 MRI scan and a postoperative CT scan. We describe the full capabilities of our imaging pipeline and present a step-by-step protocol for users.
NASA Astrophysics Data System (ADS)
Shofner, Meisha; Lee, Ji Hoon
2012-02-01
Compatible component interfaces in polymer nanocomposites can be used to facilitate a dispersed morphology and improved physical properties as has been shown extensively in experimental results concerning amorphous matrix nanocomposites. In this research, a block copolymer compatibilized interface is employed in a semi-crystalline matrix to prevent large scale nanoparticle clustering and enable microstructure construction with post-processing drawing. The specific materials used are hydroxyapatite nanoparticles coated with a polyethylene oxide-b-polymethacrylic acid block copolymer and a polyethylene oxide matrix. Two particle shapes are used: spherical and needle-shaped. Characterization of the dynamic mechanical properties indicated that the two nanoparticle systems provided similar levels of reinforcement to the matrix. For the needle-shaped nanoparticles, the post-processing step increased matrix crystallinity and changed the thermomechanical reinforcement trends. These results will be used to further refine the post-processing parameters to achieve a nanocomposite microstructure with triangulated arrays of nanoparticles.
Hamilton, Liberty S.; Chang, David L.; Lee, Morgan B.; Chang, Edward F.
2017-01-01
In this article, we introduce img_pipe, our open source python package for preprocessing of imaging data for use in intracranial electrocorticography (ECoG) and intracranial stereo-EEG analyses. The process of electrode localization, labeling, and warping for use in ECoG currently varies widely across laboratories, and it is usually performed with custom, lab-specific code. This python package aims to provide a standardized interface for these procedures, as well as code to plot and display results on 3D cortical surface meshes. It gives the user an easy interface to create anatomically labeled electrodes that can also be warped to an atlas brain, starting with only a preoperative T1 MRI scan and a postoperative CT scan. We describe the full capabilities of our imaging pipeline and present a step-by-step protocol for users. PMID:29163118
Milestoning with coarse memory
NASA Astrophysics Data System (ADS)
Hawk, Alexander T.
2013-04-01
Milestoning is a method used to calculate the kinetics of molecular processes occurring on timescales inaccessible to traditional molecular dynamics (MD) simulations. In the method, the phase space of the system is partitioned by milestones (hypersurfaces), trajectories are initialized on each milestone, and short MD simulations are performed to calculate transitions between neighboring milestones. Long trajectories of the system are then reconstructed with a semi-Markov process from the observed statistics of transition. The procedure is typically justified by the assumption that trajectories lose memory between crossing successive milestones. Here we present Milestoning with Coarse Memory (MCM), a generalization of Milestoning that relaxes the memory loss assumption of conventional Milestoning. In the method, milestones are defined and sample transitions are calculated in the standard Milestoning way. Then, after it is clear where trajectories sample milestones, the milestones are broken up into distinct neighborhoods (clusters), and each sample transition is associated with two clusters: the cluster containing the coordinates the trajectory was initialized in, and the cluster (on the terminal milestone) containing the trajectory's final coordinates. Long trajectories of the system are then reconstructed with a semi-Markov process in an extended state space built from milestone and cluster indices. To test the method, we apply it to a process that is particularly ill suited for Milestoning: the dynamics of a polymer confined to a narrow cylinder. We show that Milestoning calculations of both the mean first passage time and the mean transit time of reversal, which occurs when the end-to-end vector reverses direction, are significantly improved when MCM is applied. Finally, we note the overhead of performing MCM on top of conventional Milestoning is negligible.
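The semi-Markov reconstruction step can be sketched for the simplest observable, the mean first-passage time (MFPT): given milestone-to-milestone transition probabilities and mean lifetimes estimated from the short trajectories, the MFPT solves a small linear system. The numbers below are toy values, not MD data.

```python
import numpy as np

def mfpt(K, lifetimes, target):
    """Mean first-passage time to a target milestone from transition
    statistics, as in the semi-Markov step of Milestoning: over the
    non-target milestones, solve (I - K) tau = <t>, where K holds the
    hopping probabilities and <t> the mean milestone lifetimes."""
    n = len(lifetimes)
    keep = [i for i in range(n) if i != target]
    Ksub = K[np.ix_(keep, keep)]
    tsub = np.asarray(lifetimes, dtype=float)[keep]
    tau = np.linalg.solve(np.eye(len(keep)) - Ksub, tsub)
    out = np.zeros(n)
    out[keep] = tau
    return out

# Three milestones in a row, unbiased hops from the middle one, unit
# mean lifetime everywhere; milestone 2 is absorbing.
K = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
tau = mfpt(K, lifetimes=[1.0, 1.0, 1.0], target=2)
```

For this chain the system gives tau = 4 from milestone 0 and tau = 3 from milestone 1; MCM applies the same reconstruction on the extended milestone-cluster state space.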
QuTiP 2: A Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2013-04-01
We present version 2 of QuTiP, the Quantum Toolbox in Python. Compared to the preceding version [J.R. Johansson, P.D. Nation, F. Nori, Comput. Phys. Commun. 183 (2012) 1760.], we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Here we introduce these new features, demonstrate their use, and give a summary of the important backward-incompatible API changes introduced in this version. Catalog identifier: AEMB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMB_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 33625 No. of bytes in distributed program, including test data, etc.: 410064 Distribution format: tar.gz Programming language: Python. Computer: i386, x86-64. Operating system: Linux, Mac OSX. RAM: 2+ Gigabytes Classification: 7. External routines: NumPy, SciPy, Matplotlib, Cython Catalog identifier of previous version: AEMB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1760 Does the new version supercede the previous version?: Yes Nature of problem: Dynamics of open quantum systems Solution method: Numerical solutions to Lindblad, Floquet-Markov, and Bloch-Redfield master equations, as well as the Monte Carlo wave function method. 
Reasons for new version: Compared to the preceding version we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Restrictions: Problems must meet the criteria for using the master equation in Lindblad, Floquet-Markov, or Bloch-Redfield form. Running time: A few seconds up to several tens of hours, depending on size of the underlying Hilbert space.
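The Lindblad master equation that QuTiP's solvers integrate can be sketched with a plain fixed-step integrator. This is a minimal stand-in, not QuTiP's API; it evolves amplitude damping of a two-level atom, for which the excited-state population decays as exp(-gamma*t).

```python
import numpy as np

def lindblad_rhs(rho, H, L, gamma):
    """Right-hand side of the Lindblad master equation:
    d(rho)/dt = -i[H, rho] + gamma (L rho L+ - {L+L, rho}/2)."""
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss

# Two-level atom starting in the excited state, damped by sigma_minus.
H = np.zeros((2, 2), dtype=complex)
L = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_minus
rho = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
gamma, dt = 1.0, 0.001
for _ in range(1000):                            # RK4 steps to t = 1
    k1 = lindblad_rhs(rho, H, L, gamma)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, L, gamma)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, L, gamma)
    k4 = lindblad_rhs(rho + dt * k3, H, L, gamma)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
excited = rho[1, 1].real                         # ~exp(-1) at t = 1
```

QuTiP's mesolve automates exactly this kind of integration for arbitrary Hamiltonians and collapse operators, including the time-dependent cases highlighted in the abstract.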
Recursive utility in a Markov environment with stochastic growth
Hansen, Lars Peter; Scheinkman, José A.
2012-01-01
Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428
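The Perron-Frobenius eigenvalue problem that the paper connects recursive utility to can be illustrated numerically with power iteration on a positive matrix. This is a generic sketch of the eigenvalue computation, not the paper's operator, and the matrix below is illustrative.

```python
import numpy as np

def principal_eigen(M, iters=200):
    """Power iteration for the dominant (Perron-Frobenius) eigenvalue
    and eigenvector of a positive matrix: repeatedly apply M and
    renormalize; the iterate converges to the principal eigenvector."""
    v = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam, v

# Positive 2x2 matrix with eigenvalues 3 and 1; the Perron-Frobenius
# eigenvalue is 3, with strictly positive eigenvector (1, 1)/sqrt(2).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = principal_eigen(M)
```

The Perron-Frobenius theorem guarantees that for a positive matrix this dominant eigenvalue is simple and its eigenvector strictly positive, which underlies the existence and uniqueness arguments in the paper.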
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure-sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. For the example given, the results indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
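The core computation in such reliability models, the probability of occupying an absorbing failure state at mission time T, can be sketched with uniformization, a standard technique for the stiff generators that fault-tolerant-system models produce. The three-state model and its rates below are illustrative, not taken from the thesis.

```python
import math
import numpy as np

def ctmc_transient(Q, p0, T, tol=1e-12):
    """Transient CTMC solution by uniformization:
    p(T) = sum_n Poisson(n; Lambda*T) * p0 @ P^n with P = I + Q/Lambda,
    where Lambda bounds the largest exit rate. This avoids integrating
    a stiff ODE directly."""
    Lam = 1.0001 * max(-Q[i, i] for i in range(Q.shape[0]))
    P = np.eye(Q.shape[0]) + Q / Lam
    weight = math.exp(-Lam * T)          # Poisson weight for n = 0
    term = np.asarray(p0, dtype=float)   # holds p0 @ P^n
    out = weight * term
    n = 0
    while weight > tol or n < Lam * T:   # run past the Poisson mode
        n += 1
        term = term @ P
        weight *= Lam * T / n
        out = out + weight * term
    return out

# Toy model: 0 = fault free, 1 = fault being handled by reconfiguration,
# 2 = system failure (absorbing "death state"). Note the stiffness:
# fault rates are orders of magnitude below the recovery rate.
fault, recover, second = 1e-4, 1.0, 1e-4
Q = np.array([[-fault,               fault,    0.0],
              [recover, -(recover + second), second],
              [   0.0,                 0.0,     0.0]])
pT = ctmc_transient(Q, [1.0, 0.0, 0.0], T=10.0)
unreliability = pT[2]
```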
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
NASA Astrophysics Data System (ADS)
Mishra, Rinku; Dey, M.
2018-04-01
An analytical model is developed that explains the propagation of a high frequency electrostatic surface wave along the interface of a plasma system where semi-infinite electron-ion plasma is interfaced with semi-infinite dusty plasma. The model emphasizes that the source of such high frequency waves is inherent in the presence of ion acoustic and dust ion acoustic/dust acoustic volume waves in electron-ion plasma and dusty plasma region. Wave dispersion relation is obtained for two distinct cases and the role of plasma parameters on wave dispersion is analyzed in short and long wavelength limits. The normalized surface wave frequency is seen to grow linearly for lower wave number but becomes constant for higher wave numbers in both the cases. It is observed that the normalized frequency depends on ion plasma frequencies when dust oscillation frequency is neglected.
EL2 deep-level transient study in semi-insulating GaAs using positron-lifetime spectroscopy
NASA Astrophysics Data System (ADS)
Shan, Y. Y.; Ling, C. C.; Deng, A. H.; Panda, B. K.; Beling, C. D.; Fung, S.
1997-03-01
Positron lifetime measurements performed on Au/GaAs samples at room temperature with an applied square-wave ac bias show a frequency dependent interface related lifetime intensity that peaks around 0.4 Hz. The observation is explained by the ionization of the deep-donor level EL2 to EL2+ in the GaAs region adjacent to the Au/GaAs interface, causing a transient electric field to be experienced by positrons drifting towards the interface. Without resorting to temperature scanning or any Arrhenius plot the EL2 donor level is found to be located 0.80+/-0.01+/-0.05 eV below the conduction-band minimum, where the first error estimate is statistical and the second systematic. The result suggests positron annihilation may, in some instances, act as an alternative to capacitance transient spectroscopies in characterizing deep levels in both semiconductors and semi-insulators.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
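STEM's approach, a truncated Taylor series combined with scaling and squaring of the matrix exponential, can be sketched as follows. This is a generic textbook version for illustration, not NASA's implementation; the diagonal test matrix is chosen so the exact answer is known.

```python
import numpy as np

def expm_scaled(A, order=10):
    """Matrix exponential by Taylor series with scaling and squaring:
    scale A by 2^-j so the series converges quickly, sum a truncated
    Taylor series for exp(A / 2^j), then square the result j times,
    using exp(A) = (exp(A / 2^j))^(2^j)."""
    norm = np.max(np.sum(np.abs(A), axis=1))             # infinity norm
    j = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    B = A / (2.0 ** j)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):                        # truncated series
        term = term @ B / k
        E = E + term
    for _ in range(j):                                   # undo the scaling
        E = E @ E
    return E

# Sanity check on a diagonal generator, where exp is known exactly:
# exp(diag(-1, -2)) = diag(exp(-1), exp(-2)).
A = np.diag([-1.0, -2.0])
E = expm_scaled(A)
```

PAWS follows the same scale-and-square idea but replaces the Taylor series with a Padé approximant, which is why the two programs trade off precision against speed as the abstract describes.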
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
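The core computation described above, the probability of reaching a Markov model's death state, reduces to a matrix exponential of the model's generator, which PAWS evaluates with scaled Padé approximation. The following minimal Python sketch (states and rates are hypothetical, chosen only to exhibit the stiffness caused by fast recoveries versus rare fault arrivals) illustrates the idea using SciPy's scaling-and-squaring `expm`:

```python
import numpy as np
from scipy.linalg import expm  # scaling-and-squaring Pade approximation

# Hypothetical 3-state model: 0 = all good, 1 = one fault (recovering), 2 = failed.
# Fault arrivals (lam) are orders of magnitude slower than recoveries (mu),
# which is what makes these models numerically stiff.
lam, mu = 1e-4, 1e3              # per hour (assumed rates, for illustration only)
Q = np.array([
    [-lam,        lam,  0.0],    # good -> one fault
    [  mu, -(mu+lam),   lam],    # recover, or second fault -> failure
    [ 0.0,        0.0,  0.0],    # death state is absorbing
])
p0 = np.array([1.0, 0.0, 0.0])   # start with no faults

T = 10.0                         # mission time in hours
p = p0 @ expm(Q * T)             # exact transient solution p(T) = p0 exp(QT)
print(f"P(system failure by T={T} h) ~= {p[2]:.3e}")
```

With `mu` seven orders of magnitude larger than `lam`, an explicit ODE integrator would need prohibitively small steps; the matrix-exponential route sidesteps that stiffness, which is the role PAWS and STEM fill for much larger models.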
NASA Technical Reports Server (NTRS)
Moser, Louise; Melliar-Smith, Michael; Schwartz, Richard
1987-01-01
A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety critical flight control applications by use of processor replications and voting, was constructed for SRI, and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT, produced by a Markov reliability model, SRI constructed a formal specification, defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, did indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof, and describes the various specifications and proofs performed.
NASA Technical Reports Server (NTRS)
Abiteboul, Serge
1997-01-01
The amount of data of all kinds available electronically has increased dramatically in recent years. The data resides in different forms, ranging from unstructured data in file systems to highly structured data in relational database systems. Data is accessible through a variety of interfaces including Web browsers, database query languages, application-specific interfaces, or data exchange formats. Some of this data is raw data, e.g., images or sound. Some of it has structure, even if the structure is often implicit and not as rigid or regular as that found in standard database systems. Sometimes the structure exists but has to be extracted from the data. Sometimes it exists but we prefer to ignore it for certain purposes, such as browsing. We use the term semi-structured data here for data that is (from a particular viewpoint) neither raw data nor strictly typed, i.e., not table-oriented as in a relational model or sorted-graph as in object databases. As will be seen later, when the notion of semi-structured data is defined more precisely, the need for semi-structured data arises naturally in the context of data integration, even when the data sources are themselves well-structured. Although data integration is an old topic, the need to integrate a wider variety of data formats (e.g., SGML or ASN.1 data) and data found on the Web has brought the topic of semi-structured data to the forefront of research. The main purpose of the paper is to isolate the essential aspects of semi-structured data. We also survey some proposals of models and query languages for semi-structured data. In particular, we consider recent works at Stanford U. and U. Penn on semi-structured data. In both cases, the motivation is found in the integration of heterogeneous data.
Dudek, Aleksandra M.; Martin, Shaun; Garg, Abhishek D.; Agostinis, Patrizia
2013-01-01
Dendritic cells (DCs) are the sentinel antigen-presenting cells of the immune system, and their productive interface with dying cancer cells is crucial for proper communication of the “non-self” status of cancer cells to the adaptive immune system. Efficiency and the ultimate success of such a communication hinge upon the maturation status of the DCs, attained following their interaction with cancer cells. Immature DCs facilitate tolerance toward cancer cells (observed for many apoptotic inducers) while fully mature DCs can strongly promote anticancer immunity if they secrete the correct combinations of cytokines [observed when DCs interact with cancer cells undergoing immunogenic cell death (ICD)]. However, an intermediate population of DC maturation, called semi-mature DCs, exists, which can potentiate either tolerogenicity or pro-tumorigenic responses (as happens in the case of certain chemotherapeutics and agents exerting ambivalent immune reactions). Specific combinations of DC phenotypic markers, DC-derived cytokines/chemokines, dying cancer cell-derived danger signals, and other less characterized entities (e.g., exosomes) can define the nature and evolution of the DC maturation state. In the present review, we discuss these different maturation states of DCs, how they might be attained, and which anticancer agents or cell death modalities (e.g., tolerogenic cell death vs. ICD) may regulate these states. PMID:24376443
ERIC Educational Resources Information Center
Landini, Fernando
2016-01-01
Purpose: In this paper, the knowledge dynamics of the farmer-rural extensionist' interface were explored from extensionists' perspective with the aim of understanding the matchmaking processes between supply and demand of extension services at the micro-level. Design/methodology/approach: Forty semi-structured interviews were conducted with…
NASA Astrophysics Data System (ADS)
Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.
2015-06-01
Non-linear entropy stability and a summation-by-parts (SBP) framework are used to derive entropy stable interior interface coupling for the semi-discretized three-dimensional (3D) compressible Navier-Stokes equations. A complete semi-discrete entropy estimate for the interior domain is achieved combining a discontinuous entropy conservative operator of any order [1,2] with an entropy stable coupling condition for the inviscid terms, and a local discontinuous Galerkin (LDG) approach with an interior penalty (IP) procedure for the viscous terms. The viscous penalty contributions scale with the inverse of the Reynolds number (Re) so that for Re → ∞ their contributions vanish and only the entropy stable inviscid interface penalty term is recovered. This paper extends the interface couplings presented [1,2] and provides a simple and automatic way to compute the magnitude of the viscous IP term. The approach presented herein is compatible with any diagonal norm summation-by-parts (SBP) spatial operator, including finite element, finite volume, finite difference schemes and the class of high-order accurate methods which include the large family of discontinuous Galerkin discretizations and flux reconstruction schemes.
Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.
Zhao, Liang; Wu, Wei; Corso, Jason J
2013-01-01
Quantifying volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumor has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories-the medical image analog to optical flow and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.
Enhancing gene regulatory network inference through data integration with markov random fields
Banf, Michael; Rhee, Seung Y.
2017-02-01
Here, a gene regulatory network links transcription factors to their target genes and represents a map of transcriptional regulation. Much progress has been made in deciphering gene regulatory networks computationally. However, gene regulatory network inference for most eukaryotic organisms remains challenging. To improve the accuracy of gene regulatory network inference and facilitate candidate selection for experimentation, we developed an algorithm called GRACE (Gene Regulatory network inference ACcuracy Enhancement). GRACE exploits biological a priori knowledge and heterogeneous data integration to generate high-confidence network predictions for eukaryotic organisms using Markov Random Fields in a semi-supervised fashion. GRACE uses a novel optimization scheme to integrate regulatory evidence and biological relevance. It is particularly suited for model learning with sparse regulatory gold standard data. We show GRACE's potential to produce high-confidence regulatory networks compared to state-of-the-art approaches using Drosophila melanogaster and Arabidopsis thaliana data. In an A. thaliana developmental gene regulatory network, GRACE recovers cell cycle related regulatory mechanisms and further hypothesizes several novel regulatory links, including a putative control mechanism of vascular structure formation due to modifications in cell proliferation.
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault-tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems) or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
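As a rough illustration of the decomposition idea (not the SHARPE closed-form machinery itself), the Python sketch below solves two hypothetical, independent subsystem models separately and combines their marginal state probabilities, including a combined failure event that arises from non-failure states of both subsystems. All states and rates are assumed for illustration:

```python
import numpy as np
from scipy.linalg import expm

def state_probs(Q, p0, t):
    """Transient state probabilities of one subsystem's Markov model."""
    return p0 @ expm(Q * t)

# Two hypothetical autonomous subsystems, each with states
# 0 = healthy, 1 = degraded, 2 = failed (absorbing):
def make_Q(lam, mu):
    return np.array([[-lam,        lam,  0.0],
                     [  mu, -(mu+lam),   lam],
                     [ 0.0,        0.0,  0.0]])

t = 100.0
pA = state_probs(make_Q(1e-3, 1.0), np.array([1.0, 0.0, 0.0]), t)
pB = state_probs(make_Q(2e-3, 0.5), np.array([1.0, 0.0, 0.0]), t)

# Because the subsystems are independent, combined-model probabilities are
# products of marginals; there is no need to solve the 9-state product chain.
# Total failure = A failed, or B failed, or both simultaneously degraded
# (a combined failure state arising from non-failure subsystem states):
p_fail = pA[2] + pB[2] - pA[2] * pB[2] + pA[1] * pB[1]
print(f"P(combined system failure at t={t} h) ~= {p_fail:.3e}")
```

The product form is exactly what structural decomposition buys: each subsystem is solved in its own small state space, and only the combination step references both.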
Guillomía San Bartolomé, Miguel A.; Falcó Boudet, Jorge L.; Artigas Maestre, José Ignacio; Sánchez Agustín, Ana
2017-01-01
Framed within a long-term cooperation between university and special education teachers, training in alternative communication skills and home control was realized using the “TICO” interface, a communication panel editor extensively used in special education schools. From a technological view we follow AAL technology trends by integrating a successful interface in a heterogeneous services AAL platform, focusing on a functional view. Educationally, a very flexible interface in line with communication training allows dynamic adjustment of complexity, enhanced by an accessible mindset and virtual elements significance already in use, offers specific interaction feedback, adapts to the evolving needs and capacities and improves the personal autonomy and self-confidence of children at school and home. TICO-home-control was installed during the last school year in the library of a special education school to study adaptations and training strategies to enhance the autonomy opportunities of its pupils. The methodology involved a case study and structured and semi-structured observations. Five children, considered unable to use commercial home control systems were trained obtaining good results in enabling them to use an open home control system. Moreover this AAL platform has proved efficient in training children in previous cognitive steps like virtual representation and cause-effect interaction. PMID:29023383
MARTI: man-machine animation real-time interface
NASA Astrophysics Data System (ADS)
Jones, Christian M.; Dlay, Satnam S.
1997-05-01
The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.
Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar
2016-05-01
Physical rehabilitation supported by the computer-assisted-interface is gaining popularity among health-care fraternity. In this paper, we have proposed a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. Leap motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) have been used to classify gesture sequence performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly while applied on isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
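The sequence-classification step described above can be sketched with a plain forward algorithm: score a discrete observation sequence under each candidate gesture HMM and pick the most likely model. The parameters below are randomly generated stand-ins, not the paper's trained models:

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under one HMM
    (forward algorithm in log space for numerical stability)."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # alpha_j(t) = logsum_i [alpha_i(t-1) + log A[i,j]] + log B[j, o_t]
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

# Two toy 2-state HMMs over a 3-symbol gesture alphabet (parameters assumed):
rng = np.random.default_rng(0)
def random_hmm(n_states=2, n_symbols=3):
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # rows sum to 1
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emission probs
    return np.log(pi), np.log(A), np.log(B)

models = {"grasp": random_hmm(), "release": random_hmm()}
sequence = [0, 1, 1, 2, 0]  # an observed gesture-symbol sequence

# Classify by maximum likelihood over the candidate gesture models:
scores = {name: log_forward(sequence, *m) for name, m in models.items()}
best = max(scores, key=scores.get)
print(f"classified as: {best}")
```

In the paper's pipeline the discrete symbols would come from quantized Leap Motion features; here they are arbitrary integers.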
Solvent immersion imprint lithography: A high-performance, semi-automated procedure
Liyu, D. A.; Canul, A. J.; Vasdekis, A. E.
2017-01-01
We expand upon our recent, fundamental report on solvent immersion imprint lithography (SIIL) and describe a semi-automated and high-performance procedure for prototyping polymer microfluidics and optofluidics. The SIIL procedure minimizes manual intervention through a cost-effective (∼$200) and easy-to-assemble apparatus. We analyze the procedure's performance specifically for Poly (methyl methacrylate) microsystems and report repeatable polymer imprinting, bonding, and 3D functionalization in less than 5 min, down to 8 μm resolutions and 1:1 aspect ratios. In comparison to commercial approaches, the modified SIIL procedure enables substantial cost reductions, a 100-fold reduction in imprinting force requirements, as well as a more than 10-fold increase in bonding strength. We attribute these advantages to the directed polymer dissolution that strictly localizes at the polymer-solvent interface, as uniquely offered by SIIL. The described procedure opens new desktop prototyping opportunities, particularly for non-expert users performing live-cell imaging, flow-through catalysis, and on-chip gas detection. PMID:28798847
A mass and momentum conserving unsplit semi-Lagrangian framework for simulating multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owkes, Mark, E-mail: mark.owkes@montana.edu; Desjardins, Olivier
In this work, we present a computational methodology for convection and advection that handles discontinuities with second order accuracy and maintains conservation to machine precision. This method can transport a variety of discontinuous quantities and is used in the context of an incompressible gas–liquid flow to transport the phase interface, momentum, and scalars. The proposed method provides a modification to the three-dimensional, unsplit, second-order semi-Lagrangian flux method of Owkes & Desjardins (JCP, 2014). The modification adds a refined grid that provides consistent fluxes of mass and momentum defined on a staggered grid and discrete conservation of mass and momentum, even for flows with large density ratios. Additionally, the refined grid doubles the resolution of the interface without significantly increasing the computational cost over previous non-conservative schemes. This is possible due to a novel partitioning of the semi-Lagrangian fluxes into a small number of simplices. The proposed scheme is tested using canonical verification tests, rising bubbles, and an atomizing liquid jet.
NASA Astrophysics Data System (ADS)
Boumenou, C. Kameni; Urgessa, Z. N.; Djiokap, S. R. Tankio; Botha, J. R.; Nel, J.
2018-04-01
In this study, cross-sectional surface potential imaging of n+/semi-insulating GaAs junctions is investigated using amplitude-mode Kelvin probe force microscopy. The measurements have shown two different potential profiles, related to the difference in surface potential between the semi-insulating (SI) substrate and the epilayers. It is shown that the contact potential difference (CPD) between the tip and the sample is higher on the semi-insulating substrate side than on the n-type epilayer side. This change in CPD across the interface has been explained by means of energy band diagrams indicating the relative Fermi level positions. In addition, it has also been found that the CPD values across the interface are much smaller than the calculated values (on average about 25% of the theoretical values) and increase with the electron density. Therefore, the results presented in this study are only in qualitative agreement with the theory.
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average, on all performance measures, than all the comparison algorithms. PMID:25045325
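The model-averaging step at the heart of such an algorithm can be sketched in a few lines: weight each candidate model's prediction by its posterior probability given the data. All numbers below are hypothetical, and a uniform model prior is assumed:

```python
import numpy as np

# Each candidate model m_k supplies P(target=1 | instance, m_k) and has a
# marginal likelihood P(data | m_k); the averaged prediction is the mixture
# weighted by the posterior model probabilities.
model_predictions = np.array([0.9, 0.7, 0.2])        # hypothetical per-model predictions
log_marginal_liks = np.array([-10.2, -11.0, -14.5])  # hypothetical log P(data | m_k)

# Posterior model weights under a uniform model prior
# (subtract the max before exponentiating for numerical stability):
w = np.exp(log_marginal_liks - log_marginal_liks.max())
w /= w.sum()

p_averaged = float(w @ model_predictions)
print(f"model-averaged P(target=1 | instance) ~= {p_averaged:.3f}")
```

The instance-specific part of ISMB lies in *which* models enter this average: the heuristic search selects Markov blanket models suited to the instance at hand, which this sketch does not attempt to reproduce.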
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, Satyesh Kumar; Shao, S.; Chen, Youxing
Here, using a newly developed embedded-atom-method potential for Mg–Nb, the semi-coherent Mg/Nb interface with the Kurdjumov–Sachs orientation relationship is studied. Atomistic simulations have been carried out to understand the shear strength of the interface, as well as the interaction between lattice glide dislocations and the interface. The interface shear mechanisms are dependent on the shear loading directions, through either interface sliding between Mg and Nb atomic layers or nucleation and gliding of Shockley partial dislocations in between the first two atomic planes in Mg at the interface. The shear strength for the Mg/Nb interface is found to be generally high, in the range of 0.9–1.3 GPa depending on the shear direction. As a consequence, the extents of dislocation core spread into the interface are considerably small, especially when compared to the case of other “weak” interfaces such as the Cu/Nb interface.
NASA Astrophysics Data System (ADS)
Tsia, J. M.; Ling, C. C.; Beling, C. D.; Fung, S.
2002-09-01
A ±100 V square wave applied to a Au/semi-insulating (SI) GaAs interface was used to bring about electron emission from and capture into deep level defects in the region adjacent to the interface. The electric field transient resulting from deep level emission was studied by monitoring the positron drift velocity in the region. A deep level transient spectrum was obtained by computing the trap emission rate as a function of temperature, and two peaks corresponding to EL2 (Ea = 0.81 ± 0.15 eV) and EL6 (Ea = 0.30 ± 0.12 eV) have been identified.
Zero-state Markov switching count-data models: an empirical assessment.
Malyshkina, Nataliya V; Mannering, Fred L
2010-01-01
In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state), whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
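A minimal simulation makes the contrast with zero-inflation concrete: in the switching model a single segment moves between the zero state and a normal-count state over time, rather than being permanently assigned to one. All parameters below are assumed for illustration (the paper's estimated model uses negative binomial counts; a Poisson is substituted here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters for a two-state Markov switching count model:
# state 0 = "zero-accident" state, state 1 = normal-count (Poisson) state.
P = np.array([[0.9, 0.1],     # state transition probabilities
              [0.3, 0.7]])
lam = 2.5                     # Poisson mean in the normal-count state

# Simulate one roadway segment over T observation periods; unlike a
# zero-inflated model, the segment switches between states over time.
T, state = 1000, 0
counts, states = [], []
for _ in range(T):
    states.append(state)
    counts.append(rng.poisson(lam) if state == 1 else 0)
    state = rng.choice(2, p=P[state])

counts = np.array(counts)
print(f"mean count = {counts.mean():.3f}, fraction of zeros = {(counts == 0).mean():.3f}")
```

The long-run fraction of time in each state follows from the chain's stationary distribution (here 0.75 in the zero state and 0.25 in the count state), which is exactly the quantity the direct state estimation in the paper exploits.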
Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian
2014-12-01
We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well-known fact that Markov models generate sequences with correlation functions that decay exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order) statistics, treating the DNA sequence as homogeneous, all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as by altering the CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These results show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure that is not suitable for modeling by a Markov chain over a homogeneous sequence. Other results include the use of the (absolute value) second-largest eigenvalue to represent the 16 correlation functions, and the prediction of a 10-11 base periodicity from the hexamer frequencies.
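The well-known fact the abstract starts from, that a homogeneous Markov chain yields exponentially decaying correlations, can be sketched directly: the decay rate per base is governed by the second-largest eigenvalue (in absolute value) of the transition matrix. The transition probabilities below are hypothetical, not fitted to any genome:

```python
import numpy as np

# Hypothetical first-order transition matrix over the alphabet A, C, G, T
# (each row sums to 1):
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.1, 0.1, 0.2, 0.6]])

# Eigenvalue magnitudes, sorted in decreasing order; the largest is always 1
# for a stochastic matrix, and the second-largest sets the correlation decay:
eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lambda2 = eig[1]

# For a homogeneous chain, base-base correlations at distance d behave like
# C(d) ~ const * lambda2**d for large d.
decay_per_step = lambda2
print(f"predicted exponential decay rate per base: {decay_per_step:.4f}")
```

The abstract's point is that this prediction fails for real bacterial genomes: the observed decay is far slower than any such `lambda2**d`, because the sequence mixes heterogeneous codon-position statistics rather than being homogeneous.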
Bakkum, Douglas J.; Gamblen, Philip M.; Ben-Ary, Guy; Chao, Zenas C.; Potter, Steve M.
2007-01-01
Here, we and others describe an unusual neurorobotic project, a merging of art and science called MEART, the semi-living artist. We built a pneumatically actuated robotic arm to create drawings, as controlled by a living network of neurons from rat cortex grown on a multi-electrode array (MEA). Such embodied cultured networks formed a real-time closed-loop system which could now behave and receive electrical stimulation as feedback on its behavior. We used MEART and simulated embodiments, or animats, to study the network mechanisms that produce adaptive, goal-directed behavior. This approach to neural interfacing will help instruct the design of other hybrid neural-robotic systems we call hybrots. The interfacing technologies and algorithms developed have potential applications in responsive deep brain stimulation systems and for motor prosthetics using sensory components. In a broader context, MEART educates the public about neuroscience, neural interfaces, and robotics. It has paved the way for critical discussions on the future of bio-art and of biotechnology. PMID:18958276
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
Fast Solvers for Moving Material Interfaces
2008-01-01
interface method—with the semi-Lagrangian contouring method developed in References [16–20]. We are now finalizing portable C/C++ codes for fast adaptive ... stepping scheme couples a CIR predictor with a trapezoidal corrector using the velocity evaluated from the CIR approximation. It combines the ... formula with efficient geometric algorithms and fast accurate contouring techniques. A modular adaptive implementation with fast new geometry modules
Stability of a chemically active floating disk
NASA Astrophysics Data System (ADS)
Vandadi, Vahid; Jafari Kang, Saeed; Rothstein, Jonathan; Masoud, Hassan
2017-11-01
We theoretically study the translational stability of a chemically active disk located at a flat liquid-gas interface. The initially immobile circular disk uniformly releases an interface-active agent that locally changes the surface tension and is insoluble in the bulk. If left unperturbed, the stationary disk remains motionless as the agent is discharged. Neglecting the inertial effects, we numerically test whether a perturbation in the translational velocity of the disk can lead to its spontaneous and self-sustained motion. Such a perturbation gives rise to an asymmetric distribution of the released factor that could trigger and sustain the Marangoni propulsion of the disk. An implicit Fourier-Chebyshev spectral method is employed to solve the advection-diffusion equation for the concentration of the active agent. The solution, given a linear equation of state for the surface tension, provides the shear stress distribution at the interface. This and the no-slip condition on the wetted surface of the disk are then used at each time step to semi-analytically determine the Stokes flow in the semi-infinite liquid layer. Overall, the findings of our investigation pave the way for pinpointing the conditions under which interface-bound active particles become dynamically unstable.
NASA Astrophysics Data System (ADS)
Ling, C. C.; Shek, Y. F.; Huang, A. P.; Fung, S.; Beling, C. D.
1999-02-01
Positron-lifetime spectroscopy has been used to investigate the electric-field distribution occurring at the Au-semi-insulating GaAs interface. Positrons implanted from a 22Na source and drifted back to the interface are detected through their characteristic lifetime at interface traps. The relative intensity of this fraction of interface-trapped positrons reveals that the field strength in the depletion region saturates at applied biases above 50 V, an observation that cannot be reconciled with a simple depletion-approximation model. The data are, however, shown to be fully consistent with recent direct electric-field measurements and the theoretical model proposed by McGregor et al. [J. Appl. Phys. 75, 7910 (1994)] of an enhanced EL2+ electron-capture cross section above a critical electric field, which causes a dramatic reduction of the depletion region's net charge density. Two theoretically derived electric-field profiles, together with an experimentally based profile, are used to estimate a positron mobility of ~95+/-35 cm2 V-1 s-1 under the saturation field. This value is higher than previous experiments would suggest, and reasons for this effect are discussed.
Characterization of the Body-to-Body Propagation Channel for Subjects during Sports Activities.
Mohamed, Marshed; Cheffena, Michael; Moldsvor, Arild
2018-02-18
Body-to-body wireless networks (BBWNs) have great potential for applications in team sports activities, among others. However, successful design of such systems requires a thorough understanding of the communication channel, as the movement of body parts causes time-varying shadowing and fading effects. In this study, we present results of a measurement campaign of BBWNs during running and cycling activities. Among other findings, the results indicated the presence of good and bad states, with each state following a specific distribution for the considered propagation scenarios. This motivated the development of a two-state semi-Markov model for simulating the communication channels. The simulation model was validated against the available measurement data in terms of first- and second-order statistics and showed good agreement. The first-order statistics obtained from the simulation model, as well as the measured results, were then used to analyze the performance of the BBWN channels under running and cycling activities in terms of capacity and outage probability. Cycling channels showed better performance than running channels, having higher channel capacity and lower outage probability regardless of the speed of the subjects involved in the measurement campaign.
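A minimal sketch of the two-state semi-Markov channel idea follows, assuming lognormal sojourn times and Gaussian power fluctuations in each state; all levels, spreads, and dwell parameters are invented placeholders, not the fitted values from the measurement campaign.

```python
import random

# Two-state (good/bad) semi-Markov channel simulator. A semi-Markov model
# allows arbitrary (here lognormal) sojourn-time distributions per state,
# unlike a plain Markov chain whose sojourn times are geometric/exponential.
random.seed(0)

def simulate_channel(n_samples, fs=100.0):
    """Return received-power samples (dB) from a two-state semi-Markov model."""
    gains = []
    state = "good"
    while len(gains) < n_samples:
        if state == "good":
            dwell = random.lognormvariate(0.5, 0.4)   # sojourn time, seconds
            level, spread = -60.0, 2.0                # mean power / sigma, dB
            state = "bad"
        else:
            dwell = random.lognormvariate(-0.5, 0.6)
            level, spread = -75.0, 5.0
            state = "good"
        n = max(1, int(dwell * fs))
        gains.extend(level + random.gauss(0, spread) for _ in range(n))
    return gains[:n_samples]

samples = simulate_channel(10_000)
```

First- and second-order statistics (level distribution, level-crossing rate) of such simulated traces are what one would compare against measurements to validate the model, as the abstract describes.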
Predicting the High Redshift Galaxy Population for JWST
NASA Astrophysics Data System (ADS)
Flynn, Zoey; Benson, Andrew
2017-01-01
The James Webb Space Telescope will be launched in October 2018 with the goal of observing galaxies in the redshift range z = 10-15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects at redshift z > 9 but nothing at redshift z > 11. In order to detect these faint objects at redshifts z = 11-15, exposure time would need to be increased by at least a factor of 1.66.
An intelligent agent for optimal river-reservoir system management
NASA Astrophysics Data System (ADS)
Rieker, Jeffrey D.; Labadie, John W.
2012-09-01
A generalized software package is presented for developing an intelligent agent for stochastic optimization of complex river-reservoir system management and operations. Reinforcement learning is an approach to artificial intelligence for developing a decision-making agent that learns the best operational policies without the need for explicit probabilistic models of hydrologic system behavior. The agent learns these strategies experientially in a Markov decision process through observational interaction with the environment and simulation of the river-reservoir system using well-calibrated models. The graphical user interface for the reinforcement learning process controller includes numerous learning method options and dynamic displays for visualizing the adaptive behavior of the agent. As a case study, the generalized reinforcement learning software is applied to developing an intelligent agent for optimal management of water stored in the Truckee river-reservoir system of California and Nevada for the purpose of streamflow augmentation for water quality enhancement. The intelligent agent successfully learns long-term reservoir operational policies that specifically focus on mitigating water temperature extremes during persistent drought periods that jeopardize the survival of threatened and endangered fish species.
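The reinforcement-learning loop described above can be sketched as tabular Q-learning on a toy Markov decision process; the three-level "reservoir" dynamics and rewards below are entirely illustrative assumptions, not the Truckee river-reservoir model.

```python
import random

# Tabular Q-learning on a toy reservoir MDP: 3 storage levels (low/medium/
# high) and 2 release actions (small/large). The agent learns release
# policies from simulated experience, without an explicit probability model.
random.seed(1)
states = [0, 1, 2]          # low / medium / high storage
actions = [0, 1]            # small / large release
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in states for a in actions}

def step(s, a):
    # Toy dynamics: a large release lowers storage, random inflow raises it.
    s2 = max(0, min(2, s - a + random.choice([0, 1])))
    # Toy reward: penalize extremes (empty/flood), favor medium storage.
    r = 1.0 if s2 == 1 else -1.0
    return s2, r

s = 1
for _ in range(20_000):
    if random.random() < eps:                      # epsilon-greedy exploration
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2
```

The learned Q-table prefers the small release when storage is low and the large release when storage is high, which is the kind of operational policy the abstract's agent learns at far larger scale.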
Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu
2015-05-01
Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interfaces (BCIs). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in the time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the temporal features extracted by SCS-NMF can also be combined with other features in the frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments against ICA, PCA, semi-NMF, wavelets, EMD and CSP further verified the effectiveness of SCS-NMF. The SCS-NMF method obtained better or competitive performance over state-of-the-art methods, providing a novel solution for brain pattern analysis from the perspective of structure constraints.
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
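The core performability computation can be sketched as follows: states with non-exponential (here Weibull) holding times, a reward rate per state, and Monte Carlo estimation of the expected reward accumulated over a mission. The states, rates, and distributions below are invented for illustration, not the measured multiprocessor data.

```python
import random

# Semi-Markov performability sketch: Weibull holding times (shape != 1
# makes the process semi-Markov rather than a continuous-time Markov
# chain) and a reward rate attached to each state.
random.seed(0)

STATES = {
    "normal":   {"rate": 1.0, "shape": 1.5, "next": "degraded"},
    "degraded": {"rate": 0.5, "shape": 0.8, "next": "recovery"},
    "recovery": {"rate": 0.0, "shape": 1.2, "next": "normal"},
}

def accumulated_reward(mission_time=100.0):
    """One Monte Carlo realization of reward accumulated over a mission."""
    t, reward, s = 0.0, 0.0, "normal"
    while t < mission_time:
        info = STATES[s]
        dwell = random.weibullvariate(1.0, info["shape"])  # scale=1, per-state shape
        dt = min(dwell, mission_time - t)
        reward += info["rate"] * dt
        t += dt
        s = info["next"]
    return reward

est = sum(accumulated_reward() for _ in range(200)) / 200
```

Averaging many realizations estimates the expected cumulative reward, i.e. the performability measure the abstract defines from service and error rates.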
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic-regression-based hidden semi-Markov model (HSMM) heart sound segmentation method, using a wider variety of independently acquired data of varying quality. First, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the assembled eight databases. The common evaluation metrics of sensitivity, specificity and accuracy, as well as the [Formula: see text] measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprising 102 306 heart sounds. An average [Formula: see text] score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The [Formula: see text] score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness.
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
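The tolerance-window evaluation mentioned above can be sketched generically: a detected boundary counts as a true positive if it lies within ±tol of an as-yet-unmatched reference boundary, and sensitivity, precision, and an F-score follow. The boundary timings below are made-up examples, not heart sound data.

```python
# Tolerance-window scoring for segmentation boundaries.
def score(reference, detected, tol=0.1):
    """Return (sensitivity, precision, f1) for detected vs reference times."""
    matched = set()
    tp = 0
    for d in sorted(detected):
        for i, r in enumerate(sorted(reference)):
            if i not in matched and abs(d - r) <= tol:
                matched.add(i)          # each reference boundary matches once
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    sens = tp / (tp + fn) if tp + fn else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return sens, prec, f1

ref = [0.0, 1.0, 2.0, 3.0]    # reference boundary times, s
det = [0.05, 1.2, 2.02, 4.0]  # detected boundary times, s
narrow = score(ref, det, tol=0.1)
wide = score(ref, det, tol=0.3)
```

Widening the tolerance window can only add matches, so the scores are non-decreasing in tol, which is the monotone behavior the abstract reports.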
Yadav, Satyesh Kumar; Shao, S.; Chen, Youxing; ...
2017-10-17
Here, using a newly developed embedded-atom-method potential for Mg–Nb, the semi-coherent Mg/Nb interface with the Kurdjumov–Sachs orientation relationship is studied. Atomistic simulations have been carried out to understand the shear strength of the interface, as well as the interaction between lattice glide dislocations and the interface. The interface shear mechanisms depend on the shear loading direction, proceeding either through interface sliding between Mg and Nb atomic layers or through nucleation and glide of Shockley partial dislocations between the first two atomic planes of Mg at the interface. The shear strength of the Mg/Nb interface is found to be generally high, in the range of 0.9–1.3 GPa depending on the shear direction. As a consequence, the extent of dislocation core spreading into the interface is considerably small, especially when compared to the case of other "weak" interfaces such as the Cu/Nb interface.
A New Approach in Force-Limited Vibration Testing of Flight Hardware
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Kern, Dennis L.
2012-01-01
The force-limited vibration test approaches discussed in NASA Handbook 7004C were developed to reduce the overtesting associated with base-shake vibration tests of aerospace hardware in which the interface responses are excited coherently. The handbook outlines several different methods of specifying the force limits. The rationale for force limiting is the disparity between the impedances of typical aerospace mounting structures and the large impedance of vibration test shakers when the interfaces are coherently excited. Among these approaches, the semi-empirical method is presently the most widely used for deriving force limits. The incoherent excitation of aerospace structures at mounting interfaces has not been accounted for in the past, and including it provides the basis for more realistic force limits for qualifying hardware in shaker testing. In this paper, current methods for defining force-limiting specifications discussed in the NASA handbook are reviewed using data from a series of acoustic and vibration tests. A new approach based on the incoherent excitation of the structural mounting interfaces, using acoustic test data, is also discussed. It is believed that the new approach provides much more realistic force limits and may further remove the conservatism inherent in shaker vibration testing that is not accounted for by the methods discussed in the NASA handbook. A discussion of using FEM/BEM analysis to obtain realistic force limits for flight hardware is provided.
Chen, Shi; Zhang, Yinhong; Lin, Shuyu; Fu, Zhiqiang
2014-02-01
The electromechanical coupling coefficient of Rayleigh-type surface acoustic waves in semi-infinite piezoelectric/non-piezoelectric superlattices is investigated by the transfer matrix method. Research results show that a high electromechanical coupling coefficient can be obtained in these systems, and its design optimization is discussed in full. The coefficient is significantly influenced by the electrical boundary conditions on the interfaces, the thickness ratio of the piezoelectric and non-piezoelectric layers, and material parameters (such as the velocities of pure longitudinal and transversal bulk waves in the non-piezoelectric layers). In order to obtain a higher electromechanical coupling coefficient, shorted interfaces, non-piezoelectric materials with large velocities of longitudinal and transversal bulk waves, and proper thickness ratios should be chosen.
Linear Rayleigh-Taylor instability in an accelerated Newtonian fluid with finite width
NASA Astrophysics Data System (ADS)
Piriz, S. A.; Piriz, A. R.; Tahir, N. A.
2018-04-01
The linear theory of the Rayleigh-Taylor instability is developed for the case of a viscous fluid layer accelerated by a semi-infinite viscous fluid, considering that the top interface is a free surface. Effects of the surface tensions at both interfaces are taken into account. When viscous effects dominate over surface tensions, an interplay of two mechanisms determines opposite behaviors of the instability growth rate with the thickness of the heavy layer for an Atwood number AT=1 and for sufficiently small values of AT. In the former case, viscosity is a less effective stabilizing mechanism for the thinnest layers. However, the finite thickness of the heavy layer enhances its viscous effects, which in general prevail over the viscous effects of the semi-infinite medium.
Gate tuneable beamsplitter in ballistic graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rickhaus, Peter; Makk, Péter, E-mail: Peter.Makk@unibas.ch; Schönenberger, Christian
2015-12-21
We present a beam splitter in a suspended, ballistic, multiterminal, bilayer graphene device. By using local bottom gates, a p-n interface tilted with respect to the current direction can be formed. We show that the p-n interface acts as a semi-transparent mirror in the bipolar regime and that the reflectance and transmittance of the p-n interface can be tuned by the gate voltages. Moreover, by studying the conductance features appearing in magnetic field, we demonstrate that the position of the p-n interface can be moved by 1 μm. The beamsplitter device presented here can form the basis of electron-optic interferometers in graphene.
A coupled duration-focused architecture for real-time music-to-score alignment.
Cont, Arshia
2010-06-01
The capacity for real-time synchronization and coordination is a common ability among trained musicians performing from a music score, and one that presents an interesting challenge for machine intelligence. Compared to speech recognition, which has influenced many music information retrieval systems, music's temporal dynamics and complexity pose challenging problems for common approximations regarding the time modeling of data streams. In this paper, we propose a design for a real-time music-to-score alignment system. Given a live recording of a musician playing a music score, the system is capable of following the musician in real time within the score and decoding the tempo (or pace) of the performance. The proposed design features two coupled audio and tempo agents within a unique probabilistic inference framework that adaptively updates its parameters based on the real-time context. Online decoding is achieved through the collaboration of the coupled agents in a hidden hybrid Markov/semi-Markov framework, where the prediction feedback of one agent affects the behavior of the other. We perform evaluations for both real-time alignment and the proposed temporal model. An implementation of the presented system has been widely used in real concert situations worldwide, and readers are encouraged to access the actual system and experiment with the results.
Tumor propagation model using generalized hidden Markov model
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dustin
2017-02-01
Tumor tracking and progression analysis using medical images is a crucial task for physicians, enabling accurate and efficient treatment planning and monitoring of treatment response. Tumor progression is tracked by manual measurement of tumor growth performed by radiologists. Several methods have been proposed to automate these measurements with segmentation, but many current algorithms are confounded by attached organs and vessels. To address this problem, we present a new generalized tumor propagation model for tumor tracking that considers time-series prior images and local anatomical features using a hierarchical hidden Markov model (HMM). First, we apply the multi-atlas segmentation technique to identify organs/sub-organs using pre-labeled atlases. Second, we apply a semi-automatic direct 3D segmentation method to label the initial boundary between the lesion and neighboring structures. Third, we detect vessels in the ROI surrounding the lesion. Finally, we apply the propagation model with the labeled organs and vessels to accurately segment and measure the target lesion. The algorithm has been designed in a general way to be applicable to various body parts and modalities. In this paper, we evaluate the proposed algorithm on lung and lung nodule segmentation and tracking. We report the algorithm's performance by comparing the longest diameter and nodule volumes using the FDA lung phantom data and a clinical dataset.
Online Statistical Modeling (Regression Analysis) for Independent Responses
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus
2017-06-01
Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to handle various types of data and complex relationships. Rich varieties of advanced and recent statistical modelling approaches are available mostly in open source software (one of them being R). However, these advanced statistical modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny) so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical modelling approaches in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including modelling using computer-intensive statistics (bootstrap and Markov Chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare in order to find the most appropriate model for the data.
Haase, Anton; Soltwisch, Victor; Braun, Stefan; Laubis, Christian; Scholze, Frank
2017-06-26
We investigate the influence of the Mo-layer thickness on the EUV reflectance of Mo/Si mirrors with a set of unpolished and interface-polished Mo/Si/C multilayer mirrors. The Mo-layer thickness is varied in the range from 1.7 nm to 3.05 nm. We use a novel combination of specular and diffuse intensity measurements to determine the interface roughness throughout the multilayer stack, rather than relying on scanning probe measurements at the surface only. The combination of EUV and X-ray reflectivity measurements and near-normal-incidence EUV diffuse scattering allows us to reconstruct the Mo layer thicknesses and to determine the interface roughness power spectral density. The data analysis is conducted by applying a matrix method for the specular reflection and the distorted-wave Born approximation for diffuse scattering. We introduce the Markov-chain Monte Carlo method into the field in order to determine the respective confidence intervals for all reconstructed parameters. We unambiguously detect a threshold thickness for Mo in both sample sets at which the specular reflectance goes through a local minimum correlated with a distinct increase in diffuse scatter. We attribute this to the known amorphous-to-crystalline transition at a certain thickness threshold, which is altered in our sample system by the polishing.
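The MCMC confidence-interval idea can be sketched with a minimal Metropolis-Hastings sampler; the toy Gaussian likelihood below stands in for the actual scattering model, and all data values and parameters are invented for illustration.

```python
import math, random

# Metropolis-Hastings random walk: draw posterior samples for a single
# parameter mu, then read a 95% credible interval from the sample quantiles.
random.seed(0)
data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2]
sigma = 0.2  # assumed known measurement noise

def log_post(mu):
    # Flat prior, Gaussian likelihood (up to an additive constant).
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

samples, mu = [], 0.0
for _ in range(20_000):
    prop = mu + random.gauss(0, 0.1)               # symmetric proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                                   # accept
    samples.append(mu)

burned = sorted(samples[5_000:])                    # discard burn-in
lo = burned[int(0.025 * len(burned))]
hi = burned[int(0.975 * len(burned))]
```

In the paper's setting, the same machinery runs over the full set of layer-thickness and roughness parameters, with the reflectometry forward model supplying the likelihood.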
NASA Astrophysics Data System (ADS)
Ahdika, Atina; Lusiyana, Novyan
2017-02-01
The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One of the efforts that can be made by both the government and residents is prevention. In statistics, there are several methods for predicting the number of DHF cases to be used as a reference for prevention. In this paper, a discrete time series model (specifically, the INAR(1)-Poisson model) and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The results show that the MPM is the better model since it has the smaller values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
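The model-selection criterion used above, comparing MAE and MAPE across models, can be sketched directly; the case counts and the two hypothetical forecasts below are invented for illustration, not the West Java data.

```python
# MAE and MAPE: the two error measures used to pick the better forecaster.
def mae(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    # Percentage error; assumes no zero counts in `actual`.
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, pred)) / len(actual)

actual        = [120, 95, 140, 110]   # hypothetical monthly DHF case counts
forecast_mpm  = [115, 100, 130, 105]  # hypothetical Markov-prediction output
forecast_inar = [100, 120, 160, 90]   # hypothetical INAR(1)-Poisson output
```

With these made-up numbers the Markov forecast has the smaller MAE and MAPE, mirroring the comparison logic (though not the actual values) in the abstract.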
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pigozzi, Giancarlo; Janczak-Rusch, Jolanta; Passerone, Daniele
2012-10-29
Nano-sized Ag-Cu(8 nm)/AlN(10 nm) multilayers were deposited by reactive DC sputtering on α-Al2O3(0001) substrates. Investigation of the phase constitution and interface structure of the multilayers shows a phase separation of the alloy sublayers into nanosized grains of Ag and Cu. The interfaces between the Ag grains and the quasi-single-crystalline AlN sublayers are semi-coherent, whereas the corresponding Cu/AlN interfaces are incoherent. The orientation relationship between Ag and AlN is constant throughout the entire multilayer stack. These observations are consistent with atomistic models of the interfaces obtained by ab initio calculations.
Can discrete event simulation be of use in modelling major depression?
Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard
2006-01-01
Background: Depression is among the major contributors to worldwide disease burden, and adequate modelling requires a framework designed to depict real-world disease progression, as well as its economic implications, as closely as possible. Objectives: In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. Methods: We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. Results: The major drawback of Markov models is that they may not be suitable for tracking patients' disease history properly unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model; to do so would likewise require defining multiple health states, rendering the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitudes towards treatment, together with any disease-related events (adverse events, suicide attempts etc.). Conclusion: DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes.
PMID:17147790
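A skeletal discrete event simulation of the kind the abstract argues for can be sketched with a priority queue of timed events, where each patient's accumulated history modifies future event rates, exactly the property that is awkward to encode in a memoryless Markov model. All clinical numbers below are placeholders.

```python
import heapq, random

# DES skeleton: entities (patients) carry attributes; events fire in time
# order from a priority queue; event rates depend on patient history.
random.seed(42)

def simulate_patient(horizon=10.0):
    """Count depressive episodes for one simulated patient over `horizon` years."""
    events = [(random.expovariate(0.5), "episode")]   # (time, event type)
    history = {"age": 40, "prior_episodes": 0}        # patient-level attributes
    episodes = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "episode":
            episodes += 1
            history["prior_episodes"] += 1
            # Relapse rate grows with episode count: history-dependent dynamics
            # that a memoryless Markov state cannot carry without state explosion.
            rate = 0.5 * (1 + 0.2 * history["prior_episodes"])
            heapq.heappush(events, (t + random.expovariate(rate), "episode"))
    return episodes

counts = [simulate_patient() for _ in range(500)]
```

A Markov formulation of the same dynamics would need one health state per (episode count, attribute combination), which is the state-explosion problem the Results section describes.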
Novel semi-dry electrodes for brain-computer interface applications
NASA Astrophysics Data System (ADS)
Wang, Fei; Li, Guangli; Chen, Jingjing; Duan, Yanwen; Zhang, Dan
2016-08-01
Objectives. Modern applications of brain-computer interfaces (BCIs) based on electroencephalography rely heavily on the so-called wet electrodes (e.g. Ag/AgCl electrodes) which require gel application and skin preparation to operate properly. Recently, alternative ‘dry’ electrodes have been developed to increase ease of use, but they often suffer from higher electrode-skin impedance and signal instability. In the current paper, we have proposed a novel porous ceramic-based ‘semi-dry’ electrode. The key feature of the semi-dry electrodes is that their tips can slowly and continuously release a tiny amount of electrolyte liquid to the scalp, which provides an ionic conducting path for detecting neural signals. Approach. The performance of the proposed electrode was evaluated by simultaneous recording of wet and semi-dry electrode pairs in five classical BCI paradigms: eyes open/closed, the motor imagery BCI, the P300 speller, the N200 speller and the steady-state visually evoked potential-based BCI. Main results. The grand-averaged temporal cross-correlation was 0.95 ± 0.07 across the subjects and the nine recording positions, and these cross-correlations were stable throughout the whole experimental protocol. In the spectral domain, the semi-dry/wet coherence was greater than 0.80 at all frequencies and greater than 0.90 at frequencies above 10 Hz, with the exception of a dip around 50 Hz (i.e. the powerline noise). More importantly, the BCI classification accuracies were also comparable between the two types of electrodes. Significance. Overall, these results indicate that the proposed semi-dry electrode can effectively capture the electrophysiological responses and is a feasible alternative to the conventional dry electrode in BCI applications.
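The zero-lag temporal cross-correlation metric quoted above can be sketched in a few lines; the synthetic "wet" and "semi-dry" signals below share a common 10 Hz component plus independent noise, with all parameters assumed for illustration rather than taken from the study.

```python
import math, random

# Two synthetic recordings of the same underlying neural signal, each with
# its own additive sensor noise; their Pearson correlation at zero lag is
# the agreement measure reported in the abstract.
random.seed(0)
fs = 250                      # assumed sampling rate, Hz
n = 10 * fs                   # 10 s of data
source = [math.sin(2 * math.pi * 10 * k / fs) for k in range(n)]
wet = [s + random.gauss(0, 0.1) for s in source]
semidry = [s + random.gauss(0, 0.1) for s in source]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

r = pearson(wet, semidry)
```

With a shared signal of variance 0.5 and independent noise of variance 0.01, the expected correlation is about 0.98, which is why two electrodes picking up the same cortical activity can correlate at the 0.95 level the study reports.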
2008-03-01
operator, can be operated autonomously or remotely, can be expendable or recoverable, and can carry a lethal or nonlethal payload. Ballistic or semi... states that vehicles should be recoverable, and that ballistic or semi-ballistic vehicles, cruise missiles, and artillery projectiles are not considered... 2007-2032. 32 Nikola Tesla and his telautomatons (robots); Tesla further demonstrated remote control of objects by wireless in an exhibition in 1898
Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello
2018-04-22
A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which is the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates' distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to a fixed-covariates paradigm. The hidden Markov regression models with random covariates class is defined focusing on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
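The central identity discussed above, that the Markov parameters h_0 = D, h_k = C A^(k-1) B of a discrete-time state-space model are exactly its unit pulse response samples, can be checked directly. The system matrices below are a hypothetical example, not from the paper.

```python
import numpy as np

# Hypothetical discrete-time state-space model (illustrative only)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def markov_parameters(A, B, C, D, n):
    """h_0 = D, h_k = C A^(k-1) B for k >= 1."""
    params = [D]
    Ak = np.eye(A.shape[0])
    for _ in range(1, n):
        params.append(C @ Ak @ B)
        Ak = A @ Ak
    return [p.item() for p in params]

def pulse_response(A, B, C, D, n):
    """Simulate x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k)
    for a unit pulse input u = [1, 0, 0, ...]."""
    x = np.zeros((A.shape[0], 1))
    ys = []
    for k in range(n):
        u = 1.0 if k == 0 else 0.0
        ys.append((C @ x + D * u).item())
        x = A @ x + B * u
    return ys

hp = markov_parameters(A, B, C, D, 6)
yp = pulse_response(A, B, C, D, 6)
```

The two sequences agree term by term, which is why sampled pulse-response data can be read directly as Markov parameters for realization algorithms.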
Markov chain decision model for urinary incontinence procedures.
Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha
2017-03-13
Purpose Urinary incontinence (UI) is a common chronic health condition, a problem specifically among elderly women that impacts quality of life negatively. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or even managed appropriately. Many treatments are available to manage incontinence, such as bladder training, as well as numerous surgical procedures such as Burch colposuspension and Sling for UI, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI based on two measures: number of quality adjusted life years (QALY) and cost per QALY. Treeage Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than a certain threshold value, at which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach to study the comparative effectiveness of available treatments for patients with UI, an important public health issue, widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
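The kind of Markov cohort trace such an analysis rests on is easy to sketch without TreeAge. The transition probabilities and utilities below are hypothetical placeholders, not the paper's data; the sketch also reproduces the threshold behaviour the Findings describe: raising the utility of persistent incontinence to the level of continence erases the difference.

```python
import numpy as np

# Hypothetical yearly transition matrices over the states
# (continent, incontinent, dead) -- illustrative numbers only.
def cohort_qalys(P, utilities, cycles=20):
    """Markov cohort trace: everyone starts 'continent' after surgery;
    accumulate utility-weighted person-years (undiscounted QALYs)."""
    dist = np.array([1.0, 0.0, 0.0])
    qalys = 0.0
    for _ in range(cycles):
        dist = dist @ P
        qalys += dist @ utilities
    return qalys

P_sling = np.array([[0.90, 0.08, 0.02],
                    [0.05, 0.93, 0.02],
                    [0.00, 0.00, 1.00]])
P_burch = np.array([[0.85, 0.13, 0.02],
                    [0.04, 0.94, 0.02],
                    [0.00, 0.00, 1.00]])
utilities = np.array([0.95, 0.80, 0.0])  # hypothetical state utilities

q_sling = cohort_qalys(P_sling, utilities)
q_burch = cohort_qalys(P_burch, utilities)
```

With these illustrative inputs the Sling matrix keeps a larger fraction of the cohort in the high-utility state, so it accumulates more QALYs; setting the incontinent utility equal to the continent one makes the two strategies tie, mirroring the threshold-utility finding.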
Radiation hardening of metal-oxide semi-conductor (MOS) devices by boron
NASA Technical Reports Server (NTRS)
Danchenko, V.
1974-01-01
Technique using boron effectively protects metal-oxide semiconductor devices from ionizing radiation without using shielding materials. Boron is introduced into insulating gate oxide layer at semiconductor-insulator interface.
Deployment of Shaped Charges by a Semi-Autonomous Ground Vehicle
2007-06-01
lives on a daily basis. BigFoot seeks to replace the local human component by deploying and remotely detonating shaped charges to destroy IEDs...robotic arm to deploy and remotely detonate shaped charges. BigFoot incorporates improved communication range over previous Autonomous Ground Vehicles...and an updated user interface that includes controls for the arm and camera by interfacing multiple microprocessors. BigFoot is capable of avoiding
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults by Kalman filter banks and reconstruct the signal by a real-time on-board adaptive model combining a simplified real-time model with an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform is designed and verified on the semi-physical simulation test platform. The result shows that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
2007-09-01
In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
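The scaling exponents H(q) used above to compare empirical and simulated series are commonly estimated from the growth of the q-th absolute moments of increments, E|x(t+τ) − x(t)|^q ∝ τ^{qH(q)}. A minimal estimator of this kind (a generic sketch, not the authors' estimation code) can be validated on simulated Brownian motion, for which H(q) = 0.5:

```python
import numpy as np

def generalized_hurst(x, q, taus):
    """Estimate H(q) from the scaling E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))
    by regressing log-moments on log-lags."""
    logm = [np.log(np.mean(np.abs(x[tau:] - x[:-tau]) ** q)) for tau in taus]
    slope = np.polyfit(np.log(taus), logm, 1)[0]
    return slope / q

rng = np.random.default_rng(42)
x = np.cumsum(rng.standard_normal(100_000))  # Brownian motion: H(q) = 0.5
taus = [1, 2, 4, 8, 16, 32]
H1 = generalized_hurst(x, 1, taus)
H2 = generalized_hurst(x, 2, taus)
```

For a multifractal series the estimates would vary with q; the near-constancy of H(1) and H(2) here is the monofractal signature of plain Brownian motion.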
Tropical geometry of statistical models.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.
RSAT: regulatory sequence analysis tools.
Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques
2008-07-01
The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundreds of researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.
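The word-based discovery tools rest on background models of the kind listed above. As a rough sketch of the idea (not RSAT's actual implementation), a first-order Markov background can be trained from dinucleotide counts and used to score the expected probability of an oligonucleotide; the training "genome" below is a toy string:

```python
from collections import Counter, defaultdict

def train_markov1(seq):
    """First-order Markov background: P(b | a) estimated from
    dinucleotide counts in a training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: n / total for b, n in row.items()}
    return probs

def word_prob(word, probs, init):
    """P(word) under the Markov-1 background; init is the marginal
    distribution of the first base."""
    p = init.get(word[0], 0.0)
    for a, b in zip(word, word[1:]):
        p *= probs.get(a, {}).get(b, 0.0)
    return p

genome = "ACGTACGTACGTAAAAACGT" * 50  # toy training sequence
bg = train_markov1(genome)
freq = Counter(genome)
init = {b: n / len(genome) for b, n in freq.items()}
p = word_prob("ACGT", bg, init)
```

Comparing a word's observed count against its expectation under such a background is what turns raw frequencies into over-representation statistics.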
Extracting duration information in a picture category decoding task using hidden Markov Models
NASA Astrophysics Data System (ADS)
Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg
2016-04-01
Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without requiring additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
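The duration-from-Viterbi idea can be made concrete with a toy HMM: decode the most likely state path, then read durations off as the run lengths of that path. The two-state model and observation sequence below are illustrative inventions, not the paper's EEG setup.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for observation sequence `obs` under an
    HMM (pi: initial probs, A: transitions, B: emission probs)."""
    n_states = len(pi)
    T = len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    psi = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)      # scores[i, j]: i -> j
        psi[t] = np.argmax(scores, axis=0)       # best predecessor of j
        delta = scores[psi[t], range(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(psi[t, path[-1]]))
    path.reverse()
    return path

def run_lengths(path):
    """Durations: lengths of constant runs in the Viterbi path."""
    runs, cur = [], 1
    for a, b in zip(path, path[1:]):
        if a == b:
            cur += 1
        else:
            runs.append(cur)
            cur = 1
    runs.append(cur)
    return runs

# Toy sticky 2-state HMM (illustrative parameters only)
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
obs = [0, 0, 0, 1, 1, 1, 1, 0, 0]
path = viterbi(obs, pi, A, B)
```

Here the path tracks the observations, and `run_lengths(path)` recovers how long each underlying state persisted, which is the kind of side information the paper extracts as picture duration.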
Krzywiecki, Maciej; Grządziel, Lucyna; Powroźnik, Paulina; Kwoka, Monika; Rechmann, Julian; Erbe, Andreas
2018-06-13
Reduced tin dioxide/copper phthalocyanine (SnOx/CuPc) heterojunctions recently gained much attention in hybrid electronics due to their defect structure, allowing tuning of the electronic properties at the interface towards particular needs. In this work, we focus on the creation and analysis of the interface between the oxide and organic layer. The inorganic/organic heterojunction was created by depositing CuPc on SnOx layers prepared with the rheotaxial growth and vacuum oxidation (RGVO) method. Exploiting surface-sensitive photoelectron spectroscopy techniques, angle-dependent X-ray and UV photoelectron spectroscopy (ADXPS and UPS, respectively), supported by semi-empirical simulations, the role of carbon from adventitious organic adsorbates directly at the SnOx/CuPc interface was investigated. The adventitious organic adsorbates were blocking electronic interactions between the environment and surface, hence pinning energy levels. A significant interface dipole of 0.4 eV was detected, compensating for the difference in work functions of the materials in contact, though without full alignment of the energy levels. From the ADXPS and UPS results, a detailed diagram of the interfacial electronic structure was constructed, giving insight into how to tailor SnOx/CuPc heterojunctions towards specific applications. On the one hand, parasitic surface contamination could be utilized in technology for passivation-like processes. On the other hand, if one needs to keep the oxide's surface interactions fully accessible, as in the case of stacked electronic systems or gas sensor applications, carbon contamination must be carefully avoided at each processing step.
Global dynamics of a stochastic neuronal oscillator
NASA Astrophysics Data System (ADS)
Yamanobe, Takanobu
2013-11-01
Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.
Controlling interface oxygen for forming Ag ohmic contact to semi-polar (1 1 -2 2) plane p-type GaN
NASA Astrophysics Data System (ADS)
Park, Jae-Seong; Han, Jaecheon; Seong, Tae-Yeon
2014-11-01
Low-resistance Ag ohmic contacts to semi-polar (1 1 -2 2) p-GaN were developed by controlling the interfacial oxide using a Zn layer. The 300 °C-annealed Zn/Ag samples showed ohmic behavior with a contact resistivity of 6.0 × 10⁻⁴ Ω cm², better than that of Ag-only contacts (1.0 × 10⁻³ Ω cm²). The X-ray photoemission spectroscopy (XPS) results showed that annealing caused the indiffusion of oxygen at the contact/GaN interface, resulting in the formation of different types of interfacial oxides, viz. Ga-oxide and Ga-doped ZnO. Based on the XPS and electrical results, the possible mechanisms underlying the improved electrical properties of the Zn/Ag samples are discussed.
Discontinuous model with semi analytical sheath interface for radio frequency plasma
NASA Astrophysics Data System (ADS)
Miyashita, Masaru
2016-09-01
Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source driven by an interior antenna. The plasma source is expected to deliver high density and low metal contamination. However, sputtering of the antenna cover by high-energy ions accelerated through the sheath voltage remains problematic. We have developed a new model that can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous: the electron fluid equations in the plasma are connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution based on the calculated sheath voltage waveform, a sputtering yield model, and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the tendency of the experimental results.
Nonlinear Markov Control Processes and Games
2012-11-15
the analysis of a new class of stochastic games, nonlinear Markov games, as they arise as a (competitive) controlled version of nonlinear Markov... competitive interests) a nonlinear Markov game that we are investigating. ...nonlinear Markov game, nonlinear Markov... corresponding stochastic game Γ+(T, h). In a slightly different setting one can assume that changes in a competitive control process occur as a
Zipf exponent of trajectory distribution in the hidden Markov model
NASA Astrophysics Data System (ADS)
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step of generalization of the previously obtained full classification of the asymptotic behavior of the probability for Markov chain trajectories for the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
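The object under study, the rank-ordered frequency list of trajectories, can be generated numerically for a toy chain: enumerate all trajectories of a fixed length, compute their probabilities, and sort. The two-state matrix below is an illustrative invention.

```python
import itertools

def trajectory_probs(pi, P, length):
    """Probability of every state trajectory of a given length
    under a finite Markov chain; returned sorted by rank."""
    n = len(pi)
    probs = []
    for traj in itertools.product(range(n), repeat=length):
        p = pi[traj[0]]
        for a, b in zip(traj, traj[1:]):
            p *= P[a][b]
        probs.append(p)
    return sorted(probs, reverse=True)

# Toy two-state chain (illustrative parameters only)
pi = [0.5, 0.5]
P = [[0.7, 0.3],
     [0.4, 0.6]]
ranked = trajectory_probs(pi, P, 10)  # 2**10 trajectories, ranked
```

Plotting log-probability against log-rank of `ranked` exposes the (approximately power-law) Zipf behaviour whose exponent the paper classifies; for hidden Markov models the same list is built over emitted symbol sequences instead of state paths.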
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
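A one-dimensional toy version of this construction (a sketch under simplifying assumptions, not the paper's general formulation) takes a uniform prior on [-1, 1] as the reference density and expands a Gaussian likelihood in Legendre polynomials. Then, because x = P_1(x) and the weighted inner product gives ∫ P_1 P_k π dx = δ_{k1}/3, the evidence is the zeroth coefficient a_0 and the posterior mean is a_1/(3 a_0), with no MCMC involved:

```python
import numpy as np
from numpy.polynomial import legendre

def likelihood(x, data_mean=0.3, sigma=0.2):
    """Toy Gaussian likelihood of a scalar parameter x (made-up numbers)."""
    return np.exp(-0.5 * ((x - data_mean) / sigma) ** 2)

# Reference density: uniform prior pi(x) = 1/2 on [-1, 1].
# Project L(x) = sum_k a_k P_k(x) via Gauss-Legendre quadrature:
# a_k = (2k+1)/2 * integral of L(x) P_k(x) dx.
nodes, weights = legendre.leggauss(64)
Lvals = likelihood(nodes)
K = 20
coeffs = np.array([
    (2 * k + 1) / 2 * np.sum(weights * Lvals * legendre.legval(nodes, np.eye(K)[k]))
    for k in range(K)
])

# Semi-analytic posterior quantities from the expansion coefficients:
evidence = coeffs[0]                      # Z = integral of L * pi = a_0
post_mean = coeffs[1] / (3 * coeffs[0])   # E[x | data] = a_1 / (3 a_0)
```

Since the likelihood peak at 0.3 sits well inside [-1, 1], the posterior mean lands essentially at 0.3 and the evidence is close to (1/2)·σ·√(2π), which the expansion recovers from two coefficients.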
Gender in Science and Engineering Faculties: Demographic Inertia Revisited.
Thomas, Nicole R; Poole, Daniel J; Herbers, Joan M
2015-01-01
The under-representation of women on faculties of science and engineering is ascribed in part to demographic inertia, the lag between retirement of current faculty and future hires. The assumption of demographic inertia implies that, given enough time, gender parity will be achieved. We examine that assumption via a semi-Markov model to predict the future faculty, with simulations that predict the demographic state at convergence. Our model shows that existing practices that produce gender gaps in recruitment, retention, and career progression preclude eventual gender parity. Further, we examine the sensitivity of the convergence state to current gender gaps to show that all sources of disparity across the entire faculty career must be erased to produce parity: we cannot blame demographic inertia.
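The core finding, that a persistent gap in any one rate blocks eventual parity, is visible even in a drastically simplified Markov sketch (hypothetical rates, far simpler than the paper's semi-Markov model): a fixed-size faculty loses members to gender-specific attrition and refills vacancies at a given hiring fraction.

```python
def steady_state_fraction(hire_f, attrition_f, attrition_m, years=500):
    """Toy demographic-inertia model: each year a fixed-size faculty loses
    members to attrition (rates may differ by gender) and refills the
    vacancies with new hires, a fraction `hire_f` of them women.
    Returns the long-run fraction of women (hypothetical rates)."""
    women, men = 0.3, 0.7  # current composition (hypothetical)
    for _ in range(years):
        women *= (1 - attrition_f)
        men *= (1 - attrition_m)
        vacancies = 1.0 - women - men
        women += hire_f * vacancies
        men += (1 - hire_f) * vacancies
    return women

# Equal hiring but a retention gap: parity is never reached.
biased = steady_state_fraction(hire_f=0.5, attrition_f=0.06, attrition_m=0.05)
# Erasing the retention gap restores parity in the limit.
fair = steady_state_fraction(hire_f=0.5, attrition_f=0.05, attrition_m=0.05)
```

Even with 50/50 hiring, the equilibrium fraction settles at attrition_m/(attrition_m + attrition_f) of a two-person balance, below one half whenever women leave faster; waiting longer does not help, which is the sense in which demographic inertia alone cannot be blamed.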
On the Theory of Carriers' Electrostatic Interaction near an Interface
NASA Astrophysics Data System (ADS)
Waters, Michael; Hashemi, Hossein; Kieffer, John
2015-03-01
Heterojunction interfaces are common in non-traditional photovoltaic device designs, such as those based on small molecules, polymers, and perovskites. We have examined a number of the effects of the heterojunction interface region on carrier/exciton energetics using a mixture of semi-classical and quantum electrostatic methods, ab initio methods, and statistical mechanics. Our theoretical analysis has yielded several useful relationships and numerical recipes that should be considered in device design regardless of the particular materials system. As a demonstration, we highlight these formalisms as applied to carriers and polaron pairs near a C60/subphthalocyanine interface. On the regularly ordered areas of the heterojunction, the effect of the interface is a significant set of corrections to the carrier energies, which in turn directly affects device performance.
Camproux, A C; Tufféry, P
2005-08-05
Understanding and predicting protein structures depend on the complexity and the accuracy of the models used to represent them. We have recently set up a Hidden Markov Model to optimally compress protein three-dimensional conformations into a one-dimensional series of letters of a structural alphabet. Such a model learns simultaneously the shape of representative structural letters describing the local conformation and the logic of their connections, i.e. the transition matrix between the letters. Here, we move one step further and report some evidence that such a model of protein local architecture also captures some accurate amino acid features. All the letters have specific and distinct amino acid distributions. Moreover, we show that words of amino acids can have significant propensities for some letters. Perspectives point towards the prediction of the series of letters describing the structure of a protein from its amino acid sequence.
Nomura, Kouji; Nakaji-Hirabayashi, Tadashi; Gemmei-Ide, Makoto; Kitano, Hiromi; Noguchi, Hidenori; Uosaki, Kohei
2014-09-01
Surfaces of both a cover glass and the flat plane of a semi-cylindrical quartz prism were modified with a mixture of positively and negatively charged silane coupling reagents (3-aminopropyltriethoxysilane (APTES) and 3-(trihydroxysilyl)propylmethylphosphonate (THPMP), respectively). The glass surface modified with a self-assembled monolayer (SAM) prepared at a mixing ratio of APTES:THPMP = 4:6 was electrically almost neutral and was resistant to non-specific adsorption of proteins, whereas fibroblasts gradually adhered to an amphoteric (mixed) SAM surface, probably due to its stiffness, though the number of adhered cells was relatively small. Sum frequency generation (SFG) spectra indicated that the total intensity of the OH stretching region (3000-3600 cm⁻¹) for the amphoteric SAM-modified quartz immersed in liquid water was smaller than those for the positively and negatively charged SAM-modified quartz prisms and a bare quartz prism in contact with liquid water. These results suggested that water molecules at the interface of water and an amphoteric SAM-modified quartz prism are not strongly oriented in comparison with those at the interface of a lopsidedly charged SAM-modified quartz prism and bare quartz. The importance of charge neutralization for the anti-biofouling properties of solid materials was strongly suggested. Copyright © 2014 Elsevier B.V. All rights reserved.
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2012-03-01
This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information into 3D footage.
Low-frequency surface waves on semi-bounded magnetized quantum plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Afshin, E-mail: a.moradi@kut.ac.ir
2016-08-15
The propagation of low-frequency electrostatic surface waves on the interface between a vacuum and an electron-ion quantum plasma is studied in the direction perpendicular to an external static magnetic field which is parallel to the interface. A new dispersion equation is derived by employing both the quantum magnetohydrodynamic and Poisson equations. It is shown that the dispersion equations for forward and backward-going surface waves are different from each other.
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
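The first of these methods, the manifest Markov model, is simple enough to sketch directly: its maximum-likelihood transition matrix is just the row-normalized count of observed stage-to-stage moves. The stage sequences below are hypothetical toy data.

```python
from collections import Counter

def estimate_transitions(sequences, n_stages):
    """Manifest Markov model: MLE of the stage-to-stage transition
    matrix from observed developmental sequences (row-normalized counts)."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    P = [[0.0] * n_stages for _ in range(n_stages)]
    for i in range(n_stages):
        total = sum(counts[(i, j)] for j in range(n_stages))
        if total:
            P[i] = [counts[(i, j)] / total for j in range(n_stages)]
    return P

# Hypothetical stage sequences for a 3-stage developmental process
data = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 2], [0, 1, 2, 2, 2]]
P = estimate_transitions(data, 3)
```

The latent variants mentioned afterwards replace these directly observed stages with unobserved classes, so the counting step above is replaced by an EM-style estimation.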
NASA Technical Reports Server (NTRS)
Solloway, C. B.; Wakeland, W.
1976-01-01
First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
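A minimal sketch of such a tool's core, a first-order Markov projection with the kind of input error checking the description mentions, might look as follows; the two-class population and matrix are hypothetical, not the original program's model.

```python
def validate_matrix(P, tol=1e-9):
    """Input checks in the spirit of the tool's error-checking:
    square matrix, non-negative entries, rows summing to 1."""
    n = len(P)
    for row in P:
        if len(row) != n:
            raise ValueError("transition matrix must be square")
        if any(p < 0 for p in row):
            raise ValueError("probabilities must be non-negative")
        if abs(sum(row) - 1.0) > tol:
            raise ValueError("each row must sum to 1")

def project(pop, P, steps):
    """First-order Markov projection: pop(t+1) = pop(t) @ P."""
    validate_matrix(P)
    for _ in range(steps):
        pop = [sum(pop[i] * P[i][j] for i in range(len(P)))
               for j in range(len(P))]
    return pop

# Hypothetical two-class population (e.g. juveniles/adults)
P = [[0.6, 0.4],
     [0.2, 0.8]]
pop = project([100.0, 0.0], P, 50)
```

After enough steps the composition approaches the chain's stationary distribution (here 1:2), while the validation step rejects malformed input up front rather than producing silently wrong projections.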
Le Kim, Trang Huyen; Jun, Hwiseok; Nam, Yoon Sung
2017-10-01
Polymer emulsifiers solidified at the interface between oil and water can provide exceptional dispersion stability to emulsions due to the formation of unique semi-solid interphase. Our recent works showed that the structural stability of paraffin-in-water emulsions highly depends on the oil wettability of hydrophobic block of methoxy poly(ethylene glycol)-block-poly(ε-caprolactone) (mPEG-b-PCL). Here we investigate the effects of the crystallinity of hydrophobic block of triblock copolymer-based emulsifiers, PCLL-b-PEG-b-PCLL, on the colloidal properties of silicone oil-in-water nanoemulsions. The increased ratio of l-lactide to ε-caprolactone decreases the crystallinity of the hydrophobic block, which in turn reduces the droplet size of silicone oil nanoemulsions due to the increased chain mobility at the interface. All of the prepared nanoemulsions are very stable for a month at 37°C. However, the exposure to repeated freeze-thaw cycles quickly destabilizes the nanoemulsions prepared using the polymer with the reduced crystallinity. This work demonstrates that the anchoring chain crystallization in the semi-solid interphase is critically important for the structural robustness of nanoemulsions under harsh physical stresses. Copyright © 2017 Elsevier Inc. All rights reserved.
High Pulsed Power, Self Excited Magnetohydrodynamic Power Generation Systems
1985-12-27
[Recovered table-of-contents fragments] MHD generator output, case G-2; Table 25: temperature in a semi-infinite copper slab exposed to gas at t = 0; Table 26: time for the gas-Cu interface to reach 2000°F, and the back-surface temperature at that time, for a semi-infinite slab of given thickness d; Table 27: convective heating of the MHD generator. [Text fragment] A DC room-temperature magnet requires too much power for operation at the 5 Tesla fields required by the explosive MHD generator.
STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning
Kappel, David; Nessler, Bernhard; Maass, Wolfgang
2014-01-01
In order to cross a street without being run over, we need to be able to extract, very quickly, the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges automatically in the presence of noise through the effects of STDP on connections between pyramidal cells in winner-take-all circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with the functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection-sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method on an artificial grammar task. PMID:24675787
Modeling of cryoseismicity observed at the Fimbulisen Ice Shelf, East Antarctica
NASA Astrophysics Data System (ADS)
Hainzl, S.; Pirli, M.; Dahm, T.; Schweitzer, J.; Köhler, A.
2017-12-01
A source region of repetitive cryoseismic activity has been identified at the Fimbulisen ice shelf in Dronning Maud Land, East Antarctica. The specific area is located at the outlet of the Jutulstraumen glacier, near the Kupol Moskovskij ice rise. A unique event catalog extending over 13 years, from 2003 to 2016, has been built based on waveform cross-correlation detectors and Hidden Markov Model classifiers. Phases of low seismicity rates alternate with intense activity intervals that exhibit a strong tidal modulation. We performed a detailed analysis and modeling of the more than 2000 events recorded between July and October 2013. The observations are characterized by a number of very clear signals: (i) the event rate follows both the neap-spring and the semi-diurnal ocean-tide cycles; (ii) recurrences have a characteristic time of approximately 8 minutes; (iii) magnitudes vary systematically on both short and long time scales; and (iv) the events migrate within short-time clusters. We use these observations to constrain the dynamic processes at work in this particular region of the Fimbulisen ice shelf. Our model assumes a local grounding of the ice shelf, where stick-slip motion occurs. We show that the observations can be reproduced by considering the modulation of the Coulomb failure stress by ocean tides.
Markov stochasticity coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: iddo.eliazar@intel.com
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. The method is also tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Concept Study on a Flexible Standard Bus for Small Scientific Satellites
NASA Astrophysics Data System (ADS)
Fukuda, Seisuke; Sawai, Shujiro; Sakai, Shin-Ichiro; Saito, Hirobumi; Tohma, Takayuki; Takahashi, Junko; Toriumi, Tsuyoshi; Kitade, Kenji
In this paper, a new standard bus system for a series of small scientific satellites at the Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA) is described. Since each mission proposed for the series has a wide variety of requirements, considerable effort is needed to enhance the flexibility of the standard bus. Several concepts from different viewpoints are proposed. First, standardization layers concerning satellite configuration, instruments, interfaces, and design methods are defined. Methods of product platform engineering, which classify the specifications of the bus system into a core platform, alternative variants, and selectable variants, are also investigated in order to realize a semi-custom-made bus. Furthermore, a tradeoff between integration and modularization architectures is fully considered.
Design and implementation of GaAs HBT circuits with ACME
NASA Technical Reports Server (NTRS)
Hutchings, Brad L.; Carter, Tony M.
1993-01-01
GaAs HBT circuits offer high performance (5-20 GHz) and radiation hardness (500 Mrad) that is attractive for space applications. ACME is a CAD tool specifically developed for HBT circuits. ACME implements a novel physical schematic-capture design technique in which designers simultaneously view the structure and physical organization of a circuit. ACME's design interface is similar to schematic capture; however, unlike conventional schematic capture, designers can directly control the physical placement of both function and interconnect at the schematic level. In addition, ACME provides design-time parasitic extraction, complex wire models, and extensions to multi-chip modules (MCMs). A GaAs HBT gate array and semi-custom circuits have been developed with ACME; several circuits have been fabricated and found to be fully functional.
An Intention-Driven Semi-autonomous Intelligent Robotic System for Drinking.
Zhang, Zhijun; Huang, Yongqian; Chen, Siyuan; Qu, Jun; Pan, Xin; Yu, Tianyou; Li, Yuanqing
2017-01-01
In this study, an intention-driven semi-autonomous intelligent robotic (ID-SIR) system is designed and developed to assist severely disabled patients in living independently. The system mainly consists of a non-invasive brain-machine interface (BMI) subsystem, a robot manipulator, and a visual detection and localization subsystem. Unlike most existing systems, which are remotely controlled by joystick, head tracking, or eye tracking, the proposed ID-SIR system acquires its commands directly from the user's brain. Compared with state-of-the-art systems that work only for a specific object in a fixed place, the designed ID-SIR system can grasp any desired object in an arbitrary place chosen by the user and deliver it to his or her mouth automatically. As one of the main advantages of the ID-SIR system, the patient is only required to send one intention command for one drinking task, and the autonomous robot finishes the remaining control tasks, which greatly eases the burden on patients. Eight healthy subjects participated in our experiment, which contained 10 tasks for each subject. In each task, the proposed ID-SIR system delivered the desired beverage container to the mouth of the subject and then put it back in its original position. The mean accuracy across the eight subjects was 97.5%, which demonstrates the effectiveness of the ID-SIR system.
NASA Astrophysics Data System (ADS)
Sun, Tianyi; Guo, Chuanfei; Kempa, Krzysztof; Ren, Zhifeng
2014-03-01
A Fabry-Perot reflection filter, consisting of semi-transparent metal and dielectric layers on an opaque metal, exhibits selective absorption determined by the phase difference of waves reflected from the two interfaces. In such systems, semi-transparency is usually realized by layers of reflective metal thinner than the penetration depth of the light. Here we present a filter cavity whose entry windows are made not of traditional thin layers, but of aperiodic metallic random nanomeshes thicker than the penetration depth, fabricated by grain-boundary lithography. We show that, due to the phase distortion caused by the interface between the random nanomesh and the dielectric layer, the width and location of the resonances can be tuned by the metallic coverage. Further experiments show that this phenomenon can be used in designing aperiodic plasmonic metamaterial structures for visible and infrared applications.
NASA Astrophysics Data System (ADS)
Wang, Xu; Schiavone, Peter
2018-02-01
We consider an Eshelby inclusion of arbitrary shape with uniform anti-plane eigenstrains embedded in one of two bonded dissimilar anisotropic half planes containing a semi-infinite interface crack situated along the negative real axis. Using two consecutive conformal mappings, the upper and lower halves of the physical plane are first mapped onto two separate quarters of the image plane. The corresponding boundary value problem is then analyzed in this image plane rather than in the original physical plane. Corresponding analytic functions in all three phases of the composite are derived via the construction of an auxiliary function and repeated application of analytic continuation across the real and imaginary axes in the image plane. As a result, the local stress intensity factor is then obtained explicitly. Perhaps most interestingly, we find that the satisfaction of a particular condition makes the inclusion (stress) invisible to the crack.
Reprint of "Theoretical description of metal/oxide interfacial properties: The case of MgO/Ag(001)"
NASA Astrophysics Data System (ADS)
Prada, Stefano; Giordano, Livia; Pacchioni, Gianfranco; Goniakowski, Jacek
2017-02-01
We compare the performance of different DFT functionals applied to ultra-thin MgO(100) films supported on the Ag(100) surface, a prototypical weakly interacting oxide/metal interface that has been extensively studied in the past. Beyond the semi-local DFT-GGA approximation, we also use the hybrid DFT-HSE approach to improve the description of the oxide electronic structure. Moreover, to better account for the interfacial adhesion, we include van der Waals interactions by means of either the semi-empirical force fields of Grimme (DFT-D2 and DFT-D2*) or the self-consistent density functional optB88-vdW. We compare and discuss results on the structural, electronic, and adhesion characteristics of the interface as obtained for pristine and oxygen-deficient Ag-supported MgO films in the 1-4 ML thickness range.
Theoretical description of metal/oxide interfacial properties: The case of MgO/Ag(001)
NASA Astrophysics Data System (ADS)
Prada, Stefano; Giordano, Livia; Pacchioni, Gianfranco; Goniakowski, Jacek
2016-12-01
We compare the performance of different DFT functionals applied to ultra-thin MgO(100) films supported on the Ag(100) surface, a prototypical weakly interacting oxide/metal interface that has been extensively studied in the past. Beyond the semi-local DFT-GGA approximation, we also use the hybrid DFT-HSE approach to improve the description of the oxide electronic structure. Moreover, to better account for the interfacial adhesion, we include van der Waals interactions by means of either the semi-empirical force fields of Grimme (DFT-D2 and DFT-D2*) or the self-consistent density functional optB88-vdW. We compare and discuss results on the structural, electronic, and adhesion characteristics of the interface as obtained for pristine and oxygen-deficient Ag-supported MgO films in the 1-4 ML thickness range.
Conformational heterogeneity of the calmodulin binding interface
NASA Astrophysics Data System (ADS)
Shukla, Diwakar; Peck, Ariana; Pande, Vijay S.
2016-04-01
Calmodulin (CaM) is a ubiquitous Ca2+ sensor and a crucial signalling hub in many pathways aberrantly activated in disease. However, the mechanistic basis of its ability to bind diverse signalling molecules including G-protein-coupled receptors, ion channels and kinases remains poorly understood. Here we harness the high resolution of molecular dynamics simulations and the analytical power of Markov state models to dissect the molecular underpinnings of CaM binding diversity. Our computational model indicates that in the absence of Ca2+, sub-states in the folded ensemble of CaM's C-terminal domain present chemically and sterically distinct topologies that may facilitate conformational selection. Furthermore, we find that local unfolding is off-pathway for the exchange process relevant for peptide binding, in contrast to prior hypotheses that unfolding might account for binding diversity. Finally, our model predicts a novel binding interface that is well-populated in the Ca2+-bound regime and, thus, a candidate for pharmacological intervention.
Quasi-2D Liquid State at Metal-Organic Interface and Adsorption State Manipulation
NASA Astrophysics Data System (ADS)
Mehdizadeh, Masih
The metal/organic interface between noble-metal close-packed (111) surfaces and organic semiconducting molecules is studied using scanning tunneling microscopy and photoelectron spectroscopy, supplemented by first-principles density functional theory calculations and Markov chain Monte Carlo simulations. Copper phthalocyanine molecules were shown to have dual adsorption states: a liquid state, in which intermolecular interactions were shown to be repulsive in nature and largely due to entropic effects, and a disordered immobilized state, triggered by annealing above a certain temperature or by applying a tip-sample bias above a certain voltage, in which intermolecular forces were demonstrated to be attractive. A methodology for altering molecular orientation on the aforementioned surfaces is also proposed through the introduction of a fullerene C60 buffer layer. Density functional theory calculations demonstrate orientation switching of copper phthalocyanine molecules based on the amount of charge transferred between the substrate and the C60-CuPc layers, suggesting the existence of critical substrate work functions that cause reorientation.
NASA Technical Reports Server (NTRS)
Graves, M. E.; Perlmutter, M.
1974-01-01
To aid the planning of the Apollo Soyuz Test Program (ASTP), certain natural environment statistical relationships are presented, based on Markov theory and empirical counts. The practical results are in terms of conditional probability of favorable and unfavorable launch conditions at Kennedy Space Center (KSC). They are based upon 15 years of recorded weather data which are analyzed under a set of natural environmental launch constraints. Three specific forecasting problems were treated: (1) the length of record of past weather which is useful to a prediction; (2) the effect of persistence in runs of favorable and unfavorable conditions; and (3) the forecasting of future weather in probabilistic terms.
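The persistence and forecasting questions treated in this study rest on conditional transition probabilities. A toy two-state chain illustrates both ideas; the transition probabilities below are invented for illustration and are not taken from the KSC weather record:

```python
# Two-state weather chain: state 0 = favorable, 1 = unfavorable launch conditions.
# Transition probabilities are illustrative, not derived from KSC data.
P = [[0.8, 0.2],
     [0.4, 0.6]]

def step(dist):
    """Advance the state distribution by one day: dist' = dist @ P."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

def forecast(dist, days):
    """Probabilistic forecast `days` ahead from an initial distribution."""
    for _ in range(days):
        dist = step(dist)
    return dist

def run_probability(state, length):
    """Persistence: probability that `state` holds for `length` consecutive
    days, given that it holds on day 1."""
    return P[state][state] ** (length - 1)

print(forecast([1.0, 0.0], 3))  # 3-day-ahead forecast, starting favorable
print(run_probability(0, 4))    # chance of a 4-day favorable run: 0.8**3
```

Because the chain is first-order, only the most recent day matters to the forecast; the study's question of how much past record is useful amounts to testing whether a higher-order model beats this one.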
Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng
2012-01-01
Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes, OSTS/BSTS (One Service a Time Scheme/Bulk Service a Time Scheme), are proposed in this paper to improve the behavior of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model which combines two modified semi-Markov chains and a macro-Markov chain with the theory of M/G/1/K queues to evaluate the characteristics of these two improved CSMA/CA schemes, in which traffic arrivals and accessing packets are given non-preemptive priority over each other, instead of prioritization. Then, the throughput, packet delay, and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted from an overall point of view which takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are also presented. Analysis and simulation results show that the delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match well with the simulation results. PMID:22666076
Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love
2014-01-01
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances, including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, the limited availability of user-friendly programs that perform ABC analysis makes it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities that make it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface, and the implementation of Markov chain Monte Carlo without likelihoods.
Jensen, Lotte Groth; Bossen, Claus
2016-03-01
It remains a continual challenge to present information in the user interfaces of large IT systems in a way that best supports overview. Here we examine how an electronic health record (EHR) supports the creation of overview among hospital physicians, with a particular focus on the use of an interface designed to provide clinicians with a patient-information overview. The overview interface flexibly integrates information from diverse places in the EHR and presents it in one screen display. Our study revealed widespread non-use of the overview interface, and we explore the reasons for its use and non-use. We conducted exploratory ethnographic fieldwork among physicians in two hospitals and gathered statistical data on their use of the overview interface. From the quantitative data, we identified where the interface was used most and conducted 18 semi-structured, open-ended interviews framed by the theoretical framework and the findings of the initial ethnographic fieldwork. We interviewed both physicians and employees from the IT units of different hospitals. We then analysed notes from the ethnographic fieldwork and the interviews and ordered these into themes forming the basis for the presentation of findings. The overview interface was used most in departments or situations where the problem at hand and the need for information could be standardized; in particular, in anesthesiological departments and outpatient clinics. However, departments with complex and long patient histories did not make much use of the overview interface. Design and layout were not mentioned as decisive factors affecting its use or non-use. Many physicians questioned the completeness of the data in the overview interface, either because they were skeptical about the hospital's or the department's documentation practices, or because they could not recognize the structure of the interface. This uncertainty discouraged physicians from using the overview interface.
Dedicating a specific function or interface to supporting overview works best where information needs can be standardized. The narrative and contextual nature of creating clinical overview is unlikely to be optimally supported by using the overview interface alone. The use of these kinds of interfaces requires trust in data completeness and other clinicians' and administrative staff's documentation practices, as well as an understanding of the underlying structure of the EHR and how information is filtered when data are aggregated for the interface. Copyright © 2015. Published by Elsevier Ireland Ltd.
A compositional framework for Markov processes
NASA Astrophysics Data System (ADS)
Baez, John C.; Fong, Brendan; Pollard, Blake S.
2016-03-01
We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
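As a concrete anchor for the steady-state behavior discussed above, the smallest possible case, a detailed balanced two-state continuous-time Markov chain, can be solved in closed form. This sketch is only an illustration of the underlying probabilistic object, not of the categorical framework itself, and the rates are made up:

```python
def two_state_steady_state(k01, k10):
    """Steady state of a two-state continuous-time Markov chain with rate
    k01 (state 0 -> 1) and k10 (state 1 -> 0): the unique distribution pi
    with pi Q = 0, normalized to sum to 1. Rates here are illustrative."""
    total = k01 + k10
    return (k10 / total, k01 / total)

pi = two_state_steady_state(2.0, 3.0)
print(pi)  # (0.6, 0.4); detailed balance holds: pi[0]*k01 == pi[1]*k10
```

Detailed balance is what makes the minimum-dissipation principle, and hence the resistor-circuit analogy, available for such processes.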
NASA Astrophysics Data System (ADS)
Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy; Klimeck, Gerhard
2014-03-01
Semi-empirical tight binding (TB) is known to be a scalable and accurate atomistic representation for electron transport in realistically extended nano-scale semiconductor devices that may contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem, which links closed electron paths in the system to energy moments of the angular-momentum-resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face-centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.
Kim, Ju-Myung; Park, Jang-Hoon; Lee, Chang Kee; Lee, Sang-Young
2014-04-08
As a promising power source for the coming era of ubiquitous electronics, high-energy-density lithium-ion batteries with reliable electrochemical properties are urgently needed. The development of advanced lithium-ion batteries, however, is hampered by thorny problems of performance deterioration and safety failures. This formidable challenge is closely tied to electrochemical/thermal instability at the electrode material-liquid electrolyte interface, in addition to structural/chemical deficiencies of major cell components. Herein, as a new surface-engineering concept to address the aforementioned interfacial issue, a multifunctional conformal nanoencapsulating layer based on a semi-interpenetrating polymer network (semi-IPN) is presented. This unusual semi-IPN nanoencapsulating layer is composed of thermally cured polyimide (PI) and polyvinyl pyrrolidone (PVP) bearing Lewis basic sites. Owing to the combined effects of its morphological uniqueness and chemical functionality (scavenging hydrofluoric acid, which poses a critical threat by triggering unwanted side reactions), the PI/PVP semi-IPN nanoencapsulated cathode materials enable significant improvements in the electrochemical performance and thermal stability of lithium-ion batteries.
NASA Astrophysics Data System (ADS)
Kim, Ju-Myung; Park, Jang-Hoon; Lee, Chang Kee; Lee, Sang-Young
2014-04-01
As a promising power source for the coming era of ubiquitous electronics, high-energy-density lithium-ion batteries with reliable electrochemical properties are urgently needed. The development of advanced lithium-ion batteries, however, is hampered by thorny problems of performance deterioration and safety failures. This formidable challenge is closely tied to electrochemical/thermal instability at the electrode material-liquid electrolyte interface, in addition to structural/chemical deficiencies of major cell components. Herein, as a new surface-engineering concept to address the aforementioned interfacial issue, a multifunctional conformal nanoencapsulating layer based on a semi-interpenetrating polymer network (semi-IPN) is presented. This unusual semi-IPN nanoencapsulating layer is composed of thermally cured polyimide (PI) and polyvinyl pyrrolidone (PVP) bearing Lewis basic sites. Owing to the combined effects of its morphological uniqueness and chemical functionality (scavenging hydrofluoric acid, which poses a critical threat by triggering unwanted side reactions), the PI/PVP semi-IPN nanoencapsulated cathode materials enable significant improvements in the electrochemical performance and thermal stability of lithium-ion batteries.
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
Dynamic neutron scattering from conformational dynamics. I. Theory and Markov models
NASA Astrophysics Data System (ADS)
Lindner, Benjamin; Yi, Zheng; Prinz, Jan-Hendrik; Smith, Jeremy C.; Noé, Frank
2013-11-01
The dynamics of complex molecules can be directly probed by inelastic neutron scattering experiments. However, many of the underlying dynamical processes may exist on similar timescales, which makes it difficult to assign processes seen experimentally to specific structural rearrangements. Here, we show how Markov models can be used to connect structural changes observed in molecular dynamics simulation directly to the relaxation processes probed by scattering experiments. For this, a conformational dynamics theory of dynamical neutron and X-ray scattering is developed, following our previous approach for computing dynamical fingerprints of time-correlation functions [F. Noé, S. Doose, I. Daidone, M. Löllmann, J. Chodera, M. Sauer, and J. Smith, Proc. Natl. Acad. Sci. U.S.A. 108, 4822 (2011)]. Markov modeling is used to approximate the relaxation processes and timescales of the molecule via the eigenvectors and eigenvalues of a transition matrix between conformational substates. This procedure allows the establishment of a complete set of exponential decay functions and a full decomposition into the individual contributions, i.e., the contribution of every atom and dynamical process to each experimental relaxation process.
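The central Markov-model step described above, reading relaxation timescales off the eigenvalues of a transition matrix estimated at lag time tau, can be illustrated on the smallest possible model. For a two-state chain the nontrivial eigenvalue is known in closed form; the jump probabilities and lag time below are hypothetical:

```python
import math

def implied_timescale(a, b, tau):
    """Relaxation timescale of a two-state Markov model with transition
    matrix [[1-a, a], [b, 1-b]] estimated at lag time tau. Its nontrivial
    eigenvalue is lam = 1 - a - b, and the implied timescale
    t = -tau / ln(lam) sets the exponential decay rate of the slow process
    contributing to measured correlation functions."""
    lam = 1.0 - a - b
    return -tau / math.log(lam)

# Hypothetical conformational exchange: 2% and 5% jump probability per 1 ns lag
t = implied_timescale(0.02, 0.05, tau=1.0)
print(t)  # approximately 13.8 ns
```

For larger transition matrices the same formula is applied to each eigenvalue of the diagonalized matrix, giving the full set of exponential decay functions mentioned in the abstract.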
Using hidden Markov models to align multiple sequences.
Mount, David W
2009-07-01
A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
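The multiplication of state (emission) and transition probabilities described above can be illustrated with a toy profile HMM; the three match states, symbol frequencies, and transition probabilities below are invented for illustration and follow only the all-match path:

```python
# Toy profile HMM with three match states; each state holds an emission
# (symbol frequency) distribution and a transition to the next state.
emissions = [
    {"A": 0.8, "C": 0.1, "G": 0.05, "T": 0.05},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
]
transitions = [0.9, 0.9]  # P(match_i -> match_{i+1})

def path_probability(sequence):
    """Multiply emission and transition probabilities along the
    all-match path, as the abstract describes."""
    p = emissions[0][sequence[0]]
    for i in range(1, len(sequence)):
        p *= transitions[i - 1] * emissions[i][sequence[i]]
    return p

print(path_probability("ACG"))  # the consensus sequence scores highest
```

A full HMM would also include insert and delete states and sum over all paths (the forward algorithm); this sketch shows only the core probability bookkeeping.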
User interface and patient involvement.
Andreassen, Hege Kristin; Lundvoll Nilsen, Line
2013-01-01
Increased patient involvement is a goal in contemporary health care and is of importance to the development of patient-oriented ICT. In this paper we discuss how the design of patient-user interfaces can affect patient involvement. Our discussion is based on 12 semi-structured interviews with patient users of a web-based solution for patient-doctor communication piloted in Norway. We argue that ICT solutions offering a choice of user interfaces on the patient side are preferable to ensure individual accommodation and a high degree of patient involvement. When introducing web-based tools for patient-health professional communication, a free-text option should be provided to the patient users.
Shyr, Casper; Kushniruk, Andre; van Karnebeek, Clara D M; Wasserman, Wyeth W
2016-03-01
The transition of whole-exome and whole-genome sequencing (WES/WGS) from the research setting to routine clinical practice remains challenging. With almost no previous research specifically assessing interface designs and functionalities of WES and WGS software tools, the authors set out to ascertain perspectives from healthcare professionals in distinct domains on optimal clinical genomics user interfaces. A series of semi-scripted focus groups, structured around professional challenges encountered in clinical WES and WGS, were conducted with bioinformaticians (n = 8), clinical geneticists (n = 9), genetic counselors (n = 5), and general physicians (n = 4). Contrary to popular existing system designs, bioinformaticians preferred command line over graphical user interfaces for better software compatibility and customization flexibility. Clinical geneticists and genetic counselors desired an overarching interactive graphical layout to prioritize candidate variants--a "tiered" system where only functionalities relevant to the user domain are made accessible. They favored a system capable of retrieving consistent representations of external genetic information from third-party sources. To streamline collaboration and patient exchanges, the authors identified user requirements toward an automated reporting system capable of summarizing key evidence-based clinical findings among the vast array of technical details. Successful adoption of a clinical WES/WGS system is heavily dependent on its ability to address the diverse necessities and predilections among specialists in distinct healthcare domains. Tailored software interfaces suitable for each group are likely more appropriate than the current popular "one size fits all" generic framework. This study provides interfaces for future intervention studies and software engineering opportunities. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Shyr, Casper; Kushniruk, Andre; van Karnebeek, Clara D.M.
2016-01-01
Background The transition of whole-exome and whole-genome sequencing (WES/WGS) from the research setting to routine clinical practice remains challenging. Objectives With almost no previous research specifically assessing interface designs and functionalities of WES and WGS software tools, the authors set out to ascertain perspectives from healthcare professionals in distinct domains on optimal clinical genomics user interfaces. Methods A series of semi-scripted focus groups, structured around professional challenges encountered in clinical WES and WGS, were conducted with bioinformaticians (n = 8), clinical geneticists (n = 9), genetic counselors (n = 5), and general physicians (n = 4). Results Contrary to popular existing system designs, bioinformaticians preferred command line over graphical user interfaces for better software compatibility and customization flexibility. Clinical geneticists and genetic counselors desired an overarching interactive graphical layout to prioritize candidate variants—a “tiered” system where only functionalities relevant to the user domain are made accessible. They favored a system capable of retrieving consistent representations of external genetic information from third-party sources. To streamline collaboration and patient exchanges, the authors identified user requirements toward an automated reporting system capable of summarizing key evidence-based clinical findings among the vast array of technical details. Conclusions Successful adoption of a clinical WES/WGS system is heavily dependent on its ability to address the diverse necessities and predilections among specialists in distinct healthcare domains. Tailored software interfaces suitable for each group are likely more appropriate than the current popular “one size fits all” generic framework. This study provides interfaces for future intervention studies and software engineering opportunities. PMID:26117142
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meemken, Fabian; Müller, Philipp; Hungerbühler, Konrad
Design and performance of a reactor set-up for attenuated total reflection infrared (ATR-IR) spectroscopy suitable for simultaneous reaction monitoring of the bulk liquid and catalytic solid-liquid-gas interfaces under working conditions are presented. As an advancement of in situ spectroscopy, an operando methodology for gas-liquid-solid reaction monitoring was developed that simultaneously combines catalytic activity measurement and molecular-level detection at the catalytically active site of the same sample. Semi-batch reactor conditions are achieved with the analytical set-up by implementing the ATR-IR flow-through cell in a recycle reactor system and integrating a specifically designed gas feeding system coupled with a bubble trap. By the use of only one spectrometer, the design of the new ATR-IR reactor cell allows for simultaneous detection of the bulk liquid and the catalytic interface during the working reaction. Holding two internal reflection elements (IRE), the sample compartments of the horizontally movable cell are consecutively flushed with reaction solution, and pneumatically actuated, rapid switching of the cell (<1 s) enables quasi-simultaneous monitoring of the heterogeneously catalysed reaction at the catalytic interface on a catalyst-coated IRE and in the bulk liquid on a blank IRE. For a complex heterogeneous reaction, the asymmetric hydrogenation of 2,2,2-trifluoroacetophenone on a chirally modified Pt catalyst, the elucidation of catalytic activity/enantioselectivity coupled with simultaneous monitoring of the catalytic solid-liquid-gas interface is shown. Both catalytic activity and enantioselectivity are strongly dependent on the experimental conditions. The opportunity to gain improved understanding by coupling measurements of catalytic performance and spectroscopic detection is presented. In addition, the applicability of modulation excitation spectroscopy and phase-sensitive detection is demonstrated.
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.
Ieva, Francesca; Jackson, Christopher H; Sharples, Linda D
2017-06-01
In chronic diseases like heart failure (HF), the disease course and associated clinical event histories for the patient population vary widely. To improve understanding of the prognosis of patients and enable health care providers to assess and manage resources, we wish to jointly model disease progression, mortality and their relation with patient characteristics. We show how episodes of hospitalisation for disease-related events, obtained from administrative data, can be used as a surrogate for disease status. We propose flexible multi-state models for serial hospital admissions and death in HF patients, that are able to accommodate important features of disease progression, such as multiple ordered events and competing risks. Fully parametric and semi-parametric semi-Markov models are implemented using freely available software in R. The models were applied to a dataset from the administrative data bank of the Lombardia region in Northern Italy, which included 15,298 patients who had a first hospitalisation ending in 2006 and 4 years of follow-up thereafter. This provided estimates of the associations of age and gender with rates of hospital admission and length of stay in hospital, and estimates of the expected total time spent in hospital over five years. For example, older patients and men were readmitted more frequently, though the total time in hospital was roughly constant with age. We also discuss the relative merits of parametric and semi-parametric multi-state models, and model assessment and comparison.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
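The reward-function idea can be sketched as follows: each state gets a reward rate built from its service and error rates, and the expected reward weights these by state occupancy. All numbers below (occupancies, service rates, error penalties) are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical performability sketch: steady-state occupancy probabilities
# for a normal, a degraded, and an error state, plus per-state rates.
occupancy = np.array([0.85, 0.10, 0.05])      # fraction of time in each state
service_rate = np.array([100.0, 60.0, 0.0])   # work delivered per unit time
error_penalty = np.array([0.0, 5.0, 20.0])    # cost rate of errors per state

# Reward rate per state, then expected reward as an occupancy-weighted sum.
reward_rate = service_rate - error_penalty
expected_reward = float(occupancy @ reward_rate)
print(expected_reward)
```

In the paper the occupancies come from a fitted semi-Markov model rather than being assumed, but the final expectation step has this form.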
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Towards parameter-free classification of sound effects in movies
NASA Astrophysics Data System (ADS)
Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.
2005-08-01
The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gunshots. In this work, we utilize low-level audio features including MFCC and energy to identify sound effects. It was shown in previous work that the Hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires a careful choice in designing the model and selecting correct parameters. In this work, we introduce a framework that avoids this necessity and works well with semi- and non-parametric learning algorithms.
Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.
Kapfer, Sebastian C; Krauth, Werner
2017-12-15
We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
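The TASEP dynamics mentioned above can be sketched with a minimal simulation on a ring; the lattice size, filling, and random-sequential update below are illustrative assumptions, not the paper's exact protocol:

```python
import random

def tasep_sweep(occ, rng):
    """One sweep of the totally asymmetric simple exclusion process on a
    ring: pick random sites; a particle hops right only if the target
    site is empty (hard-core exclusion)."""
    n = len(occ)
    for _ in range(n):
        i = rng.randrange(n)
        j = (i + 1) % n
        if occ[i] == 1 and occ[j] == 0:
            occ[i], occ[j] = 0, 1
    return occ

rng = random.Random(0)
occ = [1, 1, 0, 0, 1, 0, 0, 0]  # 3 particles on an 8-site ring
for _ in range(100):
    tasep_sweep(occ, rng)
print(sum(occ))  # prints 3: particle number is conserved
```

The irreversibility is visible in the update rule: hops go only to the right, so detailed balance is broken while the uniform occupation measure remains stationary.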
Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium
NASA Astrophysics Data System (ADS)
Kapfer, Sebastian C.; Krauth, Werner
2017-12-01
We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
2011-06-01
effective waypoint navigation algorithm that interfaced with a Java-based graphical user interface (GUI), written by Uzun, for a robot named Bender [2...the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into...interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the
Assessment of a User Guide for One Semi-Automated Forces (OneSAF) Version 2.0
2009-09-01
OneSAF uses a two-dimensional feature named a Plan View Display (PVD) as the primary graphical interface. The PVD replicates a map with a series...primary interface, the PVD is how the user watches the scenario unfold and requires the most interaction with the user. As seen in Table 3, all...participant indicated never using these seven map-related functions. Graphic control measures. Graphic control measures are applied to the PVD map to
Dynamics of solid nanoparticles near a liquid-liquid interface
NASA Astrophysics Data System (ADS)
Daher, Ali; Ammar, Amine; Hijazi, Abbas
2018-05-01
The liquid-liquid interface can be used as a suitable medium for generating nanostructured films of metals or inorganic materials such as semiconducting metals. This process can be controlled well if we study the dynamics of nanoparticles (NPs) at the liquid-liquid interface, a new field of study that is not yet well understood. The dynamics of NPs at liquid-liquid interfaces is investigated by solving the fluid-particle and particle-particle interactions. Our work is based on Molecular Dynamics (MD) simulation in addition to the Phase Field (PF) method. We modeled the liquid-liquid interface using the diffuse interface model, where the interface is considered to have a characteristic thickness. We have shown that the concentration gradient of one fluid in the other gives rise to a hydrodynamic force that drives the NPs to agglomerate at the interface. These results may introduce new applications in which certain interfaces can be considered suitable media for the synthesis of nanostructured materials. In addition, some liquid interfaces can play the role of effective filters for different species of biological NPs and solid-state waste NPs, which will be very important in many industrial and biomedical domains.
Open Markov Processes and Reaction Networks
ERIC Educational Resources Information Center
Swistock Pollard, Blake Stephen
2017-01-01
We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…
NASA Astrophysics Data System (ADS)
Einalipour Eshkalak, Kasra; Sadeghzadeh, Sadegh; Jalaly, Maisam
2018-02-01
From an electronic point of view, graphene resembles a metal or semi-metal, while boron nitride is a dielectric material (band gap = 5.9 eV). Hybridization of these two materials opens the band gap of graphene, which has broad applications in graphene field-effect transistors. In this paper, the effect of the interface structure on the mechanical properties of a hybrid graphene/boron nitride sheet was studied. Young's modulus, fracture strain and tensile strength of the models were simulated. Three likely types of interface (hexagonal, octagonal and decagonal) were found for the hybrid sheet after relaxation. Although C-B bonds at the interface were indicated to result in more promising electrical properties, nitrogen atoms are a better choice for bonding to carbon for mechanical applications.
High-resolution method for evolving complex interface networks
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.
ConoDictor: a tool for prediction of conopeptide superfamilies.
Koua, Dominique; Brauer, Age; Laht, Silja; Kaplinski, Lauris; Favreau, Philippe; Remm, Maido; Lisacek, Frédérique; Stöcklin, Reto
2012-07-01
ConoDictor is a tool that enables fast and accurate classification of conopeptides into superfamilies based on their amino acid sequence. ConoDictor combines predictions from two complementary approaches: profile hidden Markov models and generalized profiles. Results appear in a browser as tables that can be downloaded in various formats. This application is particularly valuable in view of the exponentially increasing number of conopeptides that are being identified. ConoDictor was written in Perl using the common gateway interface module with a php submission page. Sequence matching is performed with hmmsearch from HMMER 3 and ps_scan.pl from the pftools 2.3 package. ConoDictor is freely accessible at http://conco.ebc.ee.
Force Limited Vibration Testing
NASA Technical Reports Server (NTRS)
Scharton, Terry; Chang, Kurng Y.
2005-01-01
This slide presentation reviews the concept and applications of force limited vibration testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. Commonly used aerospace vibration tests apply artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for flight. This requires an analysis of the acceleration response and requires placing accelerometers on the test item. With the advent of piezoelectric gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and the flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof; these errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments.
Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test data. A simple two degree of freedom system is shown and the governing equations for basic force limiting results for this system are reviewed. The design and results of the shuttle vibration forces (SVF) experiments are reviewed. The Advanced Composition Explorer (ACE) also was used to validate force limiting. Test instrumentation and supporting equipment are reviewed including piezo-electric force transducers, signal processing and conditioning systems, test fixtures, and vibration controller systems. Several examples of force limited vibration testing are presented with some results.
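A minimal sketch of a semi-empirical force limit of the kind mentioned above, under the common assumption (not detailed in this abstract) that the force spectral density is proportional to the acceleration specification through the squared test-item mass, rolled off above the fundamental resonance. The constant C^2, mass, and break frequency below are hypothetical placeholders:

```python
# Hedged sketch of a semi-empirical force limit. All parameter values are
# illustrative assumptions, not data from the presentation.
M0 = 50.0    # test-item mass, kg (hypothetical)
C2 = 4.0     # empirical constant C^2 (hypothetical)
f0 = 100.0   # fundamental resonance frequency, Hz (hypothetical)

def force_limit(f_hz, accel_spec_g2_hz):
    """Force spectral density limit in N^2/Hz for a given frequency
    and acceleration specification (given in g^2/Hz)."""
    g = 9.81
    s_aa = accel_spec_g2_hz * g**2            # convert to (m/s^2)^2 / Hz
    rolloff = 1.0 if f_hz <= f0 else (f0 / f_hz) ** 2
    return C2 * M0**2 * s_aa * rolloff

print(force_limit(50.0, 0.1), force_limit(200.0, 0.1))
```

Below the break frequency the force spec simply scales the acceleration spec; above it, the limit rolls off, reflecting the reduced apparent mass of the test item at high frequency.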
Characterization of the rat exploratory behavior in the elevated plus-maze with Markov chains.
Tejada, Julián; Bosco, Geraldine G; Morato, Silvio; Roque, Antonio C
2010-11-30
The elevated plus-maze is an animal model of anxiety used to study the effect of different drugs on the behavior of the animal. It consists of a plus-shaped maze with two open and two closed arms elevated 50 cm from the floor. The standard measures used to characterize exploratory behavior in the elevated plus-maze are the time spent and the number of entries in the open arms. In this work, we use Markov chains to characterize the exploratory behavior of the rat in the elevated plus-maze under three different conditions: normal and under the effects of anxiogenic and anxiolytic drugs. The spatial structure of the elevated plus-maze is divided into squares, which are associated with states of a Markov chain. By counting the frequencies of transitions between states during 5-min sessions in the elevated plus-maze, we constructed stochastic matrices for the three conditions studied. The stochastic matrices show specific patterns, which correspond to the observed behaviors of the rat under the three different conditions. For the control group, the stochastic matrix shows a clear preference for places in the closed arms. This preference is enhanced for the anxiogenic group. For the anxiolytic group, the stochastic matrix shows a pattern similar to a random walk. Our results suggest that Markov chains can be used together with the standard measures to characterize the rat behavior in the elevated plus-maze. Copyright © 2010 Elsevier B.V. All rights reserved.
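Constructing a stochastic matrix by counting transitions between maze regions, as described, can be sketched as follows; the three-state partition and the counts below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical transition counts between coarse maze regions (states):
# 0 = closed arms, 1 = center, 2 = open arms.
counts = np.array([[80, 15,  5],
                   [20, 10, 10],
                   [ 5, 10, 15]], dtype=float)

# Row-normalize the counts to obtain the stochastic (transition) matrix.
P = counts / counts.sum(axis=1, keepdims=True)
print(P[0])  # a control-like pattern: strong tendency to stay in closed arms
```

Comparing such matrices across drug conditions is exactly the kind of pattern analysis the abstract describes; an anxiolytic-like matrix would have rows closer to uniform.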
Communication: Introducing prescribed biases in out-of-equilibrium Markov models
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-03-01
Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
From quantum heat engines to laser cooling: Floquet theory beyond the Born–Markov approximation
NASA Astrophysics Data System (ADS)
Restrepo, Sebastian; Cerrillo, Javier; Strasberg, Philipp; Schaller, Gernot
2018-05-01
We combine the formalisms of Floquet theory and full counting statistics with a Markovian embedding strategy to access the dynamics and thermodynamics of a periodically driven thermal machine beyond the conventional Born–Markov approximation. The working medium is a two-level system and we drive the tunneling as well as the coupling to one bath with the same period. We identify four different operating regimes of our machine which include a heat engine and a refrigerator. As the coupling strength with one bath is increased, the refrigerator regime disappears, the heat engine regime narrows and their efficiency and coefficient of performance decrease. Furthermore, our model can reproduce the setup of laser cooling of trapped ions in a specific parameter limit.
Foot, Natalie J; Orgeig, Sandra; Donnellan, Stephen; Bertozzi, Terry; Daniels, Christopher B
2007-07-01
Maximum-likelihood models of codon and amino acid substitution were used to analyze the lung-specific surfactant protein C (SP-C) from terrestrial, semi-aquatic, and diving mammals to identify lineages and amino acid sites under positive selection. Site models used the nonsynonymous/synonymous rate ratio (omega) as an indicator of selection pressure. Mechanistic models used physicochemical distances between amino acid substitutions to specify nonsynonymous substitution rates. Site models strongly identified positive selection at different sites in the polar N-terminal extramembrane domain of SP-C in the three diving lineages: site 2 in the cetaceans (whales and dolphins), sites 7, 9, and 10 in the pinnipeds (seals and sea lions), and sites 2, 9, and 10 in the sirenians (dugongs and manatees). The only semi-aquatic contrast to indicate positive selection at site 10 was that including the polar bear, which had the largest body mass of the semi-aquatic species. Analysis of the biophysical properties that were influential in determining the amino acid substitutions showed that isoelectric point, chemical composition of the side chain, polarity, and hydrophobicity were the crucial determinants. Amino acid substitutions at these sites may lead to stronger binding of the N-terminal domain to the surfactant phospholipid film and to increased adsorption of the protein to the air-liquid interface. Both properties are advantageous for the repeated collapse and reinflation of the lung upon diving and resurfacing and may reflect adaptations to the high hydrostatic pressures experienced during diving.
Poissonian steady states: from stationary densities to stationary intensities.
Eliazar, Iddo
2012-10-01
Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.
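For a concrete stationary density of the kind the abstract starts from, consider a simple birth-death chain; the rates and state-space size below are illustrative assumptions:

```python
import numpy as np

# Stationary density of a birth-death Markov chain on {0, ..., N} with
# constant birth rate b and death rate d, obtained via detailed balance:
# pi[k+1] / pi[k] = b / d, then normalized to sum to one.
N, b, d = 10, 1.0, 2.0
pi = np.array([(b / d) ** k for k in range(N + 1)])
pi /= pi.sum()
print(pi[0])  # mass concentrates at low states when d > b
```

An ensemble of independent copies of this chain would, in the paper's framework, attain a Poissonian steady state whose intensity is proportional to this density.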
Poissonian steady states: From stationary densities to stationary intensities
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2012-10-01
Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.
Non-uniform solute segregation at semi-coherent metal/oxide interfaces
Choudhury, Samrat; Aguiar, Jeffery A.; Fluss, Michael J.; ...
2015-08-26
The properties and performance of metal/oxide nanocomposites are governed by the structure and chemistry of the metal/oxide interfaces. Here we report an integrated theoretical and experimental study examining the role of interfacial structure, particularly misfit dislocations, on solute segregation at a metal/oxide interface. We find that the local oxygen environment, which varies significantly between the misfit dislocations and the coherent terraces, dictates the segregation tendency of solutes to the interface. Depending on the nature of the solute and local oxygen content, segregation to misfit dislocations can change from attraction to repulsion, revealing the complex interplay between chemistry and structure at metal/oxide interfaces. These findings indicate that the solute chemistry at misfit dislocations is controlled by the dislocation density and oxygen content. As a result, fundamental thermodynamic concepts – the Hume-Rothery rules and the Ellingham diagram – qualitatively predict the segregation behavior of solutes to such interfaces, providing design rules for novel interfacial chemistries.
On a Result for Finite Markov Chains
ERIC Educational Resources Information Center
Kulathinal, Sangita; Ghosh, Lagnojita
2006-01-01
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
The Markov blankets of life: autonomy, active inference and the free energy principle
Palacios, Ensor; Friston, Karl; Kiverstein, Julian
2018-01-01
This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment. PMID:29343629
NASA Astrophysics Data System (ADS)
Feng, Yefeng; Wu, Qin; Hu, Jianbing; Xu, Zhichao; Peng, Cheng; Xia, Zexu
2018-03-01
Interface induced polarization has a significant impact on the permittivity of 0–3 type polymer composites with Si based semi-conducting fillers. The polarity of the Si based filler, the polarity of the polymer matrix and the grain size of the filler are closely connected with the induced polarization and permittivity of the composites. However, unlike 2–2 type composites, the real permittivity of Si based fillers in 0–3 type composites cannot be directly measured. Obtaining the theoretical permittivity of fillers in 0–3 composites through effective medium approximation (EMA) models is therefore necessary. In this work, the real permittivity of Si based semi-conducting fillers in ten different 0–3 polymer composite systems was calculated by linear fitting of simplified EMA models, based on the particularities of reported parameters in those composites. The results further confirmed the proposed interface induced polarization, and also verified the significant influences of filler polarity, polymer polarity and filler size on the induced polarization and permittivity of the composites. High self-consistency was obtained between the present modelling and prior measurements. This work may offer a facile and effective route to obtain the difficult-to-measure dielectric properties of a discrete filler phase in some special polymer based composite systems.
Entanglement revival can occur only when the system-environment state is not a Markov state
NASA Astrophysics Data System (ADS)
Sargolzahi, Iman
2018-06-01
Markov states have been defined for tripartite quantum systems. In this paper, we generalize the definition of the Markov states to arbitrary multipartite case and find the general structure of an important subset of them, which we will call strong Markov states. In addition, we focus on an important property of the Markov states: If the initial state of the whole system-environment is a Markov state, then each localized dynamics of the whole system-environment reduces to a localized subdynamics of the system. This provides us a necessary condition for entanglement revival in an open quantum system: Entanglement revival can occur only when the system-environment state is not a Markov state. To illustrate (a part of) our results, we consider the case that the environment is modeled as classical. In this case, though the correlation between the system and the environment remains classical during the evolution, the change of the state of the system-environment, from its initial Markov state to a state which is not a Markov one, leads to the entanglement revival in the system. This shows that the non-Markovianity of a state is not equivalent to the existence of non-classical correlation in it, in general.
Zeng, Xiaohui; Li, Jianhe; Peng, Liubao; Wang, Yunhua; Tan, Chongqing; Chen, Gannong; Wan, Xiaomin; Lu, Qiong; Yi, Lidan
2014-01-01
Maintenance gefitinib significantly prolonged progression-free survival (PFS) compared with placebo in patients from eastern Asia with locally advanced/metastatic non-small-cell lung cancer (NSCLC) who had not progressed after four cycles (21 days per cycle) of standard first-line platinum-based combination chemotherapy. The objective of the current study was to evaluate the cost-effectiveness of maintenance gefitinib therapy after four cycles of standard first-line platinum-based chemotherapy for patients with locally advanced or metastatic NSCLC with unknown EGFR mutations, from a Chinese health care system perspective. A semi-Markov model was designed to evaluate the cost-effectiveness of maintenance gefitinib treatment. Two-parameter Weibull and log-logistic distributions were fitted to the PFS and overall survival curves independently. One-way and probabilistic sensitivity analyses were conducted to assess the stability of the model. The base-case analysis suggested that maintenance gefitinib would increase benefits over a 1, 3, 6 or 10-year time horizon, at an incremental cost of $184,829, $19,214, $19,328, and $21,308 per quality-adjusted life-year (QALY) gained, respectively. The most sensitive variable in the cost-effectiveness analysis was the utility of PFS plus rash, followed by the utility of PFS plus diarrhoea, the utility of progressed disease, the price of gefitinib, the cost of follow-up treatment in the progressed-survival state, and the utility of PFS on oral therapy. The price of gefitinib is the parameter with the greatest potential to reduce the incremental cost per QALY. Probabilistic sensitivity analysis indicated that the probability of maintenance gefitinib being cost-effective was zero under a willingness-to-pay (WTP) threshold of $16,349 (3 × the per-capita gross domestic product of China). The sensitivity analyses all suggested that the model was robust.
Maintenance gefitinib following first-line platinum-based chemotherapy for patients with locally advanced/metastatic NSCLC with unknown EGFR mutations is not cost-effective. Decreasing the price of gefitinib may be a preferable option for meeting wider treatment demand in China.
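The two-parameter Weibull fit used above to extrapolate survival curves can be sketched with synthetic data; the generating parameters below (shape 1.5, scale 8.0 months) and the use of `scipy.stats.weibull_min` are illustrative assumptions, not the study's fitted values:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)

# Synthetic progression-free-survival times in months; the true shape
# and scale here are illustrative, not taken from the study.
pfs_months = weibull_min.rvs(1.5, scale=8.0, size=500, random_state=rng)

# Fit a two-parameter Weibull by maximum likelihood (location fixed at 0).
shape, loc, scale = weibull_min.fit(pfs_months, floc=0)
print(shape, scale)  # estimates near the generating parameters
```

In practice the curves are fitted to digitized trial survival data rather than raw event times, but the parametric form is the same.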
Optimal design of radial Bragg cavities and lasers.
Ben-Bassat, Eyal; Scheuer, Jacob
2015-07-01
We present a new and optimal design approach for obtaining maximal confinement of the field in radial Bragg cavities and lasers for TM polarization. The presented approach substantially outperforms the previously employed periodic and semi-periodic design schemes for such lasers. We show that in order to obtain maximal confinement, it is essential to consider the complete reflection properties (amplitude and phase) of the propagating radial waves at the interfaces between Bragg layers. When these properties are taken into account, we find that it is necessary to introduce a wider ("half-wavelength") layer at a specific radius in the "quarter-wavelength" radial Bragg stack. It is shown that this radius corresponds to the cylindrical equivalent of Brewster's angle. The confinement and field profile are calculated numerically by means of the transfer matrix method.
Preparing Colorful Astronomical Images and Illustrations
NASA Astrophysics Data System (ADS)
Levay, Z. G.; Frattare, L. M.
2001-12-01
We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.
Economics of automation for the design-to-mask interface
NASA Astrophysics Data System (ADS)
Erck, Wesley
2009-04-01
Mask order automation has increased steadily over the years through a variety of individual mask customer implementations. These have been supported by customer-specific software at the mask suppliers to support the variety of customer output formats. Some customers use the SEMI P10 standard, some use supplier-specific formats, and some use customer-specific formats. Some customers use little automation and depend instead on close customer-supplier relationships. Implementations are varied in quality and effectiveness. A major factor which has prolonged the adoption of more advanced and effective solutions has been a lack of understanding of the economic benefits. Some customers think standardized automation mainly benefits the mask supplier in order entry automation, but this ignores a number of other significant benefits which differ dramatically for each party in the supply chain. This paper discusses the nature of those differing advantages and presents simple models suited to four business cases: integrated device manufacturers (IDM), fabless companies, foundries and mask suppliers. Examples and estimates of the financial advantages for these business types will be shown.
Noumbissi, Midrelle E; Galasso, Bianca; Stins, Monique F
2018-04-23
The vertebrate blood-brain barrier (BBB) is composed of cerebral microvascular endothelial cells (CEC). The BBB acts as a semi-permeable cellular interface that tightly regulates bidirectional molecular transport between blood and the brain parenchyma in order to maintain cerebral homeostasis. The CEC phenotype is regulated by a variety of factors, including cells in its immediate environment and within functional neurovascular units. The cellular composition of the brain parenchyma surrounding the CEC varies between different brain regions; this difference is clearly visible in grey versus white matter. In this review, we discuss evidence for the existence of brain vascular heterogeneity, focusing on differences between the vessels of the grey and white matter. The region-specific differences in the vasculature of the brain are reflective of specific functions of those particular brain areas. This BBB-endothelial heterogeneity may have implications for the course of pathogenesis of cerebrovascular diseases and neurological disorders involving vascular activation and dysfunction. This heterogeneity should be taken into account when developing BBB-neuro-disease models representative of specific brain areas.
Rayne, Sierra; Forest, Kaya; Friesen, Ken J
2009-08-01
A quantitative structure-activity model has been validated for estimating congener specific gas-phase hydroxyl radical reaction rates for perfluoroalkyl sulfonic acids (PFSAs), carboxylic acids (PFCAs), aldehydes (PFAls) and dihydrates, fluorotelomer olefins (FTOls), alcohols (FTOHs), aldehydes (FTAls), and acids (FTAcs), and sulfonamides (SAs), sulfonamidoethanols (SEs), and sulfonamido carboxylic acids (SAAs), and their alkylated derivatives based on calculated semi-empirical PM6 method ionization potentials. Corresponding gas-phase reaction rates with nitrate radicals and ozone have also been estimated using the computationally derived ionization potentials. Henry's law constants for these classes of perfluorinated compounds also appear to be reasonably approximated by the SPARC software program, thereby allowing estimation of wet and dry atmospheric deposition rates. Both congener specific gas-phase atmospheric and air-water interface fractionation of these compounds is expected, complicating current source apportionment perspectives and necessitating integration of such differential partitioning influences into future multimedia models. The findings will allow development and refinement of more accurate and detailed local through global scale atmospheric models for the atmospheric fate of perfluoroalkyl compounds.
Quantifying the effects of pesticide exposure on annual reproductive success of birds (presentation)
The Markov chain nest productivity model (MCnest) was developed for quantifying the effects of specific pesticide‐use scenarios on the annual reproductive success of simulated populations of birds. Each nesting attempt is divided into a series of discrete phases (e.g., egg ...
Quantifying the effects of pesticide exposure on annual reproductive success of birds
The Markov chain nest productivity model (MCnest) was developed for quantifying the effects of specific pesticide-use scenarios on the annual reproductive success of simulated populations of birds. Each nesting attempt is divided into a series of discrete phases (e.g., egg layin...
Selecting surrogate endpoints for estimating pesticide effects on avian reproductive success
A Markov chain nest productivity model (MCnest) has been developed for projecting the effects of a specific pesticide-use scenario on the annual reproductive success of avian species of concern. A critical element in MCnest is the use of surrogate endpoints, defined as measured ...
Spatial methods for deriving crop rotation history
USDA-ARS?s Scientific Manuscript database
Converting multi-year remote sensing classification data into crop rotations is beneficial by defining length of crop rotation cycles and the specific sequences of intervening crops grown between the final year of a grass seed stand and establishment of a new perennial ryegrass seed crop. Markov mod...
NASA Astrophysics Data System (ADS)
Stelian, C.; Nehari, A.; Lasloudji, I.; Lebbou, K.; Dumortier, M.; Cabane, H.; Duffar, T.
2017-10-01
Single La3Ga5.5Ta0.5O14 (LGT) crystals have been grown using the Czochralski technique with inductive heating. Some ingots exhibit imperfections such as cracks, dislocations and striations. Numerical modeling is applied to investigate the factors affecting the shape of the crystal-melt interface during the crystallization of ingots 3 cm in diameter. It was found that the conical shape of the interface depends essentially on the internal radiative exchanges in the semi-transparent LGT crystal. Numerical results are compared to experimental visualization of the growth interface, showing good agreement. The effect of the forced convection produced by crystal and crucible rotation is numerically investigated at various rotation rates. Increasing the crystal rotation rate up to 50 rpm has a significant flattening effect on the interface shape. Applying only crucible rotation enhances the downward flow underneath the crystal, leading to an increased interface curvature. Counter-rotation of the crystal and the crucible results in a distorted interface shape.
Land, K C; Guralnik, J M; Blazer, D G
1994-05-01
A fundamental limitation of current multistate life table methodology, evident in recent estimates of active life expectancy for the elderly, is the inability to estimate tables from data on small longitudinal panels in the presence of multiple covariates (such as sex, race, and socioeconomic status). This paper presents an approach to such an estimation based on an isomorphism between the structure of the stochastic model underlying a conventional specification of the increment-decrement life table and that of Markov panel regression models for simple state spaces. We argue that Markov panel regression procedures can be used to provide smoothed or graduated group-specific estimates of transition probabilities that are more stable across short age intervals than those computed directly from sample data. We then join these estimates with increment-decrement life table methods to compute group-specific total, active, and dependent life expectancy estimates. To illustrate the methods, we describe an empirical application to the estimation of such life expectancies specific to sex, race, and education (years of school completed) for a longitudinal panel of elderly persons. We find that education extends both total life expectancy and active life expectancy. Education thus may serve as a powerful social protective mechanism delaying the onset of health problems at older ages.
Modeling strategic use of human computer interfaces with novel hidden Markov models
Mariano, Laura J.; Poore, Joshua C.; Krum, David M.; Schwartz, Jana L.; Coskren, William D.; Jones, Eric M.
2015-01-01
Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. 
PMID:26191026
Analysis of 2D hyperbolic metamaterial dispersion by elementary excitation coupling
NASA Astrophysics Data System (ADS)
Vaianella, Fabio; Maes, Bjorn
2016-04-01
Hyperbolic metamaterials are examined for many applications thanks to the large density of states and extreme confinement of light they provide. For classical hyperbolic metal/dielectric multilayer structures, it was demonstrated that the properties originate from a specific coupling of the surface plasmon polaritons between the metal/dielectric interfaces. We show a similar analysis for 2D hyperbolic arrays of square (or rectangular) silver nanorods in a TiO2 host. In this case the properties derive from a specific coupling of the plasmons carried by the corners of the nanorods. The dispersion can be seen as the coupling of single rods for a through-metal connection of the corners, as the coupling of structures made of four semi-infinite metallic blocks separated by dielectric for a through-dielectric connection, or as the coupling of two semi-infinite rods for a through-metal and through-dielectric situation. For arrays of small square nanorods the elementary structure that explains the dispersion of the array is the single rod, and for arrays of large square nanorods it is four metallic corners. The medium size square nanorod case is more complicated, because the elementary structure can be one of the three basic designs, depending on the frequency and symmetry of the modes. Finally, we show that for arrays of rectangular nanorods the dispersion is explained by coupling of the two coupled rod structure. This work opens the way for a better understanding of a wide class of metamaterials via their elementary excitations.
Li, Zhilin; Xiao, Li; Cai, Qin; Zhao, Hongkai; Luo, Ray
2016-01-01
In this paper, a new Navier–Stokes solver based on a finite difference approximation is proposed to solve incompressible flows on irregular domains with open, traction, and free boundary conditions, which can be applied to simulations of fluid structure interaction, implicit solvent models for biomolecular applications, and other free boundary or interface problems. For some problems of this type, the projection method and the augmented immersed interface method (IIM) do not work well or do not work at all. The proposed new Navier–Stokes solver is based on the local pressure boundary method and a semi-implicit augmented IIM. A fast Poisson solver can be used in our algorithm, which gives us the potential for developing fast overall solvers in the future. The time discretization is based on a second order multi-step method. Numerical tests with exact solutions are presented to validate the accuracy of the method. Application to fluid structure interaction between an incompressible fluid and a compressible gas bubble is also presented. PMID:27087702
A Markovian model of evolving world input-output network
Isacchini, Giulio
2017-01-01
The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies, comparable to GDP shares of economies as the traditional index of economies' welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money. PMID:29065145
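The steady-state probabilities and the Kemeny constant used above follow directly from the eigenstructure of a transition matrix. A minimal sketch with a hypothetical 3-state matrix (the study's matrices are estimated from the world input-output database):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); purely
# illustrative, not taken from the world input-output data.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Steady-state probabilities: the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
unit = np.argmin(np.abs(vals - 1))
pi = np.real(vecs[:, unit])
pi /= pi.sum()

# Kemeny constant: sum of 1/(1 - lambda) over the non-unit eigenvalues;
# complex conjugate pairs cancel, so the result is real.
K = np.real(sum(1 / (1 - v) for i, v in enumerate(vals) if i != unit))
print(pi, K)
```

The Kemeny constant is independent of the starting state, which is what makes it usable as a single aggregate index of how quickly flow traverses the network.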
Nanofluidic interfaces in microfluidic networks
Millet, Larry J.; Doktycz, Mitchel John; Retterer, Scott T.
2015-09-24
The integration of nano- and microfluidic technologies enables the construction of tunable interfaces to physical and biological systems across relevant length scales. The ability to perform chemical manipulations of miniscule sample volumes is greatly enhanced through these technologies and extends the ability to manipulate and sample the local fluidic environments at subcellular, cellular and community or tissue scales. Here we describe the development of a flexible surface micromachining process for the creation of nanofluidic channel arrays integrated within SU-8 microfluidic networks. The use of a semi-porous, silicon rich, silicon nitride structural layer allows rapid release of the sacrificial silicon dioxide during the nanochannel fabrication. Nanochannel openings that form the interface to biological samples are customized using focused ion beam milling. The compatibility of these interfaces with on-chip microbial culture is demonstrated.
Derivation of Markov processes that violate detailed balance
NASA Astrophysics Data System (ADS)
Lee, Julian
2018-03-01
Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
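The detailed balance condition described above, pi_i P[i,j] = pi_j P[j,i] for the stationary distribution pi, can be checked numerically. A minimal sketch, with a hypothetical cyclic chain (which violates it) and a hypothetical birth-death chain (which never does) for illustration:

```python
import numpy as np

def violates_detailed_balance(P, tol=1e-9):
    """True if the stationary flux pi_i * P[i, j] is not symmetric in i, j."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    flux = pi[:, None] * P  # probability flux from state i to state j
    return not np.allclose(flux, flux.T, atol=tol)

# A deterministic 3-cycle (1 -> 2 -> 3 -> 1) carries a net circulating
# flux and violates detailed balance ...
P_cyclic = np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0]])
# ... while a birth-death chain is always reversible.
P_birth_death = np.array([[0.50, 0.50, 0.00],
                          [0.25, 0.50, 0.25],
                          [0.00, 0.50, 0.50]])
print(violates_detailed_balance(P_cyclic))       # True
print(violates_detailed_balance(P_birth_death))  # False
```

The cyclic case is exactly the kind of driven model the paper embeds in an extended, detailed-balanced model.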
Identifying and correcting non-Markov states in peptide conformational dynamics
NASA Astrophysics Data System (ADS)
Nerukh, Dmitry; Jensen, Christian H.; Glen, Robert C.
2010-02-01
Conformational transitions in proteins define their biological activity and can be investigated in detail using the Markov state model. The fundamental assumption on the transitions between the states, their Markov property, is critical in this framework. We test this assumption by analyzing the transitions obtained directly from the dynamics of a molecular dynamics simulated peptide valine-proline-alanine-leucine and states defined phenomenologically using clustering in dihedral space. We find that the transitions are Markovian at the time scale of ≈50 ps and longer. However, at the time scale of 30-40 ps the dynamics loses its Markov property. Our methodology reveals the mechanism that leads to non-Markov behavior. It also provides a way of regrouping the conformations into new states that now possess the required Markov property of their dynamics.
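A standard way to test the Markov property at a given lag, in the spirit of the analysis above, is a Chapman-Kolmogorov check: the transition matrix estimated at lag 2τ should match the square of the matrix estimated at lag τ. A minimal sketch on a synthetic two-state trajectory (the states, matrix and lags are illustrative, not the peptide data):

```python
import numpy as np

def lag_transition_matrix(states, lag, n):
    """Estimate the lag-step transition matrix from a state sequence."""
    C = np.zeros((n, n))
    for a, b in zip(states[:-lag], states[lag:]):
        C[a, b] += 1
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1)

def chapman_kolmogorov_error(states, tau, n=2):
    """Max deviation between T(2*tau) and T(tau)^2; a large value
    flags non-Markov behavior at lag tau."""
    T1 = lag_transition_matrix(states, tau, n)
    T2 = lag_transition_matrix(states, 2 * tau, n)
    return np.abs(T1 @ T1 - T2).max()

# Synthetic two-state Markov trajectory: the error should be small,
# limited only by estimation noise.
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(2, p=P[traj[-1]]))
traj = np.array(traj)
print(chapman_kolmogorov_error(traj, tau=1))
```

For genuinely non-Markov dynamics, as the paper finds at 30-40 ps, this error stays well above the sampling-noise floor.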
NASA Astrophysics Data System (ADS)
Qin, Y.; Lu, P.; Li, Z.
2018-04-01
Landslide inventory mapping is essential for hazard assessment and mitigation. In most previous studies, landslide mapping was achieved by visual interpretation of aerial photos and remote sensing images. However, such methods are labor-intensive and time-consuming, especially over large areas. Although a number of semi-automatic landslide mapping methods have been proposed over the past few years, limitations remain in terms of their applicability across different study areas and data, and there is large room for improvement in accuracy and degree of automation. For these reasons, we developed a change detection-based Markov Random Field (CDMRF) method for landslide inventory mapping. The proposed method mainly includes two steps: 1) change detection-based multi-thresholding to generate training samples and 2) MRF for landslide inventory mapping. Compared with previous methods, the proposed method has three advantages: 1) it combines multiple image difference techniques with a multi-threshold method to generate reliable training samples; 2) it takes the spectral characteristics of landslides into account; and 3) it is highly automatic, with little parameter tuning. The proposed method was applied to regional landslide mapping from 10 m Sentinel-2 images in Western China. Results corroborated the effectiveness and applicability of the proposed method, especially its capability for rapid landslide mapping. Some directions for future research are offered. To our knowledge, this study is the first attempt to map landslides from free, medium-resolution satellite (i.e., Sentinel-2) images in China.
Characterizing Giant Exoplanets through Multiwavelength Transit Observations: HAT-P-5 b
NASA Astrophysics Data System (ADS)
PeQueen, David Jeffrey; Cole, Jackson Lane; Gardner, Cristilyn N.; Garver, Bethany Ray; Jarka, Kyla L.; Kar, Aman; McGough, Aylin M.; Rivera, Daniel Ivan; Kasper, David; Jang-Condell, Hannah; Kobulnicky, Henry; Dale, Daniel
2018-01-01
During the summer of 2017, we observed hot Jupiter-type exoplanet transit events using the Wyoming Infrared Observatory’s 2.3 meter telescope. We observed 14 unique exoplanets during transit events; one such target was HAT-P-5 b. In total, we collected 53 usable science images in the Sloan filter set, particularly with the g’, r’, z’, and i’ band wavelength filters. This exoplanet transited approximately 40 minutes earlier than the currently published literature suggests. After reducing the data and running a Markov chain Monte Carlo analysis, we present results describing the planetary radius, semi-major axis, orbital period, and inclination of HAT-P-5 b. Characteristics of Rayleigh scattering are present in the atmosphere of this exoplanet. This work is supported by the National Science Foundation under REU grant AST 1560461.
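A Markov chain Monte Carlo analysis like the one mentioned above can be sketched with a random-walk Metropolis sampler. The toy model below recovers a transit depth from synthetic flux measurements; it is a stand-in for the full transit fit, and the depth, noise level and proposal step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic in-transit flux: baseline 1.0 dimmed by a fractional depth,
# plus Gaussian photometric noise. All values are illustrative.
true_depth = 0.01
sigma = 0.002
flux = 1.0 - true_depth + rng.normal(0.0, sigma, size=200)

def log_post(depth):
    if not 0.0 <= depth <= 0.1:
        return -np.inf  # flat prior on a physically plausible range
    return -0.5 * np.sum((flux - (1.0 - depth)) ** 2) / sigma**2

depth, lp = 0.05, log_post(0.05)
chain = []
for _ in range(5000):
    prop = depth + rng.normal(0.0, 0.001)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
        depth, lp = prop, lp_prop
    chain.append(depth)

posterior = np.array(chain[1000:])  # discard burn-in
print(posterior.mean())
```

A real transit fit samples radius ratio, semi-major axis, period and inclination jointly, but the accept/reject mechanics are the same.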
Simulation of precipitation by weather pattern and frontal analysis
NASA Astrophysics Data System (ADS)
Wilby, Robert
1995-12-01
Daily rainfall from two sites in central and southern England was stratified according to the presence or absence of weather fronts and then cross-tabulated with the prevailing Lamb Weather Type (LWT). A semi-Markov chain model was developed for simulating daily sequences of LWTs from matrices of transition probabilities between weather types for the British Isles, 1970-1990. Daily and annual rainfall distributions were then simulated from the prevailing LWTs using historic conditional probabilities for precipitation occurrence and frontal frequencies. When compared with a conventional rainfall generator, the frontal model produced improved estimates of the overall size distribution of daily rainfall amounts and in particular the incidence of low-frequency high-magnitude totals. Further research is required to establish the contribution of individual frontal sub-classes to daily rainfall totals and of long-term fluctuations in frontal frequencies to conditional probabilities.
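The core simulation loop — a Markov chain of weather types driving conditional wet-day occurrence — can be sketched as follows. The three types, the transition matrix `P`, and the wet-day probabilities `p_wet` are illustrative stand-ins for the full set of Lamb Weather Types and the historic conditional probabilities used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-type transition matrix (stand-ins for Lamb Weather Types).
types = ["Anticyclonic", "Cyclonic", "Westerly"]
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Conditional wet-day probability for each weather type (illustrative).
p_wet = np.array([0.1, 0.6, 0.4])

def simulate(n_days, start=0):
    states = [start]
    for _ in range(n_days - 1):
        states.append(rng.choice(3, p=P[states[-1]]))   # next weather type
    states = np.array(states)
    wet = rng.random(n_days) < p_wet[states]            # rain occurrence
    return states, wet

states, wet = simulate(365)
print(np.bincount(states, minlength=3) / 365)  # occupancy frequencies
```

A full generator would additionally draw rainfall amounts for wet days from type-conditional distributions.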
NASA Astrophysics Data System (ADS)
Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi
This paper aims to construct an efficient interface, similar to those widely used in daily life, to meet the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force-feedback steering control and a wall of six monitors; it provides manual operation, similar to driving a car, for navigating a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation, by mouse clicking, for navigating a rescue robot. Experimental results show that a novice volunteer can skillfully navigate a tank-type rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface offers high navigation speed in open areas, regardless of the terrain and surface conditions of a disaster site. The mouse-screen interface excels at precise navigation in complex structures while imposing little strain on operators. The two interfaces can be switched into each other at any time, providing a combined, efficient navigation method.
Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)
NASA Astrophysics Data System (ADS)
Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.
2018-05-01
A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices (the law of lumped Markov chains) depends linearly on the minimum degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.
Perspective: Markov models for long-timescale biomolecular dynamics.
Schwantes, C R; McGibbon, R T; Pande, V S
2014-09-07
Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
A toolbox for safety instrumented system evaluation based on improved continuous-time Markov chain
NASA Astrophysics Data System (ADS)
Wardana, Awang N. I.; Kurniady, Rahman; Pambudi, Galih; Purnama, Jaka; Suryopratomo, Kutut
2017-08-01
A safety instrumented system (SIS) is designed to restore a plant to a safe condition when a pre-hazardous event occurs. It plays a vital role, especially in the process industries. An SIS must meet its safety requirement specifications, and to confirm this, the SIS must be evaluated. Typically, the evaluation is calculated by hand. This paper presents a toolbox for SIS evaluation, developed based on an improved continuous-time Markov chain. The toolbox supports a detailed approach to evaluation. This paper also illustrates an industrial application of the toolbox to evaluate the arch burner safety system of a primary reformer. The results of the case study demonstrate that the toolbox can be used to evaluate an industrial SIS in detail and to plan the maintenance strategy.
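The continuous-time Markov chain evaluation at the heart of such a toolbox can be illustrated on the simplest case, a single repairable SIS channel; the failure and repair rates below are illustrative assumptions, and the paper's improved CTMC is not reproduced here:

```python
import numpy as np

# Two-state repairable SIS channel: state 0 = working, state 1 = failed.
# Failure and repair rates (per hour) are illustrative, not from the paper.
lam, mu = 1e-4, 1e-1
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])        # CTMC generator matrix

# State distribution at time t: p(t) = p(0) expm(Q t),
# computed here via eigendecomposition (this 2x2 Q is diagonalizable).
t = 8760.0                          # one year, in hours
w, V = np.linalg.eig(Q)
E = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)
p = np.array([1.0, 0.0]) @ E        # start in the working state

print(p[1], lam / (lam + mu))       # unavailability at t vs steady state
```

For these rates the distribution has essentially reached its steady state after a year, so the time-t unavailability matches the classical λ/(λ+μ) result.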
Mixture Hidden Markov Models in Finance Research
NASA Astrophysics Data System (ADS)
Dias, José G.; Vermunt, Jeroen K.; Ramos, Sofia
Finite mixture models have proven to be a powerful framework whenever unobserved heterogeneity cannot be ignored. We introduce to finance research the Mixture Hidden Markov Model (MHMM), which accounts for time and space heterogeneity simultaneously. This approach is flexible in that it can deal with the specific features of financial time series data, such as asymmetry, kurtosis, and unobserved heterogeneity. The methodology is applied to model 12 time series of Asian stock market indices simultaneously. Because we selected a heterogeneous sample of countries, including both developed and emerging markets, we expect heterogeneity in market returns due to country idiosyncrasies to show up in the results. The best-fitting model was the one with two clusters at the country level, with different dynamics between the two regimes.
3D abnormal behavior recognition in power generation
NASA Astrophysics Data System (ADS)
Wei, Zhenhua; Li, Xuesen; Su, Jie; Lin, Jie
2011-06-01
To date, most research on human behavior recognition has focused on simple individual behaviors, such as waving, crouching, jumping, and bending. This paper focuses on abnormal behaviors involving carried objects in power generation settings, such as using a mobile communication device in the main control room, taking a helmet off during work, and lying down in a high place. Since the color and shape of these objects are fixed, we adopt edge detection with color tracking to recognize objects carried by workers. This paper introduces a method that uses the geometric characteristics of the skeleton and its angles to represent sequences of three-dimensional human behavior data. It then adopts a semi-join critical-step Hidden Markov Model, weighting the output probability of critical steps to reduce computational complexity. A model is trained for every behavior, while skeleton frames selected from the 3D behavior samples form a critical-step set. This set is a bridge linking 2D observed behavior with 3D human joint features, so 3D reconstruction is not required during the 2D behavior recognition phase. At the beginning of the recognition process, the best match for every frame of a 2D observed sample is found in the 3D skeleton set. After that, the 2D observed skeleton frame sequence is identified as a specific 3D behavior by the behavior classifier. The effectiveness of the proposed algorithm is demonstrated with experiments in a similar power generation environment.
NASA Astrophysics Data System (ADS)
Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Thompson, D. C.; Zhu, L.
2006-11-01
The Utah State University Gauss-Markov Kalman Filter (GMKF) was developed as part of the Global Assimilation of Ionospheric Measurements (GAIM) program. The GMKF uses a physics-based model of the ionosphere and a Gauss-Markov Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) observations. The physics-based model is the Ionospheric Forecast Model (IFM), which accounts for five ion species and covers the E region, F region, and the topside from 90 to 1400 km altitude. Within the GMKF, the IFM-derived ionospheric densities constitute a background density field on which perturbations are superimposed based on the available data and their errors. In the current configuration, the GMKF assimilates slant total electron content (TEC) from a variable number of Global Positioning System (GPS) ground sites, bottomside electron density (Ne) profiles from a variable number of ionosondes, in situ Ne from four Defense Meteorological Satellite Program (DMSP) satellites, and nighttime line-of-sight ultraviolet (UV) radiances measured by satellites. To test the GMKF for real-time operations and to validate its ionospheric density specifications, we have evaluated the model performance for a variety of geophysical conditions. During these model runs, various combinations of data types and data quantities were assimilated. To simulate real-time operations, the model ran continuously and automatically and produced three-dimensional global electron density distributions in 15 min increments. In this paper we will describe the Gauss-Markov Kalman filter model and present results of our validation study, with an emphasis on comparisons with independent observations.
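A one-dimensional sketch of the Gauss-Markov Kalman filter idea: an AR(1) (Gauss-Markov) perturbation on a background field is tracked by assimilating direct noisy observations. All variances, the decay factor, and the scalar observation model are illustrative assumptions, not GAIM parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D Gauss-Markov (AR(1)) model for the deviation of electron density
# from the physics-based background; all numbers are illustrative.
phi, q, r = 0.9, 0.5, 1.0   # decay factor, process and observation variances

truth = 5.0                 # true (hidden) deviation from the background
x_est, P = 0.0, 10.0        # filter estimate and its error variance
for _ in range(200):
    truth = phi * truth + rng.normal(0.0, np.sqrt(q))   # hidden state evolves
    x_est, P = phi * x_est, phi * phi * P + q           # forecast step
    z = truth + rng.normal(0.0, np.sqrt(r))             # noisy observation
    K = P / (P + r)                                     # Kalman gain
    x_est += K * (z - x_est)                            # analysis step
    P *= (1 - K)

print(abs(x_est - truth), P)
```

The real GMKF applies the same forecast/analysis cycle to a gridded perturbation field with spatially correlated errors and indirect observations such as slant TEC.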
Uncertainty in dual permeability model parameters for structured soils.
Arora, B; Mohanty, B P; McGuire, J T
2012-01-01
Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore- and matrix-macropore interface regions, and knowledge about requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters, while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithm is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments on soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
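The conventional Metropolis-Hastings algorithm used as the baseline above can be sketched on a one-dimensional stand-in posterior (the actual DPM posterior is ten-dimensional and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in target: log-posterior proportional to a standard normal density
# (a placeholder for the soil-parameter posterior, which is unavailable here).
def log_target(x):
    return -0.5 * x * x

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0.0, 1.0)        # random-walk proposal
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                            # accept; otherwise keep x
    samples.append(x)

samples = np.array(samples[5000:])          # discard burn-in
print(samples.mean(), samples.std())
```

Adaptive MCMC variants tune the proposal covariance from the chain's history, which is what gives AMCMC its advantage on correlated posteriors.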
NASA Astrophysics Data System (ADS)
Conlan, Hilary; Grant, Rachel
2013-04-01
When tectonic stresses build up in the Earth's crust, highly mobile electronic charge carriers are activated, causing a range of follow-on reactions when they arrive at the Earth's surface: primarily air ionization and, at the rock-water interface, oxidation of water to hydrogen peroxide. Anecdotal reports of earthworms appearing at the ground surface in large numbers and of behavioural changes in toads before earthquakes suggest that these animals may be reacting to environmental changes. This paper reports the results of experimental work with subterranean and semi-aquatic invertebrates, simulating these pre-earthquake changes.
Bettenbühl, Mario; Rusconi, Marco; Engbert, Ralf; Holschneider, Matthias
2012-01-01
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
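The likelihood comparison across Markov orders can be illustrated with a maximum-likelihood variant on a synthetic binary sequence. (The paper uses integrated likelihoods in a Bayesian framework; this sketch only shows why an order-1 model wins on order-1 data. All probabilities are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(7)

# Generate a first-order Markov binary sequence with persistent states,
# a stand-in for discretized microsaccade orientations.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
seq = [0]
for _ in range(2000):
    seq.append(rng.choice(2, p=P[seq[-1]]))
seq = np.array(seq)

def loglik_order0(s):
    # ML log-likelihood under an i.i.d. (order-0) model.
    p1 = s.mean()
    return len(s) * (p1 * np.log(p1) + (1 - p1) * np.log(1 - p1))

def loglik_order1(s):
    # ML log-likelihood of the transitions under a first-order model
    # (the initial-state term is omitted for simplicity).
    ll = 0.0
    for a in (0, 1):
        nxt = s[1:][s[:-1] == a]
        p1 = nxt.mean()
        ll += len(nxt) * (p1 * np.log(p1) + (1 - p1) * np.log(1 - p1))
    return ll

print(loglik_order0(seq), loglik_order1(seq))
```

On order-1 data the first-order likelihood dominates by hundreds of nats; a Bayesian treatment additionally penalizes the extra parameters of higher orders.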
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we examine the validity of OCT for identifying changes using a skin cancer texture analysis based on Haralick texture features, fractal dimension, the Markov random field method, and complex directional features from different tissues. These features were used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). We used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in the OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity were calculated in various directions. A box-counting method was performed to evaluate the fractal dimension of the skin probes. A Markov random field was used to enhance classification quality. Additionally, we used the complex directional field, calculated by a local gradient methodology, to improve the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors, namely Basal Cell Carcinoma (BCC), Malignant Melanoma (MM), and Nevus. All images were acquired with our laboratory SD-OCT setup, based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
Oxide surfaces and metal/oxide interfaces studied by grazing incidence X-ray scattering
NASA Astrophysics Data System (ADS)
Renaud, Gilles
Experimental determinations of the atomic structure of insulating oxide surfaces and metal/oxide interfaces are scarce, because surface science techniques are often limited by the insulating character of the substrate. Grazing incidence X-ray scattering (GIXS), which is not subject to charge effects, can provide very precise information on the atomic structure of oxide surfaces: roughness, relaxation and reconstruction. It is also well adapted to analyze the atomic structure, the registry, the misfit relaxation, elastic or plastic, the growth mode and the morphology of metal/oxide interfaces during their growth, performed in situ. GIXS also allows the analysis of thin films and buried interfaces, in a non-destructive way, yielding the epitaxial relationships, and, by variation of the grazing incidence angle, the lattice parameter relaxation along the growth direction. On semi-coherent interfaces, the existence of an ordered network of interfacial misfit dislocations can be demonstrated, its Burgers vector determined, its ordering during in situ annealing cycles followed, and sometimes even its atomic structure can be addressed. Careful analysis during growth allows the modeling of the dislocation nucleation process. This review emphasizes the new information that GIXS can bring to oxide surfaces and metal/oxide interfaces by comparison with other surface science techniques. The principles of X-ray diffraction by surfaces and interfaces are recalled, together with the advantages and properties of grazing angles. The specific experimental requirements are discussed. Recent results are presented on the determination of the atomic structure of relaxed or reconstructed oxide surfaces. A description of results obtained during the in situ growth of metal on oxide surfaces is also given, as well as investigations of thick metal films on oxide surfaces, with lattice parameter misfit relaxed by an array of dislocations.
Recent work performed on oxide thin films having important physical properties such as superconductivity or magnetism is also briefly reviewed. The strengths and limitations of the technique, such as the need for single crystals and surfaces of high crystalline quality are discussed. Finally, an outlook of future prospects in the field is given, such as the study of more complex oxide surfaces, vicinal surfaces, reactive metal/oxide interfaces, metal oxidation processes, the use of surfactants to promote wetting of a metal deposited on an oxide surface or the study of oxide/liquid interfaces in a non-UHV environment.
Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I
2018-01-01
Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
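The Viterbi decoding step can be sketched for a three-state Poisson HMM. The state means loosely echo the 'Low'/'Moderate'/'High' values quoted above, but the transition matrix, initial distribution, and monthly counts are illustrative, not the fitted ones:

```python
import numpy as np
from math import lgamma

# 3-state Poisson HMM: "Low", "Moderate", "High" monthly counts.
means = np.array([1.4, 6.6, 20.2])
A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])       # illustrative transition matrix
pi = np.array([1/3, 1/3, 1/3])        # illustrative initial distribution

def log_poisson(k, lam):
    return k * np.log(lam) - lam - lgamma(k + 1)

def viterbi(obs):
    n, m = len(obs), len(means)
    delta = np.log(pi) + log_poisson(obs[0], means)
    psi = np.zeros((n, m), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + np.log(A)   # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)        # best predecessor per state
        delta = scores.max(axis=0) + log_poisson(obs[t], means)
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):             # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

counts = np.array([0, 2, 1, 7, 8, 22, 19, 21, 5, 1])
print(viterbi(counts))
```

The decoded path assigns low counts to state 0 and the large counts in the middle of the series to state 2, as expected.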
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cazeca, Mario J.; Medich, David C.; Munro, John J. III
2010-08-15
Purpose: To study the effects of the breast-air and breast-lung interfaces on the absorbed dose within the planning target volume (PTV) of a MammoSite balloon dose delivery system, as well as the effect of contrast material on the dose rate in the PTV. Methods: The Monte Carlo MCNP5 code was used to simulate the dose rate in the PTV of a 2 cm radius MammoSite balloon dose delivery system. The simulations were carried out using an average female chest phantom (AFCP) and a semi-infinite water phantom for both Yb-169 and Ir-192 high dose rate (HDR) sources for brachytherapy application. Gastrografin was introduced at varying concentrations to study the effect of contrast material on the dose rate in the PTV. Results: The effect of the density of the materials surrounding the MammoSite balloon containing 0% contrast material on the calculated dose rate at different radial distances in the PTV was demonstrated. Within the PTV, the ratio of the calculated dose rate for the AFCP and the semi-infinite water phantom for the point closest to the breast-air interface (90 deg.) is less than that for the point closest to the breast-lung interface (270 deg.) by 11.4% and 4% for the HDR sources Yb-169 and Ir-192, respectively. When contrast material was introduced into the 2 cm radius MammoSite balloon at varying concentrations (5%, 10%, 15%, and 20%), the dose rate in the AFCP at 3.0 cm radial distance at 90 deg. was decreased by as much as 14.8% and 6.2% for Yb-169 and Ir-192, respectively, when compared to that of the semi-infinite water phantom at the same contrast concentrations. Conclusions: Commercially available software used to calculate the dose rate in the PTV of a MammoSite balloon needs to account for patient anatomy and the density of surrounding materials in the dosimetry analyses in order to avoid patient underdose.
Algorithms for Discovery of Multiple Markov Boundaries
Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.
2013-01-01
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical to the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that this quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
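Cesaro (time-averaged) convergence is easy to see already for the underlying classical chain. The sketch below averages the state distributions of an illustrative three-state chain and compares the result with the stationary distribution; note that the Cesaro limit of the Szegedy quantum walk is generally a different distribution:

```python
import numpy as np

# Illustrative irreducible, aperiodic 3-state chain (rows sum to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

p = np.array([1.0, 0.0, 0.0])       # start in state 0
running_sum = np.zeros(3)
n_steps = 5000
for _ in range(n_steps):
    running_sum += p                 # accumulate the distribution at each time
    p = p @ P
cesaro = running_sum / n_steps       # Cesaro (time) average

# Stationary distribution: left Perron eigenvector of P.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

print(cesaro, pi)
```

For an ergodic classical chain the Cesaro average coincides with the stationary distribution; the paper's point is that the quantum analogue need not.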
Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.
Lopes, Ana C; Nunes, Urbano; Vaz, Luís
2010-01-01
This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain-Computer Interfaces (BCI), which provide a small set of sparsely issued commands, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF), used to train users' ability to steer a powered wheelchair appropriately, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to combine the information provided by the local motion planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
The PhytoClust tool for metabolic gene clusters discovery in plant genomes
Fuchs, Lisa-Maria
2017-01-01
The existence of Metabolic Gene Clusters (MGCs) in plant genomes has recently raised increased interest. Thus far, MGCs were commonly identified for pathways of specialized metabolism, mostly those associated with terpene type products. For efficient identification of novel MGCs, computational approaches are essential. Here, we present PhytoClust; a tool for the detection of candidate MGCs in plant genomes. The algorithm employs a collection of enzyme families related to plant specialized metabolism, translated into hidden Markov models, to mine given genome sequences for physically co-localized metabolic enzymes. Our tool accurately identifies previously characterized plant MGCs. An exhaustive search of 31 plant genomes detected 1232 and 5531 putative gene cluster types and candidates, respectively. Clustering analysis of putative MGCs types by species reflected plant taxonomy. Furthermore, enrichment analysis revealed taxa- and species-specific enrichment of certain enzyme families in MGCs. When operating through our web-interface, PhytoClust users can mine a genome either based on a list of known cluster types or by defining new cluster rules. Moreover, for selected plant species, the output can be complemented by co-expression analysis. Altogether, we envisage PhytoClust to enhance novel MGCs discovery which will in turn impact the exploration of plant metabolism. PMID:28486689
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
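Sequential importance resampling itself can be sketched on a toy one-dimensional tracking problem; the random-walk state and Gaussian noise levels below are illustrative stand-ins for the speller's distribution over letter sequences under a language model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal sequential importance resampling (SIR) particle filter for a
# 1D random-walk state observed in Gaussian noise.
n_particles, n_steps = 1000, 50
truth = 0.0
particles = rng.normal(0.0, 1.0, n_particles)

for _ in range(n_steps):
    truth += rng.normal(0.0, 0.1)                        # hidden state evolves
    z = truth + rng.normal(0.0, 0.5)                     # noisy observation
    particles += rng.normal(0.0, 0.1, n_particles)       # propagate particles
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)      # importance weights
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)      # resample
    particles = particles[idx]

estimate = particles.mean()
print(estimate, truth)
```

The particle cloud approximates the posterior without enumerating the state space, which is what makes the approach tractable for complex language models.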
Regenerative Simulation of Harris Recurrent Markov Chains.
1982-07-01
Regenerative Simulation of Harris Recurrent Markov Chains, by Peter W. Glynn. Technical Report TR-62, Department of Operations Research, Stanford University, July 1982; prepared under contract N0001...76-C-0578 (unclassified).
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model, avoiding the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
Lee, Kyung-Eun; Park, Hyun-Seok
2015-01-01
Epigenetic computational analyses based on Markov chains can integrate dependencies between regions in the genome that are directly adjacent. In this paper, the BED files of fifteen chromatin states of the Broad Histone Track of the ENCODE project are parsed, and comparative nucleotide frequencies of regional chromatin blocks are thoroughly analyzed to detect the Markov property in them. We perform various tests to examine the Markov property embedded in a frequency domain by checking for the presence of the Markov property in the various chromatin states. We apply these tests to each region of the fifteen chromatin states. The results of our simulation indicate that some of the chromatin states possess a stronger Markov property than others. We discuss the significance of our findings in statistical models of nucleotide sequences that are necessary for the computational analysis of functional units in noncoding DNA.
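In the spirit of the Markov-property tests described above, dependence between adjacent symbols can be probed with a likelihood-ratio (G) statistic that compares bigram counts against the independence model. This is a generic sketch on synthetic two-letter sequences, not the paper's specific test battery:

```python
from collections import Counter
from itertools import product
import math
import random

random.seed(1)

def lr_markov_statistic(seq, alphabet):
    """G-statistic: first-order (bigram) fit vs zero-order (unigram) fit.

    Large values indicate that the next symbol depends on the current
    one, i.e. the sequence carries a first-order Markov property."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    seconds = Counter(seq[1:])
    n = len(seq) - 1
    g = 0.0
    for a, b in product(alphabet, repeat=2):
        obs = pairs[(a, b)]
        exp = firsts[a] * seconds[b] / n  # expected count under independence
        if obs > 0 and exp > 0:
            g += 2.0 * obs * math.log(obs / exp)
    return g

# A strongly Markovian toy sequence: each symbol tends to flip the next one.
markov_seq, state = [], "A"
for _ in range(2000):
    markov_seq.append(state)
    state = random.choices("AT",
                           weights=(0.9, 0.1) if state == "T" else (0.1, 0.9))[0]

iid_seq = random.choices("AT", k=2000)

# The dependent sequence yields a far larger statistic than the i.i.d. one.
print(lr_markov_statistic(markov_seq, "AT") > lr_markov_statistic(iid_seq, "AT"))
```

Under the null (no dependence), the statistic is approximately chi-square distributed, which is what turns this comparison into a formal test.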
NASA Astrophysics Data System (ADS)
Prostomolotov, A. I.; Verezub, N. A.; Voloshin, A. E.
2014-09-01
Thermo-gravitational convection and impurity transfer in the melt were investigated using a simplified numerical model of Bridgman GaSb(Te) crystal growth under microgravity conditions. The simplifications were as follows: a flat melt/crystal interface, fixed melt sizes, and only lateral ampoule heating. Calculations were carried out with the Ansys® Fluent® code, employing the two-dimensional Navier-Stokes-Boussinesq and heat and mass transfer equations in a coordinate system moving with the melt/crystal interface. The parametric dependence of the effective segregation coefficient Keff at the melt/crystal interface was studied for various ampoule sizes and microgravity conditions. For the uprising one-vortex flow, the resulting dependences were presented as Keff vs. Vmax, the maximum velocity value. These dependences were compared with the Burton-Prim-Slichter and Ostrogorsky-Muller formulas, as well as with semi-analytical solutions.
Gradient nano-engineered in situ forming composite hydrogel for osteochondral regeneration.
Radhakrishnan, Janani; Manigandan, Amrutha; Chinnaswamy, Prabu; Subramanian, Anuradha; Sethuraman, Swaminathan
2018-04-01
Fabrication of anisotropic osteochondral-mimetic scaffold with mineralized subchondral zone and gradient interface remains challenging. We have developed an injectable semi-interpenetrating network hydrogel construct with chondroitin sulfate nanoparticles (ChS-NPs) and nanohydroxyapatite (nHA) (∼30-90 nm) in chondral and subchondral hydrogel zones respectively. Mineralized subchondral hydrogel exhibited significantly higher osteoblast proliferation and alkaline phosphatase activity (p < 0.05). Osteochondral hydrogel exhibited interconnected porous structure and spatial variation with gradient interface of nHA and ChS-NPs. Microcomputed tomography (μCT) demonstrated nHA gradation while rheology showed predominant elastic modulus (∼930 Pa) at the interface. Co-culture of osteoblasts and chondrocytes in gradient hydrogels showed layer-specific retention of cells and cell-cell interaction at the interface. In vivo osteochondral regeneration by biphasic (nHA or ChS) and gradient (nHA + ChS) hydrogels was compared with control using rabbit osteochondral defect after 3 and 8 weeks. Complete closure of defect was observed in gradient (8 weeks) while defect remained in other groups. Histology demonstrated collagen and glycosaminoglycan deposition in neo-matrix and presence of hyaline cartilage-characteristic matrix, chondrocytes and osteoblasts. μCT showed mineralized neo-tissue formation, which was confined within the defect with higher bone mineral density in gradient (chondral: 0.42 ± 0.07 g/cc, osteal: 0.64 ± 0.08 g/cc) group. Further, biomechanical push-out studies showed significantly higher load for gradient group (378 ± 56 N) compared to others. Thus, the developed nano-engineered gradient hydrogel enhanced hyaline cartilage regeneration with subchondral bone formation and lateral host-tissue integration. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma Distribution modelling structure, but with a new parameterization technique. We used two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of a Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multi-variate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sampled these two Gamma parameters from the multi-variate normal distribution for each month of each year and used them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past period (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths.
In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we have used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrology simulation in the presentation.
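The two-part MC-Gamma structure described above can be sketched directly. The transition probabilities and Gamma moments below are illustrative, and the paper's hierarchical step of resampling the Gamma parameters each month from a fitted multivariate normal is omitted for brevity:

```python
import random

random.seed(42)

def simulate_month(n_days, p_ww, p_dd, mean, std, start_wet=False):
    """Two-part daily rainfall sketch: first-order Markov chain for wet/dry
    occurrence, Gamma-distributed depths on wet days.

    p_ww / p_dd are the wet-to-wet and dry-to-dry transition probabilities;
    mean/std parameterize the wet-day Gamma depth (all values illustrative)."""
    shape = (mean / std) ** 2   # Gamma shape from the first two moments
    scale = std ** 2 / mean     # Gamma scale
    wet = start_wet
    series = []
    for _ in range(n_days):
        p_wet = p_ww if wet else 1.0 - p_dd   # chain transition
        wet = random.random() < p_wet
        series.append(random.gammavariate(shape, scale) if wet else 0.0)
    return series

rain = simulate_month(30, p_ww=0.7, p_dd=0.8, mean=8.0, std=6.0)
print(sum(1 for r in rain if r > 0), "wet days")
```

The hierarchical variant would simply redraw `mean` and `std` once per simulated month before calling the generator, which is what restores the multi-year variability the abstract discusses.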
Ultrasonic semi-solid coating soldering 6061 aluminum alloys with Sn-Pb-Zn alloys.
Yu, Xin-ye; Xing, Wen-qing; Ding, Min
2016-07-01
In this paper, 6061 aluminum alloys were soldered without a flux by ultrasonic semi-solid coating soldering at a low temperature. The analyses yielded the following results. The thermal effect of ultrasound on the coating promoted the metallurgical reaction between the components of the solder and the 6061 aluminum alloy, and Al2Zn3 formed near the interface. The connection was completed while the solder was in the semi-solid state. Ultimately, the interlayer was mainly composed of three kinds of microstructure zones: α-Pb solid solution phases, β-Sn phases, and Sn-Pb eutectic phases. The strength of the joints was improved significantly, with the minimum shear strength approaching 101 MPa. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Kirchhoff, Michael
2018-03-01
Ramstead MJD, Badcock PB, Friston KJ. Answering Schrödinger's question: A free-energy formulation. Phys Life Rev 2018. https://doi.org/10.1016/j.plrev.2017.09.001 [this issue] motivate a multiscale characterisation of living systems in terms of hierarchically structured Markov blankets - a view of living systems as comprised of Markov blankets of Markov blankets [1-4]. It is effectively a treatment of what life is and how it is realised, cast in terms of how Markov blankets of living systems self-organise via active inference - a corollary of the free energy principle [5-7].
Modeling Hubble Space Telescope flight data by Q-Markov cover identification
NASA Technical Reports Server (NTRS)
Liu, K.; Skelton, R. E.; Sharkey, J. P.
1992-01-01
A state space model for the Hubble Space Telescope under the influence of unknown disturbances in orbit is presented. This model was obtained from flight data by applying the Q-Markov covariance equivalent realization identification algorithm. This state space model guarantees the match of the first Q-Markov parameters and covariance parameters of the Hubble system. The flight data were partitioned into high- and low-frequency components for more efficient Q-Markov cover modeling, to reduce some computational difficulties of the Q-Markov cover algorithm. This identification revealed more than 20 lightly damped modes within the bandwidth of the attitude control system. Comparisons with the analytical (TREETOPS) model are also included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chao ..; Singh, Vijay P.; Mishra, Ashok K.
2013-02-06
This paper presents an improved bivariate mixed distribution, which is capable of modeling the dependence of daily rainfall from two distinct sources (e.g., rainfall from two stations, two consecutive days, or two instruments such as satellite and rain gauge). The distribution couples an existing framework for building a bivariate mixed distribution, the theory of copulae, and a hybrid marginal distribution. Contributions of the improved distribution are twofold. One is the appropriate selection of the bivariate dependence structure from a wider admissible choice (10 candidate copula families). The other is the introduction of a marginal distribution capable of better representing low to moderate values as well as extremes of daily rainfall. Among several applications of the improved distribution, particularly presented here is its utility for single-site daily rainfall simulation. Rather than simulating rainfall occurrences and amounts separately, the developed generator unifies the two processes by generalizing daily rainfall as a Markov process with autocorrelation described by the improved bivariate mixed distribution. The generator is first tested on a sample station in Texas. Results reveal that the simulated and observed sequences are in good agreement with respect to essential characteristics. Then, extensive simulation experiments are carried out to compare the developed generator with three other alternative models: the conventional two-state Markov chain generator, the transition probability matrix model, and the semi-parametric Markov chain model with kernel density estimation for rainfall amounts. Analyses establish that overall the developed generator is capable of reproducing characteristics of historical extreme rainfall events and is apt at extrapolating rare values beyond the upper range of available observed data.
Moreover, it automatically captures the persistence of rainfall amounts on consecutive wet days in a relatively natural and easy way. Another interesting observation is that the recognized ‘overdispersion’ problem in daily rainfall simulation is ascribed more to the loss of rainfall extremes than to the under-representation of first-order persistence. The developed generator appears to be a sound option for daily rainfall simulation, especially in particular hydrologic planning situations when rare rainfall events are of great importance.
Formal specification of human-computer interfaces
NASA Technical Reports Server (NTRS)
Auernheimer, Brent
1990-01-01
A high-level formal specification of a human computer interface is described. Previous work is reviewed and the ASLAN specification language is described. Top-level specifications written in ASLAN for a library and a multiwindow interface are discussed.
NASA Astrophysics Data System (ADS)
Chou, Po-Chien; Hsieh, Ting-En; Cheng, Stone; del Alamo, Jesús A.; Chang, Edward Yi
2018-05-01
This study comprehensively analyzed the reliability of trapping and hot-electron effects responsible for the dynamic on-resistance (Ron) of GaN-based metal–insulator–semiconductor high electron mobility transistors. Specifically, this study performed the following analyses. First, we developed an on-the-fly Ron measurement to analyze the effects of traps during stress; with a pulse period as short as 20 ms, this technique characterizes the degradation and monitors the transient behavior accurately. Then, dynamic Ron transients were investigated under different bias conditions, including combined off-state stress, back-gating stress, and semi-on stress conditions, to separately investigate surface-, buffer-, and hot-electron-related trapping effects. Finally, the experiments showed that the Ron increase in the semi-on state is significantly correlated with high drain voltage and relatively high current levels (compared with the off-state current), involving the injection of a greater amount of hot electrons from the channel into the AlGaN/insulator interface and the GaN buffer. These findings provide a path for device engineering to clarify the possible origins of electron traps and to accelerate the development of emerging GaN technologies.
Ueda, Jun; Yoshimura, Hajime; Shimizu, Keiji; Hino, Megumu; Kohara, Nobuo
2017-07-01
Visual and semi-quantitative assessments of ¹²³I-FP-CIT single-photon emission computed tomography (SPECT) are useful for the diagnosis of dopaminergic neurodegenerative diseases (dNDD), including Parkinson's disease, dementia with Lewy bodies, progressive supranuclear palsy, multiple system atrophy, and corticobasal degeneration. However, the diagnostic value of combined visual and semi-quantitative assessment in dNDD remains unclear. Among 239 consecutive patients with a newly diagnosed possible parkinsonian syndrome who underwent ¹²³I-FP-CIT SPECT in our medical center, 114 patients with a disease duration less than 7 years were diagnosed as dNDD with the established criteria or as non-dNDD according to clinical judgment. We retrospectively examined their clinical characteristics and visual and semi-quantitative assessments of ¹²³I-FP-CIT SPECT. The striatal binding ratio (SBR) was used as a semi-quantitative measure of ¹²³I-FP-CIT SPECT. We calculated the sensitivity and specificity of visual assessment alone, semi-quantitative assessment alone, and combined visual and semi-quantitative assessment for the diagnosis of dNDD. SBR was correlated with visual assessment. Some dNDD patients with a normal visual assessment had an abnormal SBR, and vice versa. There was no statistically significant difference between the sensitivity of diagnosis with visual assessment alone and semi-quantitative assessment alone (91.2 vs. 86.8%, respectively, p = 0.29). Combined visual and semi-quantitative assessment demonstrated superior sensitivity (96.7%) to visual assessment (p = 0.03) or semi-quantitative assessment (p = 0.003) alone with equal specificity. Visual and semi-quantitative assessments of ¹²³I-FP-CIT SPECT are helpful for the diagnosis of dNDD, and combined visual and semi-quantitative assessment shows superior sensitivity with equal specificity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smyth, Padhraic
2013-07-22
This is the final report for a DOE-funded research project describing the outcome of research on non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional scale. The main results consist of extensive development of the hidden Markov models for rainfall simulation and downscaling specifically within the non-stationary climate change context together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes.
Inferring phenomenological models of Markov processes from data
NASA Astrophysics Data System (ADS)
Rivera, Catalina; Nemenman, Ilya
Microscopically accurate modeling of the stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian Model Selection for identification of parsimonious models that fit the data. When applied to synthetic data from the Kinetic Proofreading (KPR) process, a common mechanism used by cells for increasing the specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description had been noted previously, but here it is derived in an automated manner by the algorithm. James S. McDonnell Foundation Grant No. 220020321.
Supervised self-organization of homogeneous swarms using ergodic projections of Markov chains.
Chattopadhyay, Ishanu; Ray, Asok
2009-12-01
This paper formulates a self-organization algorithm to address the problem of global behavior supervision in engineered swarms of arbitrarily large population sizes. The swarms considered in this paper are assumed to be homogeneous collections of independent identical finite-state agents, each of which is modeled by an irreducible finite Markov chain. The proposed algorithm computes the necessary perturbations in the local agents' behavior, which guarantees convergence to the desired observed state of the swarm. The ergodicity property of the swarm, which is induced as a result of the irreducibility of the agent models, implies that while the local behavior of the agents converges to the desired behavior only in the time average, the overall swarm behavior converges to the specification and stays there at all times. A simulation example illustrates the underlying concept.
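The ergodicity argument above rests on the fact that an irreducible finite Markov chain has a unique stationary distribution to which time-averaged behavior converges. A minimal sketch of computing that distribution by power iteration (the chain below is illustrative):

```python
def stationary(P, iters=200):
    """Stationary distribution of an irreducible, aperiodic chain.

    Starts from the uniform distribution and repeatedly applies the
    transition matrix P (row-stochastic) until the distribution settles."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy two-state agent model: state 0 is "sticky", state 1 less so.
P = [[0.9, 0.1],
     [0.4, 0.6]]
print(stationary(P))  # converges to approximately [0.8, 0.2]
```

For a swarm of identical agents, this stationary vector is exactly the long-run fraction of the population observed in each state, which is the quantity the supervision algorithm perturbs toward a target.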
NASA Astrophysics Data System (ADS)
Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.
2017-07-01
Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering of 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (ViPR) using the Regularized Markov Clustering (R-MCL) algorithm, and then to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters with more than one centroid in several clusters. The number of centroids indicates the density of interaction. Protein interactions that are connected in a tissue form a protein complex that serves as a specific biological process unit. The analysis shows that R-MCL clustering produces clusters of the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
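For reference, the core expand/inflate loop that MCL-family algorithms share can be sketched as follows. This is plain MCL rather than the regularized variant (R-MCL replaces the expansion step with multiplication by the normalized adjacency matrix), and the toy graph is illustrative:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def normalize_columns(M):
    n = len(M)
    for j in range(n):
        s = sum(M[i][j] for i in range(n))
        for i in range(n):
            M[i][j] /= s
    return M

def mcl(adj, inflation=2.0, iters=20):
    """Plain Markov Clustering: alternate expansion (matrix squaring)
    with inflation (entry-wise powering and column re-normalization)."""
    n = len(adj)
    # Add self-loops, then column-normalize to obtain the flow matrix.
    M = normalize_columns([[adj[i][j] + (1.0 if i == j else 0.0)
                            for j in range(n)] for i in range(n)])
    for _ in range(iters):
        M = matmul(M, M)                                  # expansion
        M = normalize_columns([[M[i][j] ** inflation      # inflation
                                for j in range(n)] for i in range(n)])
    return M

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge 2-3.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = [[0.0] * 6 for _ in range(6)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0

M = mcl(A)
# Assign each node to its strongest attractor (rounded to absorb
# floating-point asymmetry in exact ties).
labels = [max(range(6), key=lambda i: round(M[i][j], 6)) for j in range(6)]
print(labels)  # nodes in the same triangle share an attractor
```

Inflation strengthens strong flows and prunes weak ones, which is why the weak bridge between the two triangles dies off and the triangles emerge as separate clusters.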
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) method and the ZπM method.
LECTURES ON GAME THEORY, MARKOV CHAINS, AND RELATED TOPICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, G L
1958-03-01
Notes on nine lectures delivered at Sandia Corporation in August 1957 are given. Part one contains the manuscript of a paper concerning a judging problem. Part two is concerned with finite Markov-chain theory and discusses regular Markov chains, absorbing Markov chains, the classification of states, application to the Leontief input-output model, and semimartingales. Part three contains notes on game theory and covers matrix games, the effect of psychological attitudes on the outcomes of games, extensive games, and matrix theory applied to mathematical economics. (auth)
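The absorbing-chain material mentioned in part two centers on the fundamental matrix N = (I - Q)^-1, whose row sums give the expected number of steps to absorption. A small worked example (gambler's ruin on four states; the numbers are illustrative):

```python
def mat_inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Gambler's ruin on {0,1,2,3}: states 0 and 3 absorb; transient states
# 1 and 2 move left or right with probability 1/2 each.
# Q is the transient-to-transient block of the transition matrix.
Q = [[0.0, 0.5],
     [0.5, 0.0]]
IminusQ = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)]
           for i in range(2)]
N = mat_inv2(IminusQ)            # fundamental matrix (I - Q)^-1
steps = [sum(row) for row in N]  # expected steps to absorption per start state
print(steps)  # → [2.0, 2.0]
```

This matches the classical closed form for fair gambler's ruin, where the expected duration from fortune i out of n is i(n - i): here 1 × (3 - 1) = 2.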
Markov chains: computing limit existence and approximations with DNA.
Cardona, M; Colomer, M A; Conde, J; Miret, J M; Miró, J; Zaragoza, A
2005-09-01
We present two algorithms to perform computations over Markov chains. The first one determines whether the sequence of powers of the transition matrix of a Markov chain converges or not to a limit matrix. If it does converge, the second algorithm enables us to estimate this limit. The combination of these algorithms allows the computation of a limit using DNA computing. In this sense, we have encoded the states and the transition probabilities using strands of DNA for generating paths of the Markov chain.
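A conventional (non-DNA) version of the first algorithm is easy to sketch: iterate the powers of the transition matrix and check whether successive powers stop changing. The matrices below are illustrative:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def powers_converge(P, iters=500, tol=1e-9):
    """Heuristic convergence check: do successive powers of P settle?

    Returns (converged, last_power). A periodic chain keeps oscillating
    and never satisfies the tolerance within the iteration budget."""
    n = len(P)
    M = P
    for _ in range(iters):
        M2 = mat_mul(M, P)
        if all(abs(M2[i][j] - M[i][j]) < tol
               for i in range(n) for j in range(n)):
            return True, M2
        M = M2
    return False, M

converges, limit = powers_converge([[0.5, 0.5], [0.2, 0.8]])
print(converges)  # True: irreducible and aperiodic, so P^n has a limit
print(powers_converge([[0.0, 1.0], [1.0, 0.0]])[0])  # False: period-2 chain
```

When the limit exists, every row of it equals the stationary distribution, so the second task of the paper (estimating the limit) falls out of the same iteration.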
NASA Technical Reports Server (NTRS)
Bishop, William L. (Inventor); Mcleod, Kathleen A. (Inventor); Mattauch, Robert J. (Inventor)
1991-01-01
A Schottky diode for millimeter and submillimeter wave applications is composed of a multi-layered structure including active layers of gallium arsenide on a semi-insulating gallium arsenide substrate, with first and second insulating layers of silicon dioxide on the active layers of gallium arsenide. An ohmic contact pad lies on the silicon dioxide layers. An anode is formed in a window which is in and through the silicon dioxide layers. An elongated contact finger extends from the pad to the anode, and a trench, preferably a transverse channel or trench of predetermined width, is formed in the active layers of the diode structure under the contact finger. The channel extends through the active layers to, or substantially to, the interface of the semi-insulating gallium arsenide substrate and the adjacent gallium arsenide layer, which constitutes a buffer layer. Such a structure minimizes the effect of the major source of shunt capacitance by interrupting the current path between the conductive layers beneath the anode contact pad and the ohmic contact. Other embodiments of the diode may substitute various insulating or semi-insulating materials for the silicon dioxide, various semiconductors for the active layers of gallium arsenide, and other materials for the substrate, which may be insulating or semi-insulating.
Influence of GaAs surface termination on GaSb/GaAs quantum dot structure and band offsets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zech, E. S.; Chang, A. S.; Martin, A. J.
2013-08-19
We have investigated the influence of GaAs surface termination on the nanoscale structure and band offsets of GaSb/GaAs quantum dots (QDs) grown by molecular-beam epitaxy. Transmission electron microscopy reveals both coherent and semi-coherent clusters, as well as misfit dislocations, independent of surface termination. Cross-sectional scanning tunneling microscopy and spectroscopy reveal clustered GaSb QDs with type I band offsets at the GaSb/GaAs interfaces. We discuss the relative influences of strain and QD clustering on the band offsets at GaSb/GaAs interfaces.
New Approaches in Force-Limited Vibration Testing of Flight Hardware
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Kern, Dennis L.
2012-01-01
To qualify flight hardware for random vibration environments, the following methods are used in the aerospace industry to limit the loads: (1) response limiting and notching, (2) the simple TDOF model, (3) semi-empirical force limits, (4) apparent mass, etc., and (5) the impedance method. All of these methods attempt to remove conservatism due to the mismatch in impedances between the test and flight configurations of the hardware being qualified, and all assume that the hardware interfaces have correlated responses. A new method that takes into account un-correlated hardware interface responses is described in this presentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, A.W.; Ghil, M.; Kravtsov, K.
2011-04-08
This project was a continuation of previous work under DOE CCPP funding in which we developed a twin approach of non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional scale. We have developed a family of latent-variable NHMMs to simulate historical records of daily rainfall, and used them to downscale seasonal predictions. We have also developed empirical mode reduction (EMR) models for gaining insight into the underlying dynamics in observational data and general circulation model (GCM) simulations. Using coupled O-A ICMs, we have identified a new mechanism of interdecadal climate variability, involving the midlatitude oceans' mesoscale eddy field and a nonlinear, persistent atmospheric response to the oceanic anomalies. A related decadal mode is also identified, associated with the oceans' thermohaline circulation. The goal of the continuation was to build on these ICM results and NHMM/EMR model developments and software to strengthen two key pillars of support for the development and application of climate models for climate change projections on time scales of decades to centuries, namely: (a) dynamical and theoretical understanding of decadal-to-interdecadal oscillations and their predictability; and (b) an interface from climate models to applications, in order to inform societal adaptation strategies to climate change at the regional scale, including model calibration, correction, downscaling and, most importantly, assessment and interpretation of spread and uncertainties in multi-model ensembles.
Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling specifically within the non-stationary climate change context together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes. Each of these project components is elaborated on below, followed by a list of publications resulting from the grant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravtsov, S.; Robertson, Andrew W.; Ghil, Michael
2011-04-08
This project was a continuation of previous work under DOE CCPP funding in which we developed a twin approach of non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional scale. We have developed a family of latent-variable NHMMs to simulate historical records of daily rainfall, and used them to downscale seasonal predictions. We have also developed empirical mode reduction (EMR) models for gaining insight into the underlying dynamics in observational data and general circulation model (GCM) simulations. Using coupled O-A ICMs, we have identified a new mechanism of interdecadal climate variability, involving the midlatitude oceans' mesoscale eddy field and a nonlinear, persistent atmospheric response to the oceanic anomalies. A related decadal mode is also identified, associated with the oceans' thermohaline circulation. The goal of the continuation was to build on these ICM results and NHMM/EMR model developments and software to strengthen two key pillars of support for the development and application of climate models for climate change projections on time scales of decades to centuries, namely: (a) dynamical and theoretical understanding of decadal-to-interdecadal oscillations and their predictability; and (b) an interface from climate models to applications, in order to inform societal adaptation strategies to climate change at the regional scale, including model calibration, correction, downscaling and, most importantly, assessment and interpretation of spread and uncertainties in multi-model ensembles.
Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling specifically within the non-stationary climate change context together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes. Each of these project components is elaborated on below, followed by a list of publications resulting from the grant.
NASA Technical Reports Server (NTRS)
McCartney, Patrick; MacLean, John
2012-01-01
mREST is an implementation of the REST architecture specific to the management and sharing of data in a system of logical elements. The purpose of this document is to clearly define the mREST interface protocol. The interface protocol covers all of the interaction between mREST clients and mREST servers. System-level requirements are not specifically addressed. In an mREST system, there are typically some backend interfaces between a Logical System Element (LSE) and the associated hardware/software system. For example, a network camera LSE would have a backend interface to the camera itself. These interfaces are specific to each type of LSE and are not covered in this document. There are also frontend interfaces that may exist in certain mREST manager applications. For example, an electronic procedure execution application may have a specialized interface for configuring the procedures. This interface would be application-specific and outside the scope of this document. mREST is intended to be a generic protocol which can be used in a wide variety of applications. A few scenarios are discussed to provide additional clarity but, in general, application-specific implementations of mREST are not specifically addressed. In short, this document is intended to provide all of the information necessary for an application developer to create mREST interface agents. This includes both mREST clients (mREST manager applications) and mREST servers (logical system elements, or LSEs).
Lu, Jun; Bushel, Pierre R.
2013-01-01
RNA sequencing (RNA-Seq) allows for the identification of novel exon-exon junctions and quantification of gene expression levels. We show that from RNA-Seq data one may also detect utilization of alternative polyadenylation (APA) in 3′ untranslated regions (3′ UTRs), known to play a critical role in the regulation of mRNA stability, cellular localization and translation efficiency. Given the dynamic nature of APA, it is desirable to examine the APA on a sample-by-sample basis. We used a Poisson hidden Markov model (PHMM) of RNA-Seq data to identify potential APA in human liver and brain cortex tissues leading to shortened 3′ UTRs. Over three hundred transcripts with shortened 3′ UTRs were detected with sensitivity >75% and specificity >60%. Tissue-specific 3′ UTR shortening was observed for 32 genes with a q-value ≤ 0.1. When compared to alternative isoforms detected by Cufflinks or MISO, our PHMM method agreed on over 100 transcripts with shortened 3′ UTRs. Given the increasing usage of RNA-Seq for gene expression profiling, using PHMM to investigate sample-specific 3′ UTR shortening could be an added benefit from this emerging technology. PMID:23845781
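The change-point idea behind such a model can be sketched with a minimal two-state, left-to-right HMM with Poisson emissions, decoded by Viterbi. The emission rates, switch probability, and single-switch constraint below are illustrative assumptions, not the authors' actual parameterization.

```python
import math

def poisson_logpmf(k, lam):
    # log P(k | lam) for a Poisson emission
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def viterbi_two_state(counts, lam_high, lam_low, p_switch=0.01):
    """Most likely state path for per-base read counts along a 3' UTR.
    State 0 = upstream of a proximal poly(A) site (high coverage),
    state 1 = downstream of it (low coverage). The 1 -> 0 transition
    is forbidden, so at most one switch (shortening event) occurs."""
    log_stay, log_switch = math.log(1.0 - p_switch), math.log(p_switch)
    lams = (lam_high, lam_low)
    v = [poisson_logpmf(counts[0], lams[s]) for s in (0, 1)]
    back = []
    for k in counts[1:]:
        new_v, ptr = [], []
        # state 0 can only be reached by staying in state 0
        new_v.append(v[0] + log_stay + poisson_logpmf(k, lams[0]))
        ptr.append(0)
        # state 1 is reached by switching from 0 or staying in 1
        best, arg = max((v[0] + log_switch, 0), (v[1] + log_stay, 1))
        new_v.append(best + poisson_logpmf(k, lams[1]))
        ptr.append(arg)
        v, back = new_v, back + [ptr]
    s = 0 if v[0] >= v[1] else 1          # best final state
    path = [s]
    for ptr in reversed(back):            # backtrack
        s = ptr[s]
        path.append(s)
    return list(reversed(path))
```

On a toy coverage profile that drops from ~20 reads per base to ~2, the decoded path places the switch exactly at the drop.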
Some Work and Some Play: Microscopic and Macroscopic Approaches to Labor and Leisure
Niyogi, Ritwik K.; Shizgal, Peter; Dayan, Peter
2014-01-01
Given the option, humans and other animals elect to distribute their time between work and leisure, rather than choosing all of one and none of the other. Traditional accounts of partial allocation have characterised behavior on a macroscopic timescale, reporting and studying the mean times spent in work or leisure. However, averaging over the more microscopic processes that govern choices is known to pose tricky theoretical problems, and also eschews any possibility of direct contact with the neural computations involved. We develop a microscopic framework, formalized as a semi-Markov decision process with possibly stochastic choices, in which subjects approximately maximise their expected returns by making momentary commitments to one or the other activity. We show how macroscopic utilities arise from microscopic ones, and demonstrate how facets such as imperfect substitutability can arise in a more straightforward microscopic manner. PMID:25474151
Machine learning in sentiment reconstruction of the simulated stock market
NASA Astrophysics Data System (ADS)
Goykhman, Mikhail; Teimouri, Ali
2018-02-01
In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
Markov models in dentistry: application to resin-bonded bridges and review of the literature.
Mahl, Dominik; Marinello, Carlo P; Sendi, Pedram
2012-10-01
Markov models are mathematical models that can be used to describe disease progression and evaluate the cost-effectiveness of medical interventions. Markov models allow projecting clinical and economic outcomes into the future and are therefore frequently used to estimate long-term outcomes of medical interventions. The purpose of this paper is to demonstrate the use of Markov models in dentistry, using the example of resin-bonded bridges to replace missing teeth, and to review the literature. We used literature data and a four-state Markov model to project long-term outcomes of resin-bonded bridges over a time horizon of 60 years. In addition, the literature was searched in PubMed Medline for research articles on the application of Markov models in dentistry.
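The mechanics of such a cohort projection are simple: propagate a state distribution through a transition matrix once per cycle. The four state labels and annual probabilities below are hypothetical placeholders, not values from the paper.

```python
def project_cohort(P, start, cycles):
    """Propagate a cohort distribution through a discrete-time Markov model.
    P[i][j] = per-cycle probability of moving from state i to state j."""
    dist = list(start)
    history = [dist]
    n = len(P)
    for _ in range(cycles):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
        history.append(dist)
    return history

# Hypothetical states: functional, minor complication, replaced, tooth lost
P = [[0.95, 0.03, 0.015, 0.005],
     [0.70, 0.20, 0.08,  0.02],
     [0.0,  0.0,  0.98,  0.02],
     [0.0,  0.0,  0.0,   1.0]]   # "tooth lost" is absorbing
history = project_cohort(P, [1.0, 0.0, 0.0, 0.0], 60)
```

Each yearly distribution still sums to one, and the absorbing-state share grows monotonically over the 60-year horizon.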
User Interface Technology for Formal Specification Development
NASA Technical Reports Server (NTRS)
Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Formal specification development and modification are an essential component of the knowledge-based software life cycle. User interface technology is needed to empower end-users to create their own formal specifications. This paper describes the advanced user interface for AMPHION, a knowledge-based software engineering system that targets scientific subroutine libraries. AMPHION is a generic, domain-independent architecture that is specialized to an application domain through a declarative domain theory. Formal specification development and reuse is made accessible to end-users through an intuitive graphical interface that provides semantic guidance in creating diagrams denoting formal specifications in an application domain. The diagrams also serve to document the specifications. Automatic deductive program synthesis ensures that end-user specifications are correctly implemented. The tables that drive AMPHION's user interface are automatically compiled from a domain theory; portions of the interface can be customized by the end-user. The user interface facilitates formal specification development by hiding syntactic details, such as logical notation. It also turns some of the barriers for end-user specification development associated with strongly typed formal languages into active sources of guidance, without restricting advanced users. The interface is especially suited for specification modification. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory. Testing over six months with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development.
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, the approach can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. The framework thus captures non-Markov latent time series models as well as non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
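The "separate estimates, then averaging" structure can be illustrated on a toy scalar problem: consensus ADMM splitting a sum of two quadratics, standing in for the likelihood and prior terms. This is only a sketch of the consensus mechanism; the paper's actual averaging step is carried out by a Kalman smoother over a time series, not a scalar mean.

```python
def consensus_admm(a, b, rho=1.0, iters=200):
    """Minimize (x-a)^2/2 + (x-b)^2/2 by consensus ADMM: each term gets
    its own copy (x1, x2), coupled through the consensus variable z."""
    z, u1, u2 = 0.0, 0.0, 0.0        # consensus value and scaled duals
    for _ in range(iters):
        # separate proximal updates, one per objective term
        x1 = (a + rho * (z - u1)) / (1.0 + rho)
        x2 = (b + rho * (z - u2)) / (1.0 + rho)
        # "averaging" step: z is the mean of the shifted local estimates
        z = ((x1 + u1) + (x2 + u2)) / 2.0
        # dual updates penalize disagreement with the consensus
        u1 += x1 - z
        u2 += x2 - z
    return z
```

For two equally weighted quadratics centered at 0 and 4, the iterates converge to the joint minimizer x = 2, matching the closed-form solution.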
Al-Quwaidhi, Abdulkareem J.; Pearce, Mark S.; Sobngwi, Eugene; Critchley, Julia A.; O’Flaherty, Martin
2014-01-01
Aims To compare the estimates and projections of type 2 diabetes mellitus (T2DM) prevalence in Saudi Arabia from a validated Markov model against other modelling estimates, such as those produced by the International Diabetes Federation (IDF) Diabetes Atlas and the Global Burden of Disease (GBD) project. Methods A discrete-state Markov model was developed and validated that integrates data on population, obesity and smoking prevalence trends in adult Saudis aged ≥25 years to estimate the trends in T2DM prevalence (annually from 1992 to 2022). The model was validated by comparing the age- and sex-specific prevalence estimates against a national survey conducted in 2005. Results Prevalence estimates from this new Markov model were consistent with the 2005 national survey and very similar to the GBD study estimates. Prevalence in men and women in 2000 was estimated by the GBD model respectively at 17.5% and 17.7%, compared to 17.7% and 16.4% in this study. The IDF estimates of the total diabetes prevalence were considerably lower at 16.7% in 2011 and 20.8% in 2030, compared with 29.2% in 2011 and 44.1% in 2022 in this study. Conclusion In contrast to other modelling studies, both the Saudi IMPACT Diabetes Forecast Model and the GBD model directly incorporated the trends in obesity prevalence and/or body mass index (BMI) to inform T2DM prevalence estimates. It appears that such a direct incorporation of obesity trends in modelling studies results in higher estimates of the future prevalence of T2DM, at least in countries where obesity has been rapidly increasing. PMID:24447810
Markov switching multinomial logit model: An application to accident-injury severities.
Malyshkina, Nataliya V; Mannering, Fred L
2009-07-01
In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.
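For fixed state probabilities, the two-state switching model marginalizes to a mixture of standard multinomial logit models, one per roadway-safety state. The coefficients, covariates, and state weights below are made up for illustration; in the actual model the state sequence is latent and estimated via MCMC.

```python
import math

def mnl_probs(x, betas):
    """Standard multinomial logit: one coefficient vector per severity
    outcome, softmax over linear utilities (max-shifted for stability)."""
    utils = [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(utils)
    e = [math.exp(u - m) for u in utils]
    tot = sum(e)
    return [v / tot for v in e]

def switching_mnl_probs(x, betas_by_state, p_state):
    """Severity probabilities marginalized over (here fixed) state
    probabilities: a weighted mixture of per-state logit models."""
    mixed = None
    for p, betas in zip(p_state, betas_by_state):
        probs = mnl_probs(x, betas)
        if mixed is None:
            mixed = [p * q for q in probs]
        else:
            mixed = [m + p * q for m, q in zip(mixed, probs)]
    return mixed
```

With a single covariate and two hypothetical states, the mixture interpolates between the per-state severity distributions.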
The generalization ability of SVM classification based on Markov sampling.
Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang
2015-06-01
Previous work studying the generalization ability of the support vector machine classification (SVMC) algorithm is usually based on the assumption of independent and identically distributed samples. In this paper, we go beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples grows, but also yields sparser classifiers when the dataset is large relative to the input dimension.
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of the grey prediction model and a Markov chain, which shows clear advantages for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real-valued boundaries, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into state division to calculate the possibility of each research value belonging to each state, reflecting preference degrees among states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
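The grey half of the approach is the classic GM(1,1) model. A minimal sketch of the base model follows, without the Markov state correction or the background-value optimization that the paper adds; the accumulated series, background values, and time-response function are the standard textbook construction.

```python
import math

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a positive series x0; return fitted
    values for k = 0..n-1 plus a one-step-ahead forecast (last element)."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]                 # accumulated series
    z = [-0.5 * (x1[k - 1] + x1[k]) for k in range(1, n)]    # (negated) background values
    y = x0[1:]
    # least squares for y = a*z + b via the 2x2 normal equations
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    a = (m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy - a * sz) / m
    # time-response function of the whitening equation dx1/dt + a*x1 = b
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + 1)]
```

On an exactly exponential series the fit is nearly perfect, which is why GM(1,1) is typically paired with a Markov correction to handle the volatile residuals of real data.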
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
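A simplified stand-in for the correction idea: exponentially tilt a prior transition matrix by a state observable and solve for the Lagrange multiplier that matches a target stationary average. This is only a sketch of maximum-entropy reweighting on a state constraint, not the paper's full trajectory-constrained C2M2 procedure.

```python
import math

def tilt_msm(P, s, target, lo=-10.0, hi=10.0, iters=60):
    """Tilt each row of a prior transition matrix P by exp(lam * s[j]),
    renormalize, and bisect on lam until the stationary average of the
    state observable s matches `target`."""
    n = len(P)

    def stationary_avg(lam):
        Q = []
        for row in P:
            w = [p * math.exp(lam * s[j]) for j, p in enumerate(row)]
            tot = sum(w)
            Q.append([v / tot for v in w])
        pi = [1.0 / n] * n                      # stationary dist. by power iteration
        for _ in range(500):
            pi = [sum(pi[i] * Q[i][j] for i in range(n)) for j in range(n)]
        return sum(pi[j] * s[j] for j in range(n)), Q

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        avg, Q = stationary_avg(mid)
        if avg < target:                        # need more weight on high-s states
            lo = mid
        else:
            hi = mid
    return Q
```

For a symmetric two-state prior and a target stationary occupancy of 0.8 on the second state, the tilted model concentrates each row accordingly while rows remain normalized.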
Over the past decade, our group has been working to bring coastal ecosystems into integrated basin-lakewide monitoring and assessment strategies for the Great Lakes. We have conducted a wide range of research on coastal tributaries, coastal wetlands, semi-enclosed embayments an...
Experimental method to account for structural compliance in nanoindentation measurements
Joseph E. Jakes; Charles R. Frihart; James F. Beecher; Robert J. Moon; D. S. Stone
2008-01-01
The standard Oliver–Pharr nanoindentation analysis tacitly assumes that the specimen is structurally rigid and that it is both semi-infinite and homogeneous. Many specimens violate these assumptions. We show that when the specimen flexes or possesses heterogeneities, such as free edges or interfaces between regions of different properties, artifacts arise...
Methods to Account for Accelerated Semi-Conductor Device Wearout in Longlife Aerospace Applications
2003-01-01
Guédon, Yann; d'Aubenton-Carafa, Yves; Thermes, Claude
2006-03-01
The most commonly used models for analysing local dependencies in DNA sequences are (high-order) Markov chains. Incorporating knowledge relative to the possible grouping of the nucleotides makes it possible to define dedicated sub-classes of Markov chains. The problem of formulating lumpability hypotheses for a Markov chain is therefore addressed. In the classical approach to lumpability, this problem can be formulated as the determination of an appropriate state space (smaller than the original state space) such that the lumped chain defined on this state space retains the Markov property. We propose a different perspective on lumpability where the state space is fixed and the partitioning of this state space is represented by a one-to-many probabilistic function within a two-level stochastic process. Three nested classes of lumped processes can be defined in this way as sub-classes of first-order Markov chains. These lumped processes enable parsimonious reparameterizations of Markov chains that help to reveal relevant partitions of the state space. Characterizations of the lumped processes on the original transition probability matrix are derived. Different model selection methods relying either on hypothesis testing or on penalized log-likelihood criteria are presented, as well as extensions to lumped processes constructed from high-order Markov chains. The relevance of the proposed approach to lumpability is illustrated by the analysis of DNA sequences. In particular, the use of lumped processes makes it possible to highlight differences between intronic sequences and gene untranslated region sequences.
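The classical notion of strong lumpability referenced above can be checked directly on a transition matrix: within each block of the partition, every state must have the same total transition probability into each block. A minimal sketch with an illustrative matrix:

```python
def is_strongly_lumpable(P, partition, tol=1e-9):
    """Strong lumpability test: states in the same block must agree on
    their aggregated transition probability into every block."""
    for block in partition:
        ref = None
        for i in block:
            agg = [sum(P[i][j] for j in blk) for blk in partition]
            if ref is None:
                ref = agg
            elif any(abs(a - r) > tol for a, r in zip(agg, ref)):
                return False
    return True
```

For the 3-state example below, grouping states {0, 1} works because both send total mass 0.4 to the pair and 0.6 to state 2; perturbing one row breaks the property.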
Detection of motor execution using a hybrid fNIRS-biosignal BCI: a feasibility study
2013-01-01
Background Brain-computer interfaces (BCIs) were recently recognized as a method to promote neuroplastic effects in motor rehabilitation. The core of a BCI is a decoding stage by which signals from the brain are classified into different brain-states. The goal of this paper was to test the feasibility of a single trial classifier to detect motor execution based on signals from cortical motor regions, measured by functional near-infrared spectroscopy (fNIRS), and the response of the autonomic nervous system. An approach that allowed for individually tuned classifier topologies was opted for. This promises to be a first step towards a novel form of active movement therapy that could be operated and controlled by paretic patients. Methods Seven healthy subjects performed repetitions of an isometric finger pinching task, while changes in oxy- and deoxyhemoglobin concentrations were measured in the contralateral primary motor cortex and ventral premotor cortex using fNIRS. Simultaneously, heart rate, breathing rate, blood pressure and skin conductance response were measured. Hidden Markov models (HMM) were used to classify between active isometric pinching phases and rest. The classification performance (accuracy, sensitivity and specificity) was assessed for two types of input data: (i) fNIRS-signals only and (ii) fNIRS- and biosignals combined. Results fNIRS data were classified with an average accuracy of 79.4%, which increased significantly to 88.5% when biosignals were also included (p=0.02). Comparable increases were observed for the sensitivity (from 78.3% to 87.2%, p=0.008) and specificity (from 80.5% to 89.9%, p=0.062). Conclusions This study showed, for the first time, promising classification results with hemodynamic fNIRS data obtained from motor regions and simultaneously acquired biosignals. 
Combining fNIRS data with biosignals has a beneficial effect, opening new avenues for the development of brain-body-computer interfaces for rehabilitation applications. Further research is required to identify the contribution of each modality to the decoding capability of the subject’s hemodynamic and physiological state. PMID:23336819
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated, and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
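The core E/M iteration, stripped of the bias-field correction and MRF prior described above, can be sketched for a two-component 1-D Gaussian mixture over voxel intensities. The initialization and data below are illustrative only.

```python
import math

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture:
    E-step computes responsibilities, M-step re-estimates the
    means, variances and mixing weights from them."""
    mu = [min(data), max(data)]          # crude but separating initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity
        resp = []
        for x in data:
            p = [w[k] * gaussian_pdf(x, mu[k], var[k]) for k in (0, 1)]
            tot = sum(p)
            resp.append([v / tot for v in p])
        # M-step: weighted updates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against collapse
            w[k] = nk / len(data)
    return mu, var, w
```

On a synthetic bimodal intensity set (clusters near 0 and 10), the recovered means land on the two modes with equal mixing weights.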
A method for using real world data in breast cancer modeling.
Pobiruchin, Monika; Bochum, Sylvia; Martens, Uwe M; Kieser, Meinhard; Schramm, Wendelin
2016-04-01
Today, hospitals and other health care-related institutions are accumulating a growing volume of real world clinical data. Such data offer new possibilities for the generation of disease models for health economic evaluation. In this article, we propose a new approach to leverage cancer registry data for the development of Markov models. Records of breast cancer patients from a clinical cancer registry were used to construct a real world data driven disease model. We describe a model generation process which maps database structures to disease state definitions based on medical expert knowledge. Software was programmed in Java to automatically derive a model structure and transition probabilities. We illustrate our method with the reconstruction of a published breast cancer reference model derived primarily from clinical study data. In doing so, we exported longitudinal patient data from a clinical cancer registry covering eight years. The patient cohort (n=892) comprised HER2-positive and HER2-negative women treated with or without Trastuzumab. The models generated with this method for the respective patient cohorts were comparable to the reference model in their structure and treatment effects. However, our computed disease models reflect a more detailed picture of the transition probabilities, especially for disease-free survival and recurrence. Our work presents an approach to extract Markov models semi-automatically using real world data from a clinical cancer registry. Health care decision makers may benefit from more realistic disease models to improve health care-related planning and actions based on their own data.
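Once registry fields have been mapped to disease states, the transition-probability step reduces to normalized transition counting over per-patient state sequences. The state names and records below are hypothetical; the actual pipeline derives the state definitions from expert-defined mapping rules.

```python
def transitions_from_records(records, states):
    """Maximum-likelihood Markov transition matrix from longitudinal
    patient state sequences (one list of states per patient)."""
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    counts = [[0] * n for _ in range(n)]
    for seq in records:
        for a, b in zip(seq, seq[1:]):       # consecutive visit pairs
            counts[idx[a]][idx[b]] += 1
    P = []
    for row in counts:
        tot = sum(row)
        P.append([c / tot if tot else 0.0 for c in row])
    return P
```

With two toy patient trajectories over hypothetical states DFS (disease-free survival), REC (recurrence) and DEATH, the DFS row reflects two of three observed transitions going to recurrence.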
NASA Astrophysics Data System (ADS)
Villar, M.; Garnier, C.; Chabert, F.; Nassiet, V.; Samélor, D.; Diez, J. C.; Sotelo, A.; Madre, M. A.
2018-07-01
The temperature field along the thickness of the specimens has been measured during transmission laser welding. Polyetherketoneketone (PEKK) is a very high performance thermoplastic with tunable properties. We have shown that this grade of PEKK can be turned into a quasi-amorphous or semi-crystalline material, due to its slow kinetics of crystallization. Its glass transition temperature is 150 °C. Its degree of crystallinity directly affects its optical properties: the transmittance of quasi-amorphous PEKK is about 60% in the NIR region (wavelength range from 0.4 to 1.2 μm), whereas it is less than 3% for the semi-crystalline material. The welding tests have been carried out with an 808 nm laser diode apparatus. The heat field is recorded during the welding experiment by infrared thermography, with the camera sensor perpendicular to the laser sheet and to the sample's length to focus on the welded interface. The study is divided into two steps: firstly, a single specimen is irradiated with an energy density of 22 J·mm⁻²: the whole sample thickness is heated up, and the maximum temperature reaches 222 ± 7 °C. This temperature corresponds to about Tg + 70 °C, but the polymer does not reach its melting temperature. After that, welding tests were performed: a transparent (quasi-amorphous) sample as the upper part and an opaque (semi-crystalline) one as the lower part were assembled in static conditions. The maximum temperature reached at the welded interface is about 295 °C when the upper specimen is irradiated for 16 s with an energy density of 28 J·mm⁻². The temperature at the welded interface stayed above Tg for 55 s and exceeded the melting temperature for 5 s before rapid cooling. These parameters are suitable for assembling both polymeric parts into a strong weld. This work shows that infrared thermography is an appropriate technique to improve the reliability of the laser welding process for high performance thermoplastics.
2012-01-01
Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. 
Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons up to the stage of preparing full-length reference sequence libraries. The LTRsift software is freely available at http://www.zbh.uni-hamburg.de/LTRsift under an open-source license. PMID:23131050
Steinbiss, Sascha; Kastens, Sascha; Kurtz, Stefan
2012-11-07
NASA Astrophysics Data System (ADS)
Peng, Jianping
The SiO_2-Si system has been the subject of extensive study for several decades. Particular interest has been paid to the interface between the Si single crystal and the amorphous SiO_2, which determines the properties and performance of devices. This is significant because of the importance of Si technology in the semiconductor industry. The development of the high-intensity slow positron beam at Brookhaven National Laboratory made it possible to study this system for the first time using the positron two-dimensional angular correlation of annihilation radiation (2D-ACAR) technique. 2D-ACAR is a well-established, non-destructive microscopic probe for studying the electronic structure of materials and for performing depth-resolved measurements. Some unique information was obtained from the measurements performed on the SiO_2-Si system: positronium (Ps) atom formation and trapping in microvoids in both the oxide and interface regions, and positron annihilation at vacancy-like defects in the interface region, which can be attributed to the well-known Pb centers. The discovery of the microvoids in the interface region may have some impact on the fabrication of the next generation of electronic devices. Using the conventional 2D-ACAR setup with ^{22}Na as the positron source, we also studied the native arsenic (As) vacancy in semi-insulating gallium arsenide (SI-GaAs), coupled with in situ infrared light illumination. The defect spectrum was obtained by comparing the spectrum taken without photo-illumination to the spectrum taken with photo-illumination. The photo-illumination excited electrons from the valence band to the defect level so that positrons can become localized in the defects. The two experiments may represent a new direction for the application of the positron 2D-ACAR technique in solid-state physics and materials science.
NASA Astrophysics Data System (ADS)
Carbeck, Jeffrey; Petit, Cecilia
2004-03-01
Current efforts in nanotechnology use one of two basic approaches: top-down fabrication and bottom-up assembly. Top-down strategies use lithography and contact printing to create patterned surfaces and microfluidic channels that, in turn, can corral and organize nanoscale structures. Bottom-up approaches use templates to direct the assembly of atoms, molecules, and nanoparticles through molecular recognition. The goal of this work is to integrate these strategies by first patterning and orienting DNA molecules through top-down tools so that single DNA chains can then serve as templates for the bottom-up construction of hetero-structures composed of proteins and nanoparticles, both metallic and semi-conducting. The first part of this talk focuses on the top-down strategies used to create microscopic patterns of stretched and aligned molecules of DNA. Specifically, it presents a new method in which molecular combing -- a process by which molecules are deposited and stretched onto a surface by the passage of an air-water interface -- is performed in microchannels. This approach demonstrates that the shape and motion of this interface serve as an effective local field directing the chains dynamically as they are stretched onto the surface. The geometry of the microchannel directs the placement of the DNA molecules, while the geometry of the air-water interface directs the local orientation and curvature of the molecules. This ability to control both the placement and orientation of chains has implications for the use of this technique in genetic analysis and in the bottom-up approach to nanofabrication. The second half of this talk presents our bottom-up strategy, which allows placement of nanoparticles along individual DNA chains with a theoretical resolution of less than 1 nm. Specifically, we demonstrate the sequence-specific patterning of nanoparticles via the hybridization of functionalized complementary probes to surface-bound chains of double-stranded DNA.
Using this technique, we demonstrate the ability to assemble metals, semiconductors, and a composite of both on a single molecule.
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
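The note's framing of HMMs via classical Markov chains can be made concrete with the forward algorithm, which scores an observation sequence against a model. This is a minimal sketch in pure Python; the two-state transition, emission, and initial probabilities are illustrative values, not taken from the note itself:

```python
# Forward algorithm for a two-state HMM, pure Python.
# A = hidden-state transition matrix, B = emission matrix,
# pi = initial distribution; all values are illustrative.

def forward(obs, A, B, pi):
    """Return P(obs) by summing over all hidden state paths."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return sum(alpha)

A = [[0.7, 0.3],   # hidden-state transition probabilities
     [0.4, 0.6]]
B = [[0.9, 0.1],   # emission probabilities for 2 observable symbols
     [0.2, 0.8]]
pi = [0.5, 0.5]

p = forward([0, 1, 0], A, B, pi)
```

Summing `forward` over every possible length-3 observation sequence returns 1, a quick sanity check that the recursion conserves probability.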
Using Games to Teach Markov Chains
ERIC Educational Resources Information Center
Johnson, Roger W.
2003-01-01
Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
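Game length here is the absorption time of a Markov chain on board positions. As a sketch (a hypothetical race game, far simpler than Hi Ho! Cherry-O or Chutes and Ladders), the expected number of turns can be computed by backward recursion, because every move advances the token and no matrix inversion is needed:

```python
# Expected game length for a toy "race to the last square" game:
# from square i you advance 1 or 2 squares with equal probability,
# and overshooting counts as landing on the last square. Since moves
# only go forward, the expected-turns equations solve backwards.

def expected_turns(n_squares):
    last = n_squares - 1
    t = [0.0] * n_squares          # t[i] = expected turns from square i
    for i in range(last - 1, -1, -1):
        nxt1 = min(i + 1, last)
        nxt2 = min(i + 2, last)
        t[i] = 1.0 + 0.5 * t[nxt1] + 0.5 * t[nxt2]
    return t[0]
```

For a 4-square board the recursion gives 2.25 expected turns; the same pattern extends to spinner-driven games by widening the set of possible moves.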
Sampling rare fluctuations of discrete-time Markov chains
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-03-01
We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
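As a concrete instance of the rate functions discussed above, consider the fraction of time a two-state chain spends in state 0. The sketch below uses the standard tilted-matrix construction: the scaled cumulant generating function is the log of the largest eigenvalue of the tilted transition matrix, and the rate function is its Legendre transform (the chain and the coarse grid are illustrative choices, not the paper's model or method):

```python
import math

# Large-deviation rate function for the fraction of time a two-state
# Markov chain spends in state 0. The chain below (stay probability
# 0.5 in each state) is an illustrative choice.

P = [[0.5, 0.5],
     [0.5, 0.5]]

def scgf(s):
    """ln of the largest eigenvalue of the tilted matrix W(s)."""
    w = [[P[i][j] * math.exp(s if j == 0 else 0.0) for j in range(2)]
         for i in range(2)]
    tr = w[0][0] + w[1][1]
    det = w[0][0] * w[1][1] - w[0][1] * w[1][0]
    lam = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0
    return math.log(lam)

def rate(a, s_grid=None):
    """Legendre transform I(a) = sup_s [s*a - scgf(s)] on a coarse grid."""
    if s_grid is None:
        s_grid = [k / 100.0 for k in range(-500, 501)]
    return max(s * a - scgf(s) for s in s_grid)
```

For this symmetric chain the time in state 0 is binomial, so `rate(a)` should recover the coin-flip rate function a·ln(2a) + (1-a)·ln(2(1-a)), vanishing at a = 1/2.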
NASA Astrophysics Data System (ADS)
Jamaluddin, Fadhilah; Rahim, Rahela Abdul
2015-12-01
Markov chains have been used since 1913 for studying the flow of data over consecutive years and for forecasting. The key requirement in a Markov chain model is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially for long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov chain by introducing the exponential smoothing technique for developing an appropriate TPM.
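A minimal sketch of the proposed enhancement, with assumed details (the count-based yearly estimator, the smoothing constant, and the row renormalisation are illustrative choices, not the paper's exact procedure):

```python
# Hypothetical sketch: estimate a yearly transition probability matrix
# (TPM) from observed state sequences, then blend successive yearly
# TPMs with simple exponential smoothing. Renormalising each row keeps
# the smoothed matrix stochastic.

def tpm_from_sequence(seq, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    tpm = []
    for row in counts:
        total = sum(row)
        tpm.append([c / total if total else 1.0 / n_states for c in row])
    return tpm

def smooth_tpms(yearly_tpms, alpha=0.3):
    smoothed = [row[:] for row in yearly_tpms[0]]
    for tpm in yearly_tpms[1:]:
        for i, row in enumerate(tpm):
            for j, p in enumerate(row):
                smoothed[i][j] = alpha * p + (1 - alpha) * smoothed[i][j]
        for i, row in enumerate(smoothed):      # renormalise each row
            s = sum(row)
            smoothed[i] = [p / s for p in row]
    return smoothed
```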
Decentralized learning in Markov games.
Vrancx, Peter; Verbeeck, Katja; Nowé, Ann
2008-08-01
Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.
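The building block of such decentralized schemes is the linear reward-inaction (L_R-I) automaton update, which moves the action-probability vector toward the chosen action only when the environment signals a reward. A minimal sketch (the two-action usage and learning rate below are illustrative):

```python
# Linear reward-inaction (L_R-I) learning-automaton update.
# On reward, probability mass shifts toward the chosen action;
# on penalty, the vector is left unchanged (the "inaction" part).

def lri_update(p, chosen, reward, lr=0.1):
    """Return the updated action-probability vector."""
    if not reward:                 # reward-inaction: ignore penalties
        return p[:]
    return [pi + lr * (1.0 - pi) if i == chosen else pi * (1.0 - lr)
            for i, pi in enumerate(p)]
```

The update preserves the total probability: the chosen action gains lr·(1 - p_chosen), which exactly matches the mass removed from the other actions.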
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of its learning ability on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling when the number of training samples is large.
NonMarkov Ito Processes with 1-state memory
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2010-08-01
A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
Standfield, L; Comans, T; Raymer, M; O'Leary, S; Moretto, N; Scuffham, P
2016-08-01
Hospital outpatient orthopaedic services traditionally rely on medical specialists to assess all new patients to determine appropriate care. This has resulted in significant delays in service provision. In response, Orthopaedic Physiotherapy Screening Clinics and Multidisciplinary Services (OPSC) have been introduced to assess and co-ordinate care for semi- and non-urgent patients. To compare the efficiency of delivering increased semi- and non-urgent orthopaedic outpatient services through: (1) additional OPSC services; (2) additional traditional orthopaedic medical services with added surgical resources (TOMS + Surg); or (3) additional TOMS without added surgical resources (TOMS - Surg). A cost-utility analysis using discrete event simulation (DES) with dynamic queuing (DQ) was used to predict the cost effectiveness, throughput, queuing times, and resource utilisation associated with introducing additional OPSC or TOMS ± Surg versus usual care. The introduction of additional OPSC or TOMS (±surgery) would be considered cost effective in Australia. However, OPSC was the most cost-effective option. Increasing the capacity of current OPSC services is an efficient way to improve patient throughput and waiting times without exceeding current surgical resources. An OPSC capacity increase of ~100 patients per month appears cost effective (A$8546 per quality-adjusted life-year) and results in a high level of OPSC utilisation (98%). Increasing OPSC capacity to manage semi- and non-urgent patients would be cost effective, improve throughput, and reduce waiting times without exceeding current surgical resources. Unlike Markov cohort modelling, microsimulation, or DES without DQ, employing DES-DQ in situations where capacity constraints predominate provides valuable additional information beyond cost effectiveness to guide resource allocation decisions.
Chumney, Elinor C G; Biddle, Andrea K; Simpson, Kit N; Weinberger, Morris; Magruder, Kathryn M; Zelman, William N
2004-01-01
As cost-effectiveness analyses (CEAs) are increasingly used to inform policy decisions, there is a need for more information on how different cost determination methods affect cost estimates and the degree to which the resulting cost-effectiveness ratios (CERs) may be affected. The lack of specificity of diagnosis-related groups (DRGs) could mean that they are ill-suited for costing applications in CEAs. Yet, the implications of using International Classification of Diseases-9th edition (ICD-9) codes or a form of disease-specific risk group stratification instead of DRGs have yet to be clearly documented. To demonstrate the implications of different disease coding mechanisms on costs and the magnitude of error that could be introduced in head-to-head comparisons of resulting CERs. We based our analyses on a previously published Markov model for HIV/AIDS therapies. We used the Healthcare Cost and Utilisation Project Nationwide Inpatient Sample (HCUP-NIS) data release 6, which contains all-payer data on hospital inpatient stays from selected states. We added costs for the mean number of hospitalisations, derived from analyses based on either DRG or ICD-9 codes or risk group stratification cost weights, to the standard outpatient and prescription drug costs to yield an estimate of total charges for each AIDS-defining illness (ADI). Finally, we estimated the Markov model three times with the appropriate ADI cost weights to obtain CERs specific to the use of either DRG or ICD-9 codes or risk group. Contrary to expectations, we found that the choice of coding/grouping assumptions that are disease-specific by either DRG codes, ICD-9 codes or risk group resulted in very similar CER estimates for highly active antiretroviral therapy. The large variations in the specific ADI cost weights across the three different coding approaches were especially interesting.
However, because no one approach produced consistently higher estimates than the others, the Markov model's weighted cost per event and resulting CERs were remarkably close in value to one another. Although DRG codes are based on broader categories and contain less information than ICD-9 codes, in practice the choice of whether to use DRGs or ICD-9 codes may have little effect on the CEA results in heterogeneous conditions such as HIV/AIDS.
Khalifa, M B; Weidenhaupt, M; Choulier, L; Chatellier, J; Rauffer-Bruyère, N; Altschuh, D; Vernet, T
2000-01-01
The influence of framework residues belonging to VH and VL modules of antibody molecules on antigen binding remains poorly understood. To investigate the functional role of such residues, we have performed semi-conservative amino acid replacements at the VH-VL interface. This work was carried out with (i) variants of the same antibody and (ii) with antibodies of different specificities (Fab fragments 145P and 1F1h), in order to check if functional effects are additive and/or similar for the two antibodies. Interaction kinetics of Fab mutants with peptide and protein antigens were measured using a BIACORE instrument. The substitutions introduced at the VH-VL interface had no significant effects on k(a) but showed small, significant effects on k(d). Mutations in the VH module affected k(d) not only for the two different antibodies but also for variants of the same antibody. These effects varied both in direction and in magnitude. In the VL module, the double mutation F(L37)L-Q(L38)L, alone or in combination with other mutations, consistently decreased k(d) about two-fold in Fab 145P. Other mutations in the VL module had no effect on k(d) in 145P, but always decreased k(d) in 1F1h. Moreover, in both systems, small-magnitude non-additive effects on k(d) were observed, but affinity variations seemed to be limited by a threshold. When comparing functional effects in antibodies of different specificity, no general rules could be established. In addition, no clear relationship could be pointed out between the nature of the amino acid change and the observed functional effect. Our results show that binding kinetics are affected by alteration of framework residues remote from the binding site, although these effects are unpredictable for most of the studied changes. Copyright 2000 John Wiley & Sons, Ltd.
An online semi-supervised brain-computer interface.
Gu, Zhenghui; Yu, Zhuliang; Shen, Zhifang; Li, Yuanqing
2013-09-01
Practical brain-computer interface (BCI) systems should require only low training effort from the user, and the algorithms used to classify the intent of the user should be computationally efficient. However, due to inter- and intra-subject variations in the EEG signal, intermittent training/calibration is often unavoidable. In this paper, we present an online semi-supervised P300 BCI speller system. After a short initial training (around or less than 1 min in our experiments), the system is switched to a mode where the user can input characters through selective attention. In this mode, a self-training least squares support vector machine (LS-SVM) classifier is gradually enhanced in the back end with the unlabeled EEG data collected online after every character input. Even though the user may experience some input errors at the beginning due to the small initial training dataset, the accuracy approaches that of the fully supervised method in a few minutes. The algorithm based on LS-SVM and its sequential update has low computational complexity; thus, it is suitable for online applications. The effectiveness of the algorithm has been validated through data analysis on BCI Competition III dataset II (P300 speller BCI data). The performance of the online system was evaluated through experimental results on eight healthy subjects, all of whom achieved a spelling accuracy of 85% or above within an average online semi-supervised learning time of around 3 min.
Semi-automated based ground-truthing GUI for airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Lydic, Rich; Moore, Tim; Trang, Anh; Agarwal, Sanjeev; Tiwari, Spandan
2005-06-01
Over the past several years, an enormous amount of airborne imagery in various formats has been collected, and collection will continue into the future to support airborne mine/minefield detection processes, improve algorithm development, and aid in imaging sensor development. The ground-truthing of imagery is an essential part of the algorithm development process, helping to validate the detection performance of the sensor and to improve algorithm techniques. The GUI (Graphical User Interface) called SemiTruth was developed using Matlab software, incorporating the signal processing, image processing, and statistics toolboxes, to aid in ground-truthing imagery. The semi-automated ground-truthing GUI is made possible by the current data collection method, which includes UTM/GPS (Universal Transverse Mercator/Global Positioning System) coordinate measurements for the mine target and fiducial locations on the given minefield layout, supporting identification of the targets in the raw imagery. This semi-automated ground-truthing effort was developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, Airborne Application Branch, with support from the University of Missouri-Rolla.
SAIDE: A Semi-Automated Interface for Hydrogen/Deuterium Exchange Mass Spectrometry.
Villar, Maria T; Miller, Danny E; Fenton, Aron W; Artigues, Antonio
2010-01-01
Deuterium/hydrogen exchange in combination with mass spectrometry (DH MS) is a sensitive technique for detection of changes in protein conformation and dynamics. Since temperature, pH and timing control are the key elements for reliable and efficient measurement of hydrogen/deuterium content in proteins and peptides, we have developed a small, semi-automatic interface for deuterium exchange that couples the HPLC pumps with a mass spectrometer. This interface is relatively inexpensive to build, and provides efficient temperature and timing control in all stages of enzyme digestion, HPLC separation and mass analysis of the resulting peptides. We have tested this system with a series of standard tryptic peptides reconstituted in solvents containing increasing concentrations of deuterium. Our results demonstrate that the use of this interface results in minimal loss of deuterium due to back exchange during HPLC desalting and separation. For peptides reconstituted in a buffer containing 100% deuterium, and assuming that all amide linkages have exchanged hydrogen for deuterium, the maximum loss of deuterium content is only 17% of the label, indicating the loss of only one deuterium atom per peptide.
Ferromagnetism and spin-dependent transport at a complex oxide interface
NASA Astrophysics Data System (ADS)
Ayino, Yilikal; Xu, Peng; Tigre-Lazo, Juan; Yue, Jin; Jalan, Bharat; Pribiag, Vlad S.
2018-03-01
Complex oxide interfaces are a promising platform for studying a wide array of correlated electron phenomena in low dimensions, including magnetism and superconductivity. The microscopic origin of these phenomena in complex oxide interfaces remains an open question. Here we investigate the magnetic properties of semi-insulating NdTiO3/SrTiO3 (NTO/STO) interfaces and present the first millikelvin study of NTO/STO. The magnetoresistance (MR) reveals signatures of local ferromagnetic order and of spin-dependent thermally activated transport, which are described quantitatively by a simple phenomenological model. We discuss possible origins of the interfacial ferromagnetism. In addition, the MR also shows transient hysteretic features on a time scale of ~10-100 s. We demonstrate that these are consistent with an extrinsic magnetothermal origin, which may have been misinterpreted in previous reports of magnetism in STO-based oxide interfaces. The existence of these two MR regimes (steady-state and transient) highlights the importance of time-dependent measurements for distinguishing signatures of ferromagnetism from other effects that can produce hysteresis at low temperatures.
Embodiment and Estrangement: Results from a First-in-Human "Intelligent BCI" Trial.
Gilbert, F; Cook, M; O'Brien, T; Illes, J
2017-11-11
While new generations of implantable brain computer interface (BCI) devices are being developed, evidence in the literature about their impact on the patient experience is lagging. In this article, we address this knowledge gap by analysing data from the first-in-human clinical trial to study patients with implanted BCI advisory devices. We explored perceptions of self-change across six patients who volunteered to be implanted with artificially intelligent BCI devices. We used qualitative methodological tools grounded in phenomenology to conduct in-depth, semi-structured interviews. Results show that, on the one hand, BCIs can positively increase a sense of the self and control; on the other hand, they can induce radical distress, feelings of loss of control, and a rupture of patient identity. We conclude by offering suggestions for the proactive creation of preparedness protocols specific to intelligent-predictive and advisory-BCI technologies essential to prevent potential iatrogenic harms.
Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation
NASA Astrophysics Data System (ADS)
Stephenson, Jerry L.; Kapraun, Chris
Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.
Kammerer, Ulrike; Kruse, Andrea; Barrientos, Gabriela; Arck, Petra C; Blois, Sandra M
2008-01-01
Successful mammalian pregnancy relies on the action of sophisticated regulatory mechanisms that allow the fetus (a semi-allograft) to grow and develop in the uterus in spite of being recognized by maternal immune cells. Among the several immunocompetent cells present at the maternal-fetal interface, dendritic cells (DC) seem to be of particular relevance for pregnancy maintenance given their unique ability to induce both antigen-specific immunity and tolerance. Thus, these cells would be potentially suitable candidates for the regulation of local immune responses within the uterus necessary to meet the difficult task of protecting the mother from infection without compromising fetal survival. Current evidence on decidual DC phenotype and function, and their role in the regulation of the maternal immune system during mouse and human pregnancy, is discussed and reviewed herein, highlighting novel DC functions that seem to be of great importance for a successful pregnancy outcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy
2012-11-15
Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate.
Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ≤3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from -0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ≤5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result. Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance.
Estimation of sojourn time in chronic disease screening without data on interval cases.
Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W
2000-03-01
Estimation of the sojourn time in the preclinical detectable period in disease screening, or of transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases can be difficult in developing countries owing to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models against data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. Good internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
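As a hedged illustration of the three-state idea (not the paper's estimator): in a progressive model with constant transition intensities, the sojourn time in the preclinical detectable state is exponentially distributed, so the MST is the reciprocal of the exit intensity. The intensity below is back-derived from the reported MST of 1.90 years purely for illustration.

```python
import random

# Sketch of a progressive three-state disease model:
#   1: no detectable disease -> 2: preclinical screen-detectable -> 3: clinical.
# With constant transition intensity lam23 out of state 2, the preclinical
# sojourn time is exponential and the mean sojourn time (MST) is 1/lam23.

def simulate_mst(lam23, n=100_000, rng=None):
    """Monte Carlo estimate of the mean preclinical sojourn time."""
    rng = rng or random.Random(0)
    return sum(rng.expovariate(lam23) for _ in range(n)) / n

lam23 = 1 / 1.90   # intensity implied by the reported MST of 1.90 years
est = simulate_mst(lam23)
```

With enough samples the Monte Carlo mean recovers 1/lam23, illustrating why estimating the exit intensity (here, without interval cases) pins down the MST.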
NASA Astrophysics Data System (ADS)
Nickelsen, Daniel
2017-07-01
The statistics of velocity increments in homogeneous and isotropic turbulence exhibit universal features in the limit of infinite Reynolds number. Ever since Kolmogorov’s scaling law of 1941, many turbulence models have aimed to capture these universal features, and some are known to have an equivalent formulation in terms of Markov processes. We derive the Markov process equivalent to the particularly successful scaling law postulated by She and Leveque. The Markov process is a jump process for velocity increments u(r) in scale r in which the jumps occur randomly but with deterministic width in u. From its master equation we establish a prescription to simulate the She-Leveque process and compare it with Kolmogorov scaling. To put the She-Leveque process into the context of other established turbulence models on the Markov level, we derive a diffusion process for u(r) using two properties of the Navier-Stokes equation. This diffusion process already includes Kolmogorov scaling, extended self-similarity and a class of random cascade models. The fluctuation theorem of this Markov process implies a ‘second law’ that puts a loose bound on the multipliers of the random cascade models. This bound explicitly allows for instances of inverse cascades, which are necessary to satisfy the fluctuation theorem. By adding a jump process to the diffusion process, we go beyond Kolmogorov scaling and formulate the most general scaling law for the class of Markov processes having both diffusion and jump parts. This Markov scaling law includes She-Leveque scaling and a scaling law derived by Yakhot.
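A minimal sketch of the random-cascade picture referenced above (a generic log-normal cascade of our own choosing, not the She-Leveque jump process itself): each cascade step multiplies the velocity increment by an independent random factor, so moments grow geometrically in the number of steps.

```python
import math
import random

# Toy log-normal multiplicative cascade: u -> W * u per scale step, with
# log(W) ~ N(mu, sigma^2).  The second moment then grows per step by
# E[W^2] = exp(2*mu + 2*sigma**2), i.e. a power law in scale.

def cascade_increments(n_steps, n_samples, mu=0.0, sigma=0.15, rng=None):
    rng = rng or random.Random(1)
    samples = []
    for _ in range(n_samples):
        u = 1.0
        for _ in range(n_steps):
            u *= math.exp(rng.gauss(mu, sigma))   # log-normal multiplier
        samples.append(u)
    return samples

vals = cascade_increments(10, 20000)
m2 = sum(v * v for v in vals) / len(vals)   # ~ exp(10 * 2 * sigma**2)
```

The empirical second moment should approach exp(0.45) ≈ 1.57 for these parameters, mirroring how cascade multipliers fix the scaling exponents of the structure functions.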
Spiders and Camels and Sybase! Oh, My!
NASA Astrophysics Data System (ADS)
Barg, Irene; Ferro, Anthony J.; Stobie, Elizabeth
The Hubble Space Telescope NICMOS Guaranteed Time Observers (GTOs) requested a means of sharing point spread function (PSF) observations. Because of the specifics of the instrument, these PSFs are very useful in the analysis of observations and can vary with the conditions on the telescope. The GTOs are geographically diverse, so a centralized processing solution would not work. The individual PSF observations were reduced by different people, at different institutions, using different reduction software. These varied observations had to be combined into a single database and linked to other information as well. The NICMOS software group at the University of Arizona developed a solution based on a World Wide Web (WWW) interface, using Perl/CGI forms to query the submitter about the PSF data to be entered. After some semi-automated sanity checks, using the FTOOLS package, the metadata are then entered into a Sybase relational database system. A user of the system can then query the database, again through a WWW interface, to locate and retrieve PSFs which may match their observations, as well as determine other information regarding the telescope conditions at the time of the observations (e.g., the breathing parameter). This presentation discusses some of the driving forces in the design, problems encountered, and the choices made. The tools used, including Sybase, Perl, FTOOLS, and WWW elements are also discussed.
Natural user interface as a supplement of the holographic Raman tweezers
NASA Astrophysics Data System (ADS)
Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel
2014-09-01
Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via a mouse or joystick. Several recent attempts have exploited touch tablets, 2D cameras or the Kinect game console instead. We proposed a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes the signals from the sensors and sends control commands to the HRT, which in turn controls the positions of the trapping beams, the micropositioning stage and the Raman spectra acquisition system. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") for manipulating particles are displayed in a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful when both hands are busy. The proposed methods make manual control of HRT more efficient, and they are also a good platform for future semi-automated and fully automated operation.
Asymptotic inference in system identification for the atom maser.
Catana, Catalin; van Horssen, Merlijn; Guta, Madalin
2012-11-28
System identification is closely related to control theory and plays an increasing role in quantum engineering. In the quantum set-up, system identification is usually equated to process tomography, i.e. estimating a channel by probing it repeatedly with different input states. However, for quantum dynamical systems such as quantum Markov processes, it is more natural to consider the estimation based on continuous measurements of the output, with a given input that may be stationary. We address this problem using asymptotic statistics tools, for the specific example of estimating the Rabi frequency of an atom maser. We compute the Fisher information of different measurement processes as well as the quantum Fisher information of the atom maser, and establish the local asymptotic normality of these statistical models. The statistical notions can be expressed in terms of spectral properties of certain deformed Markov generators, and the connection to large deviations is briefly discussed.
Traces of business cycles in credit-rating migrations
Boreiko, Dmitri; Kaniovski, Serguei; Kaniovski, Yuri; Pflug, Georg
2017-01-01
Using migration data of a rating agency, this paper attempts to quantify the impact of macroeconomic conditions on credit-rating migrations. The migrations are modeled as a coupled Markov chain, where the macroeconomic factors are represented by unobserved tendency variables. In the simplest case, these binary random variables are static and credit-class-specific. A generalization lets the tendency variables evolve as a time-homogeneous Markov chain. A more detailed analysis assumes a tendency variable for every combination of a credit class and an industry. The models are tested on a Standard and Poor’s (S&P’s) dataset. Parameters are estimated by the maximum likelihood method. According to the estimates, the investment-grade financial institutions evolve independently of the rest of the economy represented by the data. This might be evidence of implicit too-big-to-fail bail-out guarantee policies of the regulatory authorities. PMID:28426758
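For context, a minimal sketch of the baseline step that the coupled-chain model extends (the tendency variables and coupling are extra machinery on top of this; the migration data below are made up): the maximum likelihood estimate of a time-homogeneous transition matrix from observed migrations is simply the row-normalized count matrix.

```python
# MLE of a Markov transition matrix from observed (from_state, to_state)
# rating-migration pairs: count transitions, then normalize each row.

def mle_transition_matrix(transitions, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for i, j in transitions:
        counts[i][j] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 0.0 for c in row])
    return P

# hypothetical migrations among 3 rating classes, for illustration only
migrations = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 2), (1, 1), (1, 1), (2, 2)]
P = mle_transition_matrix(migrations, 3)
```

Each row of P is then a conditional distribution over next-period ratings; the paper's likelihood couples these rows to the unobserved macroeconomic tendency variables.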
NASA Astrophysics Data System (ADS)
Durner, Maximilian; Márton, Zoltán.; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin
2017-03-01
In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.
A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2015-10-01
In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved a maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature. PMID:27054199
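A hedged sketch of the classification step only (toy discrete HMMs of our own standing in for the models trained on DAE features): a test sequence is assigned to the class whose HMM yields the larger log-likelihood, computed here with a scaled forward algorithm.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (initial distribution pi, transition matrix A, emission matrix B)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for t in range(1, len(obs) + 1):
        c = sum(alpha)                      # scaling factor
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        if t == len(obs):
            break
        alpha = [sum(alpha[s] * A[s][s2] for s in range(n)) * B[s2][obs[t]]
                 for s2 in range(n)]
    return loglik

# hypothetical 2-state models: "sticky" vs fast-switching internal states
pi = [0.5, 0.5]
A_mci = [[0.9, 0.1], [0.1, 0.9]]
A_ctl = [[0.5, 0.5], [0.5, 0.5]]
B = [[0.8, 0.2], [0.2, 0.8]]
seq = [0, 0, 0, 0, 1, 1, 1, 1]   # long runs favour the sticky model
label = ("MCI" if forward_loglik(seq, pi, A_mci, B)
         > forward_loglik(seq, pi, A_ctl, B) else "control")
```

The class-conditional likelihoods here play the role of the per-class generative models in the paper; the real pipeline feeds DAE-compressed rs-fMRI features rather than toy symbols.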
Driving style recognition method using braking characteristics based on hidden Markov model
Wu, Chaozhong; Lyu, Nengchao; Huang, Zhen
2017-01-01
Given the advantages of hidden Markov models in dealing with time-series data, three driving styles (aggressive, moderate and mild) are modeled with hidden Markov models based on driver braking characteristics in order to identify driving style efficiently. Firstly, braking impulse and the maximum braking unit area of the vacuum booster within a certain time are collected from braking operations, and general braking and emergency braking characteristics are extracted to code the braking characteristics. Secondly, the braking behavior observation sequences are used to set the initial parameters of the hidden Markov models, and a model is trained for each driving style so that observation sequences can be judged against them. Thirdly, the maximum log-likelihood of an observation sequence is computed from the observable parameters of each model. The recognition accuracy of the algorithm is verified through experiments and compared against two common pattern recognition algorithms. The results show that driving style discrimination based on the hidden Markov model algorithm can effectively discriminate driving styles. PMID:28837580
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
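As a hedged baseline for the kind of posterior the paper samples (without the reversibility or sparsity constraints, which are the Gibbs sampler's actual contribution): with independent Dirichlet priors on the rows, each row's posterior given transition counts is again Dirichlet and can be drawn directly.

```python
import random

# Bayesian uncertainty of an (unconstrained) Markov transition matrix:
# posterior of each row given counts is Dirichlet(counts + prior).

def sample_dirichlet(alphas, rng):
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def posterior_samples(counts, n_samples, prior=1.0, seed=0):
    rng = random.Random(seed)
    return [[sample_dirichlet([c + prior for c in row], rng)
             for row in counts]
            for _ in range(n_samples)]

counts = [[40, 10], [5, 45]]   # hypothetical observed transition counts
draws = posterior_samples(counts, 2000)
mean01 = sum(P[0][1] for P in draws) / len(draws)   # posterior mean of P[0][1]
```

The spread of `P[0][1]` across draws quantifies the model uncertainty; enforcing detailed balance on these draws is exactly where the paper's Gibbs sampler goes beyond this sketch.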
Hrcek, Jan; Miller, Scott E; Whitfield, James B; Shima, Hiroshi; Novotny, Vojtech
2013-10-01
The processes maintaining the enormous diversity of herbivore-parasitoid food webs depend on parasitism rate and parasitoid host specificity. The two parameters have to be evaluated in concert to make conclusions about the importance of parasitoids as natural enemies and guide biological control. We document parasitism rate and host specificity in a highly diverse caterpillar-parasitoid food web encompassing 266 species of lepidopteran hosts and 172 species of hymenopteran or dipteran parasitoids from a lowland tropical forest in Papua New Guinea. We found that semi-concealed hosts (leaf rollers and leaf tiers) represented 84% of all caterpillars, suffered a higher parasitism rate than exposed caterpillars (12 vs. 5%) and their parasitoids were also more host specific. Semi-concealed hosts may therefore be generally more amenable to biological control by parasitoids than exposed ones. Parasitoid host specificity was highest in Braconidae, lower in Diptera: Tachinidae, and, unexpectedly, the lowest in Ichneumonidae. This result challenges the long-standing view of low host specificity in caterpillar-attacking Tachinidae and suggests higher suitability of Braconidae and lower suitability of Ichneumonidae for biological control of caterpillars. Semi-concealed hosts and their parasitoids are the largest, yet understudied component of caterpillar-parasitoid food webs. However, they still remain much closer in parasitism patterns to exposed hosts than to what literature reports on fully concealed leaf miners. Specifically, semi-concealed hosts keep an equally low share of idiobionts (2%) as exposed caterpillars.
Open Markov Processes and Reaction Networks
NASA Astrophysics Data System (ADS)
Swistock Pollard, Blake Stephen
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
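A minimal numerical illustration of the boundary-driven steady states described above (a uniform 4-state chain of our own, not an example from the thesis): clamping the boundary probabilities and setting the interior time derivatives to zero yields a linear system whose solution carries a constant probability current through the chain.

```python
# Open Markov process: chain 0-1-2-3 with unit hopping rates, boundary
# states 0 and 3 clamped at probabilities p0 and p3 by external couplings.
# Interior balance (zero time derivative):
#   p0 + p2 - 2*p1 = 0  and  p1 + p3 - 2*p2 = 0,
# a 2x2 linear system solved here in closed form.

def steady_state_interior(p0, p3):
    p1 = (2 * p0 + p3) / 3
    p2 = (p0 + 2 * p3) / 3
    return p1, p2

p0, p3 = 1.0, 0.0
p1, p2 = steady_state_interior(p0, p3)
current = p0 - p1   # probability current on bond 0->1 (equal on every bond)
```

For unequal boundary probabilities the steady state interpolates linearly and the current is non-zero, i.e. a non-equilibrium steady state maintained by the boundary couplings, as in the dissipation-minimization result.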
2015-01-01
We explore anion-induced interface fluctuations near protein–water interfaces using coarse-grained representations of interfaces as proposed by Willard and Chandler (J. Phys. Chem. B 2010, 114, 1954−1958). We use umbrella sampling molecular dynamics to compute potentials of mean force along a reaction coordinate bridging the state where the anion is fully solvated and one where it is biased via harmonic restraints to remain at the protein–water interface. Specifically, we focus on fluctuations of an interface between water and a hydrophobic region of hydrophobin-II (HFBII), a 71 amino acid residue protein expressed by filamentous fungi and known for its ability to form hydrophobically mediated self-assemblies at interfaces such as a water/air interface. We consider the anions chloride and iodide that have been shown previously by simulations as displaying specific-ion behaviors at aqueous liquid–vapor interfaces. We find that as in the case of a pure liquid–vapor interface, at the hydrophobic protein–water interface, the larger, less charge-dense iodide anion displays a marginal interfacial stability compared with that of the smaller, more charge-dense chloride anion. Furthermore, consistent with the results at aqueous liquid–vapor interfaces, we find that iodide induces larger fluctuations of the protein–water interface than chloride. PMID:24701961
Markov models of genome segmentation
NASA Astrophysics Data System (ADS)
Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram
2007-01-01
We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
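A hedged sketch of the zeroth-order (non-Markov) version of the segmentation criterion: cut the sequence where the Jensen-Shannon divergence between the left and right symbol distributions is maximal; the paper's higher-order variants replace symbol frequencies with Markov (k-mer transition) statistics.

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a symbol-count dictionary."""
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

def counts_of(seq):
    d = {}
    for s in seq:
        d[s] = d.get(s, 0) + 1
    return d

def js_divergence(left, right):
    """Weighted Jensen-Shannon divergence between two segments."""
    nl, nr = len(left), len(right)
    n = nl + nr
    return (entropy(counts_of(left + right))
            - (nl / n) * entropy(counts_of(left))
            - (nr / n) * entropy(counts_of(right)))

def best_split(seq):
    """Split point maximizing the JS divergence."""
    scores = {i: js_divergence(seq[:i], seq[i:]) for i in range(1, len(seq))}
    return max(scores, key=scores.get)

cut = best_split("AAAAATTTTT")   # compositional boundary at position 5
```

Recursing on the two halves (with a significance threshold) yields the full segmentation; using higher-order Markov statistics in place of `counts_of` is what lets boundaries land at biologically meaningful positions.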
A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data
Jiang, Fei; Haneuse, Sebastien
2016-01-01
In the analysis of semi-competing risks data interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. 
Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
Wells, James W; Cowled, Chris J; Darling, David; Guinn, Barbara-Ann; Farzaneh, Farzin; Noble, Alistair; Galea-Lauri, Joanna
2007-12-01
Alloreactive T-cell responses are known to result in the production of large amounts of proinflammatory cytokines capable of activating and maturing dendritic cells (DC). However, it is unclear whether these allogeneic responses could also act as an adjuvant for concurrent antigen-specific responses. To examine effects of simultaneous alloreactive and antigen-specific T-cell responses induced by semi-allogeneic DC. Semi-allogeneic DC were generated from the F(1) progeny of inbred strains of mice (C57BL/6 and C3H, or C57BL/6 and DBA). We directly primed antigen-specific CD8(+) and CD4(+) T-cells from OT-I and OT-II mice, respectively, in the absence of allogeneic responses, in vitro, and in the presence or absence of alloreactivity in vivo. In vitro, semi-allogeneic DC cross-presented ovalbumin (OVA) to naïve CD8(+) OT-I transgenic T-cells, primed naïve CD4(+) OT-II transgenic T-cells and could stimulate strong alloreactive T-cell proliferation in a primary mixed lymphocyte reaction (MLR). In vivo, semi-allogeneic DC migrated efficiently to regional lymph nodes but did not survive there as long as autologous DC. In addition, they were not able to induce cytotoxic T-lymphocyte (CTL) activity to a target peptide, and only weakly stimulated adoptively transferred OT-II cells. The CD4(+) response was unchanged in allo-tolerized mice, indicating that alloreactive T-cell responses could not provide help for concurrently activated antigen-specific responses. In an EL4 tumour-treatment model, vaccination with semi-allogeneic DC/EL4 fusion hybrids, but not allogeneic DC/EL4 hybrids, significantly increased mouse survival. Expression of self-Major histocompatibility complex (MHC) by semi-allogeneic DC can cause the induction of antigen-specific immunity, however, concurrently activated allogeneic bystander responses do not provide helper or adjuvant effects.
Infrared welding process on composite: Effect of interdiffusion at the welding interface
NASA Astrophysics Data System (ADS)
Asseko, André Chateau Akué; Lafranche, Éric; Cosson, Benoît; Schmidt, Fabrice; Le Maoult, Yannick
2016-10-01
In this study, the effects of the welding temperature field developed during the infrared assembly process on the joining properties of glass fibre reinforced polycarbonate / unreinforced polycarbonate with carbon black were investigated. The temperature field and the contact time together govern the quality of adhesion at the welding interface. The effect of the semi-transparent glass fibre reinforced polycarbonate composite / unreinforced polycarbonate composite with carbon black interface was quantified in terms of the quadratic distance of diffusion, or diffusion depth, through the welding interface. Microstructural characterizations were carried out in order to inspect the quality of the welding zones and to observe their failure modes. Diffusion theory was then applied to calculate the variation of the quadratic distance of diffusion versus time at different locations. Complete self-diffusion is assumed to occur only at temperatures above the polycarbonate glass transition temperature (140°C) and with a quadratic distance of diffusion greater than the mean square end-to-end distance.
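As a rough, generic illustration of the quantity being tracked (a Fickian estimate with a made-up diffusivity, not the paper's model or values): the quadratic (rms) distance of interdiffusion grows as the square root of the contact time, so adhesion completes once this depth exceeds the chain's mean square end-to-end distance.

```python
import math

# Fickian rms interdiffusion depth after contact time t at diffusivity D.
def diffusion_depth_nm(D_m2_s, t_s):
    return math.sqrt(2 * D_m2_s * t_s) * 1e9   # metres -> nanometres

D = 1e-18   # m^2/s, a hypothetical diffusivity above Tg, for illustration only
depths = [diffusion_depth_nm(D, t) for t in (1, 10, 100)]   # grows as sqrt(t)
```

This sqrt(t) growth is why both the temperature field (through D) and the contact time jointly set the weld quality at a given interface location.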
Settling of a sphere through a fluid-fluid interface: influence of the Reynolds number
NASA Astrophysics Data System (ADS)
Pierson, Jean-Lou; Magnaudet, Jacques
2015-11-01
When a particle sediments through a horizontal fluid-fluid interface (a situation frequently encountered in oceanography as well as in coating processes), it often tows a tail of the upper fluid into the lower one. This feature is observed in both inertia- and viscosity-dominated regimes. Nevertheless, the tail evolution and the particle motion are found to depend strongly on the ratio of the two effects, i.e. on the Reynolds number. In this work we study numerically the settling of a sphere through a horizontal fluid-fluid interface using an Immersed Boundary Method combined with a Volume of Fluid approach. To gain more insight into the underlying physical mechanisms, we combine this computational approach with a semi-analytical description based on the concept of the Darwin ''drift'', which allows us to predict the interface evolution, and hence the thickness of the film encapsulating the sphere, in the two limits of Stokes flow and potential flow. This work was funded by DGA, whose financial support is greatly appreciated.
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
NASA Astrophysics Data System (ADS)
Romînu, Roxana Otilia; Sinescu, Cosmin; Romînu, Mihai; Negrutiu, Meda; Laissue, Philippe; Mihali, Sorin; Cuc, Lavinia; Hughes, Michael; Bradu, Adrian; Podoleanu, Adrian
2008-09-01
Bonding has become a routine procedure in several dental specialties - from prosthodontics to conservative dentistry and even orthodontics. In many of these fields it is important to be able to investigate the bonded interfaces to assess their quality. All currently employed investigative methods are invasive, meaning that samples are destroyed in the testing procedure and cannot be used again. We have investigated the interface between human enamel and bonded ceramic brackets non-invasively, introducing a combination of new investigative methods - optical coherence tomography (OCT) and confocal microscopy (CM). Brackets were conventionally bonded on conditioned buccal surfaces of teeth, and the bonding was assessed using these methods. Three-dimensional reconstructions of the detected material defects were developed using manual and semi-automatic segmentation. The results clearly show that OCT and CM are useful in orthodontic bonding investigations.
GRACKLE: a chemistry and cooling library for astrophysics
NASA Astrophysics Data System (ADS)
Smith, Britton D.; Bryan, Greg L.; Glover, Simon C. O.; Goldbaum, Nathan J.; Turk, Matthew J.; Regan, John; Wise, John H.; Schive, Hsi-Yu; Abel, Tom; Emerick, Andrew; O'Shea, Brian W.; Anninos, Peter; Hummels, Cameron B.; Khochfar, Sadegh
2017-04-01
We present the GRACKLE chemistry and cooling library for astrophysical simulations and models. GRACKLE provides a treatment of non-equilibrium primordial chemistry and cooling for H, D and He species, including H2 formation on dust grains; tabulated primordial and metal cooling; multiple ultraviolet background models; and support for radiation transfer and arbitrary heat sources. The library has an easily implementable interface for simulation codes written in C, C++ and FORTRAN as well as a PYTHON interface with added convenience functions for semi-analytical models. As an open-source project, GRACKLE provides a community resource for accessing and disseminating astrochemical data and numerical methods. We present the full details of the core functionality, the simulation and PYTHON interfaces, testing infrastructure, performance and range of applicability. GRACKLE is a fully open-source project and new contributions are welcome.
Optimal choice of word length when comparing two Markov sequences using a χ²-statistic.
Bai, Xin; Tang, Kujin; Ren, Jie; Waterman, Michael; Sun, Fengzhu
2017-10-03
Alignment-free sequence comparison using counts of word patterns (k-grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains, and the likelihood ratio test or the corresponding approximate χ²-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r₁ and r₂, respectively. We show through both simulations and theoretical studies that the optimal k = max(r₁, r₂) + 1 for both long sequences and next-generation sequencing (NGS) read data. The orders of the Markov chains may be unknown, and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences.
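The word-count χ²-statistic referenced above can be sketched in a few lines. The following is a minimal, illustrative version (not the authors' implementation): it counts overlapping k-mers in two sequences and forms the usual chi-square statistic against expected counts pooled under the null hypothesis of a shared word-usage distribution; per the result above, k would be set to max(r₁, r₂) + 1.

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping length-k words in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def chi2_word_statistic(seq1, seq2, k):
    """Chi-square statistic comparing k-mer usage of two sequences.

    For each word, observed counts in each sequence are compared with
    expectations under the null hypothesis that both sequences share
    the same word-usage distribution (counts pooled across sequences).
    """
    c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
    n1, n2 = sum(c1.values()), sum(c2.values())
    stat = 0.0
    for w in set(c1) | set(c2):
        pooled = (c1[w] + c2[w]) / (n1 + n2)
        for observed, n in ((c1[w], n1), (c2[w], n2)):
            expected = n * pooled
            stat += (observed - expected) ** 2 / expected
    return stat

# Identical sequences give a (numerically) zero statistic;
# divergent word usage gives a large value.
same = chi2_word_statistic("ACGTACGT", "ACGTACGT", 2)
diff = chi2_word_statistic("AAAAAAAA", "TTTTTTTT", 1)
```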
Minsley, Burke J.
2011-01-01
A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, ‘best’ model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favorably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment.
Fast-slow asymptotics for a Markov chain model of fast sodium current
NASA Astrophysics Data System (ADS)
Starý, Tomáš; Biktashev, Vadim N.
2017-09-01
We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.
Lozenge Tiling Dynamics and Convergence to the Hydrodynamic Equation
NASA Astrophysics Data System (ADS)
Laslier, Benoît; Toninelli, Fabio Lucio
2018-03-01
We study a reversible continuous-time Markov dynamics of a discrete (2 + 1)-dimensional interface. This can be alternatively viewed as a dynamics of lozenge tilings of the L × L torus, or as a conservative dynamics for a two-dimensional system of interlaced particles. The particle interlacement constraints imply that the equilibrium measures are far from being product Bernoulli: particle correlations decay like the inverse distance squared and interface height fluctuations behave on large scales like a massless Gaussian field. We consider a particular choice of the transition rates, originally proposed in Luby et al. (SIAM J Comput 31:167-192, 2001): in terms of interlaced particles, a particle jump of length n that preserves the interlacement constraints has rate 1/(2n). This dynamics presents special features: the average mutual volume between two interface configurations decreases with time (Luby et al. 2001) and a certain one-dimensional projection of the dynamics is described by the heat equation (Wilson in Ann Appl Probab 14:274-325, 2004). In this work we prove a hydrodynamic limit: after a diffusive rescaling of time and space, the height function evolution tends as L → ∞ to the solution of a non-linear parabolic PDE. The initial profile is assumed to be C²-differentiable and to contain no "frozen region". The explicit form of the PDE was recently conjectured (Laslier and Toninelli in Ann Henri Poincaré Theor Math Phys 18:2007-2043, 2017) on the basis of local equilibrium considerations. In contrast with the hydrodynamic equation for the Langevin dynamics of the Ginzburg-Landau model (Funaki and Spohn in Commun Math Phys 85:1-36, 1997; Nishikawa in Commun Math Phys 127:205-227, 2003), here the mobility coefficient turns out to be a non-trivial function of the interface slope.
Probabilistic co-adaptive brain-computer interfacing
NASA Astrophysics Data System (ADS)
Bryan, Matthew J.; Martin, Stefan A.; Cheung, Willy; Rao, Rajesh P. N.
2013-12-01
Objective. Brain-computer interfaces (BCIs) are confronted with two fundamental challenges: (a) the uncertainty associated with decoding noisy brain signals, and (b) the need for co-adaptation between the brain and the interface so as to cooperatively achieve a common goal in a task. We seek to mitigate these challenges. Approach. We introduce a new approach to brain-computer interfacing based on partially observable Markov decision processes (POMDPs). POMDPs provide a principled approach to handling uncertainty and achieving co-adaptation in the following manner: (1) Bayesian inference is used to compute posterior probability distributions (‘beliefs’) over brain and environment state, and (2) actions are selected based on entire belief distributions in order to maximize total expected reward; by employing methods from reinforcement learning, the POMDP’s reward function can be updated over time to allow for co-adaptive behaviour. Main results. We illustrate our approach using a simple non-invasive BCI which optimizes the speed-accuracy trade-off for individual subjects based on the signal-to-noise characteristics of their brain signals. We additionally demonstrate that the POMDP BCI can automatically detect changes in the user’s control strategy and can co-adaptively switch control strategies on-the-fly to maximize expected reward. Significance. Our results suggest that the framework of POMDPs offers a promising approach for designing BCIs that can handle uncertainty in neural signals and co-adapt with the user on an ongoing basis. The fact that the POMDP BCI maintains a probability distribution over the user’s brain state allows a much more powerful form of decision making than traditional BCI approaches, which have typically been based on the output of classifiers or regression techniques. Furthermore, the co-adaptation of the system allows the BCI to make online improvements to its behaviour, adjusting itself automatically to the user’s changing circumstances.
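The belief computation at the heart of any POMDP controller is a Bayesian update over hidden states. A minimal sketch follows, with a hypothetical two-state "brain state" model and made-up transition/observation tables; the paper's actual models are not reproduced here.

```python
def belief_update(belief, action, observation, T, O):
    """Bayesian POMDP belief update:
    b'(s') ∝ O[s'][observation] * sum_s T[s][action][s'] * b(s)."""
    states = range(len(belief))
    new_b = [O[s2][observation] *
             sum(T[s][action][s2] * belief[s] for s in states)
             for s2 in states]
    total = sum(new_b)
    return [p / total for p in new_b]

# Hypothetical 2-state model: s0 = "rest", s1 = "intend to move";
# one action; observations 0 = weak decoder output, 1 = strong.
T = [[[0.9, 0.1]],          # T[s][action][s']
     [[0.2, 0.8]]]
O = [[0.7, 0.3],            # O[s'][observation]
     [0.2, 0.8]]
b = belief_update([0.5, 0.5], action=0, observation=1, T=T, O=O)
# A strong observation shifts belief toward the "intend to move" state.
```

Actions are then chosen against the whole belief vector b rather than a single decoded state, which is what distinguishes this from a plain classifier output.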
Charge Inversion in semi-permeable membranes
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Sinha, Shayandev; Jing, Haoyuan
The role of semi-permeable membranes such as the lipid bilayer is ubiquitous in a myriad of physiological and pathological phenomena. Typically, lipid membranes are impermeable to ions and solutes; however, protein channels embedded in the membrane allow the passage of selective, small ions across the membrane, enabling the membrane to adopt a semi-permeable nature. This semi-permeability, in turn, leads to an electrostatic potential jump across the membrane, with effects such as the regulation of intracellular calcium, extracellular-vesicle-membrane interactions, etc. In this study, we theoretically demonstrate that this semi-permeable nature may trigger a most remarkable charge inversion (CI) phenomenon on the cytosol side of a negatively charged lipid bilayer membrane that is selectively permeable to only the positive ions of a given salt. This CI is manifested as a change in the sign of the electrostatic potential from negative to positive from the membrane-cytosol interface to deep within the cytosol. We study the impact of parameters such as the concentration of this salt with selectively permeable ions, as well as the concentration of an external salt, on the development of this CI phenomenon. We anticipate that such CI will profoundly influence the interaction of the membrane with intra-cellular moieties (e.g., exosomes or multi-cellular vesicles), with implications for a host of biophysical processes.
Saccade selection when reward probability is dynamically manipulated using Markov chains.
Nummela, Samuel U; Lovejoy, Lee P; Krauzlis, Richard J
2008-05-01
Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200-600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection.
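The task's dynamic reward assignment can be sketched as a first-order Markov chain over which stimulus is rewarded. Below is a minimal simulation with a hypothetical transition matrix (the study's actual matrices are not given here).

```python
import random

def simulate_reward_sequence(P, n_trials, seed=0):
    """Simulate which stimulus is rewarded on each trial.

    P[i][j] is the probability the rewarded stimulus moves from i to j
    between trials (hypothetical matrix, not the study's)."""
    rng = random.Random(seed)
    state, seq = 0, []
    for _ in range(n_trials):
        seq.append(state)
        state = rng.choices(range(len(P)), weights=P[state])[0]
    return seq

# Reward tends to remain on the same stimulus, occasionally jumping.
P = [[0.7, 0.1, 0.1, 0.1],
     [0.1, 0.7, 0.1, 0.1],
     [0.1, 0.1, 0.7, 0.1],
     [0.1, 0.1, 0.1, 0.7]]
seq = simulate_reward_sequence(P, 400)
```

Because this chain is symmetric, its stationary distribution is uniform over the four stimuli; a matching subject would select each stimulus about a quarter of the time, while a maximizing subject would track the currently rewarded stimulus.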
Classification of customer lifetime value models using Markov chain
NASA Astrophysics Data System (ADS)
Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi
2017-10-01
A firm’s potential future reward from a customer can be quantified by customer lifetime value (CLV). Several mathematical methods exist to calculate it; one uses a Markov chain stochastic model. Here, a customer is assumed to pass through a set of states, with transitions between states following the Markov property. Given the states of a customer and the relationships between them, we can construct Markov models that describe the customer’s behaviour. In these models, CLV is defined as a vector whose entries give the lifetime value of a customer starting in each state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states, each refinement addressing weaknesses of the previous model. The final models are expected to describe the real behaviour of a firm’s customers.
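As a concrete instance, the CLV vector of a discounted Markov model satisfies v = R + dPv and can be computed by fixed-point iteration. The sketch below uses a hypothetical two-state (active/churned) customer model, not one of the paper's models.

```python
def clv_vector(P, R, d, n_iter=500):
    """Customer lifetime value per starting state for a Markov model.

    CLV = sum_{t>=0} d^t P^t R, computed by the fixed-point iteration
    v <- R + d * P v, which converges for discount 0 <= d < 1.
    P: transition matrix, R: expected per-period reward per state."""
    n = len(P)
    v = [0.0] * n
    for _ in range(n_iter):
        v = [R[i] + d * sum(P[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v

# Hypothetical two-state model: "active" customer vs "churned".
P = [[0.8, 0.2],   # active stays active with probability 0.8
     [0.0, 1.0]]   # churned is absorbing
R = [100.0, 0.0]   # active customers yield 100 per period
v = clv_vector(P, R, d=0.9)
```

For this two-state example the iteration reproduces the closed form v_active = 100 / (1 - 0.9 * 0.8); richer models in the paper's classification add more states while keeping the same vector equation.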
Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.
Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng
2018-01-01
In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovations, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model, which identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.
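For reference, the GJR-GARCH(1,1) conditional-variance recursion underlying the model (without the Markov-switching, copula, or EVT layers) can be sketched as follows; the parameter values are illustrative only.

```python
def gjr_garch_variance(returns, omega, alpha, gamma, beta, sigma2_0):
    """Conditional variance path of a GJR-GARCH(1,1) model:

        sigma2_t = omega + (alpha + gamma * I_{t-1}) * r_{t-1}^2
                   + beta * sigma2_{t-1},

    where I_{t-1} = 1 if the previous return was negative, else 0
    (the leverage effect). Parameter values are illustrative only."""
    sigma2 = [sigma2_0]
    for r in returns:
        leverage = 1.0 if r < 0 else 0.0
        sigma2.append(omega + (alpha + gamma * leverage) * r * r
                      + beta * sigma2[-1])
    return sigma2

path = gjr_garch_variance([0.01, -0.02, 0.005],
                          omega=1e-6, alpha=0.05, gamma=0.1, beta=0.9,
                          sigma2_0=1e-4)
```

With gamma > 0, a negative return of a given size raises next-period variance more than an equal positive return; the Markov-switching layer of the paper additionally lets (omega, alpha, gamma, beta) jump between regimes.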
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. Our main results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains; and show that parallel composition does not increase asymptotic distance.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
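The core variance-reduction idea — driving paired draws with u and 1 − u through an inverse Poisson CDF so the pair is negatively correlated — can be illustrated independently of tau-leaping. The toy estimator below is a sketch of that idea only, not the paper's algorithm.

```python
import math
import random

def poisson_inv(u, lam):
    """Inverse CDF of Poisson(lam): smallest k with F(k) >= u."""
    k = 0
    p = cdf = math.exp(-lam)
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def antithetic_mean(f, lam, n_pairs, seed=0):
    """Estimate E[f(N)], N ~ Poisson(lam), from antithetic pairs.

    Each uniform u drives one draw and 1 - u drives its negatively
    correlated partner; averaging each pair reduces variance while
    keeping the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(poisson_inv(u, lam))
                        + f(poisson_inv(1.0 - u, lam)))
    return total / n_pairs

est = antithetic_mean(lambda n: n, lam=5.0, n_pairs=2000)
```

In the paper the same trick is applied to whole lattice-chain trajectories (each Poisson increment of one path paired with its antithetic partner), which is why only control of the pseudorandom inputs is needed.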
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Plaisier, Christopher L; Bare, J Christopher; Baliga, Nitin S
2011-07-01
Transcriptome profiling studies have produced staggering numbers of gene co-expression signatures for a variety of biological systems. A significant fraction of these signatures will be partially or fully explained by miRNA-mediated targeted transcript degradation. miRvestigator takes as input lists of co-expressed genes from Caenorhabditis elegans, Drosophila melanogaster, Gallus gallus, Homo sapiens, Mus musculus or Rattus norvegicus and identifies the specific miRNAs that are likely to bind to 3' untranslated region (UTR) sequences to mediate the observed co-regulation. The novelty of our approach is the miRvestigator hidden Markov model (HMM) algorithm, which systematically computes a similarity P-value for each unique miRNA seed sequence from the miRNA database miRBase to an overrepresented sequence motif identified within the 3'-UTR of the query genes. We have made this miRNA discovery tool accessible to the community by integrating our HMM algorithm with a proven algorithm for de novo discovery of miRNA seed sequences and wrapping these algorithms into a user-friendly interface. The miRvestigator web server also produces a list of putative miRNA binding sites within 3'-UTRs of the query transcripts to facilitate the design of validation experiments. The miRvestigator is freely available at http://mirvestigator.systemsbiology.net.
ERIC Educational Resources Information Center
Kayser, Brian D.
The fit of educational aspirations of Illinois rural high school youths to 3 related one-parameter mathematical models was investigated. The models used were the continuous-time Markov chain model, the discrete-time Markov chain, and the Poisson distribution. The sample of 635 students responded to questionnaires from 1966 to 1969 as part of an…
The spectral method and the central limit theorem for general Markov chains
NASA Astrophysics Data System (ADS)
Nagaev, S. V.
2017-12-01
We consider Markov chains with an arbitrary phase space and develop a modification of the spectral method that enables us to prove the central limit theorem (CLT) for non-uniformly ergodic Markov chains. The conditions imposed on the transition function are more general than those by Athreya-Ney and Nummelin. Our proof of the CLT is purely analytical.
SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere
NASA Astrophysics Data System (ADS)
Creasey, Peter; Lang, Annika
2018-04-01
SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm based on Markov properties and fast Fourier transforms in 1D that generates samples on an n × n grid in O(n² log n) time and efficiently derives the necessary conditional covariance matrices.
Detecting critical state before phase transition of complex systems by hidden Markov model
NASA Astrophysics Data System (ADS)
Liu, Rui; Chen, Pei; Li, Yongjun; Chen, Luonan
Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during the gradual parameter variations. Such dynamics of phase transition is generally composed of three stages, i.e., before-transition state, pre-transition state, and after-transition state, which can be considered as three different Markov processes. Thus, based on this dynamical feature, we present a novel computational method, i.e., hidden Markov model (HMM), to detect the switching point of the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of the imminent phase transitions of complex systems based on the simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e., the acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma.
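The switch-point detection can be illustrated with a minimal two-state left-to-right HMM decoded by the Viterbi algorithm. The Gaussian emission models below are hypothetical stand-ins for the paper's observation models: the before-transition state emits small fluctuations around zero, the pre-transition state larger ones.

```python
import math

def gauss_lp(mu, var):
    """Log-density of a Gaussian observation model."""
    return lambda x: -0.5 * (math.log(2 * math.pi * var)
                             + (x - mu) ** 2 / var)

def viterbi_two_state(obs, p_stay, emis_logprob):
    """Most likely state path for a 2-state left-to-right HMM.

    State 0 (before-transition) may switch once to state 1
    (pre-transition) and never back; emis_logprob[s] gives the
    observation log-likelihood in state s."""
    log = math.log
    trans = [[log(p_stay), log(1 - p_stay)],
             [float("-inf"), 0.0]]          # state 1 is absorbing
    v = [emis_logprob[0](obs[0]), float("-inf")]
    back = []
    for x in obs[1:]:
        step, new_v = [], []
        for s2 in (0, 1):
            best = max((0, 1), key=lambda s: v[s] + trans[s][s2])
            new_v.append(v[best] + trans[best][s2] + emis_logprob[s2](x))
            step.append(best)
        back.append(step)
        v = new_v
    state = max((0, 1), key=lambda s: v[s])
    path = [state]
    for step in reversed(back):
        state = step[state]
        path.append(state)
    return path[::-1]

# Fluctuations jump as the system nears the transition.
obs = [0.1, -0.2, 0.0, 2.5, 3.1, 2.8]
path = viterbi_two_state(obs, p_stay=0.95,
                         emis_logprob=[gauss_lp(0.0, 0.25),
                                       gauss_lp(3.0, 0.25)])
```

The index where the decoded path first enters state 1 is the estimated switch from the stationary to the time-varying Markov process, i.e. the early-warning signal.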
NASA Astrophysics Data System (ADS)
Orlić, Ivica; Mekterović, Darko; Mekterović, Igor; Ivošević, Tatjana
2015-11-01
VIBA-Lab is a computer program originally developed by the author and co-workers at the National University of Singapore (NUS) as an interactive software package for simulation of Particle Induced X-ray Emission and Rutherford Backscattering Spectra. The original program has been redeveloped into VIBA-Lab 3.0, in which the user can perform semi-quantitative analysis by comparing simulated and measured spectra as well as simulate 2D elemental maps for a given 3D sample composition. The latest version has a new and more versatile user interface. It also has the latest data set of fundamental parameters such as Coster-Kronig transition rates, fluorescence yields, mass absorption coefficients and ionization cross sections for K and L lines in a wider energy range than the original program. Our short-term plan is to introduce a routine for quantitative analysis for multiple PIXE and XRF excitations. VIBA-Lab is an excellent teaching tool for students and researchers in using PIXE and RBS techniques. At the same time the program helps when planning an experiment and when optimizing experimental parameters such as incident ions, their energy, detector specifications, filters, geometry, etc. By "running" a virtual experiment the user can test various scenarios until the optimal PIXE and BS spectra are obtained and in this way save a lot of expensive machine time.
Formalisms for user interface specification and design
NASA Technical Reports Server (NTRS)
Auernheimer, Brent J.
1989-01-01
The application of formal methods to the specification and design of human-computer interfaces is described. A broad outline of human-computer interface problems, a description of the field of cognitive engineering and two relevant research results, the appropriateness of formal specification techniques, and potential NASA application areas are described.
Systems engineering interfaces: A model based approach
NASA Astrophysics Data System (ADS)
Fosse, E.; Delp, C. L.
The engineering of interfaces is a critical function of the discipline of Systems Engineering. Included in interface engineering are instances of interaction. Interfaces provide the specifications of the relevant properties of a system or component that can be connected to other systems or components while instances of interaction are identified in order to specify the actual integration to other systems or components. Current Systems Engineering practices rely on a variety of documents and diagrams to describe interface specifications and instances of interaction. The SysML[1] specification provides a precise model based representation for interfaces and interface instance integration. This paper will describe interface engineering as implemented by the Operations Revitalization Task using SysML, starting with a generic case and culminating with a focus on a Flight System to Ground Interaction. The reusability of the interface engineering approach presented as well as its extensibility to more complex interfaces and interactions will be shown. Model-derived tables will support the case studies shown and are examples of model-based documentation products.
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as a main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to by-pass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters.
The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute the forward solutions.
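The acceptance rule at the core of each chain is the Metropolis-Hastings step; the reversible-jump version adds trans-dimensional birth/death moves on the layer count with a generalized acceptance ratio. A fixed-dimension sketch on a toy posterior (not the MT forward problem):

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n_samples, seed=0):
    """Fixed-dimension Metropolis-Hastings sampler, symmetric proposal.

    A candidate x' = x + U(-step, step) is accepted with probability
    min(1, post(x') / post(x)); otherwise the chain stays at x.
    The paper's rjMcMC extends this with birth/death moves that
    change the number of model layers."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        cand = x + rng.uniform(-step, step)
        log_ratio = log_post(cand) - log_post(x)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x = cand
        samples.append(x)
    return samples

# Smoke test: sample a standard-normal log-posterior.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                              step=1.0, n_samples=20000)
```

In the full inversion, log_post would evaluate the MT forward model against the data; running many such chains independently is what makes the MPI parallelization above nearly embarrassingly parallel.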
Exactly solvable model of the two-dimensional electrical double layer.
Samaj, L; Bajnok, Z
2005-12-01
We consider equilibrium statistical mechanics of a simplified model for the ideal conductor electrode in an interface contact with a classical semi-infinite electrolyte, modeled by the two-dimensional Coulomb gas of pointlike unit charges in the stability-against-collapse regime of reduced inverse temperatures 0 ≤ β < 2. If there is a potential difference between the bulk interior of the electrolyte and the grounded electrode, the electrolyte region close to the electrode (known as the electrical double layer) carries some nonzero surface charge density. The model is mappable onto an integrable semi-infinite sine-Gordon theory with Dirichlet boundary conditions. The exact form-factor and boundary state information gained from the mapping provide asymptotic forms of the charge and number density profiles of electrolyte particles at large distances from the interface. The result for the asymptotic behavior of the induced electric potential, related to the charge density via the Poisson equation, confirms the validity of the concept of renormalized charge and the corresponding saturation hypothesis. It is documented on the nonperturbative result for the asymptotic density profile at a strictly nonzero β that the Debye-Hückel β → 0 limit is a delicate issue.
Geometric modeling of the temporal bone for cochlea implant simulation
NASA Astrophysics Data System (ADS)
Todd, Catherine A.; Naghdy, Fazel; O'Leary, Stephen
2004-05-01
The first stage in the development of a clinically valid surgical simulator for training otologic surgeons in performing cochlea implantation is presented. For this purpose, a geometric model of the temporal bone has been derived from a cadaver specimen using the biomedical image processing software package Analyze (AnalyzeDirect, Inc) and its three-dimensional reconstruction is examined. Simulator construction begins with registration and processing of a Computer Tomography (CT) medical image sequence. Important anatomical structures of the middle and inner ear are identified and segmented from each scan in a semi-automated threshold-based approach. Linear interpolation between image slices produces a three-dimensional volume dataset: the geometrical model. Artefacts are effectively eliminated using a semi-automatic seeded region-growing algorithm and unnecessary bony structures are removed. Once validated by an Ear, Nose and Throat (ENT) specialist, the model may be imported into the Reachin Application Programming Interface (API) (Reachin Technologies AB) for visual and haptic rendering associated with a virtual mastoidectomy. Interaction with the model is realized with haptics interfacing, providing the user with accurate torque and force feedback. Electrode array insertion into the cochlea will be introduced in the final stage of design.
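The semi-automatic seeded region-growing step mentioned above can be sketched as a breadth-first flood fill under an intensity threshold. The tiny image below is a made-up stand-in for a CT slice.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Seeded region growing on a 2D grayscale image.

    Starting from the seed pixel, repeatedly add 4-connected
    neighbours whose intensity is within `tol` of the seed value.
    Returns the set of (row, col) pixels in the grown region."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A bright structure (~200) embedded in a darker background (~50).
img = [[50, 50, 50, 50],
       [50, 200, 210, 50],
       [50, 205, 200, 50],
       [50, 50, 50, 50]]
region = region_grow(img, seed=(1, 1), tol=20)
```

The "semi-automatic" aspect corresponds to the user placing the seed and choosing the tolerance; running the fill per slice and stacking the results yields the 3D segmented volume described above.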
PIQMIe: a web server for semi-quantitative proteomics data management and analysis
Kuzniar, Arnold; Kanaar, Roland
2014-01-01
We present the Proteomics Identifications and Quantitations Data Management and Integration Service, or PIQMIe, which aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry-based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a lightweight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through a RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. PMID:24861615
The PAWS and STEM reliability analysis programs
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Stevenson, Philip H.
1988-01-01
The PAWS and STEM programs are new design/validation tools. These programs provide a flexible, user-friendly, language-based interface for the input of Markov models describing the behavior of fault-tolerant computer systems. They produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. PAWS uses a Padé approximation as its solution technique; STEM uses a Taylor series. Both programs can solve numerically stiff models. PAWS and STEM possess complementary properties with regard to their input space, and an additional strength is that they accept input compatible with the SURE program. Used in conjunction with SURE, PAWS and STEM form a powerful suite of programs for analyzing the reliability of fault-tolerant computer systems.
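As a rough illustration of the Taylor-series solution technique that STEM applies, the sketch below propagates the state probabilities of a small, hypothetical 3-state continuous-time Markov fault model. The rates and the model structure are invented for illustration, and a plainly truncated Taylor series like this is only adequate for non-stiff problems (handling stiffness is exactly where the real tools earn their keep):

```python
def vec_mat(v, M):
    """Row vector v times matrix M."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def taylor_solve(p0, Q, t, terms=60):
    """p(t) = p0 * exp(Q*t), summed term by term as p0 * sum_k (Q*t)^k / k!."""
    p = list(p0)
    term = list(p0)
    for k in range(1, terms):
        term = [x * t / k for x in vec_mat(term, Q)]
        p = [a + b for a, b in zip(p, term)]
    return p

# Hypothetical 3-state fault model: 0 = fault-free, 1 = one unit failed
# (recoverable), 2 = system failure (absorbing). Rates are illustrative.
lam, mu = 1e-3, 1e-1              # failure rate, recovery rate (per hour)
Q = [[-2 * lam, 2 * lam, 0.0],
     [mu, -(mu + lam), lam],
     [0.0, 0.0, 0.0]]
p = taylor_solve([1.0, 0.0, 0.0], Q, t=10.0)
failure_prob = p[2]
```

Since each row of the generator sums to zero, the computed probability vector stays normalized, which is a handy sanity check on the series evaluation.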
Behavioral and Temporal Pattern Detection Within Financial Data With Hidden Information
2012-02-01
Subject terms: runtime verification, hidden data, hidden Markov models, formal specifications. Only fragments of the abstract survive: they describe a probabilistic pattern detector used to monitor behavioral and temporal patterns hidden within financial data, position the technique as a hybrid applicable to sequences in many fields besides financial systems, and outline sections covering the operation of the pattern detector and of the probabilistic pattern-matching monitor.
Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection
2010-05-12
Only fragments of the abstract survive: they describe diffusion-geometry methods that generalize wavelets and similar scaling mechanisms, applied to matching biological spectra across a database of hyperspectral pathology slides acquired with different instruments under different conditions, and compare conventional nearest-neighbor search with a diffusion search on pathology-slide data.
Discriminative Learning with Markov Logic Networks
2009-10-01
Tuyen N. Huynh, Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712. Only fragments of the abstract survive: statistical relational learning is an emerging area of research that addresses the problem of learning from noisy structured/relational data, and Markov logic networks (MLNs), sets of weighted first-order formulas, are one such formalism.
Building a semi-automatic ontology learning and construction system for geosciences
NASA Astrophysics Data System (ADS)
Babaie, H. A.; Sunderraman, R.; Zhu, Y.
2013-12-01
We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and to merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and to maximize their quality, usability, and adoption by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that allows direct participation by geoscientists who have no experience in the design and development of domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowdsource and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment.
MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using just HTML5 and JavaScript, thereby avoiding the cumbersome server-side coding present in traditional approaches. The JSON format used in MongoDB is also suitable for storing and querying RDF data. We will store the domain ontologies and associated linked data in JSON/RDF formats. Our Web interface will be built upon the open-source and configurable WebProtege ontology editor. We will develop a simplified mobile version of our user interface which will automatically detect the hosting device and adjust the user interface layout to accommodate different screen sizes. We will also use the Semantic MediaWiki, which allows the user to store and query the data within the wiki pages. By using HTML5, JavaScript, and WebGL, we aim to create an interactive, dynamic, and multi-dimensional user interface that presents various geosciences data sets in a natural and intuitive way.
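The triple-per-document JSON/RDF layout hinted at above might look as follows. The field names, class names, and the tiny filter function are assumptions standing in for a MongoDB find() query, not details of the described system:

```python
import json

# Hypothetical JSON encoding of ontology triples, one document per triple.
# Subjects, predicates, and objects here are invented geoscience examples.
triples = [
    {"s": "Basalt", "p": "rdfs:subClassOf", "o": "VolcanicRock"},
    {"s": "VolcanicRock", "p": "rdfs:subClassOf", "o": "IgneousRock"},
    {"s": "Basalt", "p": "hasTexture", "o": "Aphanitic"},
]

def match(store, **pattern):
    """Return documents whose fields equal every key/value in `pattern`
    (a tiny in-memory stand-in for a MongoDB find() filter)."""
    return [t for t in store if all(t.get(k) == v for k, v in pattern.items())]

doc = json.dumps(triples[0])          # documents serialize directly to JSON
parents = match(triples, s="Basalt", p="rdfs:subClassOf")
```

The appeal of this layout is that the same dictionaries serve both as JSON payloads for an HTML5/JavaScript front end and as queryable RDF-style records.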
Neuron’s eye view: Inferring features of complex stimuli from neural responses
Chen, Xin; Beck, Jeffrey M.
2017-01-01
Experiments that study neural encoding of stimuli at the level of individual neurons typically choose a small set of features present in the world—contrast and luminance for vision, pitch and intensity for sound—and assemble a stimulus set that systematically varies along these dimensions. Subsequent analysis of neural responses to these stimuli typically focuses on regression models, with experimenter-controlled features as predictors and spike counts or firing rates as responses. Unfortunately, this approach requires knowledge in advance about the relevant features coded by a given population of neurons. For domains as complex as social interaction or natural movement, however, the relevant feature space is poorly understood, and an arbitrary a priori choice of features may give rise to confirmation bias. Here, we present a Bayesian model for exploratory data analysis that is capable of automatically identifying the features present in unstructured stimuli based solely on neuronal responses. Our approach is unique within the class of latent state space models of neural activity in that it assumes that firing rates of neurons are sensitive to multiple discrete time-varying features tied to the stimulus, each of which has Markov (or semi-Markov) dynamics. That is, we are modeling neural activity as driven by multiple simultaneous stimulus features rather than intrinsic neural dynamics. We derive a fast variational Bayesian inference algorithm and show that it correctly recovers hidden features in synthetic data, as well as ground-truth stimulus features in a prototypical neural dataset. To demonstrate the utility of the algorithm, we also apply it to cluster neural responses and demonstrate successful recovery of features corresponding to monkeys and faces in the image set. PMID:28827790
Passot, Sixtine; Moreno-Ortega, Beatriz; Moukouanga, Daniel; Balsera, Crispulo; Guyomarc'h, Soazig; Lucas, Mikael; Lobet, Guillaume; Laplaze, Laurent; Muller, Bertrand; Guédon, Yann
2018-05-11
Recent progress in root phenotyping has focused mainly on increasing throughput for genetic studies while identifying root developmental patterns has been comparatively underexplored. We introduce a new phenotyping pipeline for producing high-quality spatio-temporal root system development data and identifying developmental patterns within these data. The SmartRoot image analysis system and temporal and spatial statistical models were applied to two cereals, pearl millet (Pennisetum glaucum) and maize (Zea mays). Semi-Markov switching linear models were used to cluster lateral roots based on their growth rate profiles. These models revealed three types of lateral roots with similar characteristics in both species. The first type corresponds to fast and accelerating roots, the second to rapidly arrested roots, and the third to an intermediate type where roots cease elongation after a few days. These types of lateral roots were retrieved in different proportions in a maize mutant affected in auxin signaling, while the first most vigorous type was absent in maize plants exposed to severe shading. Moreover, the classification of growth rate profiles was mirrored by a ranking of anatomical traits in pearl millet. Potential dependencies in the succession of lateral root types along the primary root were then analyzed using variable-order Markov chains. The lateral root type was not influenced by the shootward neighbor root type or by the distance from this root. This random branching pattern of primary roots was remarkably conserved, despite the high variability of root systems in both species. Our phenotyping pipeline opens the door to exploring the genetic variability of lateral root developmental patterns. © 2018 American Society of Plant Biologists. All rights reserved.
Development of a brain MRI-based hidden Markov model for dementia recognition.
Chen, Ying; Pham, Tuan D
2013-01-01
Dementia is an age-related cognitive decline indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help us understand disease development and contribute to early prediction and prevention, but building a model that can best capture brain structural variability and remain valid for both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Regularity dimension and semi-variogram were used to extract structural features of the brains, and vector quantization (VQ) was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then used to estimate parameters for the HMMs. To validate accuracy and robustness, experiments were carried out on individuals characterized as either non-demented or having mild Alzheimer's disease. Four HMMs were constructed for cohorts of non-demented young, middle-aged, and elderly subjects and of demented elderly subjects, respectively. Classification was carried out on a data set including both non-demented and demented individuals over a wide age range. The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved better classification accuracy than related works using other classifiers. The results show the ability of the proposed modeling approach to recognize early dementia, and the findings will allow individual classification to support early diagnosis and prediction of dementia. The brain MRI-based HMMs developed in this research are efficient and robust, and can easily be used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia.
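The final classification step — scoring a sequence of VQ indices against class-specific discrete HMMs and picking the most likely class — can be sketched with the scaled forward algorithm. All parameters below are made up for illustration and do not come from the study:

```python
import math

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transition matrix, B: emission probs,
    obs: sequence of VQ indices), using per-step scaling."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_l = 0.0
    for o in obs[1:]:
        s = sum(alpha)
        log_l += math.log(s)
        alpha = [a / s for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    log_l += math.log(sum(alpha))
    return log_l

# Two illustrative 2-state HMMs over 2 VQ symbols (all numbers invented):
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B_young = [[0.9, 0.1], [0.8, 0.2]]   # mostly emits symbol 0
B_old = [[0.2, 0.8], [0.1, 0.9]]     # mostly emits symbol 1
obs = [1, 1, 0, 1, 1, 1]
# Classify by maximum likelihood over the class-specific HMMs.
label = ("old" if forward_loglik(pi, A, B_old, obs)
         > forward_loglik(pi, A, B_young, obs) else "young")
```

One HMM per cohort, trained on that cohort's VQ sequences, turns recognition into a simple argmax over per-model likelihoods.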
An XML-based system for the flexible classification and retrieval of clinical practice guidelines.
Ganslandt, T.; Mueller, M. L.; Krieglstein, C. F.; Senninger, N.; Prokosch, H. U.
2002-01-01
Beneficial effects of clinical practice guidelines (CPGs) have not yet reached expectations due to limited routine adoption. Electronic distribution and reminder systems have the potential to overcome implementation barriers. Existing electronic CPG repositories like the National Guideline Clearinghouse (NGC) provide individual access but lack standardized computer-readable interfaces necessary for automated guideline retrieval. The aim of this paper was to facilitate automated context-based selection and presentation of CPGs. Using attributes from the NGC classification scheme, an XML-based metadata repository was successfully implemented, providing document storage, classification and retrieval functionality. Semi-automated extraction of attributes was implemented for the import of XML guideline documents using XPath. A hospital information system interface was exemplarily implemented for diagnosis-based guideline invocation. Limitations of the implemented system are discussed and possible future work is outlined. Integration of standardized computer-readable search interfaces into existing CPG repositories is proposed. PMID:12463831
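Semi-automated attribute extraction with XPath, as mentioned above, can be sketched with Python's standard library. The guideline document, element names, and attribute names here are invented for illustration and are not the NGC classification scheme:

```python
import xml.etree.ElementTree as ET

# Hypothetical guideline metadata document (all names are illustrative).
xml_doc = """
<guideline id="g42">
  <title>Management of acute cholecystitis</title>
  <classification>
    <attribute name="disease" code="ICD-10:K81.0"/>
    <attribute name="specialty" code="surgery"/>
  </classification>
</guideline>
"""

root = ET.fromstring(xml_doc)
title = root.findtext("title")
# ElementTree supports a limited XPath subset, enough for attribute lookup:
codes = [a.get("code")
         for a in root.findall(".//attribute[@name='disease']")]
```

A diagnosis-based hospital information system interface could then match a patient's ICD code against the extracted `codes` to invoke the relevant guideline.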
Wang, Xin; Su, Xia; Sun, Wentao; Xie, Yanming; Wang, Yongyan
2011-10-01
In post-marketing studies of traditional Chinese medicine (TCM), pharmacoeconomic evaluation has important applied significance. However, the economic literature on TCM has been unable to fully and accurately reflect the unique overall outcomes of treatment with TCM. Given the special nature of TCM itself, we recommend that the Markov model be introduced into post-marketing pharmacoeconomic evaluation of TCM, and we explore the feasibility of applying the model. A Markov model can extrapolate the study time horizon, is compatible with TCM effectiveness indicators, and provides a measurable comprehensive outcome. In addition, Markov modeling can promote the development of TCM quality-of-life scales and of the methodology of post-marketing pharmacoeconomic evaluation.
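A minimal Markov cohort model of the kind recommended above might look like this. The states, transition probabilities, costs, and utilities are all illustrative placeholders, not data from any TCM evaluation:

```python
def run_cohort(p0, P, costs, utilities, cycles):
    """Markov cohort model: propagate a state distribution through `cycles`
    transitions, accumulating expected cost and QALYs per cycle."""
    dist = list(p0)
    total_cost = total_qaly = 0.0
    n = len(p0)
    for _ in range(cycles):
        total_cost += sum(dist[i] * costs[i] for i in range(n))
        total_qaly += sum(dist[i] * utilities[i] for i in range(n))
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return total_cost, total_qaly

# Illustrative 3-state model (well / sick / dead); all numbers are made up.
P = [[0.85, 0.10, 0.05],
     [0.20, 0.65, 0.15],
     [0.00, 0.00, 1.00]]
cost, qaly = run_cohort([1.0, 0.0, 0.0], P,
                        costs=[100.0, 1500.0, 0.0],
                        utilities=[0.9, 0.5, 0.0],
                        cycles=20)
```

Extrapolating the time horizon simply means running more cycles, which is exactly the strength of Markov models that the abstract highlights.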
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently the Multi-Level algorithm was introduced as a general-purpose solver for steady-state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed to exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian elimination are used for solving the individual blocks.
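The structure of an NCD chain — tightly coupled blocks with only weak coupling between them — can be illustrated with a naive power-iteration solver. This is neither the Multi-Level nor the KMS algorithm; its slow convergence on NCD chains (the subdominant eigenvalue sits near 1) is precisely what those special-purpose methods address:

```python
def steady_state(P, tol=1e-12, max_iter=100000):
    """Steady-state distribution of an irreducible Markov chain by
    power iteration on pi = pi * P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# A toy NCD chain: two tightly coupled blocks {0,1} and {2,3} with
# weak (eps = 1e-3) coupling between them. Numbers are illustrative.
eps = 1e-3
P = [[0.7 - eps, 0.3, eps, 0.0],
     [0.4, 0.6 - eps, 0.0, eps],
     [eps, 0.0, 0.5 - eps, 0.5],
     [0.0, eps, 0.5, 0.5 - eps]]
pi = steady_state(P)
```

Because each state leaks to the other block at the same rate eps, the two blocks carry equal stationary mass here, which gives a quick correctness check.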
Policy Transfer via Markov Logic Networks
NASA Astrophysics Data System (ADS)
Torrey, Lisa; Shavlik, Jude
We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez-Serra, Maria Victoria
2016-09-12
The research objective of this proposal is the computational modeling of the metal-electrolyte interface purely from first principles. The accurate calculation of the electrostatic potential at electrically biased metal-electrolyte interfaces is a current challenge for periodic "ab initio" simulations. It is also an essential requisite for predicting the correspondence between the macroscopic voltage and the microscopic interfacial charge distribution in electrochemical fuel cells. This interfacial charge distribution is the result of the chemical bonding between solute and metal atoms, and therefore cannot be accurately calculated with semi-empirical classical force fields. The project aims to study in detail the structure and dynamics of aqueous electrolytes at metallic interfaces, taking into account the effect of the electrode potential. Another side of the project is to produce an accurate method to simulate the water/metal interface. While both experimental and theoretical surface scientists have made a great deal of progress on the understanding and characterization of atomistic structures and reactions at the solid/vacuum interface, the theoretical description of electrochemical interfaces still lags behind. A reason for this is that a complete and accurate first-principles description of both the liquid and the metal interfaces is still computationally too expensive and complex, since their characteristics are governed by the explicit atomic and electronic structure built at the interface in response to environmental conditions. This project will characterize in detail how different theoretical levels of modeling describe the metal/water interface. In particular, the role of van der Waals interactions will be carefully analyzed and prescriptions for performing accurate simulations will be produced.
Modeling the coupled return-spread high frequency dynamics of large tick assets
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Lillo, Fabrizio
2015-01-01
Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models on Nasdaq stocks and show that the model reproduces remarkably well the statistical properties of real data.
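A Markov switching model of the kind described — a latent spread state driving the distribution of tick-size price changes — can be simulated in a few lines. The transition matrix and emission distributions below are invented for illustration and are not the paper's calibrated parameters:

```python
import random

def simulate(T, A, emit, seed=0):
    """Simulate a Markov switching model: a latent Markov state (the spread)
    selects the distribution from which each price change (in ticks) is drawn."""
    rng = random.Random(seed)
    state, path, changes = 0, [], []
    for _ in range(T):
        path.append(state)
        values, weights = emit[state]
        changes.append(rng.choices(values, weights)[0])
        state = rng.choices(range(len(A)), A[state])[0]
    return path, changes

# Illustrative parameters: state 0 = one-tick spread (price mostly still),
# state 1 = opened spread (larger moves more likely). Numbers are made up.
A = [[0.95, 0.05],
     [0.30, 0.70]]
emit = {0: ([-1, 0, 1], [0.05, 0.90, 0.05]),
        1: ([-2, -1, 0, 1, 2], [0.10, 0.25, 0.30, 0.25, 0.10])}
states, price_changes = simulate(10000, A, emit)
```

Even in this toy version, persistence of the latent state produces the clustering of large price changes that the abstract describes for real large-tick assets.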
Quantum Enhanced Inference in Markov Logic Networks
NASA Astrophysics Data System (ADS)
Wittek, Peter; Gogolin, Christian
2017-04-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
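The classical Gibbs sampling that the abstract takes as its baseline can be sketched for a toy Ising-style chain of binary variables. The pairwise agreement factors below are a stand-in for grounded MLN clauses, not an actual MLN implementation, and all weights are invented:

```python
import math
import random

def gibbs_marginals(weights, n_vars, edges, sweeps=5000, seed=1):
    """Gibbs sampling over binary variables of a small Markov network whose
    pairwise factors contribute exp(w) when an edge's endpoints agree."""
    rng = random.Random(seed)
    x = [0] * n_vars
    counts = [0] * n_vars
    for _ in range(sweeps):
        for i in range(n_vars):
            # Score x_i = 0 vs x_i = 1 from the factors touching variable i.
            score = [0.0, 0.0]
            for (a, b), w in zip(edges, weights):
                if i == a or i == b:
                    other = x[b] if i == a else x[a]
                    score[other] += w  # value matching the neighbor gains w
            p1 = math.exp(score[1]) / (math.exp(score[0]) + math.exp(score[1]))
            x[i] = 1 if rng.random() < p1 else 0
        for i in range(n_vars):
            counts[i] += x[i]
    return [c / sweeps for c in counts]

# Three variables in a chain; positive weights favor agreement. By symmetry
# (no evidence is clamped), every marginal should be near 0.5.
marginals = gibbs_marginals(weights=[1.5, 1.5], n_vars=3, edges=[(0, 1), (1, 2)])
```

The quantum protocols discussed in the abstract target exactly this sampling loop, aiming to prepare the Gibbs distribution faster than sweep-by-sweep MCMC.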
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered representative of the phenotypes. We consider robust stationary control policies with the best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
Jia, Chen
2017-09-01
Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
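The removal of a fast state from a two-time-scale chain — folding indirect transitions through that state into effective direct rates — can be sketched as follows. The single-state elimination rule and the toy generator below are illustrative assumptions, not the paper's full infinite-state construction:

```python
def eliminate_state(Q, k):
    """Remove fast state k from generator Q: transitions through k are folded
    into direct rates, weighted by k's exit probabilities Q[k][j] / (-Q[k][k])."""
    n = len(Q)
    exit_rate = -Q[k][k]
    keep = [i for i in range(n) if i != k]
    R = [[0.0] * (n - 1) for _ in range(n - 1)]
    for a, i in enumerate(keep):
        for b, j in enumerate(keep):
            # direct rate plus the indirect rate via the fast state k
            R[a][b] = Q[i][j] + Q[i][k] * (Q[k][j] / exit_rate)
        R[a][a] = -sum(R[a][b] for b in range(n - 1) if b != a)
    return R

# Gene-expression-style toy chain: state 1 is fast (total leaving rate 100),
# so the reduced chain couples states 0 and 2 through it. Rates are made up.
Q = [[-1.0, 1.0, 0.0],
     [50.0, -100.0, 50.0],
     [0.0, 0.2, -0.2]]
R = eliminate_state(Q, 1)
```

Here the effective 0 → 2 rate is the entry rate into the fast state (1.0) times the probability 50/100 of leaving it toward state 2, i.e. 0.5, matching the picture of bursts as fast transition paths.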