Identifying a Probabilistic Boolean Threshold Network From Samples.
Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya
2018-04-01
This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions, focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and of triplets and ℓ-tuplets of such functions, necessitates a deepening of the theoretical analyses. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, including: 1) PBNs in which any number of threshold functions can be assigned, provided that all have the same number of input variables, and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudopolynomial time but remains co-NP complete.
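As a concrete illustration of the objects studied here, the following Python sketch (not from the paper) evaluates unit-coefficient Boolean threshold functions and checks the equivalence of two such functions by brute force; the paper's point is that equivalence can be decided in pseudopolynomial time, whereas this naive check is exponential in the number of variables.

```python
from itertools import product

def threshold_fn(inputs, theta):
    """Unit-coefficient Boolean threshold function: outputs 1 iff at
    least theta of the selected input variables are 1."""
    return lambda x: int(sum(x[i] for i in inputs) >= theta)

def equivalent(f, g, n):
    """Naive equivalence check over all 2^n assignments (exponential);
    the paper studies much cheaper decision procedures."""
    return all(f(x) == g(x) for x in product((0, 1), repeat=n))

f = threshold_fn([0, 1, 2], 2)   # "at least 2 of x0, x1, x2" (majority)
g = threshold_fn([0, 1], 1)      # "x0 OR x1"
print(equivalent(f, g, n=3))     # False
```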
Generating probabilistic Boolean networks from a prescribed transition probability matrix.
Ching, W-K; Chen, X; Tsing, N-K
2009-11-01
Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.
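For orientation, the forward direction of this inverse problem is simple: a PBN that selects constituent deterministic networks with probabilities c_i has transition matrix P = sum_i c_i A_i, where each A_i is the 0/1 column-stochastic transition matrix of one Boolean network. A hedged Python sketch of that forward construction (toy networks chosen for illustration, not from the paper); the paper solves the reverse decomposition of a given P:

```python
import numpy as np

def bn_transition_matrix(f, n):
    """0/1 transition matrix of a deterministic BN on n genes:
    column j is the indicator of the successor of state j."""
    N = 2 ** n
    A = np.zeros((N, N))
    for j in range(N):
        x = [(j >> k) & 1 for k in range(n)]
        y = f(x)
        i = sum(b << k for k, b in enumerate(y))
        A[i, j] = 1.0
    return A

# Two toy constituent networks on 2 genes, selected with probabilities 0.7 / 0.3.
f1 = lambda x: [x[1], x[0]]            # swap the two genes
f2 = lambda x: [x[0] & x[1], x[1]]     # AND into gene 0
P = 0.7 * bn_transition_matrix(f1, 2) + 0.3 * bn_transition_matrix(f2, 2)
print(P)  # column-stochastic; the paper recovers such (A_i, c_i) from a given P
```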
A Comparison of Two Methods for Boolean Query Relevancy Feedback.
ERIC Educational Resources Information Center
Salton, G.; And Others
1984-01-01
Evaluates and compares two recently proposed automatic methods for relevance feedback of Boolean queries (Dillon method, which uses probabilistic approach as basis, and disjunctive normal form method). Conclusions are drawn concerning the use of effective feedback methods in a Boolean query environment. Nineteen references are included. (EJS)
Computing preimages of Boolean networks.
Klotz, Johannes; Bossert, Martin; Schober, Steffen
2013-01-01
In this paper we present an algorithm, based on the sum-product algorithm, that finds elements in the preimage of a feed-forward Boolean network given an output of the network. Our probabilistic method runs in linear time with respect to the number of nodes in the network. We evaluated our algorithm on randomly constructed Boolean networks and on a regulatory network of Escherichia coli and found that it gives a valid solution in most cases.
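As a baseline for what the sum-product method computes, here is a hedged exhaustive-search sketch (exponential in the number of inputs, unlike the paper's linear-time algorithm); the example network is invented for illustration:

```python
from itertools import product

def preimages(f, n, y):
    """All states x with f(x) == y, by exhaustive search over 2^n states.
    The paper's sum-product method finds preimage elements in time linear
    in the number of nodes for feed-forward networks."""
    return [x for x in product((0, 1), repeat=n) if tuple(f(x)) == tuple(y)]

# Toy feed-forward network with outputs (x0 AND x1, x1 OR x2).
f = lambda x: (x[0] & x[1], x[1] | x[2])
print(preimages(f, 3, (0, 1)))  # all 3-bit inputs mapping to output (0, 1)
```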
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is to develop control theory for gene regulatory networks (GRNs). In typical control of GRNs, the expression of some genes is inhibited (or activated) by manipulating external stimuli and the expression of other genes. Control theory for GRNs is expected to be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, in which gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for the control theory of GRNs. PMID:24587766
SETS. Set Equation Transformation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worrell, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
Probabilistic Relational Structures and Their Applications
ERIC Educational Resources Information Center
Domotor, Zoltan
The principal objects of the investigation reported were, first, to study qualitative probability relations on Boolean algebras, and secondly, to describe applications in the theories of probability logic, information, automata, and probabilistic measurement. The main contribution of this work is stated in 10 definitions and 20 theorems. The basic…
Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.
Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde
2015-08-28
In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each with a certain probability, at each discrete time point. We first define the synchronization of master-slave PBNs with probability one and then investigate when it holds. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability means that, for any initial states, the master BN can be synchronized by the slave BN with a certain probability, whereas synchronization with probability one means that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results. PMID:26315380
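The STP referred to here generalizes matrix multiplication to factors of mismatched dimensions. A minimal sketch using the standard definition A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p), where n is the column count of A and p the row count of B:

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p);
    reduces to ordinary matrix multiplication when dimensions match."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# In the algebraic form of a BN, TRUE is encoded as [1,0]^T and FALSE as
# [0,1]^T; the structure matrix of negation is M_n, and stp(M_n, x)
# evaluates NOT x.
M_n = np.array([[0, 1], [1, 0]])
x_true = np.array([[1], [0]])
print(stp(M_n, x_true))  # [[0],[1]], i.e. FALSE
```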
Lähdesmäki, Harri; Hautaniemi, Sampsa; Shmulevich, Ilya; Yli-Harja, Olli
2006-01-01
A significant amount of attention has recently been focused on the modeling of gene regulatory networks. Two frequently used large-scale modeling frameworks are Bayesian networks (BNs) and Boolean networks, the latter being a special case of its recent stochastic extension, probabilistic Boolean networks (PBNs). The PBN is a promising model class that generalizes the standard rule-based interactions of Boolean networks to the stochastic setting. Dynamic Bayesian networks (DBNs) are a general and versatile model class that is able to represent complex temporal stochastic processes and has also been proposed as a model for gene regulatory systems. In this paper, we concentrate on these two model classes and demonstrate that PBNs and a certain subclass of DBNs can represent the same joint probability distribution over their common variables. The major benefit of introducing the relationships between the models is that it opens up the possibility of applying the standard tools of DBNs to PBNs and vice versa. Hence, the standard learning tools of DBNs can be applied in the context of PBNs, and the inference methods give a natural way of handling the missing values in PBNs that are often present in gene expression measurements. Conversely, the tools for controlling the stationary behavior of the networks, tools for projecting networks onto sub-networks, and efficient learning schemes can be used for DBNs. In other words, the introduced relationships between the models extend the collection of analysis tools for both model classes. PMID:17415411
NASA Astrophysics Data System (ADS)
Caglar, Mehmet Umut; Pal, Ranadip
2011-03-01
The central dogma of molecular biology states that ``information cannot be transferred back from protein to either protein or nucleic acid''. However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of a system. These types of interactions are hard to analyze due to the lack of cell-level data and the probabilistic, nonlinear nature of the interactions. Several models are widely used to analyze and simulate these types of nonlinear interactions. Stochastic Master Equation (SME) models capture the probabilistic nature of the interactions in a detailed manner, at a high computational cost. On the other hand, Probabilistic Boolean Network (PBN) models give a coarse-scale picture of the stochastic processes at a lower computational cost. Differential Equation (DE) models give the time evolution of the mean values of the processes in a highly cost-effective way. Understanding the relations between the predictions of these models is important for assessing the reliability of simulations of genetic regulatory networks. In this work, the success of the mapping between SME, PBN and DE models is analyzed, and the accuracy and effectiveness of the control policies generated using the PBN and DE models are compared.
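To make the cost/fidelity trade-off concrete, the sketch below (a generic birth-death gene-expression toy, not the authors' models) estimates the mean of a stochastic master-equation-style model by Monte Carlo and compares it against the deterministic DE steady state k/g:

```python
import random

def gillespie_mean(k=10.0, g=1.0, t_end=5.0, runs=200, seed=0):
    """Monte Carlo mean of a birth-death process (production rate k,
    degradation rate g*x), a minimal stand-in for a stochastic
    master-equation model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        t, x = 0.0, 0
        while True:
            rate = k + g * x                 # total event rate
            t += rng.expovariate(rate)       # time to next event
            if t > t_end:
                break
            x += 1 if rng.random() < k / rate else -1  # birth vs. death
        total += x
    return total / runs

# Deterministic DE model dx/dt = k - g*x has steady state k/g = 10;
# the stochastic estimate approaches it, at far higher computational cost.
print(gillespie_mean())
```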
A Note about Information Science Research.
ERIC Educational Resources Information Center
Salton, Gerard
1985-01-01
Discusses the relationship between information science research and practice and briefly describes current research on 10 topics in information retrieval literature: vector processing retrieval strategy, probabilistic retrieval models, inverted file procedures, relevance feedback, Boolean query formulations, front-end procedures, citation…
Topology of Document Retrieval Systems.
ERIC Educational Resources Information Center
Everett, Daniel M.; Cater, Steven C.
1992-01-01
Explains the use of a topological structure to examine the closeness between documents in retrieval systems and analyzes the topological structure of a vector-space model, a fuzzy-set model, an extended Boolean model, a probabilistic model, and a TIRS (Topological Information Retrieval System) model. Proofs for the results are appended. (17…
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
Recent development and biomedical applications of probabilistic Boolean networks
2013-01-01
Probabilistic Boolean network (PBN) modelling is a semi-quantitative approach widely used for the study of the topology and dynamic aspects of biological systems. The combined use of rule-based representation and probability makes PBN appealing for large-scale modelling of biological networks where degrees of uncertainty need to be considered. A considerable expansion of our knowledge in the field of theoretical research on PBN can be observed over the past few years, with a focus on network inference, network intervention and control. With respect to areas of applications, PBN is mainly used for the study of gene regulatory networks though with an increasing emergence in signal transduction, metabolic, and also physiological networks. At the same time, a number of computational tools, facilitating the modelling and analysis of PBNs, are continuously developed. A concise yet comprehensive review of the state-of-the-art on PBN modelling is offered in this article, including a comparative discussion on PBN versus similar models with respect to concepts and biomedical applications. Due to their many advantages, we consider PBN to stand as a suitable modelling framework for the description and analysis of complex biological systems, ranging from molecular to physiological levels. PMID:23815817
bayesPop: Probabilistic Population Projections
Ševčíková, Hana; Raftery, Adrian E.
2016-01-01
We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects. PMID:28077933
An autocatalytic network model for stock markets
NASA Astrophysics Data System (ADS)
Caetano, Marco Antonio Leonel; Yoneyama, Takashi
2015-02-01
The stock prices of companies with businesses that are closely related within a specific sector of the economy might exhibit movement patterns and correlations in their dynamics. The idea in this work is to use the concept of an autocatalytic network to model such correlations and patterns in the trends exhibited by the expected returns. The trends are expressed in terms of positive or negative returns within each fixed time interval. The time series derived from these trends is then used to represent the movement patterns by a probabilistic Boolean network with transitions modeled as an autocatalytic network. The proposed method might be of value in short-term forecasting and identification of dependencies. The method is illustrated with a case study based on four stocks of companies in the fields of natural resources and technology.
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role in both the quality and the power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or a high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices, which result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
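A minimal sketch of the baseline the authors improve upon, random Boolean embedding, with assumed toy dimensions (signal length n = 256, m = 64 measurements, sparsity k = 5):

```python
import numpy as np

rng = np.random.default_rng(0)

# A Boolean sampling matrix projects an n-dimensional signal into m < n
# measurements using only 0/1 entries, so the acquisition front end needs
# additions rather than multiplications, which is the energy argument above.
n, m, k = 256, 64, 5
B = rng.integers(0, 2, size=(m, n)).astype(float)   # random Boolean embedding
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
y = B @ x                                           # compressed measurements
print(y.shape)  # (64,); reconstruction would use a sparse solver such as OMP
```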
Bell-Boole Inequality: Nonlocality or Probabilistic Incompatibility of Random Variables?
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2008-06-01
The main aim of this report is to inform the quantum information community about investigations of the problem of probabilistic compatibility of a family of random variables: the possibility of realizing such a family on the basis of a single probability measure (constructing a single Kolmogorov probability space). These investigations were started over a hundred years ago by J. Boole (who invented Boolean algebras). The complete solution of the problem was obtained by the Soviet mathematician Vorobjev in the 1960s. Surprisingly, probabilists and statisticians obtained inequalities for probabilities and correlations among which one can find the famous Bell inequality and its generalizations. Such inequalities appeared simply as constraints for probabilistic compatibility. In this framework one cannot see a priori any link to such problems as nonlocality and the “death of reality” which are typically linked to Bell-type inequalities in the physical literature. We analyze the difference between the positions of mathematicians and quantum physicists. In particular, we find that one of the most reasonable explanations of probabilistic incompatibility is the mixing, in Bell-type inequalities, of statistical data from a number of experiments performed under different experimental contexts.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including Stochastic Master Equations and Probabilistic Boolean Networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system under investigation in fine detail, but applying this model is computationally enormously expensive. On the other hand, the Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results from applying the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach compared to the commonly used Stochastic Simulation Algorithm at equivalent accuracy.
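The Zassenhaus formula referred to factors a matrix exponential of a sum into a product of exponentials; a hedged sketch of its second-order truncation and the resulting approximation error, on random matrices (illustrative only, not the paper's tensor machinery):

```python
import numpy as np
from scipy.linalg import expm

def zassenhaus2(A, B, t):
    """Second-order Zassenhaus truncation:
    exp(t(A+B)) ≈ exp(tA) exp(tB) exp(-t^2/2 [A,B])."""
    C = A @ B - B @ A                      # commutator [A, B]
    return expm(t * A) @ expm(t * B) @ expm(-(t ** 2) / 2 * C)

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
t = 0.1
exact = expm(t * (A + B))
print(np.linalg.norm(exact - expm(t * A) @ expm(t * B)))  # O(t^2) error
print(np.linalg.norm(exact - zassenhaus2(A, B, t)))       # O(t^3) error
```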
Inference of combinatorial Boolean rules of synergistic gene sets from cancer microarray datasets.
Park, Inho; Lee, Kwang H; Lee, Doheon
2010-06-15
Gene set analysis has become an important tool for the functional interpretation of high-throughput gene expression datasets. Moreover, pattern analyses based on inferred gene set activities of individual samples have shown the ability to identify more robust disease signatures than individual gene-based pattern analyses. Although a number of approaches have been proposed for gene set-based pattern analysis, the combinatorial influence of deregulated gene sets on disease phenotype classification has not been studied sufficiently. We propose a new approach for inferring combinatorial Boolean rules of gene sets for a better understanding of cancer transcriptome and cancer classification. To reduce the search space of the possible Boolean rules, we identify small groups of gene sets that synergistically contribute to the classification of samples into their corresponding phenotypic groups (such as normal and cancer). We then measure the significance of the candidate Boolean rules derived from each group of gene sets; the level of significance is based on the class entropy of the samples selected in accordance with the rules. By applying the present approach to publicly available prostate cancer datasets, we identified 72 significant Boolean rules. Finally, we discuss several identified Boolean rules, such as the rule of glutathione metabolism (down) and prostaglandin synthesis regulation (down), which are consistent with known prostate cancer biology. Scripts written in Python and R are available at http://biosoft.kaist.ac.kr/~ihpark/. The refined gene sets and the full list of the identified Boolean rules are provided in the Supplementary Material. Supplementary data are available at Bioinformatics online.
Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.
Mori, Fumito; Mochizuki, Atsushi
2017-07-14
Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics generally depend on network topology. Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, and is independent of network topology, provided only that a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle.
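The theorem can be checked empirically in the simplest case of uniformly random truth tables, where the expectation of one fixed point is classical; a Monte Carlo sketch (K = 2 inputs per node, parameters chosen for illustration, not the paper's more general setting):

```python
import random
from itertools import product

def count_fixed_points(n, k, rng):
    """Fixed points of one random Boolean network: n nodes, each with k
    randomly chosen inputs and a uniformly random truth table."""
    nets = [(rng.sample(range(n), k), [rng.randint(0, 1) for _ in range(2 ** k)])
            for _ in range(n)]
    count = 0
    for x in product((0, 1), repeat=n):
        nxt = tuple(tt[sum(x[i] << j for j, i in enumerate(inp))]
                    for inp, tt in nets)
        count += (nxt == x)
    return count

rng = random.Random(42)
trials = 2000
print(sum(count_fixed_points(6, 2, rng) for _ in range(trials)) / trials)
# ≈ 1.0, matching the topology-independent expectation proved above
```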
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning
ERIC Educational Resources Information Center
Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus
2008-01-01
When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…
Sampling studies to estimate the HIV prevalence rate in female commercial sex workers.
Pascom, Ana Roberta Pati; Szwarcwald, Célia Landmann; Barbosa Júnior, Aristides
2010-01-01
We investigated the sampling methods used to estimate the HIV prevalence rate among female commercial sex workers. The studies were classified according to the adequacy or not of the sample size for estimating the HIV prevalence rate and according to the sampling method (probabilistic or convenience). We identified 75 studies that estimated the HIV prevalence rate among female sex workers. Most of the studies employed convenience samples. The sample size was not adequate to estimate the HIV prevalence rate in 35 studies. The use of convenience samples limits statistical inference for the whole group. It was observed that there has been an increase in the number of published studies since 2005, as well as in the number of studies that used probabilistic samples. This represents a large advance in the monitoring of risk behavior practices and the HIV prevalence rate in this group.
BoolNet--an R package for generation, reconstruction and analysis of Boolean networks.
Müssel, Christoph; Hopfensitz, Martin; Kestler, Hans A
2010-05-15
As the study of information processing in living cells moves from individual pathways to complex regulatory networks, mathematical models and simulation become indispensable tools for analyzing the complex behavior of such networks and can provide deep insights into the functioning of cells. The dynamics of gene expression, for example, can be modeled with Boolean networks (BNs). These are mathematical models of low complexity, but have the advantage of being able to capture essential properties of gene-regulatory networks. However, current implementations of BNs only focus on different sub-aspects of this model and do not allow for a seamless integration into existing preprocessing pipelines. BoolNet efficiently integrates methods for synchronous, asynchronous and probabilistic BNs. This includes reconstructing networks from time series, generating random networks, robustness analysis via perturbation, Markov chain simulations, and identification and visualization of attractors. The package BoolNet is freely available from the R project at http://cran.r-project.org/ or http://www.informatik.uni-ulm.de/ni/mitarbeiter/HKestler/boolnet/ under Artistic License 2.0. hans.kestler@uni-ulm.de Supplementary data are available at Bioinformatics online.
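For readers unfamiliar with what such packages compute, here is a sketch (in Python, deliberately not the BoolNet R API) of exhaustive attractor identification for a synchronous Boolean network, one of the analyses listed above:

```python
def attractors(f, n):
    """Exhaustive attractor search for a synchronous Boolean network:
    iterate every state until it revisits itself and collect the cycles."""
    step = lambda s: tuple(f(s))
    found = set()
    for bits in range(2 ** n):
        s = tuple((bits >> i) & 1 for i in range(n))
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]
        cycle = [t for t, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= cycle_start]
        found.add(tuple(sorted(cycle)))   # canonical form of the cycle
    return found

# Toy 3-gene network: x0' = x1, x1' = x0 AND x2, x2' = NOT x0.
f = lambda x: (x[1], x[0] & x[2], 1 - x[0])
for a in attractors(f, 3):
    print(a)
```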
On spectral techniques in analysis of Boolean networks
NASA Astrophysics Data System (ADS)
Kesseli, Juha; Rämö, Pauli; Yli-Harja, Olli
2005-06-01
In this work we present results that can be used for analysis of Boolean networks. The results utilize Fourier spectra of the functions in the network. An accurate formula is given for Derrida plots of networks of finite size N based on a result on Boolean functions presented in another context. Derrida plots are widely used to examine the stability issues of Boolean networks. For the limit N→∞, we give a computationally simple form that can be used as a good approximation for rather small networks as well. A formula for Derrida plots of random Boolean networks (RBNs) presented earlier in the literature is given an alternative derivation. It is shown that the information contained in the Derrida plot is equal to the average Fourier spectrum of the functions in the network. In the case of random networks the mean Derrida plot can be obtained from the mean spectrum of the functions. The method is applied to real data by using the Boolean functions found in genetic regulatory networks of eukaryotic cells in an earlier study. Conventionally, Derrida plots and stability analysis have been computed with statistical sampling resulting in poorer accuracy.
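A Derrida plot records the average Hamming distance between the successors of state pairs at a given initial Hamming distance; the statistical-sampling estimate mentioned at the end can be sketched as follows (random K = 2 network, parameters invented for illustration):

```python
import random

def derrida_point(nets, n, d, samples, rng):
    """One Derrida-plot point by sampling: average successor Hamming
    distance for state pairs at Hamming distance d."""
    def step(x):
        return tuple(tt[sum(x[i] << j for j, i in enumerate(inp))]
                     for inp, tt in nets)
    total = 0
    for _ in range(samples):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        flip = rng.sample(range(n), d)                       # perturb d bits
        y = tuple(b ^ (i in flip) for i, b in enumerate(x))
        total += sum(a != b for a, b in zip(step(x), step(y)))
    return total / samples

rng = random.Random(7)
n, k = 20, 2
nets = [(rng.sample(range(n), k), [rng.randint(0, 1) for _ in range(2 ** k)])
        for _ in range(n)]
for d in (1, 2, 4, 8):
    print(d, derrida_point(nets, n, d, 2000, rng))
# for critical K=2 random networks the small-d slope is close to 1
```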
Sun, Mengyang; Cheng, Xianrui; Socolar, Joshua E S
2013-06-01
A common approach to the modeling of gene regulatory networks is to represent activating or repressing interactions using ordinary differential equations for target gene concentrations that include Hill function dependences on regulator gene concentrations. An alternative formulation represents the same interactions using Boolean logic with time delays associated with each network link. We consider the attractors that emerge from the two types of models in the case of a simple but nontrivial network: a figure-8 network with one positive and one negative feedback loop. We show that the different modeling approaches give rise to the same qualitative set of attractors with the exception of a possible fixed point in the ordinary differential equation model in which concentrations sit at intermediate values. The properties of the attractors are most easily understood from the Boolean perspective, suggesting that time-delay Boolean modeling is a useful tool for understanding the logic of regulatory networks.
A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng
2017-01-01
Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage concerns sensor placement that guarantees the needs of both coverage and connectivity. Existing works largely leverage the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with this analysis, we formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it onto a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
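Under the common independence assumption, the collaborative detection probability analyzed here has a closed form, 1 − Π_i(1 − p_i); a minimal sketch (the paper's exact collaboration model may differ in detail):

```python
# A target covered by independent sensors with detection probabilities
# p_1..p_m is detected with probability 1 - prod(1 - p_i); ϵ-coverage
# requires this to reach at least 1 - ϵ.
def collaborative_detection(ps):
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

print(collaborative_detection([0.6, 0.5, 0.4]))            # 0.88
eps = 0.1
print(collaborative_detection([0.6, 0.5, 0.4]) >= 1 - eps)  # False: needs more sensors
```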
On construction of stochastic genetic networks based on gene expression sequences.
Ching, Wai-Ki; Ng, Michael M; Fung, Eric S; Akutsu, Tatsuya
2005-08-01
Reconstruction of genetic regulatory networks from time series data of gene expression patterns is an important research topic in bioinformatics. Probabilistic Boolean Networks (PBNs) have been proposed as an effective model for gene regulatory networks. PBNs are able to cope with uncertainty, incorporate rule-based dependencies between genes, and discover the sensitivity of genes in their interactions with other genes. However, PBNs are unlikely to be used directly in practice because of the huge computational cost of obtaining the predictors and their corresponding probabilities. In this paper, we propose a multivariate Markov model for approximating PBNs and describing the dynamics of a genetic network from gene expression sequences. The main contribution of the new model is to preserve the strength of PBNs while reducing the complexity of the networks. The number of parameters of our proposed model is O(n^2), where n is the number of genes involved. We also develop efficient estimation methods for solving the model parameters. Numerical examples on synthetic data sets and practical yeast data sequences are given to demonstrate the effectiveness of the proposed model.
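A hedged sketch of the multivariate Markov update this model class typically uses, x_i(t+1) = Σ_j λ_{ij} P^{(ij)} x_j(t) with Σ_j λ_{ij} = 1, on a toy two-gene system (matrices and weights invented for illustration):

```python
import numpy as np

def mmc_step(x, P, lam):
    """One step of a multivariate Markov chain: the next distribution of
    gene i mixes one-step transitions from every gene j,
    x_i(t+1) = sum_j lam[i][j] * P[i][j] @ x_j(t)."""
    return [sum(lam[i][j] * (P[i][j] @ x[j]) for j in range(len(x)))
            for i in range(len(x))]

# Two binary genes; each P[i][j] is a 2x2 column-stochastic matrix.
P = [[np.array([[0.9, 0.2], [0.1, 0.8]]), np.array([[0.5, 0.5], [0.5, 0.5]])],
     [np.array([[0.3, 0.6], [0.7, 0.4]]), np.array([[0.8, 0.1], [0.2, 0.9]])]]
lam = [[0.7, 0.3], [0.4, 0.6]]   # n^2 mixing weights: hence O(n^2) parameters
x = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
print(mmc_step(x, P, lam))
```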
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.
2012-04-01
This work is driven by the needs of next-generation short-term optimization methodology for hydro power production. Stochastic optimization is about to be introduced, i.e., optimizing when the available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e., water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment perhaps days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e., to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of runoff forecasts as input; hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on the Box-Cox power transform and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample runoff ensembles from this model that inherit the catchment and lead-time dependencies. The methodology is tested and demonstrated on the Ulla-Førre river system, where simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study, such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency. Our model is evaluated using the CRPS for the marginal predictive distributions and the energy score for the joint predictive distribution. It is tested against a deterministic runoff forecast, a climatology forecast and a persistent forecast, and is found to be the best probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
Discounting of food, sex, and money.
Holt, Daniel D; Newquist, Matthew H; Smits, Rochelle R; Tiry, Andrew M
2014-06-01
Discounting is a useful framework for understanding choice involving a range of delayed and probabilistic outcomes (e.g., money, food, drugs), but relatively few studies have examined how people discount other commodities (e.g., entertainment, sex). Using a novel discounting task, where the length of a line represented the value of an outcome and was adjusted using a staircase procedure, we replicated previous findings showing that individuals discount delayed and probabilistic outcomes in a manner well described by a hyperbola-like function. In addition, we found strong positive correlations between discounting rates of delayed, but not probabilistic, outcomes. This suggests that discounting of delayed outcomes may be relatively predictable across outcome types but that discounting of probabilistic outcomes may depend more on specific contexts. The generality of delay discounting and potential context dependence of probability discounting may provide important information regarding factors contributing to choice behavior.
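The hyperbola-like form referred to is commonly written V = A/(1 + kD); a sketch fitting its discount-rate parameter k to hypothetical indifference points (the data below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(D, k):
    """Hyperbola-like discounting: V = A / (1 + k*D), with amount A = 1."""
    return 1.0 / (1.0 + k * D)

# Hypothetical indifference points (fraction of full value) at various delays.
delays = np.array([0, 7, 30, 90, 365], dtype=float)   # days
values = np.array([1.0, 0.9, 0.72, 0.5, 0.25])
(k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
print(f"fitted discount rate k = {k_hat:.4f}")
# For probabilistic outcomes the same form is typically used with the
# odds against, theta = (1 - p) / p, in place of the delay D.
```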
NASA Astrophysics Data System (ADS)
Kotb, Amer
2015-06-01
The modeling of an all-optical XNOR logic gate is realized by a series combination of XOR and INVERT gates. This Boolean function is simulated using Mach-Zehnder interferometers (MZIs) utilizing quantum-dot semiconductor optical amplifiers (QDs-SOAs). The study is carried out with the effect of amplified spontaneous emission (ASE) included. The dependence of the output quality factor (Q-factor) on the signal and QDs-SOA parameters is also investigated and discussed. The simulation is conducted at a repetition rate of ~1 Tb/s.
Hirabayashi, Yasuhiko; Ishii, Tomonori
2013-01-01
To seek the cutoff value of the 28-joint disease activity score using erythrocyte sedimentation rate (DAS28-ESR) that is necessary to achieve remission under the new Boolean-based criteria, we analyzed the data for 285 patients with rheumatoid arthritis registered between May 2008 and November 2009 by the Michinoku Tocilizumab Study Group and observed for 1 year after receiving tocilizumab (TCZ) in real clinical practice. Remission rates under the DAS28-ESR criteria and the Boolean criteria were assessed every 6 months after the first TCZ dose. The DAS28-ESR cutoff value necessary to achieve remission under the new criteria was analyzed by receiver operating characteristic (ROC) analysis. Data were analyzed using last observation carried forward. After 12 months of TCZ use, remission was achieved in 164 patients (57.5 %) by DAS28-ESR and 71 patients (24.9 %) under the new criteria for clinical trials. CRP levels scarcely affected remission rates, and the difference between remission rates defined by DAS28-ESR and by the new criteria was mainly due to patient global assessment (PGA). Improvement of PGA was inversely related to disease duration. ROC analysis revealed that the DAS28-ESR cutoff value necessary to predict remission under the new criteria for clinical trials was 1.54, with a sensitivity of 88.7 %, specificity of 85.5 %, positive predictive value of 67.0 %, and negative predictive value of 95.8 %. A DAS28-ESR cutoff value of 1.54 may be reasonable to predict achievement of remission under the new Boolean-based criteria for clinical trials in patients receiving TCZ.
Fertig, Elana J; Danilova, Ludmila V; Favorov, Alexander V; Ochs, Michael F
2011-01-01
Modeling of signal driven transcriptional reprogramming is critical for understanding of organism development, human disease, and cell biology. Many current modeling techniques discount key features of the biological sub-systems when modeling multiscale, organism-level processes. We present a mechanistic hybrid model, GESSA, which integrates a novel pooled probabilistic Boolean network model of cell signaling and a stochastic simulation of transcription and translation responding to a diffusion model of extracellular signals. We apply the model to simulate the well studied cell fate decision process of the vulval precursor cells (VPCs) in C. elegans, using experimentally derived rate constants wherever possible and shared parameters to avoid overfitting. We demonstrate that GESSA recovers (1) the effects of varying scaffold protein concentration on signal strength, (2) amplification of signals in expression, (3) the relative external ligand concentration in a known geometry, and (4) feedback in biochemical networks. We demonstrate that setting model parameters based on wild-type and LIN-12 loss-of-function mutants in C. elegans leads to correct prediction of a wide variety of mutants including partial penetrance of phenotypes. Moreover, the model is relatively insensitive to parameters, retaining the wild-type phenotype for a wide range of cell signaling rate parameters.
An Automated Design Framework for Multicellular Recombinase Logic.
Guiziou, Sarah; Ulliana, Federico; Moreau, Violaine; Leclere, Michel; Bonnet, Jerome
2018-05-18
Tools to systematically reprogram cellular behavior are crucial to address pressing challenges in manufacturing, environment, or healthcare. Recombinases can very efficiently encode Boolean and history-dependent logic in many species, yet current designs are performed on a case-by-case basis, limiting their scalability and requiring time-consuming optimization. Here we present an automated workflow for designing recombinase logic devices executing Boolean functions. Our theoretical framework uses a reduced library of computational devices distributed into different cellular subpopulations, which are then composed in various manners to implement all desired logic functions at the multicellular level. Our design platform called CALIN (Composable Asynchronous Logic using Integrase Networks) is broadly accessible via a web server, taking truth tables as inputs and providing corresponding DNA designs and sequences as outputs (available at http://synbio.cbs.cnrs.fr/calin ). We anticipate that this automated design workflow will streamline the implementation of Boolean functions in many organisms and for various applications.
E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyer, J.; Flach, G.
A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (Area_UAi/Area_SAi) for closure cap subsidence scenarios that differ in the assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
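The abstract does not give the model's internals; the following is a heavily hedged Monte Carlo sketch of the general idea only, with invented parameters (independent subsidence per compartment at an assumed probability):

```python
import random

def area_ratio_samples(n_compartments, p_subside, draws, seed=0):
    """Monte Carlo distribution of the intact-area to subsided-area ratio:
    each of n compartments independently subsides with probability
    p_subside; draws with zero subsided compartments are skipped."""
    rng = random.Random(seed)
    out = []
    for _ in range(draws):
        subsided = sum(rng.random() < p_subside for _ in range(n_compartments))
        if subsided:
            out.append((n_compartments - subsided) / subsided)
    return out

samples = area_ratio_samples(n_compartments=100, p_subside=0.05, draws=10_000)
samples.sort()
print(samples[len(samples) // 2])  # median ratio, roughly 19 at 5% subsidence
```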
Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas
2013-01-01
The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from a lack of data for training and testing the constructed models. We argue that the process of sample generation can be successfully automated by employing sophisticated machine learning techniques, and that an automated sample generation framework can successfully complement the actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related; this independence eliminates the bias of having the produced approach cover only certain characteristics of the domain and lead to samples skewed in one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as many characteristics of the domain being analysed as possible and tries to incorporate the learned characteristics in generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
NASA Astrophysics Data System (ADS)
Thakar, Juilee; Albert, Réka
The following sections are included:
* Introduction
* Boolean Network Concepts and History
* Extensions of the Classical Boolean Framework
* Boolean Inference Methods and Examples in Biology
* Dynamic Boolean Models: Examples in Plant Biology, Developmental Biology and Immunology
* Conclusions
* References
Gaissmaier, Wolfgang; Giese, Helge; Galesic, Mirta; Garcia-Retamero, Rocio; Kasper, Juergen; Kleiter, Ingo; Meuth, Sven G; Köpke, Sascha; Heesen, Christoph
2018-01-01
A shared decision-making approach is suggested for multiple sclerosis (MS) patients. To properly evaluate benefits and risks of different treatment options accordingly, MS patients require sufficient numeracy - the ability to understand quantitative information. It is unknown whether MS affects numeracy. Therefore, we investigated whether patients' numeracy was impaired compared to a probabilistic national sample. As part of the larger prospective, observational, multicenter study PERCEPT, we assessed numeracy for a clinical study sample of German MS patients (N=725) with a standard test and compared them to a German probabilistic sample (N=1001), controlling for age, sex, and education. Within patients, we assessed whether disease variables (disease duration, disability, annual relapse rate, cognitive impairment) predicted numeracy beyond these demographics. MS patients showed a comparable level of numeracy as the probabilistic national sample (68.9% vs. 68.5% correct answers, P=0.831). In both samples, numeracy was higher for men and the highly educated. Disease variables did not predict numeracy beyond demographics within patients, and predictability was generally low. This sample of MS patients understood quantitative information on the same level as the general population. There is no reason to withhold quantitative information from MS patients. Copyright © 2017 Elsevier B.V. All rights reserved.
State feedback control design for Boolean networks.
Liu, Rongjie; Qian, Chunjiang; Liu, Shuqian; Jin, Yu-Fang
2016-08-26
Driving Boolean networks to desired states is of paramount significance toward our ultimate goal of controlling the progression of biological pathways and regulatory networks. Despite recent computational developments in the controllability of general complex networks and the structural controllability of Boolean networks, there is still a lack of work bridging the mathematical conditions for controllability to the real Boolean operations in a network. Further, no real-time control strategy has been proposed to drive a Boolean network. In this study, we applied the semi-tensor product to represent Boolean functions in a network and explored the controllability of a Boolean network based on the transition matrix and the time transition diagram. We determined the necessary and sufficient condition for a controllable Boolean network and mapped this requirement on the transition matrix to real Boolean functions and structural properties of a network. An efficient tool is offered to assess the controllability of an arbitrary Boolean network and to determine all reachable and non-reachable states. We found the six simplest forms of controllable 2-node Boolean networks and explored the consistency of the transition matrices while extending these six forms to controllable networks with more nodes. Importantly, we proposed the first state feedback control strategy to drive the network based on the status of all nodes in the network. Finally, we applied our reachability condition to the major switch of the p53 pathway to predict the progression of the pathway and validated the prediction with published experimental results. This control strategy allowed us to apply real-time control to drive Boolean networks, which could not be achieved by previous control strategies for Boolean networks. Our results enable a more comprehensive understanding of the evolution of Boolean networks and might be extended to output feedback control design.
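A small sketch of the transition-matrix view used here: with states encoded as integers and a 0/1 transition matrix whose column j marks the successor of state j, reachability can be read off by following the matrix (deterministic toy network; the paper's controllability analysis additionally ranges over control inputs):

```python
import numpy as np

def reachable_states(A, x0, n_states):
    """States reachable from x0 in a Boolean network with 0/1 transition
    matrix A (column j marks the successor of state j)."""
    reach, x = set(), x0
    for _ in range(n_states):
        if x in reach:
            break
        reach.add(x)
        x = int(np.argmax(A[:, x]))   # deterministic successor of state x
    return reach

# 2-node example (4 states); controllability requires every state to be
# reachable from every other under some admissible control sequence.
A = np.zeros((4, 4))
for j, i in enumerate([1, 2, 3, 0]):  # a single 4-cycle: fully reachable
    A[i, j] = 1.0
print(reachable_states(A, 0, 4))      # {0, 1, 2, 3}
```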
ERIC Educational Resources Information Center
Lowe, M. Sara; Maxson, Bronwen K.; Stone, Sean M.; Miller, Willie; Snajdr, Eric; Hanna, Kathleen
2018-01-01
Boolean logic can be a difficult concept for first-year, introductory students to grasp. This paper compares the results of Boolean and natural language searching across several databases with searches created from student research questions. Performance differences between databases varied. Overall, natural search language is at least as good as…
An International Systematic Review of Smoking Prevalence in Addiction Treatment
Guydish, Joseph; Passalacqua, Emma; Pagano, Anna; Martínez, Cristina; Le, Thao; Chun, JongSerl; Tajima, Barbara; Docto, Lindsay; Garina, Daria; Delucchi, Kevin
2016-01-01
Aims: Smoking prevalence is higher among persons enrolled in addiction treatment than in the general population, and very high rates of smoking are associated with opiate drug use and receipt of opiate replacement therapy (ORT). We assessed whether these findings are observed internationally.
Methods: PubMed, PsycINFO and the Alcohol and Alcohol Problems Science Database were searched for papers reporting smoking prevalence among addiction treatment samples, published in English, from 1987 to 2013. Search terms included tobacco use, cessation, and substance use disorders using AND/OR Boolean connectors. For the 4,549 papers identified, abstracts were reviewed by multiple raters; 239 abstracts met inclusion criteria, and these full papers were reviewed for exclusion. Fifty-four studies, collectively including 37,364 participants, were included. For each paper we extracted country, author, year, sample size and gender, treatment modality, primary drug treated, and smoking prevalence.
Results: The random-effect pooled estimate of smoking among persons in addiction treatment was 84% (CI 79%, 88%), while the pooled estimate of smoking prevalence across matched population samples was 31% (CI 29%, 33%). The difference in the pooled estimates was 52% (CI 48%, 57%, p < .0001). Smoking rates were higher in programs treating opiate use than in those treating alcohol use (OR = 2.52, CI 2.00, 3.17), and higher in ORT than in outpatient programs (OR = 1.42, CI 1.19, 1.68).
Conclusions: Smoking rates among people in addiction treatment are more than double those of people with similar demographic characteristics. Smoking rates are also higher in people being treated for opiate dependence than in people being treated for alcohol use disorder. PMID:26392127
Computational complexity of Boolean functions
NASA Astrophysics Data System (ADS)
Korshunov, Aleksei D.
2012-02-01
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
2010-12-01
In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.
Kaneko, Yuko; Kondo, Harumi; Takeuchi, Tsutomu
2013-08-01
To investigate the performance of the new remission criteria for rheumatoid arthritis (RA) in daily clinical practice and the effect of possible misclassification of remission when 44 joints are assessed. Disease activity and remission rate were calculated according to the Disease Activity Score (DAS28), Simplified Disease Activity Index (SDAI), Clinical Disease Activity Index (CDAI), and a Boolean-based definition for 1402 patients with RA in Keio University Hospital. Characteristics of patients in remission were investigated, and the number of misclassified patients was determined: those classified as being in remission based on a 28-joint count but as nonremission based on a 44-joint count, for each definition criterion. Of all patients analyzed, 46.6%, 45.9%, 41.0%, and 31.5% were classified as in remission by the DAS28, SDAI, CDAI, and Boolean definitions, respectively. Patients classified into remission based only on the DAS28 showed relatively low erythrocyte sedimentation rates but greater swollen joint counts than those classified into remission based on the other definitions. In patients classified into remission based only on the Boolean criteria, the mean physician global assessment was greater than the mean patient global assessment. Although 119 patients had ≤ 1 involved joint in the 28-joint count but > 1 in the 44-joint count, only 34 of these 119 (2.4% of all subjects) were found to have been misclassified into remission. In practice, about half of patients with RA can achieve clinical remission according to the DAS28, SDAI, and CDAI, and one-third according to the Boolean-based definition. Patients classified in remission based on a 28-joint count may have pain and swelling in the feet, but misclassification of remission was relatively rare, seen in only 2.4% of patients under the Boolean definition. The 28-joint count can be sufficient for assessing clinical remission based on the new remission criteria.
Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C
2016-11-01
Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data, and we provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared with learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
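A toy illustration of the consistency condition this work builds on: the sketch below checks whether a small synchronous Boolean network reproduces a binarized time-series trace. The three-node rules are invented for illustration, and the paper's actual method handles transient, non-synchronous semantics through Answer Set Programming and model checking.

```python
# Sketch: check whether a synchronous Boolean network reproduces a
# discretized time-series trace. Hypothetical 3-node rules; the paper's
# method is far more general.

def step(state):
    a, b, c = state
    return (b and not c,   # regulator rules (assumed for illustration)
            a or c,
            a and b)

def consistent_with_trace(trace):
    """Necessary condition: each observed state must map to its successor."""
    return all(tuple(map(int, step(s))) == t
               for s, t in zip(trace, trace[1:]))

trace = [(0, 1, 0), (1, 0, 0), (0, 1, 0)]  # binarized phosphoproteomics trace
print(consistent_with_trace(trace))
```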
[Methodological design of the National Health and Nutrition Survey 2016].
Romero-Martínez, Martín; Shamah-Levy, Teresa; Cuevas-Nasu, Lucía; Gómez-Humarán, Ignacio Méndez; Gaona-Pineda, Elsa Berenice; Gómez-Acosta, Luz María; Rivera-Dommarco, Juan Ángel; Hernández-Ávila, Mauricio
2017-01-01
To describe the design methodology of the 2016 halfway National Health and Nutrition Survey (Ensanut-MC). The Ensanut-MC is a national probabilistic survey whose target population is the inhabitants of private households in Mexico. The sample size was determined to make inferences on urban and rural areas in four regions. The main design elements are described: target population, topics of study, sampling procedure, measurement procedure, and logistics organization. The final sample comprised 9,479 completed household interviews and 16,591 individual interviews. The response rate was 77.9% for households and 91.9% for individuals. The probabilistic design of the Ensanut-MC allows valid statistical inferences about parameters of interest for Mexico's public health and nutrition, specifically overweight, obesity, and diabetes mellitus. The updated information also supports the monitoring, updating, and formulation of new policies and priority programs.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
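A minimal sketch of the underlying idea, assuming a lognormal error model and illustrative numbers (neither taken from the paper): probabilistic confidence is read here as the probability that the true failure probability stays below an acceptable target, given the uncertainty of the estimate.

```python
# Sketch: 'probabilistic confidence' as P(true failure probability <= target)
# under an assumed lognormal estimation-error model. All numbers are
# illustrative, not the paper's case study.
import numpy as np

rng = np.random.default_rng(9)
p_point = 1e-4        # calculated failure probability
sigma_ln = 0.8        # lognormal spread of the estimation error (assumed)
p_target = 5e-4       # maximum acceptable failure probability

samples = p_point * rng.lognormal(0.0, sigma_ln, 100_000)
confidence = np.mean(samples <= p_target)
print(f"probabilistic confidence ≈ {confidence:.3f}")
```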
High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.
Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent
2016-08-01
Spiking neural networks (SNNs) are the latest neural network generation, which tries to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is more efficiently exploited. Liquid state machines (LSMs) have arisen as a strategic technique for implementing recurrent SNN designs with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSMs by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.
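The probabilistic computing concept the authors exploit can be illustrated with stochastic bitstreams: if a value is encoded as the probability of observing a 1, a single AND gate multiplies two values and a multiplexer computes their scaled sum. A minimal sketch with synthetic streams (the hardware realization in the paper is, of course, far more elaborate):

```python
# Sketch of stochastic (probabilistic) computing with Boolean gates:
# numbers are encoded as the probability of a 1 in a random bitstream.
import random

def bitstream(p, n):
    return [random.random() < p for _ in range(n)]

N = 100_000
x, y = 0.3, 0.6
sx, sy, sel = bitstream(x, N), bitstream(y, N), bitstream(0.5, N)

product = sum(a and b for a, b in zip(sx, sy)) / N           # AND gate -> x*y
scaled_sum = sum((a if s else b)                             # MUX -> (x+y)/2
                 for a, b, s in zip(sx, sy, sel)) / N

print(f"x*y ≈ {product:.3f} (exact {x*y:.3f})")
print(f"(x+y)/2 ≈ {scaled_sum:.3f} (exact {(x+y)/2:.3f})")
```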
Cryptographic Boolean Functions with Biased Inputs
2015-07-31
theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by... distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic... Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the
Huan, Zhibo; Xu, Zhi; Luo, Jinhui; Xie, Defang
2016-11-01
Residues of 14 pesticides were determined in 150 cowpea samples collected in five southern Chinese provinces in 2013 and 2014. One or more residues were detected in 70% of the samples. 61.3% of the samples were non-compliant, mainly because of the detection of unauthorized pesticides, and 14.0% of the samples contained more than three pesticides. Deterministic and probabilistic methods were used to assess the chronic and acute risk of pesticides in cowpea for eight subgroups of people. Deterministic assessment showed that the estimated short-term intakes (ESTIs) of carbofuran were 1199.4%-2621.9% of the acute reference dose (ARfD), while the rates were 985.9%-4114.7% using probabilistic assessment. Probabilistic assessment showed that 4.2%-7.8% of subjects (especially children) may suffer unacceptable acute risk from carbofuran-contaminated cowpeas from the five provinces. However, undue concern is not necessary, because all of the estimates are based on conservative assumptions. Copyright © 2016 Elsevier Inc. All rights reserved.
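A worked sketch of the two assessment styles, with all inputs (ARfD, body weight, residue and consumption distributions) as assumed illustration values rather than the study's survey data: the deterministic ESTI combines worst-case point values, while the probabilistic version Monte-Carlo samples residue and portion size.

```python
# Sketch: deterministic vs probabilistic acute intake assessment.
# All values are illustrative assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(10)
arfd = 0.00015 * 1000        # assumed ARfD, mg/kg bw/day -> ug/kg bw/day
bw = 16.0                    # child body weight, kg (assumed)
residue_p975 = 0.8           # high-percentile residue, mg/kg (assumed)
portion_large = 0.15         # large portion, kg (assumed)

esti = residue_p975 * portion_large / bw * 1000   # ug/kg bw/day
print(f"deterministic ESTI = {esti / arfd:.0%} of ARfD")

residues = rng.lognormal(np.log(0.01), 1.0, 100_000)      # mg/kg (assumed)
portions = rng.gamma(2.0, 0.03, 100_000)                  # kg (assumed)
intakes = residues * portions / bw * 1000
print(f"P(intake > ARfD) ≈ {np.mean(intakes > arfd):.1%}")
```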
Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models
NASA Astrophysics Data System (ADS)
Thon, Ingo
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution, which is normally prohibitively slow.
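The importance-weighting step that makes an arbitrary proposal distribution legitimate can be shown in a toy Gaussian model; the paper's setting is relational and its proposal comes from BDD compilation, so the shifted-Gaussian proposal below is purely an assumed stand-in. Each particle's weight is corrected by the ratio of transition prior to proposal.

```python
# Sketch: one particle-filter update with a non-trivial proposal q.
# Weights are corrected by p(transition)/q so the estimate stays unbiased.
import numpy as np

rng = np.random.default_rng(0)
n_particles, sigma_trans, sigma_obs = 1000, 1.0, 0.5
particles = rng.normal(0.0, 1.0, n_particles)
obs = 2.0  # current observation

# Proposal: shift particles toward the observation (assumed form).
prop_mean = 0.5 * (particles + obs)
new = rng.normal(prop_mean, sigma_trans)

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# w = likelihood * transition prior / proposal
w = gauss(obs, new, sigma_obs) * gauss(new, particles, sigma_trans) \
    / gauss(new, prop_mean, sigma_trans)
w /= w.sum()
print("posterior mean estimate:", float(np.sum(w * new)))
```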
Evolutionary Algorithms for Boolean Functions in Diverse Domains of Cryptography.
Picek, Stjepan; Carlet, Claude; Guilley, Sylvain; Miller, Julian F; Jakobovic, Domagoj
2016-01-01
The role of Boolean functions is prominent in several areas including cryptography, sequences, and coding theory. Therefore, various methods for the construction of Boolean functions with desired properties are of direct interest. New motivations on the role of Boolean functions in cryptography with attendant new properties have emerged over the years. There are still many combinations of design criteria left unexplored and in this matter evolutionary computation can play a distinct role. This article concentrates on two scenarios for the use of Boolean functions in cryptography. The first uses Boolean functions as the source of the nonlinearity in filter and combiner generators. Although relatively well explored using evolutionary algorithms, it still presents an interesting goal in terms of the practical sizes of Boolean functions. The second scenario appeared rather recently where the objective is to find Boolean functions that have various orders of the correlation immunity and minimal Hamming weight. In both these scenarios we see that evolutionary algorithms are able to find high-quality solutions where genetic programming performs the best.
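One standard fitness criterion in such searches, nonlinearity, can be computed with a fast Walsh-Hadamard transform over the truth table; a minimal sketch (the 3-variable majority function is used only as a check):

```python
# Sketch: nonlinearity of a Boolean function via the Walsh-Hadamard
# transform: nl(f) = 2^(n-1) - max|W_f|/2.
import numpy as np

def nonlinearity(truth_table):
    w = (1 - 2 * np.asarray(truth_table)).astype(float)  # 0/1 -> +1/-1
    h = 1
    while h < len(w):                                    # fast WHT, in place
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i+h].copy(), w[i+h:i+2*h].copy()
            w[i:i+h], w[i+h:i+2*h] = a + b, a - b
        h *= 2
    n = int(np.log2(len(w)))
    return 2 ** (n - 1) - int(np.abs(w).max()) // 2

# 3-variable majority function has nonlinearity 2
maj = [0, 0, 0, 1, 0, 1, 1, 1]
print(nonlinearity(maj))
```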
Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems
NASA Astrophysics Data System (ADS)
Amelard, Robert; Clausi, David A.; Wong, Alexander
2016-11-01
Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m⁻²). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a coaligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W = 35, p < 0.01) and spectral SNR (W = 31, p < 0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r² = 0.9619, error (μ, σ) = (0.52, 1.69) bpm].
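The estimation step can be imitated with weighted kernel density estimation. The sketch below builds a toy pulsatility map from synthetic correlation priors; SciPy's gaussian_kde with the weights argument (available since SciPy 1.2) stands in for the paper's modified Parzen-Rosenblatt estimator.

```python
# Sketch: a spatial pulsatility prior via weighted KDE over pixel
# coordinates; correlation values and coordinates are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(2, 500))          # (x, y) sample points
corr = np.clip(rng.normal(0.4, 0.2, 500), 0, 1)      # correlation priors

kde = gaussian_kde(coords, weights=corr)

xx, yy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
pulsatility_map = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
print(pulsatility_map.max())
```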
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.
Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence
2012-08-29
Mathematical modeling is used as a systems biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological system, efficiently describes stable state identification, but remains inconvenient for describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes as it follows the evolution of concentrations or activities of chemical species as a function of time, but requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential equations on probability distributions. We developed C++ software, MaBoSS, that is able to simulate such a system by applying kinetic Monte-Carlo (the Gillespie algorithm) on the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and estimates stationary distributions. Applications of Boolean kinetic Monte-Carlo are demonstrated for three qualitative models: a toy model, a published model of p53/Mdm2 interaction, and a published model of the mammalian cell cycle. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time-dependent probability distributions, interpretable in terms of cell populations.
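A minimal sketch of Boolean kinetic Monte-Carlo in the spirit of MaBoSS, assuming a two-node mutual-inhibition motif with invented rates: each node carries state-dependent flip rates, and the Gillespie algorithm draws which node flips and when.

```python
# Sketch: Gillespie's algorithm on a Boolean state space.
# Two-node mutual-inhibition rates are assumed for illustration.
import random

random.seed(12)

def flip_rate(i, state):
    a, b = state
    if i == 0:   # node A: activated when B is off, decays when on
        return 1.0 if (a == 0 and b == 0) else (0.5 if a == 1 else 0.0)
    else:        # node B: symmetric rule
        return 1.0 if (b == 0 and a == 0) else (0.5 if b == 1 else 0.0)

def gillespie(state, t_max):
    t, trace = 0.0, [(0.0, state)]
    while t < t_max:
        props = [flip_rate(i, state) for i in range(2)]
        total = sum(props)
        if total == 0.0:
            break                      # absorbing Boolean state
        t += random.expovariate(total)
        r, acc = random.random() * total, 0.0
        for i, p in enumerate(props):
            acc += p
            if r < acc:
                state = tuple(1 - s if j == i else s
                              for j, s in enumerate(state))
                break
        trace.append((t, state))
    return trace

print(gillespie((0, 0), 5.0)[:6])
```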
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem.
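The core formulation can be illustrated directly: a steady state is a solution of f_i(x) = x_i for all i over GF(2). The brute-force check below (with assumed three-node rules) stands in for the paper's computer-algebra step, which avoids enumeration and scales to around 1000 nodes.

```python
# Sketch: steady states of a Boolean network as solutions of f_i(x) = x_i.
# Brute force over a tiny assumed network; the paper's algorithm instead
# solves the polynomial system over GF(2) with computer algebra.
from itertools import product

fs = [lambda x: x[1] & x[2],
      lambda x: x[0] | x[2],
      lambda x: x[0] ^ x[1]]

steady = [x for x in product((0, 1), repeat=3)
          if all(f(x) == x[i] for i, f in enumerate(fs))]
print(steady)
```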
A Simple Blueprint for Automatic Boolean Query Processing.
ERIC Educational Resources Information Center
Salton, G.
1988-01-01
Describes a new Boolean retrieval environment in which an extended soft Boolean logic is used to automatically construct queries from original natural language formulations provided by users. Experimental results that compare the retrieval effectiveness of this method to conventional Boolean and vector processing are discussed. (27 references)…
Inferring Toxicological Responses of HepG2 Cells from ...
Understanding the dynamic perturbation of cell states by chemicals can aid in predicting their adverse effects. High-content imaging (HCI) was used to measure the state of HepG2 cells over three time points (1, 24, and 72 h) in response to 976 ToxCast chemicals at 10 different concentrations (0.39-200 µM). Cell state was characterized by p53 activation (p53), c-Jun activation (SK), phospho-Histone H2A.x (OS), phospho-Histone H3 (MA), alpha tubulin (Mt), mitochondrial membrane potential (MMP), mitochondrial mass (MM), cell cycle arrest (CCA), nuclear size (NS), and cell number (CN). Dynamic cell state perturbations due to each chemical concentration were utilized to infer coarse-grained dependencies between cellular functions as Boolean networks (BNs). BNs were inferred from data in two steps. First, the data for each state variable were discretized into changed/active (> 1 standard deviation) and unchanged/inactive values. Second, the discretized data were used to learn Boolean relationships between variables. In our case, a BN is a wiring diagram between nodes that represent the 10 previously described observable phenotypes. Functional relationships between nodes were represented as Boolean functions. We found that the inferred BNs show that the HepG2 cell response is chemical- and concentration-specific. We observed the presence of both point and cycle BN attractors. In addition, there are instances where Boolean functions were not found. We believe that this may be either
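A toy version of the two-step inference, with synthetic data and assumed thresholds (variable names follow the abstract): discretize each readout at more than one standard deviation, then pick the two-input Boolean function whose truth table best explains a target's response.

```python
# Sketch: (1) discretize readouts at > 1 SD, (2) fit the best 2-input
# Boolean function for a target node. Synthetic data; thresholds assumed.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
raw = {'p53': rng.normal(0, 1, 20), 'SK': rng.normal(0, 1, 20)}
raw['CN'] = -(raw['p53'] + raw['SK']) + rng.normal(0, 0.3, 20)

binar = {k: (np.abs(v) > 1).astype(int) for k, v in raw.items()}

best = None
for tt in product((0, 1), repeat=4):            # all 16 two-input functions
    pred = [tt[2 * a + b] for a, b in zip(binar['p53'], binar['SK'])]
    score = np.mean(np.array(pred) == binar['CN'])
    if best is None or score > best[1]:
        best = (tt, score)
print("best truth table (00,01,10,11):", best[0],
      "accuracy:", round(float(best[1]), 2))
```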
Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.
Hsu, Anne; Griffiths, Thomas L
2016-01-01
A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
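The size principle that separates the two assumptions can be shown in a few lines, using hypothetical 'languages' as plain sets: under strong sampling each observation has likelihood 1/|L|, so the smaller language covering the data wins; under weak sampling coverage is all that matters.

```python
# Sketch: posterior over two hypothetical languages under strong vs weak
# sampling. Languages and data are invented for illustration.
small = {'ab', 'ba'}
large = {'ab', 'ba', 'aa', 'bb'}
data = ['ab', 'ba', 'ab']

def posterior(likelihood):
    scores = {'small': 1.0, 'large': 1.0}       # uniform prior
    for s in data:
        scores['small'] *= likelihood(s, small)
        scores['large'] *= likelihood(s, large)
    z = sum(scores.values())
    return {h: v / z for h, v in scores.items()}

strong = posterior(lambda s, L: (1 / len(L)) if s in L else 0.0)
weak = posterior(lambda s, L: 1.0 if s in L else 0.0)
print("strong sampling:", strong)   # favors the smaller language
print("weak sampling:  ", weak)     # indifferent between the two
```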
Wang, Guan-Ying; Zhang, Sa-Li; Wang, Xiu-Ru; Feng, Min; Li, Chun; An, Yuan; Li, Xiao-Feng; Wang, Li-Zhi; Wang, Cai-Hong; Wang, Yong-Fu; Yang, Rong; Yan, Hui-Ming; Wang, Guo-Chun; Lu, Xin; Liu, Xia; Zhu, Ping; Chen, Li-Na; Jin, Hong-Tao; Liu, Jin-Ting; Guo, Hui-Fang; Chen, Hai-Ying; Xie, Jian-Li; Wei, Ping; Wang, Jun-Xiang; Liu, Xiang-Yuan; Sun, Lin; Cui, Liu-Fu; Shu, Rong; Liu, Bai-Lu; Yu, Ping; Zhang, Zhuo-Li; Li, Guang-Tao; Li, Zhen-Bin; Yang, Jing; Li, Jun-Fang; Jia, Bin; Zhang, Feng-Xiao; Tao, Jie-Mei; Lin, Jin-Ying; Wei, Mei-Qiu; Liu, Xiao-Min; Ke, Dan; Hu, Shao-Xian; Ye, Cong; Han, Shu-Ling; Yang, Xiu-Yan; Li, Hao; Huang, Ci-Bo; Gao, Ming; Lai, Bei; Cheng, Yong-Jing; Li, Xing-Fu; Song, Li-Jun; Yu, Xiao-Xia; Wang, Ai-Xue; Wu, Li-Jun; Wang, Yan-Hua; He, Lan; Sun, Wen-Wen; Gong, Lu; Wang, Xiao-Yuan; Wang, Yi; Zhao, Yi; Li, Xiao-Xia; Wang, Yan; Zhang, Yan; Su, Yin; Zhang, Chun-Fang; Mu, Rong; Li, Zhan-Guo
2015-02-01
The aim of this study is to investigate the remission rate of rheumatoid arthritis (RA) in China and identify its potential determinants. A multi-center cross-sectional study was conducted from July 2009 to January 2012. Data were collected by face-to-face interviews of the rheumatology outpatients in 28 tertiary hospitals in China. The remission rates were calculated in 486 RA patients according to different definitions of remission: the Disease Activity Score in 28 joints (DAS28), the Simplified Disease Activity Index (SDAI), the Clinical Disease Activity Index (CDAI), and the American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) Boolean definition. Potential determinants of RA remission were assessed by univariate and multivariate analyses. The remission rates of RA from this multi-center cohort were 8.6% (DAS28), 8.4% (SDAI), 8.2% (CDAI), and 6.8% (Boolean), respectively. Favorable factors associated with remission were: a low Health Assessment Questionnaire (HAQ) score, absence of rheumatoid factor (RF) and anti-cyclic citrullinated peptide (anti-CCP), and treatment with methotrexate (MTX) and hydroxychloroquine (HCQ). Younger age was also predictive of DAS28 and Boolean remission. Multivariate analyses revealed a low HAQ score, the absence of anti-CCP, and treatment with HCQ as independent determinants of remission. The clinical remission rate of RA patients was low in China. A low HAQ score, the absence of anti-CCP, and HCQ were significant independent determinants of RA remission.
ERIC Educational Resources Information Center
Meiser, Thorsten; Rummel, Jan; Fleig, Hanna
2018-01-01
Pseudocontingencies are inferences about correlations in the environment that are formed on the basis of statistical regularities like skewed base rates or varying base rates across environmental contexts. Previous research has demonstrated that pseudocontingencies provide a pervasive mechanism of inductive inference in numerous social judgment…
Optimal stabilization of Boolean networks through collective influence
NASA Astrophysics Data System (ADS)
Wang, Jiannan; Pei, Sen; Wei, Wei; Feng, Xiangnan; Zheng, Zhiming
2018-03-01
Boolean networks have attracted much attention due to their wide applications in describing dynamics of biological systems. During past decades, much effort has been invested in unveiling how network structure and update rules affect the stability of Boolean networks. In this paper, we aim to identify and control a minimal set of influential nodes that is capable of stabilizing an unstable Boolean network. For locally treelike Boolean networks with biased truth tables, we propose a greedy algorithm to identify influential nodes in Boolean networks by minimizing the largest eigenvalue of a modified nonbacktracking matrix. We test the performance of the proposed collective influence algorithm on four different networks. Results show that the collective influence algorithm can stabilize each network with a smaller set of nodes compared with other heuristic algorithms. Our work provides a new insight into the mechanism that determines the stability of Boolean networks, which may find applications in identifying virulence genes that lead to serious diseases.
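The greedy idea can be sketched as follows, with one deliberate simplification: the paper minimizes the largest eigenvalue of a modified nonbacktracking matrix via collective influence, whereas this toy uses the plain adjacency matrix and brute-force re-evaluation. Nodes are pinned until the leading eigenvalue drops below 1.

```python
# Sketch: greedily pin the node whose removal most reduces the leading
# eigenvalue of a stability-related matrix (here: plain adjacency,
# random network; both are simplifying assumptions).
import numpy as np

rng = np.random.default_rng(3)
n = 12
A = (rng.random((n, n)) < 0.25).astype(float)
np.fill_diagonal(A, 0)

def lead_eig(M):
    return np.abs(np.linalg.eigvals(M)).max() if M.size else 0.0

active, pinned = list(range(n)), []
while active and lead_eig(A[np.ix_(active, active)]) >= 1.0:
    drop = min(active, key=lambda v: lead_eig(
        A[np.ix_([u for u in active if u != v],
                  [u for u in active if u != v])]))
    active.remove(drop)
    pinned.append(drop)
print("pinned nodes:", pinned)
```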
NASA Astrophysics Data System (ADS)
Ebadi, H.; Saeedian, M.; Ausloos, M.; Jafari, G. R.
2016-11-01
The Boolean network is one successful model to investigate discrete complex systems such as gene interaction phenomena. The dynamics of a Boolean network, controlled with Boolean functions, is usually considered to be a Markovian (memory-less) process. However, both the self-organizing features of biological phenomena and their intelligent nature should raise some doubt about ignoring the history of their time evolution. Here, we extend the Boolean network Markovian approach: we involve the effect of memory on the dynamics. This can be explored by modifying Boolean functions into non-Markovian functions, for example by making the usual threshold function, one of the most applied Boolean functions, non-Markovian. By applying the non-Markovian threshold function to the dynamical process of the yeast cell-cycle network, we discover a power-law-like memory with more robust dynamics than the Markovian dynamics.
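A sketch of a threshold rule made non-Markovian, assuming an exponentially decaying memory kernel for brevity (the paper investigates a power-law-like memory on the yeast cell-cycle network): each node thresholds a field computed from a running memory of past activity rather than the current state alone.

```python
# Sketch: threshold Boolean dynamics with memory. Couplings, size, and
# the exponential memory kernel are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(4)
n, T, lam = 8, 50, 0.5
J = rng.choice([-1, 0, 1], size=(n, n))          # signed coupling matrix
state = rng.integers(0, 2, n)
memory = state.astype(float)

history = [state.copy()]
for _ in range(T):
    memory = lam * state + (1 - lam) * memory    # weighted past activity
    field = J @ (2 * memory - 1)                 # non-Markovian input field
    state = np.where(field > 0, 1, np.where(field < 0, 0, state))
    history.append(state.copy())
print(history[-1])
```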
On the Computation of Comprehensive Boolean Gröbner Bases
NASA Astrophysics Data System (ADS)
Inoue, Shutaro
We show that a comprehensive Boolean Gröbner basis of an ideal I in a Boolean polynomial ring B(Ā,X̄) with main variables X̄ and parameters Ā can be obtained by simply computing a usual Boolean Gröbner basis of I, regarding both X̄ and Ā as variables, with a certain block term order such that X̄ ≫ Ā. This result, together with the fact that a finite Boolean ring is isomorphic to a direct product of the Galois field GF(2), enables us to compute a comprehensive Boolean Gröbner basis by only computing corresponding Gröbner bases in a polynomial ring over GF(2). Our implementation in the computer algebra system Risa/Asir shows that our method is extremely efficient compared with existing algorithms for computing comprehensive Boolean Gröbner bases.
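The reduction to GF(2) can be tried in a general-purpose computer algebra system. A sketch with SymPy, where the field equations v**2 + v make the quotient ring Boolean; the lex order with main variables listed first is a simplification of a true block order, and the modulus=2 option is assumed to be supported by the installed SymPy version.

```python
# Sketch: a Boolean Groebner basis computed over GF(2) by adjoining the
# field equations. Generators are an invented example.
from sympy import symbols, groebner

x, y, a = symbols('x y a')
ideal = [x*y + a*x + 1, x + y + a,
         x**2 + x, y**2 + y, a**2 + a]    # field equations make it Boolean
G = groebner(ideal, x, y, a, order='lex', modulus=2)
print(G.exprs)
```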
Mining TCGA Data Using Boolean Implications
Sinha, Subarna; Tsang, Emily K.; Zeng, Haoyang; Meister, Michela; Dill, David L.
2014-01-01
Boolean implications (if-then rules) provide a conceptually simple, uniform and highly scalable way to find associations between pairs of random variables. In this paper, we propose to use Boolean implications to find relationships between variables of different data types (mutation, copy number alteration, DNA methylation and gene expression) from the glioblastoma (GBM) and ovarian serous cystadenocarcinoma (OV) data sets from The Cancer Genome Atlas (TCGA). We find hundreds of thousands of Boolean implications from these data sets. A direct comparison of the relationships found by Boolean implications and those found by commonly used methods for mining associations shows that existing methods would miss relationships found by Boolean implications. Furthermore, many relationships exposed by Boolean implications reflect important aspects of cancer biology. Examples of our findings include cis relationships between copy number alteration, DNA methylation and expression of genes, a new hierarchy of mutations and recurrent copy number alterations, loss-of-heterozygosity of well-known tumor suppressors, and the hypermethylation phenotype associated with IDH1 mutations in GBM. The Boolean implication results used in the paper can be accessed at http://crookneck.stanford.edu/microarray/TCGANetworks/. PMID:25054200
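The flavor of an implication test: "A = 1 implies B = 1" holds when the forbidden quadrant (A = 1, B = 0) is much sparser than independence predicts. The z-style statistic below is a simplified stand-in for the paper's sparsity test and FDR control, applied to engineered synthetic data.

```python
# Sketch: detect a Boolean implication between two binarized variables
# by testing sparsity of the forbidden quadrant. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
A = rng.integers(0, 2, 2000)
B = np.maximum(A, rng.integers(0, 2, 2000))      # engineered so A=1 -> B=1

n10 = np.sum((A == 1) & (B == 0))                # forbidden-quadrant count
expected = np.sum(A == 1) * np.sum(B == 0) / len(A)
z = (n10 - expected) / np.sqrt(expected)
print(f"observed={n10}, expected={expected:.1f}, z={z:.2f}")
if z < -3:
    print("Boolean implication supported: A = 1 => B = 1")
```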
High speed all optical logic gates based on quantum dot semiconductor optical amplifiers.
Ma, Shaozhen; Chen, Zhe; Sun, Hongzhi; Dutta, Niloy K
2010-03-29
A scheme to realize the all-optical Boolean logic functions AND, XOR, and NOT using semiconductor optical amplifiers with quantum-dot active layers is studied. Nonlinear dynamics, including carrier heating and spectral hole-burning, are taken into account together with the rate-equation scheme. Results show that, with the QD excited state and wetting layer serving as a dual reservoir of carriers, and with the ultrafast carrier relaxation of the QD device, this scheme is suitable for high-speed Boolean logic operations. Logic operations can be carried out at speeds up to 250 Gb/s.
Synchronization of coupled large-scale Boolean networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Fangfei, E-mail: li-fangfei@163.com
2014-03-15
This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
Reservoir computing with a single time-delay autonomous Boolean node
NASA Astrophysics Data System (ADS)
Haynes, Nicholas D.; Soriano, Miguel C.; Rosin, David P.; Fischer, Ingo; Gauthier, Daniel J.
2015-02-01
We demonstrate reservoir computing with a physical system using a single autonomous Boolean logic element with time-delay feedback. The system generates a chaotic transient with a window of consistency lasting between 30 and 300 ns, which we show is sufficient for reservoir computing. We then characterize the dependence of computational performance on system parameters to find the best operating point of the reservoir. When the best parameters are chosen, the reservoir is able to classify short input patterns with performance that decreases over time. In particular, we show that four distinct input patterns can be classified for 70 ns, even though the inputs are only provided to the reservoir for 7.5 ns.
Adaptation and survivors in a random Boolean network.
Nakamura, Ikuo
2002-04-01
We introduce competitive agents with an imitation strategy in a random Boolean network, in which each agent plays a competitive game that rewards those in the minority. After a long time interval, the worst performer changes its strategy to that of the best performer and the process is repeated. The network, initially in a chaotic state, evolves to an intermittent state and finally reaches a frozen state. Time series of surviving species (those whose strategies are imitated by other agents) depend on the connectivity of each agent. In a system with various connectivity groups, the low-connectivity groups win the minority game over the high-connectivity groups. We also compare the result with a mutation-strategy system.
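A sketch of the model with assumed parameter values: agents choose 0/1 through Boolean functions of k neighbours' previous choices, agents in the minority score a point, and periodically the worst performer copies the best performer's truth table.

```python
# Sketch: minority game on a random Boolean network with imitation.
# N, k, epoch length, and rounds are assumed for illustration.
import random

random.seed(6)
N, k, rounds, epoch = 25, 3, 600, 50
inputs = [random.sample(range(N), k) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(N)]
choices = [random.randint(0, 1) for _ in range(N)]
score = [0] * N

for t in range(1, rounds + 1):
    idx = [sum(choices[j] << b for b, j in enumerate(inputs[i]))
           for i in range(N)]
    choices = [tables[i][idx[i]] for i in range(N)]
    minority = 0 if sum(choices) > N / 2 else 1
    for i in range(N):
        score[i] += choices[i] == minority
    if t % epoch == 0:                      # imitation: worst copies best
        worst = min(range(N), key=score.__getitem__)
        best = max(range(N), key=score.__getitem__)
        tables[worst] = tables[best][:]
print("final scores:", sorted(score)[:3], "...", sorted(score)[-3:])
```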
Quintana, M. I.; Bressan, R. A.; Mello, M. F.; Andreoli, S. B.
2015-01-01
Objective. To verify the association between violence and alcohol dependence syndrome in sample populations. Method. Population-wide survey with a multistage probabilistic sample. 3,744 individuals of both genders, aged 15 to 75 years, were interviewed in the cities of São Paulo and Rio de Janeiro using the Composite International Diagnostic Interview (CIDI 2.1). Results. In both cities, alcohol dependence was associated with the male gender, having suffered violence related to criminality, and having suffered familial violence. In both cities, urban violence, in more than 50% of cases, and familial violence, in more than 90% of cases, preceded alcohol dependence. The reoccurrence of traumatic events occurred in more than half of the individuals dependent on alcohol. In São Paulo, having been diagnosed with PTSD is associated with violence revictimization (P = 0.014; Odds = 3.33). Conclusion. Alcohol dependence syndrome is complexly related to urban and familial violence in the general population. Violence frequently precedes alcoholism, but this relationship depends on residence and traumatic events. This vicious cycle contributes to perpetuating the high rates of alcoholism and violence in these cities. Policies that reduce violence in large metropolises could potentially reduce alcoholism and contribute to breaking this cycle. PMID:26000304
Computing with motile bio-agents
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Burrage, Kevin; Nicolau, Dan V.
2007-12-01
We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on the basis of this, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We argue also that using appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.
Personality disorder traits as predictors of subsequent first-onset panic disorder or agoraphobia
Bienvenu, O. Joseph; Stein, Murray B.; Samuels, Jack F.; Onyike, Chiadi U.; Eaton, William W.; Nestadt, Gerald
2009-01-01
Determining how personality disorder traits and panic disorder and/or agoraphobia relate longitudinally is an important step in developing a comprehensive understanding of the etiology of panic/agoraphobia. In 1981, a probabilistic sample of adult (≥ 18 years old) residents of East Baltimore was assessed for Axis I symptoms and disorders using the Diagnostic Interview Schedule (DIS); psychiatrists re-evaluated a sub-sample of these participants and made Axis I diagnoses, as well as ratings of individual DSM-III personality disorder traits. Of the participants psychiatrists examined in 1981, 432 were assessed again in 1993–1996 using the DIS. Excluding participants who had baseline panic attacks or panic-like spells from the risk groups, baseline timidity (avoidant, dependent, and related traits) predicted first-onset DIS panic disorder or agoraphobia over the follow-up period. These results suggest that avoidant and dependent personality traits are predisposing factors, or at least markers of risk, for panic disorder and agoraphobia, not simply epiphenomena. PMID:19374963
State feedback controller design for the synchronization of Boolean networks with time delays
NASA Astrophysics Data System (ADS)
Li, Fangfei; Li, Jianning; Shen, Lijuan
2018-01-01
The design of state feedback controllers that make a response Boolean network synchronize with a drive Boolean network is far from solved in the literature. Motivated by this, this paper studies feedback control design for the complete synchronization of two coupled Boolean networks with time delays. A necessary condition for the existence of a state feedback controller is derived first. Then a feedback control design procedure for the complete synchronization of two coupled Boolean networks is provided based on the necessary condition. Finally, an example is given to illustrate the proposed design procedure.
The mathematics of a quantum Hamiltonian computing half adder Boolean logic gate.
Dridi, G; Julien, R; Hliwa, M; Joachim, C
2015-08-28
The mathematics behind the quantum Hamiltonian computing (QHC) approach of designing Boolean logic gates with a quantum system is given. Using the quantum eigenvalue repulsion effect, the QHC AND, NAND, OR, NOR, XOR, and NXOR Hamiltonian Boolean matrices are constructed. This is applied to the construction of a QHC half adder Hamiltonian matrix requiring only six quantum states to fulfill a half adder Boolean logical truth table. The QHC design rules open a nano-architectronic way of constructing Boolean logic gates inside a single molecule or atom by atom at the surface of a passivated semi-conductor.
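For reference, the Boolean truth table the six-state QHC gate must reproduce is just that of a half adder: SUM = A XOR B and CARRY = A AND B.

```python
# The half-adder truth table: SUM = A XOR B, CARRY = A AND B.
def half_adder(a, b):
    return a ^ b, a & b   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> SUM={s} CARRY={c}")
```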
Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M
2017-11-25
Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays.
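A toy of the POBDS idea (not BoolFilter's actual interface): the Boolean state is hidden and observed through noise, so filtering runs an exact forward Bayes recursion over all 2^n states. The three-node network, noise levels, and measurement model below are assumptions for illustration.

```python
# Sketch: exact forward filtering for a partially observed Boolean
# dynamical system. Network, noise levels, and observations are invented.
import numpy as np
from itertools import product

states = list(product((0, 1), repeat=3))

def step(x):                         # assumed network for illustration
    a, b, c = x
    return (int(b and not c), int(a or c), int(a and b))

p_flip, q_obs = 0.05, 0.1            # process noise, measurement noise

def trans_prob(x, y):                # P(y | x): each bit flips w.p. p_flip
    d = sum(u != v for u, v in zip(step(x), y))
    return p_flip ** d * (1 - p_flip) ** (3 - d)

def obs_prob(y, z):                  # P(z | y): bits misread w.p. q_obs
    d = sum(u != v for u, v in zip(y, z))
    return q_obs ** d * (1 - q_obs) ** (3 - d)

belief = np.full(len(states), 1 / len(states))
for z in [(1, 0, 0), (0, 1, 0), (1, 0, 0)]:   # noisy measurements
    pred = np.array([sum(belief[i] * trans_prob(x, y)
                         for i, x in enumerate(states)) for y in states])
    belief = pred * np.array([obs_prob(y, z) for y in states])
    belief /= belief.sum()
print("most probable state:", states[int(np.argmax(belief))])
```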
Bayesian Probabilistic Projection of International Migration.
Azose, Jonathan J; Raftery, Adrian E
2015-10-01
We propose a method for obtaining joint probabilistic projections of migration for all countries, broken down by age and sex. Joint trajectories for all countries are constrained to satisfy the requirement of zero global net migration. We evaluate our model using out-of-sample validation and compare point projections to the projected migration rates from a persistence model similar to the method used in the United Nations' World Population Prospects, and also to a state-of-the-art gravity model.
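A sketch of the zero-global-net-migration constraint, as a simplified stand-in for the paper's Bayesian hierarchical model: sample net migration rate trajectories per country, then recentre each year so the population-weighted rates sum to zero.

```python
# Sketch: enforcing zero global net migration on sampled trajectories.
# Populations and rate distributions are synthetic.
import numpy as np

rng = np.random.default_rng(11)
n_countries, n_years = 5, 10
pop = rng.uniform(1, 100, n_countries)                 # millions (assumed)
rates = rng.normal(0.0, 2.0, (n_countries, n_years))   # per-1000 rates

net = (pop[:, None] * rates).sum(axis=0) / pop.sum()   # global imbalance
rates_adj = rates - net[None, :]                       # enforce zero net
print((pop[:, None] * rates_adj).sum(axis=0).round(10))  # ~0 each year
```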
Development of Boolean calculus and its application
NASA Technical Reports Server (NTRS)
Tapia, M. A.
1979-01-01
Formal procedures for the synthesis of asynchronous sequential systems using commercially available edge-sensitive flip-flops are developed. The Boolean differential is defined. The exact number of compatible integrals of a Boolean differential was calculated.
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.; Tapia, Moiez A.; Bennett, A. Wayne
1988-01-01
The concept of Boolean integration is developed, and different Boolean integral operators are introduced. Given the changes in a desired function in terms of the changes in its arguments, the ways of 'integrating' (i.e. realizing) such a function, if it exists, are presented. The necessary and sufficient conditions for integrating, in different senses, the expression specifying the changes are obtained. Boolean calculus has applications in the design of logic circuits and in fault analysis.
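The object at the center of both entries can be computed from a truth table: the Boolean differential (difference) df/dx_i = f(x with x_i = 0) XOR f(x with x_i = 1) is 1 exactly where toggling x_i toggles f, and "integration" asks for an f whose differences match a given specification. A minimal sketch:

```python
# Sketch: the Boolean difference of f with respect to input i,
# computed over the full truth table of an example function.
def boolean_difference(f, i, n):
    table = []
    for m in range(2 ** n):
        x = [(m >> b) & 1 for b in range(n)]
        x0, x1 = x[:], x[:]
        x0[i], x1[i] = 0, 1
        table.append(f(x0) ^ f(x1))
    return table

f = lambda x: (x[0] & x[1]) | x[2]
print(boolean_difference(f, 0, 3))   # 1 exactly where x1=1 and x2=0
```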
NASA Technical Reports Server (NTRS)
Szallasi, Zoltan; Liang, Shoudan
2000-01-01
In this paper we show how Boolean genetic networks could be used to address complex problems in cancer biology. First, we describe a general strategy to generate Boolean genetic networks that incorporate all relevant biochemical and physiological parameters and cover all of their regulatory interactions in a deterministic manner. Second, we introduce 'realistic Boolean genetic networks' that produce time series measurements very similar to those detected in actual biological systems. Third, we outline a series of essential questions related to cancer biology and cancer therapy that could be addressed by the use of 'realistic Boolean genetic network' modeling.
Automatic query formulations in information retrieval.
Salton, G; Buckley, C; Fox, E A
1983-07-01
Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice.
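A toy of the frequency-driven construction (document frequencies and the grouping rule below are assumptions, not the paper's exact method): rare, discriminating terms from the natural language request are ANDed, while the common ones are collected into an OR group.

```python
# Sketch: automatic Boolean query formulation from a natural language
# request, driven by (assumed) collection frequencies.
request = "automatic boolean query construction for retrieval systems"
stopwords = {"for", "the", "a", "of"}
doc_freq = {"automatic": 120, "boolean": 40, "query": 300,
            "construction": 35, "retrieval": 90, "systems": 400}

terms = [t for t in request.split() if t not in stopwords and t in doc_freq]
rare = [t for t in terms if doc_freq[t] < 100]     # discriminating terms
common = [t for t in terms if doc_freq[t] >= 100]

query = " AND ".join(rare)
if common:
    query += " AND (" + " OR ".join(common) + ")"
print(query)
```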
An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
Cabessa, Jérémie; Villa, Alessandro E. P.
2014-01-01
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866
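The attractor dynamics such measurements classify can be enumerated directly for small networks by iterating the synchronous update from every initial state (assumed example rules):

```python
# Sketch: enumerate all attractor cycles of a small Boolean network
# by exhaustive iteration from every initial state.
def step(s):
    a, b, c = s
    return (b, a ^ c, a & b)

attractors = set()
for m in range(8):
    s = ((m >> 2) & 1, (m >> 1) & 1, m & 1)
    seen = {}
    while s not in seen:
        seen[s] = len(seen)
        s = step(s)
    cycle_start = seen[s]
    cycle = [x for x, i in sorted(seen.items(), key=lambda kv: kv[1])
             if i >= cycle_start]
    attractors.add(tuple(sorted(cycle)))
print(attractors)
```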
Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter
2017-01-01
The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models.
Investigating Cell Criticality
NASA Astrophysics Data System (ADS)
Serra, R.; Villani, M.; Damiani, C.; Graudenzi, A.; Ingrami, P.; Colacci, A.
Random Boolean networks provide a way to give a precise meaning to the notion that living beings are in a critical state. Some phenomena which are observed in real biological systems (distribution of "avalanches" in gene knock-out experiments) can be modeled using random Boolean networks, and the results can be analytically proven to depend upon the Derrida parameter, which also determines whether the network is critical. By comparing observed and simulated data one can then draw inferences about the criticality of biological cells, although with some care because of the limited number of experimental observations. The relationship between the criticality of a single network and that of a set of interacting networks, which simulate a tissue or a bacterial colony, is also analyzed by computer simulations.
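A sketch of how the Derrida parameter can be estimated by simulation, with assumed K and unbiased truth tables: follow how a one-bit perturbation spreads in one synchronous step; a value near 1 marks the critical regime (for unbiased K = 2 networks the expected value is exactly 1).

```python
# Sketch: estimate the Derrida parameter of a random Boolean network
# from the one-step spread of a single-bit perturbation.
import random

random.seed(7)
N, K, samples = 100, 2, 2000
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(N)]

total = 0
for _ in range(samples):
    s1 = [random.randint(0, 1) for _ in range(N)]
    s2 = s1[:]
    s2[random.randrange(N)] ^= 1          # single-bit perturbation
    total += sum(u != v for u, v in zip(step(s1), step(s2)))
print("Derrida parameter ≈", total / samples)   # ~1 is critical for K=2
```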
A Note on a Sampling Theorem for Functions over GF(q)n Domain
NASA Astrophysics Data System (ADS)
Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi
In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has been obtained. Here, it is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n domain) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code. However, the relation between the parity check matrix of a linear code and any distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and any distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
Proposed method to construct Boolean functions with maximum possible annihilator immunity
NASA Astrophysics Data System (ADS)
Goyal, Rajni; Panigrahi, Anupama; Bansal, Rohit
2017-07-01
Nonlinearity and algebraic (annihilator) immunity are two core properties of a Boolean function, because optimal values of annihilator immunity and nonlinearity are required to resist fast algebraic attacks and differential cryptanalysis, respectively. For a secure cipher system, Boolean functions (S-boxes) should resist as many attacks as possible, which is possible if a Boolean function has an optimal trade-off among its properties. Before constructing Boolean functions, we fixed the criteria of our constructions based on these properties. In the present work, our construction is based on annihilator immunity and nonlinearity. Keeping the above facts in mind, we have developed a multi-objective evolutionary approach based on NSGA-II and obtained the optimum value of annihilator immunity with a good bound on nonlinearity. We have constructed balanced Boolean functions having the best trade-off among balancedness, annihilator immunity, and nonlinearity for 5, 6, and 7 variables by the proposed method.
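A heavily simplified stand-in for the search (not NSGA-II itself, and with annihilator immunity omitted because computing it requires algebraic machinery): evolve truth tables under Pareto-style acceptance over balancedness and Walsh-Hadamard nonlinearity.

```python
# Sketch: Pareto-style mutation search over Boolean-function truth tables.
# A toy substitute for NSGA-II; objectives are imbalance and nonlinearity.
import random

random.seed(8)
n = 5
size = 2 ** n

def nonlinearity(tt):
    w = [1 - 2 * b for b in tt]           # truth table -> +1/-1 signs
    h = 1
    while h < size:                       # in-place Walsh-Hadamard transform
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return size // 2 - max(abs(v) for v in w) // 2

def objectives(tt):   # both minimized: imbalance and negated nonlinearity
    return (abs(sum(tt) - size // 2), -nonlinearity(tt))

def dominates(p, q):  # Pareto dominance: no worse anywhere, better somewhere
    return all(a <= b for a, b in zip(p, q)) and p != q

tt = [random.randint(0, 1) for _ in range(size)]
for _ in range(5000):
    cand = tt[:]
    cand[random.randrange(size)] ^= 1     # single-bit mutation
    if not dominates(objectives(tt), objectives(cand)):
        tt = cand                         # accept unless strictly worse
print("imbalance:", objectives(tt)[0], "nonlinearity:", nonlinearity(tt))
```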
When Is Rapid On-Site Evaluation Cost-Effective for Fine-Needle Aspiration Biopsy?
Schmidt, Robert L.; Walker, Brandon S.; Cohen, Michael B.
2015-01-01
Background: Rapid on-site evaluation (ROSE) can improve adequacy rates of fine-needle aspiration biopsy (FNAB) but increases operational costs. The performance of ROSE relative to fixed sampling depends on many factors, and it is not clear when ROSE is less costly than sampling with a fixed number of needle passes. The objective of this study was to determine the conditions under which ROSE is less costly than fixed sampling. Methods: Cost comparison of sampling with and without ROSE using mathematical modeling. Models were based on a societal perspective and used a mechanistic, micro-costing approach. Sampling policies (ROSE, fixed) were compared using the difference in total expected costs per case. Scenarios were based on procedure complexity (palpation-guided or image-guided), adequacy rates (low, high), and sampling protocols (stopping criteria for ROSE and fixed sampling). One-way, probabilistic, and scenario-based sensitivity analyses were performed to determine which variables had the greatest influence on the cost difference. Results: ROSE is favored relative to fixed sampling under the following conditions: (1) the cytologist is accurate, (2) the total variable cost ($/hr) is low, (3) fixed costs ($/procedure) are high, (4) the setup time is long, (5) the time between needle passes for ROSE is low, (6) the per-pass adequacy rate is low, and (7) ROSE stops after observing one adequate sample. The model is most sensitive to variation in the fixed cost, the per-pass adequacy rate, and the time per needle pass with ROSE. Conclusions: Mathematical modeling can be used to predict the difference in cost between sampling with and without ROSE. PMID:26317785
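The flavor of the expected-cost comparison can be conveyed by a toy model (our construction; all dollar figures, times, and adequacy rates below are invented placeholders, not the study's calibrated inputs):

```python
# Toy per-case cost comparison of ROSE vs. fixed sampling (illustrative only).

def cost_fixed(n_passes, fixed_cost, var_cost_per_hr, setup_min, min_per_pass):
    """Expected cost of always taking a fixed number of needle passes."""
    hours = (setup_min + n_passes * min_per_pass) / 60.0
    return fixed_cost + var_cost_per_hr * hours

def cost_rose(p_adequate, fixed_cost, var_cost_per_hr, setup_min,
              min_per_pass, max_passes):
    """Expected cost when ROSE stops after the first adequate pass: the
    number of passes is geometric, truncated at max_passes."""
    q = 1.0 - p_adequate
    exp_passes = (sum(k * p_adequate * q ** (k - 1)
                      for k in range(1, max_passes))
                  + max_passes * q ** (max_passes - 1))
    hours = (setup_min + exp_passes * min_per_pass) / 60.0
    return fixed_cost + var_cost_per_hr * hours

# Note: in practice ROSE's time per pass is usually longer (the evaluation
# itself takes time), which is one of the trade-offs the study quantifies.
print(cost_fixed(5, fixed_cost=250.0, var_cost_per_hr=400.0,
                 setup_min=20.0, min_per_pass=6.0))
print(cost_rose(0.6, fixed_cost=250.0, var_cost_per_hr=400.0,
                setup_min=20.0, min_per_pass=6.0, max_passes=8))
```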
INM Integrated Noise Model Version 2. Programmer’s Guide
1979-09-01
cost, turnaround time, and system-dependent limitations. 3.2 CONVERSION PROBLEMS. The conversion-problem items (Item No., Description, Category) are: 1. BLOCK DATA Initialization (IBM Restricted); 2. Boolean Operations (Differences); 3. Call Statement Parameters (Extensions); 4. Data Initialization (IBM Restricted); 5. ENTRY (Differences); 6. EQUIVALENCE (Machine Dependent); 7. Format: A (CDC Extension); 8. Hollerith Strings (IBM Restricted); 9. Hollerith Variables (IBM Restricted); 10. Identifier Names (CDC Extension).
Stationary and structural control in gene regulatory networks: basic concepts
NASA Astrophysics Data System (ADS)
Dougherty, Edward R.; Pal, Ranadip; Qian, Xiaoning; Bittner, Michael L.; Datta, Aniruddha
2010-01-01
A major reason for constructing gene regulatory networks is to use them as models for determining therapeutic intervention strategies by deriving ways of altering their long-run dynamics in such a way as to reduce the likelihood of entering undesirable states. In general, two paradigms have been taken for gene network intervention: (1) stationary external control is based on optimally altering the status of a control gene (or genes) over time to drive network dynamics; and (2) structural intervention involves an optimal one-time change of the network structure (wiring) to beneficially alter the long-run behaviour of the network. These intervention approaches have mainly been developed within the context of the probabilistic Boolean network model for gene regulation. This article reviews both types of intervention and applies them to reducing the metastatic competence of cells via intervention in a melanoma-related network.
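To make the notion of altering long-run behaviour concrete (a toy sketch under strong simplifying assumptions; the 4-state transition matrix and the form of the intervention are invented, not taken from the article), one can compare the stationary distribution of a small PBN-style Markov chain before and after a one-time structural change:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic Markov chain with row-stochastic
    transition matrix P, via the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Toy 4-state chain over the states of a 2-gene network (00, 01, 10, 11).
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.1, 0.2],
              [0.3, 0.1, 0.4, 0.2],
              [0.1, 0.1, 0.2, 0.6]])

# Structural intervention, idealized here as a one-time rewiring that
# redirects half of the probability mass entering the undesirable state 11
# (index 3) to state 00 (index 0); rows remain stochastic.
P_altered = P.copy()
P_altered[:, 0] += P_altered[:, 3] * 0.5
P_altered[:, 3] *= 0.5

print(stationary_distribution(P))          # long-run mass on state 11 before
print(stationary_distribution(P_altered))  # ... and after the intervention
```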
ERIC Educational Resources Information Center
Hildreth, Charles R.
1983-01-01
This editorial addresses the issue of whether or not to provide free-text, keyword/Boolean search capabilities in the information retrieval mechanisms of online public access catalogs and discusses online catalogs developed prior to 1980--keyword searching, phrase searching, and precoordination and postcoordination. (EJS)
Minimum energy control and optimal-satisfactory control of Boolean control network
NASA Astrophysics Data System (ADS)
Li, Fangfei; Lu, Xiwen
2013-12-01
In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates the minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, designs for minimum energy, constrained minimum energy, and optimal-satisfactory control of Boolean control networks are given. A numerical example is presented to illustrate the efficiency of the obtained results.
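A sketch of the shortest-path step is given below (Floyd-Warshall over the 2^n-state transition graph); the two-node network, its update rules, and the unit energy per actuated control are assumptions for illustration, and the Letter's semi-tensor product machinery is not reproduced:

```python
import itertools

INF = float("inf")

def floyd_warshall(cost):
    """All-pairs minimum-cost paths; cost[i][j] is the cheapest one-step
    transition energy from state i to state j (INF if unreachable)."""
    n = len(cost)
    d = [row[:] for row in cost]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Assumed toy Boolean control network with state x = (x1, x2), control u:
#   x1' = u OR x2,  x2' = x1.
def step(x1, x2, u):
    return int(u or x2), x1

states = list(itertools.product((0, 1), repeat=2))
idx = {s: i for i, s in enumerate(states)}
cost = [[INF] * 4 for _ in range(4)]
for (x1, x2), u in itertools.product(states, (0, 1)):
    i, j = idx[(x1, x2)], idx[step(x1, x2, u)]
    cost[i][j] = min(cost[i][j], float(u))  # applying control u = 1 costs 1

d = floyd_warshall(cost)
print(d[idx[(0, 0)]][idx[(1, 1)]])  # minimum energy from state 00 to 11 (2.0)
```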
Griffin: A Tool for Symbolic Inference of Synchronous Boolean Molecular Networks.
Muñoz, Stalin; Carrillo, Miguel; Azpeitia, Eugenio; Rosenblueth, David A
2018-01-01
Boolean networks are important models of biochemical systems, located at the high end of the abstraction spectrum. A number of Boolean gene networks have been inferred following essentially the same method. Such a method first considers experimental data for a typically underdetermined "regulation" graph. Next, Boolean networks are inferred by using biological constraints to narrow the search space, such as a desired set of (fixed-point or cyclic) attractors. We describe Griffin, a computer tool enhancing this method. Griffin incorporates a number of well-established algorithms, such as Dubrova and Teslenko's algorithm for finding attractors in synchronous Boolean networks. In addition, a formal definition of regulation allows Griffin to employ "symbolic" techniques, able to represent both large sets of network states and Boolean constraints. We observe that when the set of attractors is required to be an exact set, prohibiting additional attractors, a naive Boolean coding of this constraint may be unfeasible. Such cases may be intractable even with symbolic methods, as the number of Boolean constraints may be astronomically large. To overcome this problem, we employ an Artificial Intelligence technique known as "clause learning", considerably increasing Griffin's scalability. Without clause learning, only toy examples prohibiting additional attractors are solvable: only one out of seven queries reported here is answered. With clause learning, by contrast, all seven queries are answered. We illustrate Griffin with three case studies drawn from the Arabidopsis thaliana literature. Griffin is available at: http://turing.iimas.unam.mx/griffin.
On the Run-Time Optimization of the Boolean Logic of a Program.
ERIC Educational Resources Information Center
Cadolino, C.; Guazzo, M.
1982-01-01
Considers problem of optimal scheduling of Boolean expression (each Boolean variable represents binary outcome of program module) on single-processor system. Optimization discussed consists of finding operand arrangement that minimizes average execution costs representing consumption of resources (elapsed time, main memory, number of…
Boolean integral calculus for digital systems
NASA Technical Reports Server (NTRS)
Tucker, J. H.; Tapia, M. A.; Bennett, A. W.
1985-01-01
The concept of Boolean integration is introduced and developed. When the changes in a desired function are specified in terms of changes in its arguments, then ways of 'integrating' (i.e., realizing) the function, if it exists, are presented. Boolean integral calculus has applications in design of logic circuits.
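The Boolean difference (derivative) underlying this calculus has a standard definition, df/dx_i = f|_{x_i=0} XOR f|_{x_i=1}; the sketch below (our illustration, with a brute-force search standing in for a constructive realization) computes differences and enumerates the functions that "integrate" a prescribed one:

```python
import itertools

def boolean_difference(f, n, i):
    """Boolean difference df/dx_i as a truth table over all n-tuples:
    g(x) = f(x with x_i = 0) XOR f(x with x_i = 1)."""
    table = {}
    for x in itertools.product((0, 1), repeat=n):
        x0 = x[:i] + (0,) + x[i + 1:]
        x1 = x[:i] + (1,) + x[i + 1:]
        table[x] = f(*x0) ^ f(*x1)
    return table

def integrate(target_diff, n, i):
    """Brute-force 'Boolean integration': all functions f whose difference
    with respect to x_i equals the given table (possibly none)."""
    xs = list(itertools.product((0, 1), repeat=n))
    sols = []
    for bits in itertools.product((0, 1), repeat=2 ** n):
        f = dict(zip(xs, bits))
        if boolean_difference(lambda *x: f[x], n, i) == target_diff:
            sols.append(f)
    return sols

d = boolean_difference(lambda a, b: a & b, 2, 0)  # d(ab)/da = b
print(d)
print(len(integrate(d, 2, 0)))  # 4 functions share this difference
```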
Loke, Desmond; Skelton, Jonathan M; Chong, Tow-Chong; Elliott, Stephen R
2016-12-21
One of the requirements for achieving faster CMOS electronics is to mitigate the unacceptably large chip areas required to steer heat away from or, more recently, toward the critical nodes of state-of-the-art devices. Thermal-guiding (TG) structures can efficiently direct heat by "meta-materials" engineering; however, some key aspects of the behavior of these systems are not fully understood. Here, we demonstrate control of the thermal-diffusion properties of TG structures by using nanometer-scale, CMOS-integrable, graphene-on-silica stacked materials through finite-element-method simulations. We show that it is possible to implement novel, controllable, thermally based Boolean-logic and spike-timing-dependent plasticity operations for advanced (neuromorphic) computing applications using such thermal-guide architectures.
ERIC Educational Resources Information Center
Miller-Whitehead, Marie
Keyword and text string searches of online library catalogs often provide different results according to library and database used and depending upon how books and journals are indexed. For this reason, online databases such as ERIC often provide tutorials and recommendations for searching their site, such as how to use Boolean search strategies.…
Bayesian probabilistic population projections for all countries.
Raftery, Adrian E; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K
2012-08-28
Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries of different demographic stages, continents and sizes. The method is validated by an out-of-sample experiment in which data from 1950-1990 are used for estimation, and applied to predict 1990-2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20-64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades.
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. We discuss how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with a feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
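The stated equivalence between the logic AND operation and a bimolecular process can be illustrated with a few lines of kinetics (the rate constant, integration scheme, and input levels are assumptions, and this is not the DNAzyme system itself): the product concentration of A + B -> C stays near zero unless both inputs are present.

```python
def bimolecular_and(a0, b0, k=1.0, t_end=10.0, dt=1e-3):
    """A + B -> C with rate k*[A]*[B], integrated by forward Euler; the
    final [C] acts as a continuous AND of the two input concentrations."""
    a, b, c = a0, b0, 0.0
    for _ in range(int(t_end / dt)):
        rate = k * a * b
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c

for a0, b0 in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(a0, b0, round(bimolecular_and(a0, b0), 3))  # only (1, 1) yields ~0.9
```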
Statistical Learning of Probabilistic Nonadjacent Dependencies by Multiple-Cue Integration
ERIC Educational Resources Information Center
van den Bos, Esther; Christiansen, Morten H.; Misyak, Jennifer B.
2012-01-01
Previous studies have indicated that dependencies between nonadjacent elements can be acquired by statistical learning when each element predicts only one other element (deterministic dependencies). The present study investigates statistical learning of probabilistic nonadjacent dependencies, in which each element predicts several other elements…
Boolean Classes and Qualitative Inquiry. WCER Working Paper No. 2006-3
ERIC Educational Resources Information Center
Nathan, Mitchell J.; Jackson, Kristi
2006-01-01
The prominent role of Boolean classes in qualitative data analysis software is viewed by some as an encroachment of logical positivism on qualitative research methodology. The authors articulate an embodiment perspective, in which Boolean classes are viewed as conceptual metaphors for apprehending and manipulating data, concepts, and categories in…
Gene network analysis: from heart development to cardiac therapy.
Ferrazzi, Fulvia; Bellazzi, Riccardo; Engel, Felix B
2015-03-01
Networks offer a flexible framework to represent and analyse the complex interactions between components of cellular systems. In particular gene networks inferred from expression data can support the identification of novel hypotheses on regulatory processes. In this review we focus on the use of gene network analysis in the study of heart development. Understanding heart development will promote the elucidation of the aetiology of congenital heart disease and thus possibly improve diagnostics. Moreover, it will help to establish cardiac therapies. For example, understanding cardiac differentiation during development will help to guide stem cell differentiation required for cardiac tissue engineering or to enhance endogenous repair mechanisms. We introduce different methodological frameworks to infer networks from expression data such as Boolean and Bayesian networks. Then we present currently available temporal expression data in heart development and discuss the use of network-based approaches in published studies. Collectively, our literature-based analysis indicates that gene network analysis constitutes a promising opportunity to infer therapy-relevant regulatory processes in heart development. However, the use of network-based approaches has so far been limited by the small amount of samples in available datasets. Thus, we propose to acquire high-resolution temporal expression data to improve the mathematical descriptions of regulatory processes obtained with gene network inference methodologies. Especially probabilistic methods that accommodate the intrinsic variability of biological systems have the potential to contribute to a deeper understanding of heart development.
Algebraic model checking for Boolean gene regulatory networks.
Tran, Quoc-Nam
2011-01-01
We present a computational method in which modular and Groebner basis (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BNs). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method never grows, resulting in a significant improvement in running time and memory consumption. We also show how calculations in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising: our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
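For readers unfamiliar with the encoding, fixed points of a toy Boolean network can be cast as a polynomial system over GF(2) and handed to an off-the-shelf Groebner basis routine (a SymPy sketch of the problem encoding only; the toy network is invented, and this is not the authors' modular implementation):

```python
from sympy import symbols, groebner

# Toy Boolean network: x' = x AND y, y' = x OR y.
# Over GF(2): AND -> x*y, OR -> x + y + x*y. Fixed points satisfy
# f_i(x, y) + x_i = 0 together with the field equations v**2 + v = 0.
x, y = symbols('x y')
system = [
    x*y + x,            # x' + x = 0
    (x + y + x*y) + y,  # y' + y = 0
    x**2 + x,           # field equation for x
    y**2 + y,           # field equation for y
]
G = groebner(system, x, y, modulus=2, order='lex')
print(G)  # the basis cuts out the fixed points (0,0), (0,1) and (1,1)
```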
[National Health and Nutrition Survey 2012: design and coverage].
Romero-Martínez, Martín; Shamah-Levy, Teresa; Franco-Núñez, Aurora; Villalpando, Salvador; Cuevas-Nasu, Lucía; Gutiérrez, Juan Pablo; Rivera-Dommarco, Juan Ángel
2013-01-01
To describe the design and population coverage of the National Health and Nutrition Survey 2012 (NHNS 2012). The design of the NHNS 2012 is reported as a probabilistic, population-based survey with multi-stage, stratified sampling, together with the sample's inferential properties, the logistical procedures, and the obtained coverage. The household response rate for the NHNS 2012 was 87%, yielding complete data from 50,528 households, from which 96,031 individual interviews of respondents selected by age, as well as 14,104 interviews of ambulatory health service users, were obtained. The probabilistic design of the NHNS 2012 and its coverage make it possible to generate inferences about health and nutrition conditions, health program coverage, and access to health services. Because of the complex design, all estimations from the NHNS 2012 must use the survey design variables: weights, primary sampling units, and strata.
E-Referencer: Transforming Boolean OPACs to Web Search Engines.
ERIC Educational Resources Information Center
Khoo, Christopher S. G.; Poo, Danny C. C.; Toh, Teck-Kang; Hong, Glenn
E-Referencer is an expert intermediary system for searching library online public access catalogs (OPACs) on the World Wide Web. It is implemented as a proxy server that mediates the interaction between the user and Boolean OPACs. It transforms a Boolean OPAC into a retrieval system with many of the search capabilities of Web search engines.…
Probabilistic quantum cloning of a subset of linearly dependent states
NASA Astrophysics Data System (ADS)
Rui, Pinshu; Zhang, Wen; Liao, Yanlin; Zhang, Ziyun
2018-02-01
It is well known that a quantum state, secretly chosen from a certain set, can be probabilistically cloned with positive cloning efficiencies if and only if all the states in the set are linearly independent. In this paper, we focus on probabilistic quantum cloning of a subset of linearly dependent states. We show that a linearly independent subset of linearly dependent quantum states {|Ψ_1⟩, |Ψ_2⟩, …, |Ψ_n⟩} can be probabilistically cloned if and only if no state in the subset can be expressed as a linear superposition of the other states in the set {|Ψ_1⟩, |Ψ_2⟩, …, |Ψ_n⟩}. The optimal cloning efficiencies are also investigated.
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different management decisions. Our research results indicate that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our research also indicates that the probability of violating current water quality guidelines at specified true fecal coliform concentrations depends on the laboratory procedure used. As a result, quality-based management decisions, such as opening or closing a shellfishing area, may also depend on the laboratory procedure used.
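The MPN construction described above can be made concrete with a small maximum likelihood sketch (our illustration: the Poisson capture model is standard, but the dilution volumes and tube counts below are invented, and this is not the authors' full probabilistic model):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_estimate(volumes_ml, n_tubes, n_positive):
    """Maximum likelihood estimate of concentration (organisms/mL) from a
    serial-dilution assay. Under a Poisson model, a tube inoculated with
    volume v is non-sterile with probability 1 - exp(-c * v)."""
    volumes = np.asarray(volumes_ml, dtype=float)
    pos = np.asarray(n_positive)
    tot = np.asarray(n_tubes)

    def neg_log_lik(log_c):
        c = np.exp(log_c)
        p = np.clip(1.0 - np.exp(-c * volumes), 1e-12, 1.0 - 1e-12)
        return -np.sum(pos * np.log(p) + (tot - pos) * np.log(1.0 - p))

    res = minimize_scalar(neg_log_lik, bounds=(-10, 10), method='bounded')
    return np.exp(res.x)

# Classic 3-dilution design: 5 tubes each at 10, 1 and 0.1 mL;
# here 4, 2 and 0 positive tubes, respectively.
print(round(mpn_estimate([10, 1, 0.1], [5, 5, 5], [4, 2, 0]), 3))
```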
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S.; Muthuvalu, M. S.; Asirvadam, V. S.
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes troublesome and time-consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Second, using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than that by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
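The flavor of a PGM-style computation is to propagate signal probabilities gate by gate and compare against the fault-free circuit. The sketch below is minimal and not the Matlab tool: gate errors are modeled as output flips with probability eps, and the inverter-chain circuit is an assumed toy example.

```python
def noisy_not(p_one, eps):
    """Probabilistic gate model for an inverter whose output is flipped with
    probability eps: returns P(output = 1) given P(input = 1)."""
    p_correct = 1.0 - p_one
    return (1 - eps) * p_correct + eps * (1 - p_correct)

def chain_reliability(eps, depth, x=1):
    """Probability that a chain of `depth` noisy inverters agrees with the
    fault-free chain on the deterministic input bit x."""
    p, v = float(x), x          # noisy P(signal = 1); fault-free value
    for _ in range(depth):
        p = noisy_not(p, eps)
        v = 1 - v
    return p if v == 1 else 1.0 - p

for eps in (0.0, 0.01, 0.05):
    print(eps, round(chain_reliability(eps, depth=4), 4))
```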
Use of a probabilistic neural network to reduce costs of selecting construction rock
Singer, Donald A.; Bliss, James D.
2003-01-01
Rocks used as construction aggregate in temperate climates deteriorate to differing degrees because of repeated freezing and thawing. The magnitude of the deterioration depends on the rock's properties. Aggregate, including crushed carbonate rock, is required to have minimum geotechnical qualities before it can be used in asphalt and concrete. In order to reduce chances of premature and expensive repairs, extensive freeze-thaw tests are conducted on potential construction rocks. These tests typically involve 300 freeze-thaw cycles and can take four to five months to complete. Less time-consuming tests that (1) predict durability as well as the extended freeze-thaw test or that (2) reduce the number of rocks subject to the extended test could save considerable amounts of money. Here we use a probabilistic neural network to try to predict durability as determined by the freeze-thaw test using four rock properties measured on 843 limestone samples from the Kansas Department of Transportation. Modified freeze-thaw tests and less time-consuming specific gravity (dry), specific gravity (saturated), and modified absorption tests were conducted on each sample. Durability factors of 95 or more as determined from the extensive freeze-thaw tests are viewed as acceptable; rocks with values below 95 are rejected. If only the modified freeze-thaw test is used to predict which rocks are acceptable, about 45% are misclassified. When 421 randomly selected samples and all four standardized and scaled variables were used to train a probabilistic neural network, the rate of misclassification of 422 independent validation samples dropped to 28%. The network was trained so that each class (group) and each variable had its own coefficient (sigma). In an attempt to reduce errors further, an additional class was added to the training data to predict durability values greater than 84 and less than 98, resulting in only 11% of the samples misclassified. About 43% of the test data was classed by the neural net into the middle group; these rocks should be subject to full freeze-thaw tests. Thus, use of the probabilistic neural network would mean that the extended test would only need to be applied to 43% of the samples, and 11% of the rocks classed as acceptable would fail early.
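A probabilistic neural network is essentially a Parzen-window classifier. The sketch below is our construction on synthetic stand-in data, with a single shared smoothing parameter sigma, whereas the study trained one sigma per class and per variable:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.1):
    """Probabilistic neural network (Parzen-window) classifier: each class
    score is the mean Gaussian kernel between x and that class's samples."""
    scores = {}
    for label in np.unique(train_y):
        X_c = train_X[train_y == label]
        d2 = np.sum((X_c - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Two synthetic 4-variable clusters standing in for durable / non-durable rock.
X = np.vstack([rng.normal(0.0, 0.3, (50, 4)), rng.normal(1.0, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(np.full(4, 0.9), X, y))  # expected: class 1
```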
BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool
ERIC Educational Resources Information Center
Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.
2006-01-01
BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
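Best possible monotone bounds can be computed by brute force on small truth tables. In the sketch below (our formulation of the bounds, stated without proof), the tightest monotone upper bound is h(x) = max over y <= x of f(y) and the tightest monotone lower bound is g(x) = min over y >= x of f(y):

```python
import itertools

def leq(a, b):
    """Componentwise order on 0/1 tuples: a <= b."""
    return all(x <= y for x, y in zip(a, b))

def monotone_bounds(f, n):
    """Tightest monotone increasing lower/upper bounds of a Boolean function
    f given as a dict over all 0/1 n-tuples."""
    pts = list(itertools.product((0, 1), repeat=n))
    g = {x: min(f[y] for y in pts if leq(x, y)) for x in pts}
    h = {x: max(f[y] for y in pts if leq(y, x)) for x in pts}
    return g, h

# A noncoherent example: f = x1 XOR x2 (not monotone).
pts = list(itertools.product((0, 1), repeat=2))
f = {x: x[0] ^ x[1] for x in pts}
g, h = monotone_bounds(f, 2)
print(g)  # constant 0: the only monotone function below XOR
print(h)  # 0 at (0,0) and 1 elsewhere: the OR function
```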
Automatic Screening for Perturbations in Boolean Networks.
Schwab, Julian D; Kestler, Hans A
2018-01-01
A common approach to address biological questions in systems biology is to simulate regulatory mechanisms using dynamic models. Among others, Boolean networks can be used to model the dynamics of regulatory processes in biology. Boolean network models allow simulating the qualitative behavior of the modeled processes. A central objective in the simulation of Boolean networks is the computation of their long-term behavior, the so-called attractors. These attractors are of special interest as they can often be linked to biologically relevant behaviors. Changing internal and external conditions can influence the long-term behavior of the Boolean network model. Perturbation of a Boolean network by stripping a component of the system or simulating a surplus of another element can lead to different attractors. Apparently, the number of possible perturbations and combinations of perturbations increases exponentially with the size of the network. Manually screening a set of possible components for combinations that have a desired effect on the long-term behavior can be very time consuming if not impossible. We developed a method to automatically screen for perturbations that lead to a user-specified change in the network's functioning. This method is implemented in the visual simulation framework ViSiBool utilizing satisfiability (SAT) solvers for fast exhaustive attractor search.
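A brute-force version of the screening idea (a toy sketch; ViSiBool's SAT-based search is not reproduced, and the three-node network and knockout choice are invented) enumerates attractors with and without pinning a node to 0:

```python
import itertools

def attractors(update, n, knockout=None):
    """Exhaustively enumerate attractors of a synchronous Boolean network.
    `update` maps a state tuple to the next state; `knockout` pins a node
    to 0 after every update (a simple perturbation model)."""
    def step(s):
        s = update(s)
        if knockout is not None:
            s = s[:knockout] + (0,) + s[knockout + 1:]
        return s

    found = set()
    for s in itertools.product((0, 1), repeat=n):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]
        cycle = tuple(sorted(k for k, v in seen.items() if v >= cycle_start))
        found.add(cycle)
    return found

# Toy 3-node network: x1' = x3, x2' = x1 AND x3, x3' = NOT x2.
f = lambda s: (s[2], s[0] & s[2], 1 - s[1])
print(attractors(f, 3))              # wild-type attractors
print(attractors(f, 3, knockout=1))  # attractors when node x2 is knocked out
```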
A look-ahead probabilistic contingency analysis framework incorporating smart sampling techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
2016-07-18
This paper describes a framework for incorporating smart sampling techniques in a probabilistic look-ahead contingency analysis application. The predictive probabilistic contingency analysis helps reflect the impact of uncertainties caused by variable generation and load on potential violations of transmission limits.
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose show average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
Markovian robots: Minimal navigation strategies for active particles
NASA Astrophysics Data System (ADS)
Nava, Luis Gómez; Großmann, Robert; Peruani, Fernando
2018-04-01
We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain, ensuring the absence of fixed points in the dynamics, with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite the strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements in the design of NCS motifs and transition rates to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas in practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that it is possible to navigate through complex information landscapes with such a simple NCS whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pichara, Karim; Protopapas, Pavlos
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines
1989-09-01
Devadas, Srinivas; Keutzer, Kurt
A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean
NASA Astrophysics Data System (ADS)
Battaglia, Gianna; Steinacher, Marco; Joos, Fortunat
2016-05-01
The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option to have saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72-1.05) Gt C yr-1, that is within the lower half of previously published estimates (0.4-1.8 Gt C yr-1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific, the northern Pacific and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained.
Jimena: efficient computing and system state identification for genetic regulatory networks.
Karl, Stefan; Dandekar, Thomas
2013-10-11
Boolean networks capture switching behavior of many naturally occurring regulatory networks. For semi-quantitative modeling, interpolation between ON and OFF states is necessary. The high-degree polynomial interpolation of Boolean genetic regulatory networks (GRNs) in cellular processes such as apoptosis or proliferation allows for the modeling of a wider range of node interactions than continuous activator-inhibitor models, but suffers from scaling problems for networks which contain nodes with more than ~10 inputs. Many GRNs from literature or new gene expression experiments exceed those limitations, and a new approach was developed. (i) As a part of our new GRN simulation framework Jimena we introduce and set up Boolean-tree-based data structures; (ii) corresponding algorithms greatly expedite the calculation of the polynomial interpolation in almost all cases, thereby expanding the range of networks which can be simulated by this model in reasonable time. (iii) Stable states for discrete models are efficiently counted and identified using binary decision diagrams. As an application example, we show how system states can now be sampled efficiently in small up to large scale hormone disease networks (Arabidopsis thaliana development and immunity, pathogen Pseudomonas syringae and modulation by cytokinins and plant hormones). Jimena simulates currently available GRNs about 10-100 times faster than the previous implementation of the polynomial interpolation model, and even greater gains are achieved for large scale-free networks. This speed-up also facilitates a much more thorough sampling of continuous state spaces, which may lead to the identification of new stable states. Mutants of large networks can be constructed and analyzed very quickly, enabling new insights into network robustness and behavior.
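The interpolation between ON and OFF states can be illustrated with the standard multilinear extension of a Boolean function (a generic sketch, not Jimena's Boolean-tree algorithm): evaluation sums over all 2^n corners of the cube, which is exactly the scaling bottleneck for nodes with many inputs.

```python
import itertools

def multilinear_extension(f, n):
    """Continuous interpolation F: [0,1]^n -> [0,1] of a Boolean function f,
    F(x) = sum over corners c of f(c) * prod_i x_i^{c_i} * (1-x_i)^{1-c_i}.
    On 0/1 inputs F agrees with f; in between it interpolates smoothly."""
    corners = list(itertools.product((0, 1), repeat=n))

    def F(*x):
        total = 0.0
        for c in corners:  # cost grows as 2^n: the in-degree bottleneck
            w = 1.0
            for xi, ci in zip(x, c):
                w *= xi if ci else (1.0 - xi)
            total += f(*c) * w
        return total

    return F

AND = multilinear_extension(lambda a, b: a & b, 2)  # extension is x*y
print(AND(1, 1), AND(0.5, 0.5), AND(0.9, 0.2))      # 1.0, 0.25, 0.18
```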
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code developments is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates small and medium events, the latter takes into account only large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules in a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and a sparse generalized algebraic normal form of the variables from the observation data, respectively, and we evaluate their performance numerically.
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiao-Wen; Jupp, David L. B.
1991-01-01
The bidirectional radiance or reflectance of a forest or woodland can be modeled using principles of geometric optics and Boolean models for random sets in three-dimensional space. This model may be defined at two levels. At the tree level, the scene includes four components: sunlit and shadowed canopy, and sunlit and shadowed background. The reflectance of the scene is modeled as the sum of the reflectances of the individual components, weighted by their areal proportions in the field of view. At the leaf level, the canopy envelope is an assemblage of leaves, and thus the reflectance is a function of the areal proportions of sunlit and shadowed leaf, and sunlit and shadowed background. Because the proportions of scene components depend upon the directions of irradiance and exitance, the model accounts for the hotspot that is well known in leaf and tree canopies.
Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry
2014-01-01
Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system, and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.
Autonomous Modeling, Statistical Complexity and Semi-annealed Treatment of Boolean Networks
NASA Astrophysics Data System (ADS)
Gong, Xinwei
This dissertation presents three studies on Boolean networks. Boolean networks are a class of mathematical systems consisting of interacting elements with binary state variables. Each element is a node with a Boolean logic gate, and the presence of interactions between any two nodes is represented by directed links. Boolean networks that implement the logic structures of real systems are studied as coarse-grained models of the real systems. Large random Boolean networks are studied with mean field approximations and used to provide a baseline of possible behaviors of large real systems. This dissertation presents one study of the former type, concerning the stable oscillation of a yeast cell-cycle oscillator, and two studies of the latter type, respectively concerning the statistical complexity of large random Boolean networks and an extension of traditional mean field techniques that accounts for the presence of short loops. In the cell-cycle oscillator study, a novel autonomous update scheme is introduced to study the stability of oscillations in small networks. A motif that corrects pulse-growing perturbations and a motif that grows pulses are identified. A combination of the two motifs is capable of sustaining stable oscillations. Examining a Boolean model of the yeast cell-cycle oscillator using an autonomous update scheme yields evidence that it is endowed with such a combination. Random Boolean networks are classified as ordered, critical or disordered based on their response to small perturbations. In the second study, random Boolean networks are taken as prototypical cases for the evaluation of two measures of complexity based on a criterion for optimal statistical prediction. One measure, defined for homogeneous systems, does not distinguish between the static spatial inhomogeneity in the ordered phase and the dynamical inhomogeneity in the disordered phase. A modification in which complexities of individual nodes are calculated yields vanishing complexity values for networks in the ordered and critical phases and for highly disordered networks, peaking somewhere in the disordered phase. Individual nodes with high complexity have, on average, a larger influence on the system dynamics. Lastly, a semi-annealed approximation that preserves the correlation between states at neighboring nodes is introduced to study a social game-inspired network model in which all links are bidirectional and all nodes have a self-input. The technique developed here is shown to yield accurate predictions of distribution of players' states, and accounts for some nontrivial collective behavior of game theoretic interest.
Barnabe, Cheryl; Thanh, Nguyen Xuan; Ohinmaa, Arto; Homik, Joanne; Barr, Susan G; Martin, Liam; Maksymowych, Walter P
2014-08-01
Sustained remission in rheumatoid arthritis (RA) results in healthcare utilization cost savings. We evaluated the variation in estimates of savings when different definitions of remission [2011 American College of Rheumatology/European League Against Rheumatism Boolean Definition, Simplified Disease Activity Index (SDAI) ≤ 3.3, Clinical Disease Activity Index (CDAI) ≤ 2.8, and Disease Activity Score-28 (DAS28) ≤ 2.6] are applied. The annual mean healthcare service utilization costs were estimated from provincial physician billing claims, outpatient visits, and hospitalizations, with linkage to clinical data from the Alberta Biologics Pharmacosurveillance Program (ABioPharm). Cost savings in patients who had a 1-year continuous period of remission were compared to those who did not, using 4 definitions of remission. In 1086 patients, sustained remission rates were 16.1% for DAS28, 8.8% for Boolean, 5.5% for CDAI, and 4.2% for SDAI. The estimated mean annual healthcare cost savings per patient achieving remission (relative to not) were SDAI $1928 (95% CI 592, 3264), DAS28 $1676 (95% CI 987, 2365), and Boolean $1259 (95% CI 417, 2100). The annual savings by CDAI remission per patient were not significant at $423 (95% CI -1757, 2602). For patients in DAS28, Boolean, and SDAI remission, savings were seen both in costs directly related to RA and its comorbidities, and in costs for non-RA-related conditions. The magnitude of the healthcare cost savings varies according to the remission definition used in classifying patient disease status. The highest point estimate for cost savings was observed in patients attaining SDAI remission and the least with the CDAI; confidence intervals for these estimates do overlap. Future pharmacoeconomic analyses should employ all response definitions in assessing the influence of treatment.
[MESGI50 study: description of a cohort on Maturity and Satisfactory Ageing].
Corominas Barnadas, Josep María; López-Pousa, Secundino; Vilalta-Franch, Joan; Calvó-Perxas, Laia; Juvinyà Canal, Dolors; Garre-Olmo, Josep
To describe the demographic, health and socio-economic characteristics of the participants in the Study on Maturity and Satisfactory Ageing in Girona (MESGI50 study). A population-based study linked to the Survey of Health, Ageing, and Retirement in Europe (SHARE). The reference population was the inhabitants of the province of Girona (Spain) aged 50 and over. A probabilistic two-stage stratified cluster sampling according to the number of inhabitants and the degree of ageing of the population was used. Twenty-eight municipalities were randomly selected according to their type (demographically aged or young), and then stratified by population size. The response rate was 65%, with a mean of 1.7 eligible individuals per household and a final sample of 2,065 households and 3,331 participants. The design effect was 1.27. Women made up 52.9% of the sample, and the mean age was 66.9 years (SD=11.5). The self-rated health status, hand grip strength, restriction in daily life activities and depressive symptomatology increased with age, and more markedly in women. There were differences in alcohol consumption and eating patterns depending on the area of residence. The demographic, health and socio-economic characteristics during the ageing process differ depending on age group, gender, and area of residence.
Identification of probabilities.
Vitányi, Paul M B; Chater, Nick
2017-02-01
Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, that it infers a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.
A framework for the probabilistic analysis of meteotsunamis
Geist, Eric L.; ten Brink, Uri S.; Gove, Matthew D.
2014-01-01
A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling is performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using multiple synthetic catalogs, resampled from the parent parameter distributions, yield mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
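To make the Monte Carlo aggregation step concrete, here is a minimal Python sketch of a toy hazard curve: a synthetic catalog of squall-line parameters is drawn from assumed distributions, a placeholder amplitude function stands in for the numerical hydrodynamic model, and exceedance rates are annualized with an assumed Poisson rate. All parameter values and the max_amplitude function are illustrative assumptions, not values from the study.

import numpy as np

rng = np.random.default_rng(42)

rate_per_year = 4.0            # hypothetical Poisson occurrence rate of disturbances
n_catalogs, n_events = 100, 10_000

def max_amplitude(speed, pressure_drop):
    # Placeholder for the hydrodynamic model: a simple monotone function of
    # the sampled disturbance parameters, peaked near a resonant speed.
    return 0.01 * pressure_drop * np.exp(-((speed - 30.0) / 15.0) ** 2)

levels = np.linspace(0.01, 1.0, 50)          # amplitude thresholds (m)
curves = []
for _ in range(n_catalogs):                  # resampled synthetic catalogs
    speed = rng.normal(30.0, 8.0, n_events)                  # disturbance speed (m/s)
    dp = rng.lognormal(mean=0.5, sigma=0.6, size=n_events)   # pressure drop (hPa)
    amp = max_amplitude(speed, dp)
    # Annualized exceedance rate: event rate times P(amplitude > level).
    curves.append([rate_per_year * np.mean(amp > a) for a in levels])

curves = np.array(curves)
mean_curve = curves.mean(axis=0)             # mean hazard curve
q16, q84 = np.quantile(curves, [0.16, 0.84], axis=0)   # quantile hazard curves
print(mean_curve[:5], q16[:5], q84[:5])

Averaging over the resampled catalogs gives the mean curve, and the quantiles express the spread across catalogs, mirroring the mean and quantile hazard curves described above.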
NASA Technical Reports Server (NTRS)
Windley, P. J.
1991-01-01
In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level Boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, are one tool for increasing computer dependability in the face of an exponentially increasing testing effort.
Nonlinear probabilistic finite element models of laminated composite shells
NASA Technical Reports Server (NTRS)
Engelstad, S. P.; Reddy, J. N.
1993-01-01
A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed, and results are presented in the form of the mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data are compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.
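The first-order second-moment (FOSM) step can be illustrated compactly: the mean response is evaluated at the input means, and the response variance is approximated from sensitivities (here by finite differences) weighted by the input variances. The response function and the input statistics below are hypothetical stand-ins, not the shell formulation of the paper.

import numpy as np

def fosm(response, mean, var, h=1e-6):
    # First-order second-moment estimate of the mean and variance of
    # response(x) for independent random inputs with given means/variances.
    mean = np.asarray(mean, float)
    g0 = response(mean)
    grad = np.empty_like(mean)
    for i in range(mean.size):            # finite-difference sensitivities
        x = mean.copy()
        x[i] += h * max(abs(mean[i]), 1.0)
        grad[i] = (response(x) - g0) / (x[i] - mean[i])
    return g0, float(np.sum(grad ** 2 * np.asarray(var, float)))

# Toy stand-in for a shell response (e.g., a displacement) as a function of
# ply thickness t, fiber volume ratio vf, and longitudinal modulus E1.
resp = lambda x: 1.0 / (x[0] * x[1] * x[2])
m, v = fosm(resp, mean=[1.0e-3, 0.6, 140e9], var=[(5e-5) ** 2, 0.02 ** 2, (7e9) ** 2])
print(m, v)

A Monte Carlo run with the same input distributions can be used to check the FOSM variance, as the paper does for its problems.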
Kapo, Katherine E; McDonough, Kathleen; Federle, Thomas; Dyer, Scott; Vamshi, Raghu
2015-06-15
Environmental exposure and associated ecological risk related to down-the-drain chemicals discharged by municipal wastewater treatment plants (WWTPs) are strongly influenced by in-stream dilution of receiving waters, which varies by geography, flow conditions and upstream wastewater inputs. The iSTREEM® model (American Cleaning Institute, Washington D.C.) was utilized to determine probabilistic distributions for no-decay and decay-based dilution factors in mean annual and low (7Q10) flow conditions. The dilution factors derived in this study are "combined" dilution factors which account for both hydrologic dilution and cumulative upstream effluent contributions that will differ depending on the rate of in-stream decay due to biodegradation, volatilization, sorption, etc. for the chemical being evaluated. The median dilution factors estimated in this study (based on various in-stream decay rates from zero decay to a 1-h half-life) for WWTP mixing zones dominated by domestic wastewater flow ranged from 132 to 609 at mean flow and 5 to 25 at low flow, while median dilution factors at drinking water intakes (mean flow) ranged from 146 to 2×10^7 depending on the in-stream decay rate. WWTPs within the iSTREEM® model were used to generate a distribution of per capita wastewater generated in the U.S. The dilution factor and per capita wastewater generation distributions developed by this work can be used to conduct probabilistic exposure assessments for down-the-drain chemicals in influent wastewater, wastewater treatment plant mixing zones and at drinking water intakes in the conterminous U.S. In addition, evaluation of types and abundance of U.S. wastewater treatment processes provided insight into treatment trends and the flow volume treated by each type of process. Moreover, removal efficiencies of chemicals can differ by treatment type. Hence, the availability of distributions for per capita wastewater production, treatment type, and dilution factors at a national level provides a series of practical and powerful tools for building probabilistic exposure models. Copyright © 2015 Elsevier B.V. All rights reserved.
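A minimal sketch of a probabilistic dilution-factor calculation, assuming lognormal stream and effluent flows and first-order in-stream decay; the distributions, travel time, and half-life are invented for illustration and are not iSTREEM® parameters.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal flow distributions (m^3/s): receiving-stream flow
# and WWTP effluent flow.
q_stream = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)
q_effluent = rng.lognormal(mean=-1.0, sigma=0.5, size=100_000)

# Simple mixing-zone dilution factor; first-order in-stream decay over a
# nominal travel time further raises the effective dilution of the chemical.
travel_time_h = 10.0
half_life_h = 1.0
surviving_fraction = np.exp(-np.log(2) / half_life_h * travel_time_h)

df_no_decay = (q_stream + q_effluent) / q_effluent
df_decay = df_no_decay / surviving_fraction     # effective dilution with decay

print(np.median(df_no_decay), np.median(df_decay))

Reporting the median and other quantiles of such a distribution, rather than a single deterministic value, is what makes the resulting dilution factors usable in probabilistic exposure assessments.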
Dragović, Ivana; Turajlić, Nina; Pilčević, Dejan; Petrović, Bratislav; Radojević, Dragan
2015-01-01
Fuzzy inference systems (FIS) enable automated assessment and reasoning in a logically consistent manner akin to the way in which humans reason. However, since no conventional fuzzy set theory is in the Boolean frame, it is proposed that Boolean consistent fuzzy logic should be used in the evaluation of rules. The main distinction of this approach is that it requires the execution of a set of structural transformations before the actual values can be introduced, which can, in certain cases, lead to different results. While a Boolean consistent FIS could be used for establishing the diagnostic criteria for any given disease, in this paper it is applied for determining the likelihood of peritonitis, as the leading complication of peritoneal dialysis (PD). Given that patients could be located far away from healthcare institutions (as peritoneal dialysis is a form of home dialysis) the proposed Boolean consistent FIS would enable patients to easily estimate the likelihood of them having peritonitis (where a high likelihood would suggest that prompt treatment is indicated), when medical experts are not close at hand. PMID:27069500
Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach.
Zhang, Jianming; Sclaroff, Stan
2016-05-01
We demonstrate the usefulness of surroundedness for eye fixation prediction by proposing a Boolean Map based Saliency model (BMS). In our formulation, an image is characterized by a set of binary images, which are generated by randomly thresholding the image's feature maps in a whitened feature space. Based on a Gestalt principle of figure-ground segregation, BMS computes a saliency map by discovering surrounded regions via topological analysis of Boolean maps. Furthermore, we draw a connection between BMS and the Minimum Barrier Distance to provide insight into why and how BMS can properly capture the surroundedness cue via Boolean maps. The strength of BMS is verified by its simplicity, efficiency and superior performance compared with 10 state-of-the-art methods on seven eye tracking benchmark datasets.
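A minimal sketch of the Boolean-map idea: each feature map is thresholded at random levels, connected regions that do not touch the image border are treated as surrounded, and the surrounded regions are accumulated into an activation map. The whitening step and BMS's exact normalization and post-processing are omitted, so this is an assumption-laden toy, not the published implementation.

import numpy as np
from scipy import ndimage

def bms_saliency(feature_maps, n_thresholds=16):
    # Threshold each feature map at random levels, keep the regions of each
    # Boolean map (and its complement) that do not touch the image border,
    # and average the resulting activation maps.
    rng = np.random.default_rng(1)
    h, w = feature_maps[0].shape
    acc = np.zeros((h, w))
    for fmap in feature_maps:
        lo, hi = fmap.min(), fmap.max()
        for t in rng.uniform(lo, hi, n_thresholds):
            for bmap in (fmap > t, fmap <= t):      # Boolean map and complement
                labels, _ = ndimage.label(bmap)
                border = np.unique(np.concatenate([labels[0], labels[-1],
                                                   labels[:, 0], labels[:, -1]]))
                surrounded = np.isin(labels, border, invert=True) & bmap
                acc += surrounded
    return acc / acc.max() if acc.max() > 0 else acc

saliency = bms_saliency([np.random.default_rng(2).random((64, 64))])
print(saliency.shape)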
Network dynamics and systems biology
NASA Astrophysics Data System (ADS)
Norrell, Johannes A.
The physics of complex systems has grown considerably as a field in recent decades, largely due to improved computational technology and increased availability of systems level data. One area in which physics is of growing relevance is molecular biology. A new field, systems biology, investigates features of biological systems as a whole, a strategy of particular importance for understanding emergent properties that result from a complex network of interactions. Due to the complicated nature of the systems under study, the physics of complex systems has a significant role to play in elucidating the collective behavior. In this dissertation, we explore three problems in the physics of complex systems, motivated in part by systems biology. The first of these concerns the applicability of Boolean models as an approximation of continuous systems. Studies of gene regulatory networks have employed both continuous and Boolean models to analyze the system dynamics, and the two have been found to produce similar results in the cases analyzed. We ask whether or not Boolean models can generically reproduce the qualitative attractor dynamics of networks of continuously valued elements. Using a combination of analytical techniques and numerical simulations, we find that continuous networks exhibit two effects---an asymmetry between on and off states, and a decaying memory of events in each element's inputs---that are absent from synchronously updated Boolean models. We show that in simple loops these effects produce exactly the attractors that one would predict with an analysis of the stability of Boolean attractors, but in slightly more complicated topologies, they can destabilize solutions that are stable in the Boolean approximation, and can stabilize new attractors. Second, we investigate ensembles of large, random networks. Of particular interest is the transition between ordered and disordered dynamics, which is well characterized in Boolean systems. Networks at the transition point, called critical, exhibit many of the features of regulatory networks, and recent studies suggest that some specific regulatory networks are indeed near-critical. We ask whether certain statistical measures of the ensemble behavior of large continuous networks are reproduced by Boolean models. We find that, in spite of the lack of correspondence between attractors observed in smaller systems, the statistical characterization given by the continuous and Boolean models show close agreement, and the transition between order and disorder known in Boolean systems can occur in continuous systems as well. One effect that is not present in Boolean systems, the failure of information to propagate down chains of elements of arbitrary length, is present in a class of continuous networks. In these systems, a modified Boolean theory that takes into account the collective effect of propagation failure on chains throughout the network gives a good description of the observed behavior. We find that propagation failure pushes the system toward greater order, resulting in a partial or complete suppression of the disordered phase. Finally, we explore a dynamical process of direct biological relevance: asymmetric cell division in A. thaliana. The long term goal is to develop a model for the process that accurately accounts for both wild type and mutant behavior. To contribute to this endeavor, we use confocal microscopy to image roots in a SHORT-ROOT inducible mutant.
We compute correlation functions between the locations of asymmetrically divided cells, and we construct stochastic models based on a few simple assumptions that accurately predict the non-zero correlations. Our result shows that intracellular processes alone cannot be responsible for the observed divisions, and that an intercell signaling mechanism could account for the measured correlations.
"Antelope": a hybrid-logic model checker for branching-time Boolean GRN analysis
2011-01-01
Background In Thomas' formalism for modeling gene regulatory networks (GRNs), branching time, where a state can have more than one possible future, plays a prominent role. By representing a certain degree of unpredictability, branching time can model several important phenomena, such as (a) asynchrony, (b) incompletely specified behavior, and (c) interaction with the environment. Introducing more than one possible future for a state, however, creates a difficulty for ordinary simulators, because infinitely many paths may appear, limiting ordinary simulators to statistical conclusions. Model checkers for branching time, by contrast, are able to prove properties in the presence of infinitely many paths. Results We have developed Antelope ("Analysis of Networks through TEmporal-LOgic sPEcifications", http://turing.iimas.unam.mx:8080/AntelopeWEB/), a model checker for analyzing and constructing Boolean GRNs. Currently, software systems for Boolean GRNs use branching time almost exclusively for asynchrony. Antelope, by contrast, also uses branching time for incompletely specified behavior and environment interaction. We show the usefulness of modeling these two phenomena in the development of a Boolean GRN of the Arabidopsis thaliana root stem cell niche. There are two obstacles to a direct approach when applying model checking to Boolean GRN analysis. First, ordinary model checkers normally only verify whether or not a given set of model states has a given property. In comparison, a model checker for Boolean GRNs is preferable if it reports the set of states having a desired property. Second, for efficiency, the expressiveness of many model checkers is limited, resulting in the inability to express some interesting properties of Boolean GRNs. Antelope tries to overcome these two drawbacks: Apart from reporting the set of all states having a given property, our model checker can express, at the expense of efficiency, some properties that ordinary model checkers (e.g., NuSMV) cannot. This additional expressiveness is achieved by employing a logic extending the standard Computation-Tree Logic (CTL) with hybrid-logic operators. Conclusions We illustrate the advantages of Antelope when (a) modeling incomplete networks and environment interaction, (b) exhibiting the set of all states having a given property, and (c) representing Boolean GRN properties with hybrid CTL. PMID:22192526
Disentangling sampling and ecological explanations underlying species-area relationships
Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Alpizar-Jara, R.; Flather, C.H.
2002-01-01
We used a probabilistic approach to address the influence of sampling artifacts on the form of species-area relationships (SARs). We developed a model in which the increase in observed species richness is a function of sampling effort exclusively. We assumed that effort depends on area sampled, and we generated species-area curves under that model. These curves can be realistic looking. We then generated SARs from avian data, comparing SARs based on counts with those based on richness estimates. We used an approach to estimation of species richness that accounts for species detection probability and, hence, for variation in sampling effort. The slopes of SARs based on counts are steeper than those of curves based on estimates of richness, indicating that the former partly reflect failure to account for species detection probability. SARs based on estimates reflect ecological processes exclusively, not sampling processes. This approach permits investigation of ecologically relevant hypotheses. The slope of SARs is not influenced by the slope of the relationship between habitat diversity and area. In situations in which not all of the species are detected during sampling sessions, approaches to estimation of species richness integrating species detection probability should be used to investigate the rate of increase in species richness with area.
Compression of Probabilistic XML Documents
NASA Astrophysics Data System (ADS)
Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice
Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
Durand, A I; Ipina, S L; Bermúdez de Castro, J M
2000-06-01
Parameters of a Middle Pleistocene human population such as the expected length of the female reproductive period (E(Y)), the expected interbirth interval (E(X)), the survival rate (τ) for females after the expected reproductive period, the rate (φ2) of women who, given that they reach first birth, do not survive to the end of the expected reproductive period, and the female infant plus juvenile mortality rate (φ1) have been assessed from a probabilistic standpoint provided that such a population were stationary. The hominid sample studied, the Sima de los Huesos (SH) cave site, Sierra de Atapuerca (Spain), is the most exhaustive human fossil sample currently available. Results suggest that the Atapuerca (SH) sample can derive from a stationary population. Further, in the case that the expected reproductive period ends between 37 and 40 yr of age, then 24 ≲ E(Y) ≲ 27 yr, E(X) = 3 yr, 0.224 ≤ τ ≤ 0.246, 0.49 …
Integrating Multiple Data Sources for Combinatorial Marker Discovery: A Study in Tumorigenesis.
Bandyopadhyay, Sanghamitra; Mallik, Saurav
2018-01-01
Identification of combinatorial markers from multiple data sources is a challenging task in bioinformatics. Here, we propose a novel computational framework for identifying significant combinatorial markers using both gene expression and methylation data. The gene expression and methylation data are integrated into a single continuous data set as well as a (post-discretized) Boolean data set based on their intrinsic (i.e., inverse) relationship. A novel combined score of methylation and expression data is introduced, computed on the integrated continuous data, for identifying an initial non-redundant set of genes. Thereafter, (maximal) frequent closed homogeneous genesets are identified using a well-known biclustering algorithm applied to the integrated Boolean data of the determined non-redundant set of genes. A novel sample-based weighted support is then proposed, calculated consecutively on the integrated Boolean data of the determined non-redundant set of genes, in order to identify the non-redundant significant genesets. The top few resulting genesets are identified as potential combinatorial markers. Since our proposed method generates a smaller number of significant non-redundant genesets than other popular methods, it is much faster than the others. Application of the proposed technique on expression and methylation data for uterine tumor or prostate carcinoma produces a set of significant combinations of markers. We expect that such combinations of markers will produce fewer false positives than individual markers.
Theory and calculus of cubical complexes
NASA Technical Reports Server (NTRS)
Perlman, M.
1973-01-01
Combinational switching networks with multiple outputs may be represented by Boolean functions. A report has been prepared which describes the derivation and use of an extraction algorithm that may be adapted to the simplification of such simultaneous Boolean functions.
ERIC Educational Resources Information Center
Bossé, Michael J.; Adu-Gyamfi, Kwaku; Chandler, Kayla; Lynch-Davis, Kathleen
2016-01-01
Dynamic mathematical environments allow users to reify mathematical concepts through multiple representations, transform mathematical relations and organically explore mathematical properties, investigate integrated mathematics, and develop conceptual understanding. Herein, we integrate Boolean algebra, the functionalities of a dynamic…
ERIC Educational Resources Information Center
Tang, Michael; David, Hyerle; Byrne, Roxanne; Tran, John
2012-01-01
This paper is a mathematical (Boolean) analysis of a set of cognitive maps called Thinking Maps[R], based on Albert Upton's semantic principles developed in his seminal works, Design for Thinking (1961) and Creative Analysis (1961). Albert Upton can be seen as a brilliant thinker who was before his time or after his time depending on the future of…
Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.
ERIC Educational Resources Information Center
Ojeda, Mario Miguel; Sahai, Hardeo
2002-01-01
Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…
Exploring Term Dependences in Probabilistic Information Retrieval Model.
ERIC Educational Resources Information Center
Cho, Bong-Hyun; Lee, Changki; Lee, Gary Geunbae
2003-01-01
Describes a theoretic process to apply Bahadur-Lazarsfeld expansion (BLE) to general probabilistic models and the state-of-the-art 2-Poisson model. Through experiments on two standard document collections, one in Korean and one in English, it is demonstrated that incorporation of term dependences using BLE significantly contributes to performance…
Sanchez, Robersy; Grau, Ricardo
2005-09-01
A Boolean structure of the genetic code where Boolean deductions have biological and physicochemical meanings was discussed in a previous paper. Now, from these Boolean deductions we propose to define the value of amino acid information in order to consider the genetic information system as a communication system and to introduce the semantic content of information ignored by the conventional information theory. In this proposal, the value of amino acid information is proportional to the molecular weight of amino acids, with a proportionality constant of about 1.96 × 10^25 bits per kg. In addition to this, for the experimental estimations of the minimum energy dissipation in genetic logic operations, we present two postulates: (1) the energy Ei (i=1,2,...,20) of amino acids in the messages conveyed by proteins is proportional to the value of information, and (2) amino acids are distributed according to their energy Ei, so the amino acid population in proteins follows a Boltzmann distribution. Specifically, in the genetic message carried by the DNA from the genomes of living organisms, we found that the minimum energy dissipation in genetic logic operations was close to kT ln(2) joules per bit.
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
Lavrova, Anastasia I; Postnikov, Eugene B; Zyubin, Andrey Yu; Babak, Svetlana V
2017-04-01
We consider two approaches to modelling the cell metabolism of 6-mercaptopurine, one of the important chemotherapy drugs used for treating acute lymphocytic leukaemia: kinetic ordinary differential equations, and Boolean networks supplied with one controlling node, which takes continuous values. We analyse their interplay with respect to taking into account ATP concentration as a key parameter of switching between different pathways. It is shown that the Boolean networks, which allow avoiding the complexity of general kinetic modelling, preserve the possibility of reproducing the principal switching mechanism.
Improving the quantum cost of reversible Boolean functions using reorder algorithm
NASA Astrophysics Data System (ADS)
Ahmed, Taghreed; Younes, Ahmed; Elsayed, Ashraf
2018-05-01
This paper introduces a novel algorithm to synthesize low-cost reversible circuits for any Boolean function with n inputs represented as a Positive Polarity Reed-Muller expansion. The proposed algorithm applies predefined rules to reorder the terms in the function so as to minimize the repeated calculation of common parts of the Boolean function, and thereby decrease the quantum cost of the reversible circuit. The paper achieves a decrease in the quantum cost and/or the circuit length, on average, when compared with relevant work in the literature.
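For reference, the Positive Polarity Reed-Muller expansion that the reorder algorithm operates on can be computed classically with the binary Moebius (XOR) transform over the truth table; the sketch below shows that transform, not the paper's reorder algorithm itself.

import numpy as np

def pprm_coefficients(truth_table):
    # Positive Polarity Reed-Muller coefficients of a Boolean function given
    # as a truth table of length 2**n (in-place binary Moebius/XOR transform).
    a = np.array(truth_table, dtype=np.uint8) & 1
    n = a.size.bit_length() - 1
    assert a.size == 1 << n
    for i in range(n):                        # butterfly over each variable
        step = 1 << i
        for j in range(0, a.size, 2 * step):
            a[j + step:j + 2 * step] ^= a[j:j + step]
    return a     # a[m] = 1 -> the monomial over the variables in bitmask m appears

# Example: f(x2, x1, x0) = x0 XOR (x1 AND x2) over all 8 inputs; the result
# has 1s at masks 0b001 (x0) and 0b110 (x1*x2).
tt = [(x & 1) ^ (((x >> 1) & 1) & ((x >> 2) & 1)) for x in range(8)]
print(pprm_coefficients(tt))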
Volumetric T-spline Construction Using Boolean Operations
2013-07-01
Acknowledgements: The work of L. Liu and Y. Zhang was supported by ONR-YIP award N00014-10-1-0698 and ONR Grant N00014-08-1-0653. T. J. R. Hughes was supported by ONR Grant N00014-08-1-0992, NSF GOALI CMI-0700807/0700204, NSF CMMI-1101007, and a SINTEF …
Probabilistic reversal learning is impaired in Parkinson's disease
Peterson, David A.; Elliott, Christian; Song, David D.; Makeig, Scott; Sejnowski, Terrence J.; Poizner, Howard
2009-01-01
In many everyday settings, the relationship between our choices and their potentially rewarding outcomes is probabilistic and dynamic. In addition, the difficulty of the choices can vary widely. Although a large body of theoretical and empirical evidence suggests that dopamine mediates rewarded learning, the influence of dopamine in probabilistic and dynamic rewarded learning remains unclear. We adapted a probabilistic rewarded learning task originally used to study firing rates of dopamine cells in primate substantia nigra pars compacta (Morris et al. 2006) for use as a reversal learning task with humans. We sought to investigate how the dopamine depletion in Parkinson's disease (PD) affects probabilistic reward learning and adaptation to a reversal in reward contingencies. Over the course of 256 trials subjects learned to choose the more favorable from among pairs of images with small or large differences in reward probabilities. During a subsequent otherwise identical reversal phase, the reward probability contingencies for the stimuli were reversed. Seventeen PD patients of mild to moderate severity were studied off their dopaminergic medications and compared to 15 age-matched controls. Compared to controls, PD patients had distinct pre- and post-reversal deficiencies depending upon the difficulty of the choices they had to learn. The patients also exhibited compromised adaptability to the reversal. A computational model of the subjects' trial-by-trial choices demonstrated that the adaptability was sensitive to the gain with which patients weighted pre-reversal feedback. Collectively, the results implicate the nigral dopaminergic system in learning to make choices in environments with probabilistic and dynamic reward contingencies. PMID:19628022
Hirabayashi, Yasuhiko; Munakata, Yasuhiko; Miyata, Masayuki; Urata, Yukitomo; Saito, Koichi; Okuno, Hiroshi; Yoshida, Masaaki; Kodera, Takao; Watanabe, Ryu; Miyamoto, Seiya; Ishii, Tomonori; Nakazawa, Shigeshi; Takemori, Hiromitsu; Ando, Takanobu; Kanno, Takashi; Komagamine, Masataka; Kato, Ichiro; Takahashi, Yuichi; Komatsuda, Atsushi; Endo, Kojiro; Murai, Chihiro; Takakubo, Yuya; Miura, Takao; Sato, Yukio; Ichikawa, Kazunobu; Konta, Tsuneo; Chiba, Noriyuki; Muryoi, Tai; Kobayashi, Hiroko; Fujii, Hiroshi; Sekiguchi, Yukio; Hatakeyama, Akira; Ogura, Ken; Sakuraba, Hirotake; Asano, Tomoyuki; Kanazawa, Hiroshi; Suzuki, Eiji; Takasaki, Satoshi; Asakura, Kenichi; Sugisaki, Kota; Suzuki, Yoko; Takagi, Michiaki; Nakayama, Takahiro; Watanabe, Hiroshi; Miura, Keiki; Mori, Yu
2016-11-01
To evaluate the clinical and structural efficacy of tocilizumab (TCZ) during its long-term administration in patients with rheumatoid arthritis (RA). In total, 693 patients with RA who started TCZ therapy were followed for 3 years. Clinical efficacy was evaluated by DAS28-ESR and Boolean remission rates in 544 patients. Joint damage was assessed by calculating the modified total Sharp score (mTSS) in 50 patients. When the reason for discontinuation was limited to inadequate response or adverse events, the 1-, 2-, and 3-year continuation rates were 84.0%, 76.8%, and 72.2%, respectively. The mean DAS28-ESR was initially 5.1 and decreased to 2.5 at 6 months and to 2.2 at 36 months. The Boolean remission rate was initially 0.9% and increased to 21.7% at 6 months and to 32.2% at 36 months. The structural remission rates (ΔmTSS/year ≤ 0.5) were 68.8%, 78.6%, and 88.9% within the first, second, and third years, respectively. The structural remission rate at 3 years (ΔmTSS ≤ 1.5) was 66.0%, and earlier achievement of swollen joint count (SJC) of 1 or less resulted in better outcomes. TCZ was highly efficacious, and bone destruction was strongly prevented. SJC was an easy-to-use indicator of joint destruction.
Order or chaos in Boolean gene networks depends on the mean fraction of canalizing functions
NASA Astrophysics Data System (ADS)
Karlsson, Fredrik; Hörnquist, Michael
2007-10-01
We explore the connection between order/chaos in Boolean networks and the naturally occurring fraction of canalizing functions in such systems. This fraction turns out to give a very clear indication of whether the system possesses ordered or chaotic dynamics, as measured by Derrida plots, and also of the degree of order when we compare different networks with the same number of vertices and edges. By also studying a wide distribution of indegrees in a network, we show that the mean probability of canalizing functions is a more reliable indicator of the type of dynamics for a finite network than the classical stability result relating the bias to the mean indegree. Finally, we compare by direct simulations two biologically derived networks with networks of similar sizes but with power-law and Poisson distributions of indegrees, respectively. The biologically motivated networks are not more ordered than the latter, and in one case the biological network is even chaotic while the others are not.
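A canalizing function has some input and value that force the output regardless of the other inputs; the sketch below tests this by enumeration and empirically estimates the canalizing fraction among random Boolean functions of a given bias. The sampling scheme is a generic illustration, not the ensemble construction used in the paper.

import random
from itertools import product

def is_canalizing(f, k):
    # True if some input i and value v force a constant output whenever x_i == v.
    for i in range(k):
        for v in (0, 1):
            outs = {f(x) for x in product((0, 1), repeat=k) if x[i] == v}
            if len(outs) == 1:
                return True
    return False

# Examples: AND is canalizing (any input at 0 forces 0); XOR is not.
print(is_canalizing(lambda x: x[0] & x[1], 2))   # True
print(is_canalizing(lambda x: x[0] ^ x[1], 2))   # False

def random_function(k, p):
    # Random truth table with output bias p (probability of outputting 1).
    table = {x: int(random.random() < p) for x in product((0, 1), repeat=k)}
    return lambda x: table[x]

sample = [is_canalizing(random_function(3, 0.5), 3) for _ in range(1000)]
print(sum(sample) / len(sample))   # empirical canalizing fraction at k=3, p=0.5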
Dynamic Network-Based Epistasis Analysis: Boolean Examples
Azpeitia, Eugenio; Benítez, Mariana; Padilla-Longoria, Pablo; Espinosa-Soto, Carlos; Alvarez-Buylla, Elena R.
2011-01-01
In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets are becoming available, the analysis of classical epistasis is being complemented with computer science tools and system biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, the access to relevant information and the correct inference of gene interaction topologies is hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on the use of Boolean networks that, like classical epistasis analysis, rely on logical formalisms, and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and discern among alternative regulatory schemes that would be impossible or difficult to infer without the elimination of these assumptions from the classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis. Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, and specifically introducing the usefulness of Boolean network models and also reviewing some key properties of network approaches. PMID:22645556
Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.
2017-01-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series. PMID:28507669
Acoustic logic gates and Boolean operation based on self-collimating acoustic beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ting; Xu, Jian-yi; Cheng, Ying, E-mail: chengying@nju.edu.cn
2015-03-16
The discovery of the self-collimation effect in two-dimensional (2D) photonic or acoustic crystals has opened up possibilities for signal manipulation. In this paper, we propose acoustic logic gates based on the linear interference of self-collimated beams in 2D sonic crystals (SCs) with line defects. The line defects on the diagonal of the 2D square SCs actually function as a 3 dB splitter. By adjusting the phase difference between two input signals, the basic Boolean logic functions such as XOR, OR, AND, and NOT are achieved both theoretically and experimentally. Due to the non-diffracting property of self-collimated beams, more complex Boolean logic and algorithms such as NAND, NOR, and XNOR can be realized by cascading the basic logic gates. The achievement of acoustic logic gates and Boolean operation provides a promising approach for acoustic signal computing and manipulation.
Boolean networks with veto functions
NASA Astrophysics Data System (ADS)
Ebadi, Haleh; Klemm, Konstantin
2014-08-01
Boolean networks are discrete dynamical systems for modeling regulation and signaling in living cells. We investigate a particular class of Boolean functions with inhibiting inputs exerting a veto (forced zero) on the output. We give analytical expressions for the sensitivity of these functions and provide evidence for their role in natural systems. In an intracellular signal transduction network [Helikar et al., Proc. Natl. Acad. Sci. USA 105, 1913 (2008), 10.1073/pnas.0705088105], the functions with veto are over-represented by a factor exceeding the over-representation of threshold functions and canalyzing functions in the same system. In Boolean networks for control of the yeast cell cycle [Li et al., Proc. Natl. Acad. Sci. USA 101, 4781 (2004), 10.1073/pnas.0305937101; Davidich et al., PLoS ONE 3, e1672 (2008), 10.1371/journal.pone.0001672], no or minimal changes to the wiring diagrams are necessary to formulate their dynamics in terms of the veto functions introduced here.
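One plausible reading of a veto function, together with a brute-force computation of average sensitivity, is sketched below; the specific veto rule (output is the OR of the activators unless any inhibitor is active) is an assumption chosen for illustration.

from itertools import product

def veto(activators, inhibitors):
    # Veto rule: any active inhibitor forces output 0 (the forced zero);
    # otherwise the output is the OR of the activator inputs.
    def f(x):
        if any(x[i] for i in inhibitors):
            return 0
        return int(any(x[i] for i in activators))
    return f

def average_sensitivity(f, k):
    # Mean number of single-input flips that change f, over all 2**k inputs.
    total = 0
    for x in product((0, 1), repeat=k):
        y = f(x)
        for i in range(k):
            flipped = x[:i] + (1 - x[i],) + x[i + 1:]
            total += (f(flipped) != y)
    return total / 2 ** k

f = veto(activators=[0, 1], inhibitors=[2])   # 3 inputs, one veto input
print(average_sensitivity(f, 3))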
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are the key to understanding biochemical systems. One condition of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or on the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their complexity, including runtime, increases exponentially with the number and length of the attractors. This study used bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing the fixed-length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. After comparison using the tool BooleNet, empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
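For intuition, fixed-length attractors of a small synchronous Boolean network can be found by exhaustive search, as sketched below; the SAT-based bounded-model-checking encoding of the paper replaces this enumeration to scale to large networks. The toy network is invented.

from itertools import product

def attractors_of_length(update, n, length):
    # Exhaustively find attractors of exactly `length` in a synchronous
    # Boolean network with n nodes (feasible only for small n).
    step = lambda s: tuple(update[i](s) for i in range(n))
    found = set()
    for s in product((0, 1), repeat=n):
        trail = [s]
        for _ in range(length):
            trail.append(step(trail[-1]))
        if trail[length] == s:                 # s lies on a cycle whose period divides length
            cycle = frozenset(trail[:length])
            if len(cycle) == length:           # keep exact length, not a proper divisor
                found.add(cycle)
    return found

# Toy 3-node network: x0' = x1, x1' = x0, x2' = x0 AND x2.
update = [lambda s: s[1], lambda s: s[0], lambda s: s[0] & s[2]]
print(attractors_of_length(update, 3, 2))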
Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A; Weinberger, Daniel R.
2010-01-01
Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502
Estimating rates of local species extinction, colonization and turnover in animal communities
Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.
1998-01-01
Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time: rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.
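As a simple stand-in for the detection-probability-aware estimators computed by COMDYN, the sketch below applies the first-order jackknife richness estimator to a species-by-occasion detection matrix; the data are invented, and this generic estimator is not necessarily the heterogeneity model used in the paper.

import numpy as np

def jackknife_richness(detection_matrix):
    # First-order jackknife estimate of species richness from a 0/1
    # species-by-occasion detection matrix: observed richness plus a
    # correction driven by species detected on exactly one occasion.
    x = np.asarray(detection_matrix)
    counts = x.sum(axis=1)
    s_obs = int((counts > 0).sum())
    k = x.shape[1]                         # number of sampling occasions
    f1 = int((counts == 1).sum())          # species seen exactly once
    return s_obs + f1 * (k - 1) / k

# 6 species x 4 occasions; one species never detected, two seen only once.
m = [[1, 1, 0, 1], [0, 1, 1, 0], [0, 0, 0, 1],
     [1, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
print(jackknife_richness(m))               # estimates more species than the 5 observed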
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
A transition calculus for Boolean functions. [logic circuit analysis
NASA Technical Reports Server (NTRS)
Tucker, J. H.; Bennett, A. W.
1974-01-01
A transition calculus is presented for analyzing the effect of input changes on the output of logic circuits. The method is closely related to the Boolean difference, but it is more powerful. Both differentiation and integration are considered.
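The Boolean difference that such calculi build on is easy to state computationally: dF/dx_i = F(x with x_i=0) XOR F(x with x_i=1), which is 1 exactly where a change on input i propagates to the output. A small illustrative sketch (not the transition calculus of the paper itself):

from itertools import product

def boolean_difference(f, i, k):
    # Boolean difference dF/dx_i as a truth table over the remaining k-1
    # variables: 1 wherever toggling input i toggles the output of f.
    table = {}
    for rest in product((0, 1), repeat=k - 1):
        x0 = rest[:i] + (0,) + rest[i:]
        x1 = rest[:i] + (1,) + rest[i:]
        table[rest] = f(x0) ^ f(x1)
    return table

# For F = x0 AND x1, dF/dx0 = x1: a change on x0 propagates iff x1 = 1.
f = lambda x: x[0] & x[1]
print(boolean_difference(f, 0, 2))   # {(0,): 0, (1,): 1}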
A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean
NASA Astrophysics Data System (ADS)
Battaglia, G.; Steinacher, M.; Joos, F.
2015-12-01
The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally-constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Latin-Hypercube scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates either a strong, a weak or no dependency on CaCO3 saturation is assumed. Median (68% confidence interval) global CaCO3 export is 0.82 (0.67-0.98) Gt PIC yr⁻¹, within the lower half of previously published estimates (0.4-1.8 Gt PIC yr⁻¹). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific, the northern Pacific and relatively small in the Atlantic. Dissolution within the 200 to 1500 m depth range (0.33; 0.26-0.40 Gt PIC yr⁻¹) is substantially lower than inferred from the TA*-CFC age method (1 ± 0.5 Gt PIC yr⁻¹). The latter estimate is likely biased high as the TA*-CFC method neglects transport. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport time scales for the different setups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth System Models to minimise computational costs.
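The Latin-hypercube step can be sketched in a few lines: each parameter's unit range is stratified into as many bins as ensemble members, one draw is taken per bin, and bins are paired randomly across parameters. The two parameter ranges mapped at the end are illustrative assumptions, not the Bern3D parameters.

import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    # Latin hypercube on [0, 1]^d: each parameter's range is split into
    # n_samples strata, each stratum is used exactly once, and the pairing
    # of strata across parameters is randomized.
    rng = rng or np.random.default_rng()
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

# Hypothetical 1000-member design over an export scaling and a dissolution
# rate constant (names and ranges invented for illustration).
u = latin_hypercube(1000, 2, np.random.default_rng(3))
export_scale = 0.4 + u[:, 0] * (1.8 - 0.4)       # Gt PIC yr^-1 prior range
diss_rate = 10.0 ** (-3 + 2 * u[:, 1])           # log-uniform rate constant
print(export_scale[:3], diss_rate[:3])

Relative to plain Monte Carlo, the stratification guarantees even coverage of each parameter's range even with a modest ensemble size.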
Newgard, Craig; Malveau, Susan; Staudenmayer, Kristan; Wang, N. Ewen; Hsia, Renee Y.; Mann, N. Clay; Holmes, James F.; Kuppermann, Nathan; Haukoos, Jason S.; Bulger, Eileen M.; Dai, Mengtao; Cook, Lawrence J.
2012-01-01
Objectives The objective was to evaluate the process of using existing data sources, probabilistic linkage, and multiple imputation to create large population-based injury databases matched to outcomes. Methods This was a retrospective cohort study of injured children and adults transported by 94 emergency medical systems (EMS) agencies to 122 hospitals in seven regions of the western United States over a 36-month period (2006 to 2008). All injured patients evaluated by EMS personnel within specific geographic catchment areas were included, regardless of field disposition or outcome. The authors performed probabilistic linkage of EMS records to four hospital and postdischarge data sources (emergency department [ED] data, patient discharge data, trauma registries, and vital statistics files) and then handled missing values using multiple imputation. The authors compare and evaluate matched records, match rates (proportion of matches among eligible patients), and injury outcomes within and across sites. Results There were 381,719 injured patients evaluated by EMS personnel in the seven regions. Among transported patients, match rates ranged from 14.9% to 87.5% and were directly affected by the availability of hospital data sources and proportion of missing values for key linkage variables. For vital statistics records (1-year mortality), estimated match rates ranged from 88.0% to 98.7%. Use of multiple imputation (compared to complete case analysis) reduced bias for injury outcomes, although sample size, percentage missing, type of variable, and combined-site versus single-site imputation models all affected the resulting estimates and variance. Conclusions This project demonstrates the feasibility and describes the process of constructing population-based injury databases across multiple phases of care using existing data sources and commonly available analytic methods. Attention to key linkage variables and decisions for handling missing values can be used to increase match rates between data sources, minimize bias, and preserve sampling design. PMID:22506952
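A minimal sketch of probabilistic linkage in the Fellegi-Sunter style, scoring a record pair by summed log2 agreement/disagreement weights; the fields and m/u probabilities are hypothetical, and real pipelines add blocking, string comparators, and threshold calibration.

import math

def match_weight(record_a, record_b, m_probs, u_probs):
    # Fellegi-Sunter style log2 match weight: agreement on a field adds
    # log2(m/u); disagreement adds log2((1-m)/(1-u)). Pairs scoring above a
    # chosen threshold are treated as links.
    w = 0.0
    for field in m_probs:
        m, u = m_probs[field], u_probs[field]
        if record_a.get(field) == record_b.get(field):
            w += math.log2(m / u)
        else:
            w += math.log2((1 - m) / (1 - u))
    return w

# Hypothetical field reliabilities: m = P(agree | true match),
# u = P(agree | non-match), for date, age, sex, and destination hospital.
m_probs = {"date": 0.98, "age": 0.95, "sex": 0.99, "hospital": 0.90}
u_probs = {"date": 0.05, "age": 0.10, "sex": 0.50, "hospital": 0.02}
ems = {"date": "2007-03-02", "age": 34, "sex": "F", "hospital": "H12"}
ed = {"date": "2007-03-02", "age": 34, "sex": "F", "hospital": "H12"}
print(match_weight(ems, ed, m_probs, u_probs))   # high weight -> likely link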
Characterizing short-term stability for Boolean networks over any distribution of transfer functions
Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...
2016-07-05
Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof for this formula, and empirically show that its predictions are accurate. Previous work covers only special cases of balanced families, and it has been observed that those characterizations fail for unbalanced families, yet such families are widespread in real biological networks.
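The damage-spreading notion being characterized can be probed empirically: evolve a random Boolean network and a one-bit perturbation of it in parallel and track the Hamming distance. The sketch below uses random truth tables with a tunable output bias as the transfer-function distribution; it illustrates the phenomenon, not the paper's formula.

import numpy as np

rng = np.random.default_rng(7)

def random_bn(n, k, bias):
    # Random Boolean network: each node reads k distinct random inputs
    # through a random truth table with P(output = 1) = bias.
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = rng.random((n, 2 ** k)) < bias
    return inputs, tables

def step(state, inputs, tables):
    # Synchronous update: build each node's truth-table index from its inputs.
    idx = np.zeros(len(state), dtype=int)
    for j in range(inputs.shape[1]):
        idx = (idx << 1) | state[inputs[:, j]]
    return tables[np.arange(len(state)), idx].astype(np.int8)

def damage(n=500, k=3, bias=0.5, t=10, trials=50):
    # Mean Hamming distance after t steps between a trajectory and its
    # one-bit perturbation (an empirical damage-spreading probe).
    d = 0.0
    for _ in range(trials):
        inputs, tables = random_bn(n, k, bias)
        a = rng.integers(0, 2, n, dtype=np.int8)
        b = a.copy(); b[0] ^= 1
        for _ in range(t):
            a, b = step(a, inputs, tables), step(b, inputs, tables)
        d += np.mean(a != b)
    return d / trials

print(damage(bias=0.5), damage(bias=0.9))   # chaotic vs ordered regimes at k=3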
Inferring Boolean network states from partial information
2013-01-01
Networks of molecular interactions regulate key processes in living cells. Therefore, understanding their functionality is a high priority in advancing biological knowledge. Boolean networks are often used to describe cellular networks mathematically and are fitted to experimental datasets. The fitting often results in ambiguities since the interpretation of the measurements is not straightforward and since the data contain noise. In order to facilitate a more reliable mapping between datasets and Boolean networks, we develop an algorithm that infers network trajectories from a dataset distorted by noise. We analyze our algorithm theoretically and demonstrate its accuracy using simulation and microarray expression data. PMID:24006954
2014-01-01
Background mRNA translation involves simultaneous movement of multiple ribosomes on the mRNA and is also subject to regulatory mechanisms at different stages. Translation can be described by various codon-based models, including ODE, TASEP, and Petri net models. Although such models have been extensively used, the overlap and differences between these models and the implications of the assumptions of each model have not been systematically elucidated. The selection of the most appropriate modelling framework, and the most appropriate way to develop coarse-grained/fine-grained models in different contexts, is not clear. Results We systematically analyze and compare how different modelling methodologies can be used to describe translation. We define various statistically equivalent codon-based simulation algorithms and analyze the importance of the update rule in determining the steady state, an aspect often neglected. Then a novel probabilistic Boolean network (PBN) model is proposed for modelling translation, which enjoys an exact numerical solution. This solution matches those of numerical simulation from other methods and acts as a complementary tool to analytical approximations and simulations. The advantages and limitations of various codon-based models are compared, and illustrated by examples with real biological complexities such as slow codons, premature termination and feedback regulation. Our studies reveal that while different models give broadly similar trends in many cases, important differences also arise and can be clearly seen in the dependence of the translation rate on different parameters. Furthermore, the update rule affects the steady state solution. Conclusions The codon-based models are based on different levels of abstraction. Our analysis suggests that a multiple model approach to understanding translation allows one to ascertain which aspects of the conclusions are robust with respect to the choice of modelling methodology, and when (and why) important differences may arise. This approach also allows for an optimal use of analysis tools, which is especially important when additional complexities or regulatory mechanisms are included. This approach can provide a robust platform for dissecting translation, and results in an improved predictive framework for applications in systems and synthetic biology. PMID:24576337
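Of the codon-based frameworks compared, TASEP is the simplest to sketch: ribosomes hop stochastically along a lattice with initiation, elongation, and termination rates. The sketch below uses a random-sequential update and ignores the extended ribosome footprint; all rates are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(11)

def tasep(length=200, alpha=0.2, beta=0.5, hop=None, steps=200_000):
    # Minimal TASEP sketch of ribosome flow: initiation rate alpha,
    # termination rate beta, per-codon hop rates `hop` (slow codons get
    # small values). Returns completed proteins per update attempt,
    # proportional to the translation rate.
    hop = np.ones(length) if hop is None else np.asarray(hop, float)
    lattice = np.zeros(length, dtype=np.int8)
    completed = 0
    for _ in range(steps):
        i = rng.integers(-1, length)          # -1 encodes an initiation attempt
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif lattice[i]:
            if i == length - 1:
                if rng.random() < beta:
                    lattice[i] = 0; completed += 1
            elif lattice[i + 1] == 0 and rng.random() < hop[i]:
                lattice[i], lattice[i + 1] = 0, 1
    return completed / steps

slow = np.ones(200); slow[100] = 0.1          # a single slow codon mid-sequence
print(tasep(), tasep(hop=slow))               # the slow codon lowers the rate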
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, Y., E-mail: h1312101@mailg.nc-toyama.ac.jp; Takada, E.; Fujisaki, A.
Neutron and γ-ray (n-γ) discrimination with a digital signal processing system has been used to measure the neutron emission profile in magnetic confinement fusion devices. However, the sampling rate must be set low to extend the measurement time because the memory storage is limited, and time jitter at a low sampling rate degrades the discrimination quality. As described in this paper, a new charge comparison method was developed. Furthermore, an automatic n-γ discrimination method was examined using a probabilistic approach. Analysis results were investigated using the figure of merit. Results show that the discrimination quality was improved. Automatic discrimination was applied using the EM algorithm and the k-means algorithm.
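As a toy version of the automatic discrimination step, the sketch below clusters a synthetic pulse-shape feature (tail-to-total charge ratio) with two-cluster 1-D k-means; the feature distributions are invented, and the paper's EM-based probabilistic treatment would replace the hard assignment with component posteriors.

import numpy as np

def kmeans_1d(x, iters=100):
    # Two-cluster 1-D k-means on a pulse-shape feature, labelling each
    # pulse gamma-like (low ratio) or neutron-like (high ratio) without a
    # hand-tuned cut.
    c = np.quantile(x, [0.25, 0.75])            # initial centroids
    for _ in range(iters):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([x[labels == k].mean() for k in (0, 1)])
    return labels, c

rng = np.random.default_rng(5)
# Synthetic tail/total charge ratios: gammas low, neutrons high.
ratios = np.concatenate([rng.normal(0.10, 0.02, 5000),
                         rng.normal(0.25, 0.04, 1500)])
labels, centroids = kmeans_1d(ratios)
print(centroids)

A figure of merit for the separation can then be computed from the two clusters' means and widths, as in the analysis described above.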
Quantum algorithms on Walsh transform and Hamming distance for Boolean functions
NASA Astrophysics Data System (ADS)
Xie, Zhengwei; Qiu, Daowen; Cai, Guangya
2018-06-01
Walsh spectrum or Walsh transform is an alternative description of Boolean functions. In this paper, we explore quantum algorithms to approximate the absolute value of the Walsh transform W_f at a single point z₀ (i.e., |W_f(z₀)|) for n-variable Boolean functions with probability at least 8/π², using O(1/(|W_f(z₀)|ε)) queries for accuracy ε, while the best known classical algorithm requires O(2^n) queries. The Hamming distance between Boolean functions is used to study linearity testing and other important problems. We take advantage of the Walsh transform to calculate the Hamming distance between two n-variable Boolean functions f and g using O(1) queries in some cases. Then, we exploit another quantum algorithm which converts computing the Hamming distance between two Boolean functions into quantum amplitude estimation (i.e., approximate counting). If Ham(f,g) = t ≠ 0, we can approximately compute Ham(f,g) with probability at least 2/3 by combining our algorithm and the Approx-Count(f,ε) algorithm, using an expected number of Θ(√(N/(⌊εt⌋+1)) + √(t(N−t))/(⌊εt⌋+1)) queries for accuracy ε. Moreover, our algorithm is optimal, while the exact classical query complexity for the above problem is Θ(N) and the classical query complexity with accuracy ε is O((1/ε²)·N/(t+1)), where N = 2^n. Finally, we present three exact quantum query algorithms for two promise problems on Hamming distance using O(1) queries, while any classical deterministic algorithm solving the problem uses Ω(2^n) queries.
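For reference, the classical baseline against which the quantum algorithm is compared can be computed with the standard fast Walsh-Hadamard transform in O(n·2^n) time. This sketch assumes the common sign convention W_f(z) = Σ_x (−1)^(f(x)⊕x·z); the identity Ham(f,g) = (N − W_(f⊕g)(0))/2 is what links the spectrum to Hamming distance:

    def walsh_spectrum(truth_table):
        """Fast Walsh-Hadamard transform of a Boolean function given as a
        list of 0/1 values of length 2**n; returns W_f(z) for all z."""
        w = [(-1) ** b for b in truth_table]   # signs (-1)^f(x)
        n = len(w)
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
            h *= 2
        return w

    # Example: f(x1,x2) = x1 AND x2 has truth table [0,0,0,1].
    print(walsh_spectrum([0, 0, 0, 1]))  # [2, 2, 2, -2]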
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
Dynamic Probabilistic Instability of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian formulation. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the loading rate, the higher the buckling load and the shorter the time to buckling; the lower the probability, the lower the buckling load at a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load and the loading rate are the dominant uncertainties, in that order.
Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
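As a minimal illustration of the bounding approaches such reports review, the Fréchet-Hoeffding limits give best-possible bounds on conjunctions and disjunctions when nothing is known about dependence (a sketch, not code from the report):

    def and_bounds(p, q):
        """Frechet bounds on P(A and B) given only marginals P(A)=p, P(B)=q."""
        return max(0.0, p + q - 1.0), min(p, q)

    def or_bounds(p, q):
        """Frechet bounds on P(A or B) under unknown dependence."""
        return max(p, q), min(1.0, p + q)

    print(and_bounds(0.7, 0.8))  # (0.5, 0.7); independence would give 0.56
    print(or_bounds(0.7, 0.8))   # (0.8, 1.0)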
NASA Astrophysics Data System (ADS)
Mølgaard, Lasse L.; Buus, Ole T.; Larsen, Jan; Babamoradi, Hamid; Thygesen, Ida L.; Laustsen, Milan; Munk, Jens Kristian; Dossi, Eleftheria; O'Keeffe, Caroline; Lässig, Lina; Tatlow, Sol; Sandström, Lars; Jakobsen, Mogens H.
2017-05-01
We present a data-driven machine learning approach to detecting drug and explosives precursors using colorimetric sensor technology for air sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present a fully-integrated portable prototype for air sampling with disposable sensing chips and automated data acquisition has been developed. The prototype allows for fast, user-friendly sampling, which has made it possible to produce large datasets of colorimetric data for different target analytes in laboratory and simulated real-world application scenarios. To make use of the highly multivariate data produced by the colorimetric chip, a number of machine learning techniques are employed to provide reliable classification of target analytes against confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction in combination with a probabilistic classifier produces informative features and a high detection rate for the analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions. The robustness of the colorimetric sensor has been evaluated in a series of experiments focusing on the amphetamine precursor phenylacetone as well as the improvised-explosives precursor hydrogen peroxide. The analysis demonstrates that the system is able to detect analytes in clean air and mixed with substances that occur naturally in real-world sampling scenarios. The technology under development in CRIM-TRACK has the potential to become an effective tool for controlling trafficking of illegal drugs, for explosives detection, and for other law enforcement applications.
Development of Boolean calculus and its applications. [digital systems design
NASA Technical Reports Server (NTRS)
Tapia, M. A.
1980-01-01
The development of Boolean calculus and its application to digital system design methodologies that would reduce system complexity, size, cost, and power requirements is discussed. Synthesis procedures for logic circuits are examined, particularly for asynchronous circuits using clock-triggered flip-flops.
Advanced Feedback Methods in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1985-01-01
In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…
Compact universal logic gates realized using quantization of current in nanodevices.
Zhang, Wancheng; Wu, Nan-Jian; Yang, Fuhua
2007-12-12
This paper proposes novel universal logic gates using the current quantization characteristics of nanodevices. In nanodevices like the electron waveguide (EW) and single-electron (SE) turnstile, the channel current is a staircase quantized function of its control voltage. We use this unique characteristic to compactly realize Boolean functions. First we present the concept of the periodic-threshold threshold logic gate (PTTG), and we build a compact PTTG using EW and SE turnstiles. We show that an arbitrary three-input Boolean function can be realized with a single PTTG, and an arbitrary four-input Boolean function can be realized by using two PTTGs. We then use one PTTG to build a universal programmable two-input logic gate which can be used to realize all two-input Boolean functions. We also build a programmable three-input logic gate by using one PTTG. Compared with linear threshold logic gates, with the PTTG one can build digital circuits more compactly. The proposed PTTGs are promising for future smart nanoscale digital system use.
Phase transition in NK-Kauffman networks and its correction for Boolean irreducibility
NASA Astrophysics Data System (ADS)
Zertuche, Federico
2014-05-01
In a series of articles published in 1986, Derrida and his colleagues studied two mean field treatments (the quenched and the annealed) for NK-Kauffman networks. Their main results lead to a phase transition curve K_c · 2p_c(1 − p_c) = 1 (0 < p_c < 1).
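A small sketch of what this curve implies: solving K · 2p(1 − p) = 1 for K gives the critical connectivity K_c = 1/(2p_c(1 − p_c)) separating the ordered and chaotic regimes (illustrative values only):

    def critical_K(p):
        """Critical connectivity of an NK-Kauffman network with bias p:
        the annealed condition K * 2p(1 - p) = 1; K < K_c is ordered."""
        return 1.0 / (2.0 * p * (1.0 - p))

    for p in (0.5, 0.6, 0.75, 0.9):
        print("p = %.2f -> K_c = %.2f" % (p, critical_K(p)))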
Controllability and observability of Boolean networks arising from biology
NASA Astrophysics Data System (ADS)
Li, Rui; Yang, Meng; Chu, Tianguang
2015-02-01
Boolean networks are currently receiving considerable attention as a computational scheme for system level analysis and modeling of biological systems. Studying control-related problems in Boolean networks may reveal new insights into the intrinsic control in complex biological systems and enable us to develop strategies for manipulating biological systems using exogenous inputs. This paper considers controllability and observability of Boolean biological networks. We propose a new approach, which draws from the rich theory of symbolic computation, to solve the problems. Consequently, simple necessary and sufficient conditions for reachability, controllability, and observability are obtained, and algorithmic tests for controllability and observability which are based on the Gröbner basis method are presented. As practical applications, we apply the proposed approach to several different biological systems, namely, the mammalian cell-cycle network, the T-cell activation network, the large granular lymphocyte survival signaling network, and the Drosophila segment polarity network, gaining novel insights into the control and/or monitoring of the specific biological systems.
Solving a discrete model of the lac operon using Z3
NASA Astrophysics Data System (ADS)
Gutierrez, Natalia A.
2014-05-01
A discrete model of the Lac Operon is solved using the SMT solver Z3. Traditionally the Lac Operon is formulated as a continuous mathematical model, a system of ordinary differential equations. Here, it is considered as a discrete model based on a Boolean network. The biological problem of the Lac Operon is stated as a Boolean satisfiability problem and solved using the SMT solver Z3. Z3 is a powerful solver that allows understanding the basic dynamics of the Lac Operon in an easier and more efficient way. The multi-stability of the Lac Operon can be easily computed with Z3. The code that solves the Boolean network can be written in the Python language or the SMT-LIB language. Both languages were used, in the local version of the program as well as the online version of Z3. For future investigations it is proposed to solve the Boolean network of the Lac Operon using other SMT solvers such as CVC4, Alt-Ergo, MathSAT and Yices.
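The encoding style described can be reproduced in a few lines of z3py. The sketch below uses a hypothetical two-gene toy network, not the paper's actual lac operon model: steady states are exactly the assignments satisfying x' = x for every node, and blocking clauses enumerate all of them, which is how multi-stability shows up.

    from z3 import Bools, Solver, And, Not, sat

    # Toy 2-node Boolean network (NOT the paper's lac operon model):
    # M' = NOT R, R' = NOT M. Steady states satisfy x' = x for every node.
    M, R = Bools("M R")
    s = Solver()
    s.add(M == Not(R), R == Not(M))  # fixed-point constraints

    while s.check() == sat:
        m = s.model()
        print({str(d): m[d] for d in m.decls()})
        s.add(Not(And(M == m[M], R == m[R])))  # block this state, ask for another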
NASA Astrophysics Data System (ADS)
Sandri, Laura; Jolly, Gill; Lindsay, Jan; Howe, Tracy; Marzocchi, Warner
2010-05-01
One of the main challenges of modern volcanology is to provide the public with robust and useful information for decision-making in land-use planning and in emergency management. From the scientific point of view, this translates into reliable and quantitative long- and short-term volcanic hazard assessment and eruption forecasting. Because of the complexity in characterizing volcanic events, and of the natural variability of volcanic processes, a probabilistic approach is more suitable than deterministic modeling. In recent years, two probabilistic codes have been developed for quantitative short- and long-term eruption forecasting (BET_EF) and volcanic hazard assessment (BET_VH). Both of them are based on a Bayesian Event Tree, in which volcanic events are seen as a chain of logical steps of increasing detail. At each node of the tree, the probability is computed by taking into account different sources of information, such as geological and volcanological models, past occurrences, expert opinion and numerical modeling of volcanic phenomena. Since it is a Bayesian tool, the output probability is not a single number but a probability distribution accounting for aleatory and epistemic uncertainty. In this study, we apply BET_VH in order to quantify the long-term volcanic hazard due to base surge invasion in the region around Auckland, New Zealand's most populous city. Here, small basaltic eruptions from monogenetic cones pose a considerable risk to the city in case of phreatomagmatic activity: evidence for base surges is not uncommon in deposits from past events. Currently, we are particularly focussing on the scenario simulated during Exercise Ruaumoko, a national disaster exercise based on the build-up to an eruption in the Auckland Volcanic Field. Based on recent papers by Marzocchi and Woo, we suggest a possible quantitative strategy to link probabilistic scientific output and Boolean decision making. It is based on cost-benefit analysis, in which all costs and benefits of mitigation actions have to be evaluated and compared, weighted by the probability of occurrence of a specific threatening volcanic event. An action should be taken when its benefit outweighs its costs. It is worth remarking that this strategy does not guarantee that the recommended decision will match the one we would have taken with the benefit of hindsight; however, it will be successful over the long term. Furthermore, it has the overwhelming advantage of providing a quantitative decision rule that is set before any emergency, and thus is justifiable at any stage of the process. In our present application, we are trying to set up a cost-benefit scheme for the call of an evacuation to protect people in the Auckland Volcanic Field against base surge invasion. Considering the heterogeneity of the urban environment and the size of the region at risk, we propose a cost-benefit scheme that is space-dependent, to take into account the higher costs incurred when an eruption threatens sites that are critical for the city and/or the nation, such as the international airport or the harbour. Finally, we compare our findings with the present Contingency Plan for Auckland.
Denison, Stephanie; Trikutam, Pallavi; Xu, Fei
2014-08-01
A rich tradition in developmental psychology explores physical reasoning in infancy. However, no research to date has investigated whether infants can reason about physical objects that behave probabilistically rather than deterministically. Physical events are often quite variable, in that similar-looking objects can be placed in similar contexts with different outcomes. Can infants rapidly acquire probabilistic physical knowledge, such as that some leaves fall and some glasses break, simply by observing the statistical regularity with which objects behave, and apply that knowledge in subsequent reasoning? We taught 11-month-old infants physical constraints on objects and asked them to reason about the probability of different outcomes when objects were drawn from a large distribution. Infants could have reasoned either by using the perceptual similarity between the samples and the larger distributions or by applying physical rules to adjust base rates and estimate the probabilities. Infants learned the physical constraints quickly and used them to estimate probabilities, rather than relying on similarity, a version of the representativeness heuristic. These results indicate that infants can rapidly and flexibly acquire physical knowledge about objects following very brief exposure and apply it in subsequent reasoning. PsycINFO Database Record (c) 2014 APA, all rights reserved.
2017-03-20
Keywords: computation, prime implicates, Boolean abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model… Related publication: "…types for time-dependent data-flow networks", J.-P. Talpin, P. Jouvelot, S. Shukla, ACM-IEEE Conference on Methods and Models for System Design.
Affine Equivalence and Constructions of Cryptographically Strong Boolean Functions
2013-09-01
…manner is crucial for today's global citizen. We want our financial transactions over the Internet to get processed without error. Cyber warfare between… encryption and decryption processes. An asymmetric cipher uses different keys to encrypt and decrypt a message, and the connection between the encryption and… Depending on how a symmetric cipher processes a message before encryption or decryption, a symmetric cipher can be further classified into a block or…
Perturbation propagation in random and evolved Boolean networks
NASA Astrophysics Data System (ADS)
Fretter, Christoph; Szejka, Agnes; Drossel, Barbara
2009-03-01
In this paper, we investigate the propagation of perturbations in Boolean networks by evaluating the Derrida plot and its modifications. We show that even small random Boolean networks agree well with the predictions of the annealed approximation, but nonrandom networks show a very different behaviour. We focus on networks that were evolved for high dynamical robustness. The most important conclusion is that the simple distinction between frozen, critical and chaotic networks is no longer useful, since such evolved networks can display the properties of all three types of networks. Furthermore, we evaluate a simplified empirical network and show how its specific state space properties are reflected in the modified Derrida plots.
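A Derrida plot of the kind evaluated here can be estimated by direct simulation: draw pairs of states at a given Hamming distance, advance both one synchronous step, and average the resulting distance. A minimal sketch under assumed parameters (N = 100, K = 2, unbiased random functions):

    import random

    def random_bn(N, K):
        """Random Boolean network: K distinct inputs and a random rule per node."""
        inputs = [random.sample(range(N), K) for _ in range(N)]
        rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
        return inputs, rules

    def step(state, inputs, rules):
        """Synchronous update of all nodes."""
        return tuple(rules[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                     for i in range(len(state)))

    def derrida_point(inputs, rules, N, flips, trials=200):
        """Mean next-step Hamming distance of state pairs differing in `flips` bits."""
        total = 0
        for _ in range(trials):
            s = tuple(random.randint(0, 1) for _ in range(N))
            idx = set(random.sample(range(N), flips))
            t = tuple(b ^ 1 if i in idx else b for i, b in enumerate(s))
            total += sum(x != y for x, y in zip(step(s, inputs, rules),
                                                step(t, inputs, rules)))
        return total / trials

    random.seed(1)
    N, K = 100, 2
    inputs, rules = random_bn(N, K)
    for flips in (1, 5, 10, 25, 50):
        print(flips, "->", round(derrida_point(inputs, rules, N, flips), 2))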
Multilayer neural networks with extensively many hidden units.
Rosen-Zvi, M; Engel, A; Kanter, I
2001-08-13
The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.
Generalized probabilistic scale space for image restoration.
Wong, Alexander; Mishra, Akshaya K
2010-10-01
A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
The objective of this work is to elucidate biological networks underlying cellular tipping points using time-course data. We discretized the high-content imaging (HCI) data and inferred Boolean networks (BNs) that could accurately predict dynamic cellular trajectories. We found t...
Boolean linear differential operators on elementary cellular automata
NASA Astrophysics Data System (ADS)
Martín Del Rey, Ángel
2014-12-01
In this paper, the notion of a Boolean linear differential operator (BLDO) on elementary cellular automata (ECA) is introduced and some of its more important properties are studied. Special attention is paid to those differential operators whose coefficients are the ECA with rule numbers 90 and 150.
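The paper's BLDO construction is not reproduced here, but its underlying ingredient, the Boolean derivative, is standard and easy to state in code: the derivative of a local rule with respect to one input is the XOR of the rule's outputs when that input is flipped. For the linear rules 90 and 150 the derivative is constant, which is what makes them natural coefficients:

    def eca_rule(number):
        """Wolfram-numbered ECA local rule as a function of (left, center, right)."""
        return lambda l, c, r: (number >> (l * 4 + c * 2 + r)) & 1

    def boolean_derivative(f, var):
        """Boolean derivative of a 3-input rule with respect to one input:
        Df(x) = f(x) XOR f(x with input `var` flipped); var in 0,1,2 = (l,c,r)."""
        def df(l, c, r):
            x = [l, c, r]
            y = x[:]
            y[var] ^= 1
            return f(*x) ^ f(*y)
        return df

    rule150 = eca_rule(150)                    # l XOR c XOR r
    d_center = boolean_derivative(rule150, 1)  # constant 1: rule 150 is linear in c
    for l in (0, 1):
        for c in (0, 1):
            for r in (0, 1):
                print(l, c, r, "->", d_center(l, c, r))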
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yun, E-mail: genliyun@126.com, E-mail: cuiwanzhao@126.com; Cui, Wan-Zhao, E-mail: genliyun@126.com, E-mail: cuiwanzhao@126.com; Wang, Hong-Guang
2015-05-15
Effects of the secondary electron emission (SEE) phenomenon of metal surfaces on the multipactor analysis of microwave components are investigated numerically and experimentally in this paper. Both secondary electron yield (SEY) and emitted-energy-spectrum measurements are performed on silver-plated samples for accurate description of the SEE phenomenon. A phenomenological probabilistic model based on SEE physics is utilized and fitted accurately to the measured SEY and emitted energy spectrum of the conditioned surface material of microwave components. Specifically, the phenomenological probabilistic model is extended mathematically to the low primary-energy end below 20 eV, since no accurate measurement data can be obtained there. Embedding the phenomenological probabilistic model into the electromagnetic particle-in-cell (EM-PIC) method, the electronic resonant multipacting in microwave components can be tracked and hence the multipactor threshold can be predicted. The threshold prediction error for the transformer and the coaxial filter is 0.12 dB and 1.5 dB, respectively. Simulation results demonstrate that the discharge threshold is strongly dependent on the SEYs and the energy spectrum at the low-energy end (below 50 eV). Multipacting simulation results agree quite well with experiments on practical components, while the phenomenological probabilistic model fits both the SEY and the emission energy spectrum better than the traditionally used model and distribution. The EM-PIC simulation method with the phenomenological probabilistic model for surface-collision simulation has been demonstrated for predicting the multipactor threshold in metal components for space applications.
NASA Technical Reports Server (NTRS)
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna
2015-01-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
Describing the What and Why of Students' Difficulties in Boolean Logic
ERIC Educational Resources Information Center
Herman, Geoffrey L.; Loui, Michael C.; Kaczmarczyk, Lisa; Zilles, Craig
2012-01-01
The ability to reason with formal logic is a foundational skill for computer scientists and computer engineers that scaffolds the abilities to design, debug, and optimize. By interviewing students about their understanding of propositional logic and their ability to translate from English specifications to Boolean expressions, we characterized…
Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López
2014-12-01
This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of hazardous events probabilities are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
…extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the… application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly… the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time
Stability Depends on Positive Autoregulation in Boolean Gene Regulatory Networks
Pinho, Ricardo; Garcia, Victor; Irimia, Manuel; Feldman, Marcus W.
2014-01-01
Network motifs have been identified as building blocks of regulatory networks, including gene regulatory networks (GRNs). The most basic motif, autoregulation, has been associated with bistability (when positive) and with homeostasis and robustness to noise (when negative), but its general importance in network behavior is poorly understood. Moreover, how specific autoregulatory motifs are selected during evolution and how this relates to robustness is largely unknown. Here, we used a class of GRN models, Boolean networks, to investigate the relationship between autoregulation and network stability and robustness under various conditions. We ran evolutionary simulation experiments for different models of selection, including mutation and recombination. Each generation simulated the development of a population of organisms modeled by GRNs. We found that stability and robustness positively correlate with autoregulation; in all investigated scenarios, stable networks had mostly positive autoregulation. Assuming biological networks correspond to stable networks, these results suggest that biological networks should often be dominated by positive autoregulatory loops. This seems to be the case for most studied eukaryotic transcription factor networks, including those in yeast, flies and mammals. PMID:25375153
PROBABILISTIC MODELING FOR ADVANCED HUMAN EXPOSURE ASSESSMENT
Human exposures to environmental pollutants widely vary depending on the emission patterns that result in microenvironmental pollutant concentrations, as well as behavioral factors that determine the extent of an individual's contact with these pollutants. Probabilistic human exp...
Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis
NASA Astrophysics Data System (ADS)
Wang, Weiguang; Fu, Jianyu
2018-02-01
Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on climatic conditions. In comparison with the simple univariate distribution, the bivariate one produces a narrower interquartile range for the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing a joint distribution that takes into account the dependence structure of ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.
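A copula-based joint model of this kind can be sketched in a few lines. Everything numeric below is hypothetical (the correlation, and the gamma/beta marginals standing in for the fitted distributions of ω and NDVI); only the mechanics, correlated normals mapped through their CDF to uniforms and then through inverse marginal CDFs, reflect the stated approach:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rho, n = 0.6, 10000   # assumed dependence strength, not the fitted value

    # Gaussian copula: correlated normals -> uniforms -> assumed marginals
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)
    omega = stats.gamma(a=4.0, scale=0.5).ppf(u[:, 0])  # hypothetical omega marginal
    ndvi = stats.beta(a=4.0, b=2.0).ppf(u[:, 1])        # hypothetical NDVI marginal

    print("sample Spearman rho: %.2f" % stats.spearmanr(omega, ndvi)[0])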
Bayesian inference of T Tauri star properties using multi-wavelength survey photometry
NASA Astrophysics Data System (ADS)
Barentsen, Geert; Vink, J. S.; Drew, J. E.; Sale, S. E.
2013-03-01
There are many pertinent open issues in the area of star and planet formation. Large statistical samples of young stars across star-forming regions are needed to trigger a breakthrough in our understanding, but most optical studies are based on a wide variety of spectrographs and analysis methods, which introduces large biases. Here we show how graphical Bayesian networks can be employed to construct a hierarchical probabilistic model which allows pre-main-sequence ages, masses, accretion rates and extinctions to be estimated using two widely available photometric survey databases (Isaac Newton Telescope Photometric Hα Survey r'/Hα/i' and Two Micron All Sky Survey J-band magnitudes). Because our approach does not rely on spectroscopy, it can easily be applied to homogeneously study the large number of clusters for which Gaia will yield membership lists. We explain how the analysis is carried out using the Markov chain Monte Carlo method and provide Python source code. We then demonstrate its use on 587 known low-mass members of the star-forming region NGC 2264 (Cone Nebula), arriving at a median age of 3.0 Myr, an accretion fraction of 20 ± 2 per cent and a median accretion rate of 10^-8.4 M⊙ yr⁻¹. The Bayesian analysis formulated in this work delivers results which are in agreement with spectroscopic studies already in the literature, but achieves this with great efficiency by depending only on photometry. It is a significant step forward from previous photometric studies because the probabilistic approach ensures that nuisance parameters, such as extinction and distance, are fully included in the analysis, with a clear picture of any degeneracies.
Han, Bin; Liu, Yating; You, Yan; Xu, Jia; Zhou, Jian; Zhang, Jiefeng; Niu, Can; Zhang, Nan; He, Fei; Ding, Xiao; Bai, Zhipeng
2016-10-01
Assessment of the health risks resulting from exposure to ambient polycyclic aromatic hydrocarbons (PAHs) is limited by the lack of environmental exposure data for different subpopulations. To assess the cancer risk of exposure to carcinogenic particulate PAH pollution for the elderly, this study conducted a personal exposure measurement campaign for particulate PAHs in a community of Tianjin, a city in northern China. Personal exposure samples were collected from the elderly in non-heating (August-September 2009) and heating (November-December 2009) periods, and 12 individual PAHs were analyzed for risk estimation. A questionnaire and a time-activity log were also recorded for each person. The probabilistic risk assessment model was integrated with Toxic Equivalent Factors (TEFs). Since the estimation of the applied dose for a given air pollutant depends on the inhalation rate, inhalation rates from the US EPA Exposure Factors Handbook were applied to calculate the carcinogenic risk in this study. Monte Carlo simulation was used as the probabilistic risk assessment model, and risk simulation results indicated that the inhalation incremental lifetime cancer risk (ILCR) values for male and female subjects followed lognormal distributions with means of 4.81 × 10⁻⁶ and 4.57 × 10⁻⁶, respectively. Furthermore, the 95% probability lung cancer risks were greater than the USEPA acceptable level of 10⁻⁶ for both men and women through the inhalation route, revealing that exposure to PAHs posed an unacceptable potential cancer risk for the elderly in this study. As a result, measures should be taken to reduce PAH pollution and exposure levels in order to decrease the cancer risk for the general population, and especially for the elderly.
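The inhalation ILCR calculation behind such Monte Carlo assessments typically takes the generic form ILCR = C × IR × EF × ED × CSF / (BW × AT). The sketch below is illustrative only; all distributions and constants are assumptions rather than the study's fitted values, and C stands for the TEF-weighted BaP-equivalent concentration:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100000

    # Hypothetical inputs (NOT the study's fitted distributions):
    bap_eq = rng.lognormal(np.log(5.0), 0.8, size=n)  # BaP-equivalent conc., ng/m3
    ir = rng.normal(13.0, 2.0, size=n)       # inhalation rate, m3/day
    bw = rng.normal(62.0, 10.0, size=n)      # body weight, kg
    ef, ed, at = 350.0, 30.0, 70.0 * 365.0   # days/yr, years, averaging days
    csf = 3.14                               # assumed cancer slope factor, (mg/kg/day)^-1

    # ILCR = C * IR * EF * ED * CSF / (BW * AT), converting ng -> mg
    ilcr = bap_eq * 1e-6 * ir * ef * ed * csf / (bw * at)
    print("mean ILCR: %.2e" % ilcr.mean())
    print("P(ILCR > 1e-6): %.2f" % (ilcr > 1e-6).mean())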
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
Evolution of canalizing Boolean networks
NASA Astrophysics Data System (ADS)
Szejka, A.; Drossel, B.
2007-04-01
Boolean networks with canalizing functions are used to model gene regulatory networks. In order to learn how such networks may behave under evolutionary forces, we simulate the evolution of a single Boolean network by means of an adaptive walk, which allows us to explore the fitness landscape. Mutations change the connections and the functions of the nodes. Our fitness criterion is the robustness of the dynamical attractors against small perturbations. We find that with this fitness criterion the global maximum is always reached and that there is a huge neutral space of 100% fitness. Furthermore, in spite of having such a high degree of robustness, the evolved networks still share many features with “chaotic” networks.
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and sub-divided based on the stage of the hydraulic fracturing process in which the data is relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for visually communicating the potential risks posed by hydraulic fracturing activities to groundwater resources.
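Under the usual independence assumption, the Boolean structure of a fault tree reduces to products and complements. The sketch below is a toy version with hypothetical basic-event probabilities, not the project's calibrated values, showing how OR/AND gates roll up into a single probability of failure:

    def gate_or(*p):
        """P(at least one of independent events) = 1 - prod(1 - p_i)."""
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out

    def gate_and(*p):
        """P(all independent events occur) = prod(p_i)."""
        out = 1.0
        for x in p:
            out *= x
        return out

    # Hypothetical basic-event probabilities (illustrative only):
    p_spill = 1e-3            # on-site spill occurs
    p_reaches_aquifer = 5e-2  # spill transported to the aquifer
    p_casing_fail = 2e-4      # well casing failure
    p_quality = gate_or(gate_and(p_spill, p_reaches_aquifer), p_casing_fail)
    p_quantity = 1e-4         # over-exploitation branch, collapsed for brevity
    print("P(failure) =", gate_or(p_quality, p_quantity))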
WOVOdat, A Worldwide Volcano Unrest Database, to Improve Eruption Forecasts
NASA Astrophysics Data System (ADS)
Widiwijayanti, C.; Costa, F.; Win, N. T. Z.; Tan, K.; Newhall, C. G.; Ratdomopurbo, A.
2015-12-01
WOVOdat is the World Organization of Volcano Observatories' Database of Volcanic Unrest: an international effort to develop common standards for compiling and storing data on volcanic unrest in a centralized database, freely web-accessible for reference during volcanic crises, comparative studies, and basic research on pre-eruption processes. WOVOdat will be to volcanology as an epidemiological database is to medicine. Despite the large spectrum of monitoring techniques, interpreting monitoring data throughout the evolution of unrest and making timely forecasts remain the most challenging tasks for volcanologists. The field of eruption forecasting is becoming more quantitative, based on an understanding of pre-eruptive magmatic processes and the dynamic interaction between the variables at play in a volcanic system. Such forecasts must also acknowledge and express uncertainties; therefore most current research in this field has focused on the application of event tree analysis to reflect multiple possible scenarios and the probability of each scenario. Such forecasts are critically dependent on comprehensive and authoritative global volcano unrest data sets - the very information currently collected in WOVOdat. As the database becomes more complete, Boolean searches, side-by-side digital (and thus scalable) comparisons of unrest, and pattern recognition will generate reliable results. Statistical distributions obtained from WOVOdat can then be used to estimate the probabilities of each scenario after specific patterns of unrest. We have established the main web interface for data submission and visualization, and have now incorporated ~20% of worldwide unrest data into the database, covering more than 100 eruptive episodes. In the upcoming years we will concentrate on acquiring data from volcano observatories, developing a robust data query interface, optimizing data mining, and creating tools by which WOVOdat can be used for probabilistic eruption forecasting. The more data in WOVOdat, the more useful it will be.
NASA Astrophysics Data System (ADS)
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
The formulation of mathematical learning goals is now oriented not only towards cognitive products but also towards cognitive processes, including probabilistic thinking. Probabilistic thinking is needed by students to make decisions, and elementary school students are required to develop it as a foundation for learning probability at higher levels. A framework for students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze the completion of probability tasks based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected by administering a test of mathematical ability and choosing students with high mathematical ability. The subjects were given probability tasks covering sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion, with the credibility of the data established by time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level, indicating that the boy's probabilistic thinking level was higher than the girl's. These results could help curriculum developers in formulating probability learning goals for elementary school students. Indeed, teachers could teach probability with regard to gender differences.
Raufelder, Diana; Boehme, Rebecca; Romund, Lydia; Golde, Sabrina; Lorenz, Robert C.; Gleich, Tobias; Beck, Anne
2016-01-01
This multi-methodological study applied functional magnetic resonance imaging to investigate neural activation in a group of adolescent students (N = 88) during a probabilistic reinforcement learning task. We related patterns of emerging brain activity and individual learning rates to socio-motivational (in-)dependence manifested in four different motivation types (MTs): (1) peer-dependent MT, (2) teacher-dependent MT, (3) peer-and-teacher-dependent MT, (4) peer-and-teacher-independent MT. A multinomial regression analysis revealed that the individual learning rate predicts students’ membership to the independent MT, or the peer-and-teacher-dependent MT. Additionally, the striatum, a brain region associated with behavioral adaptation and flexibility, showed increased learning-related activation in students with motivational independence. Moreover, the prefrontal cortex, which is involved in behavioral control, was more active in students of the peer-and-teacher-dependent MT. Overall, this study offers new insights into the interplay of motivation and learning with (1) a focus on inter-individual differences in the role of peers and teachers as source of students’ individual motivation and (2) its potential neurobiological basis. PMID:27199873
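Individual learning rates in such tasks are commonly estimated by fitting a delta-rule (Rescorla-Wagner) model. The sketch below simulates that model on a generic two-option probabilistic task; the reward probabilities and softmax temperature are assumptions, not the study's task parameters:

    import math
    import random

    def simulate_learner(alpha, beta=3.0, trials=200, p_reward=(0.8, 0.2)):
        """Rescorla-Wagner learner with softmax choice on a two-option
        probabilistic task; returns the fraction of better-option choices."""
        q = [0.0, 0.0]
        better = 0
        for _ in range(trials):
            w0 = math.exp(beta * q[0])
            w1 = math.exp(beta * q[1])
            choice = 0 if random.random() < w0 / (w0 + w1) else 1
            reward = 1.0 if random.random() < p_reward[choice] else 0.0
            q[choice] += alpha * (reward - q[choice])   # prediction-error update
            better += (choice == 0)
        return better / trials

    random.seed(0)
    for alpha in (0.05, 0.2, 0.5):
        print("learning rate %.2f -> better-option rate %.2f"
              % (alpha, simulate_learner(alpha)))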
Understanding genetic regulatory networks
NASA Astrophysics Data System (ADS)
Kauffman, Stuart
2003-04-01
Random Boolean networks (RBNs) were introduced about 35 years ago as first crude models of genetic regulatory networks. RBNs are comprised of N on-off genes, connected by a randomly assigned regulatory wiring diagram where each gene has K inputs, and each gene is controlled by a randomly assigned Boolean function. This procedure samples at random from the ensemble of all possible NK Boolean networks. The central ideas are to study the typical, or generic, properties of this ensemble, and to see 1) whether characteristic differences appear as K and biases in the Boolean functions are introduced, and 2) whether a subclass of this ensemble has properties matching real cells. Such networks behave in an ordered or a chaotic regime, with a phase transition, "the edge of chaos", between the two regimes. Networks with continuous variables exhibit the same two regimes. Substantial evidence suggests that real cells are in the ordered regime. A key concept is that of an attractor. This is a reentrant trajectory of states of the network, called a state cycle. The central biological interpretation is that cell types are attractors. A number of properties differentiate the ordered and chaotic regimes. These include the size and number of attractors; the existence in the ordered regime of a percolating "sea" of genes frozen in the on or off state, with a remainder of isolated twinkling islands of genes; a power-law distribution of avalanches of gene-activity changes following perturbation of a single gene in the ordered regime, versus a similar power-law distribution plus a spike of enormous avalanches of gene changes in the chaotic regime; and the existence of branching pathways of "differentiation" between attractors induced by perturbations in the ordered regime. Noise is a serious issue, since noise disrupts attractors. But numerical evidence suggests that attractors can be made very stable to noise, and meanwhile, metaplasias may be a biological manifestation of noise. As we learn more about the wiring diagram and the constraints on the rules controlling real genes, we can build refined ensembles reflecting these properties, study their generic properties, and hope to gain insight into the dynamics of real cells.
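A minimal sketch of the ensemble approach: sample one random NK network and enumerate its attractors by following every trajectory in the (small) state space until it closes on a state cycle. N, K and the seed are arbitrary illustrative choices; for K = 2 unbiased networks the classic observation is that the number of attractors grows slowly with N:

    import random

    def random_nk(N, K):
        """One sample from the NK ensemble: K random inputs and a random
        Boolean function per gene, applied synchronously."""
        inputs = [random.sample(range(N), K) for _ in range(N)]
        rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
        def step(s):
            return tuple(rules[i][sum(s[j] << b for b, j in enumerate(inputs[i]))]
                         for i in range(N))
        return step

    random.seed(3)
    N, K = 12, 2
    step = random_nk(N, K)
    attractors = {}
    for x in range(2 ** N):                  # follow every trajectory to its cycle
        s = tuple((x >> i) & 1 for i in range(N))
        seen = set()
        while s not in seen:
            seen.add(s)
            s = step(s)
        cycle = [s]                          # s lies on the state cycle; walk it
        t = step(s)
        while t != s:
            cycle.append(t)
            t = step(t)
        attractors[min(cycle)] = len(cycle)  # canonical representative -> length
    print("number of attractors:", len(attractors))
    print("state-cycle lengths:", sorted(attractors.values()))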
Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.
2003-01-01
Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.
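For the fully detected case, the Markov model reduces to two conditional probabilities, and their maximum-likelihood estimates are simple transition frequencies. A sketch with made-up rates (not the heron data):

    import random

    def simulate(n_sites, n_years, p_ext, p_col, seed=0):
        """Presence-absence histories under first-order Markov colony dynamics."""
        random.seed(seed)
        occ = [[random.random() < 0.5 for _ in range(n_sites)]]
        for _ in range(n_years - 1):
            occ.append([(random.random() > p_ext) if o else (random.random() < p_col)
                        for o in occ[-1]])
        return occ

    occ = simulate(n_sites=200, n_years=20, p_ext=0.15, p_col=0.25)

    # ML estimates are just transition frequencies when detection is perfect
    ext = col = n_occ = n_emp = 0
    for t in range(len(occ) - 1):
        for now, nxt in zip(occ[t], occ[t + 1]):
            if now:
                n_occ += 1
                ext += (not nxt)
            else:
                n_emp += 1
                col += nxt
    print("extinction ~ %.3f, colonization ~ %.3f" % (ext / n_occ, col / n_emp))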
Circulant Matrices and Affine Equivalence of Monomial Rotation Symmetric Boolean Functions
2015-01-01
definitions, including monomial rotation symmetric (MRS) Boolean functions and affine equivalence, and a known result for such quadratic functions… degree of the MRS is, we have a similar result as [40, Theorem 1.1] for n = 4p (p prime), or squarefree integers n, which along with our Theorem 5.2
User Practices in Keyword and Boolean Searching on an Online Public Access Catalog.
ERIC Educational Resources Information Center
Ensor, Pat
1992-01-01
Discussion of keyword and Boolean searching techniques in online public access catalogs (OPACs) focuses on a study conducted at Indiana State University that examined users' attitudes toward searching on NOTIS (Northwestern Online Total Integrated System). Relevant literature is reviewed, and implications for library instruction are suggested. (17…
Using Vector and Extended Boolean Matching in an Expert System for Selecting Foster Homes.
ERIC Educational Resources Information Center
Fox, Edward A.; Winett, Sheila G.
1990-01-01
Describes FOCES (Foster Care Expert System), a prototype expert system for choosing foster care placements for children which integrates information retrieval techniques with artificial intelligence. The use of prototypes and queries in Prolog routines, extended Boolean matching, and vector correlation are explained, as well as evaluation by…
A Construction of Boolean Functions with Good Cryptographic Properties
2014-01-01
Using computer algebra and SMT solvers in algebraic biology
NASA Astrophysics Data System (ADS)
Pineda Osorio, Mateo
2014-05-01
Biological processes are represented as Boolean networks, in discrete time. The dynamics of these networks are approached with the help of SMT solvers and computer algebra; software such as Maple and Z3 was used in this case. The number of stationary states for each network was calculated. The network studied here corresponds to the immune system under the effects of drastic mood changes. Mood is considered as a Boolean variable that affects the entire dynamics of the immune system, changing the Boolean satisfiability and the number of stationary states of the immune network. The results obtained show Z3's great potential as an SMT solver. Some of these results were verified in Maple, even though Maple showed itself to be less suitable for this problem. The solving code was constructed using Z3's Python and SMT-LIB interfaces, both in the local version of the program and in the online version of Z3. The results are important for biological systems and are expected to help in the design of immune therapies. As a future line of research, solving more complex Boolean network representations of the immune system, as well as of the whole psychological apparatus, is suggested.
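Counting stationary states conditional on the mood variable can be done with a blocking-clause loop in z3py. The two-node network below is a made-up toy, not the paper's immune model; it only illustrates how fixing mood changes the count of satisfying fixed points:

    from z3 import Bool, Bools, Solver, And, Or, Not, BoolVal, sat

    mood = Bool("mood")
    a, b = Bools("a b")
    # Toy dynamics (NOT the paper's immune model): mood gates the interactions.
    a_next = And(mood, b)
    b_next = Or(a, Not(mood))

    def count_fixed_points(mood_value):
        s = Solver()
        s.add(mood == BoolVal(mood_value), a == a_next, b == b_next)
        count = 0
        while s.check() == sat:
            m = s.model()
            count += 1
            s.add(Not(And(a == m[a], b == m[b])))  # block this state, continue
        return count

    for mv in (True, False):
        print("mood =", mv, "-> stationary states:", count_fixed_points(mv))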
Therapeutic target discovery using Boolean network attractors: improvements of kali
Guziolowski, Carito
2018-01-01
In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modelling pathological mechanisms was introduced. In the present article, the improvements made to this algorithm, named kali, are described. These improvements are (i) the possibility of working on asynchronous Boolean networks, (ii) a finer assessment of therapeutic targets and (iii) the possibility of using multivalued logic. kali assumes that the attractors of a dynamical system, such as a Boolean network, are associated with the phenotypes of the modelled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likelihood. kali is illustrated on an example network and used on a biological case study. The case study is a published logic-based model of bladder tumorigenesis, from which kali returns consistent results. However, like any computational tool, kali can predict but cannot replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery. PMID:29515890
3D Boolean operations in virtual surgical planning.
Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun
2017-10-01
Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important for performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as surfaces extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells, BORES) with that of an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not deal with singular edges and coplanar collisions, and created several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.
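BORES itself operates on meshes and is not reproduced here, but the set semantics that any such method must honor is easy to illustrate on implicit (signed-distance) representations, where union, intersection and subtraction reduce to pointwise min/max; the two spheres are hypothetical stand-ins for a patient and an implant model:

    import math

    def sphere(cx, cy, cz, r):
        """Signed distance to a sphere: negative inside, positive outside."""
        return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

    def union(f, g):        return lambda x, y, z: min(f(x, y, z), g(x, y, z))
    def intersection(f, g): return lambda x, y, z: max(f(x, y, z), g(x, y, z))
    def subtraction(f, g):  return lambda x, y, z: max(f(x, y, z), -g(x, y, z))

    patient = sphere(0, 0, 0, 1.0)       # stand-ins for patient / implant shapes
    implant = sphere(0.8, 0, 0, 0.5)
    cut = subtraction(patient, implant)  # patient with the implant region removed
    print(cut(0, 0, 0) < 0, cut(0.8, 0, 0) < 0, cut(2, 0, 0) < 0)  # True False False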
Generalization and capacity of extensively large two-layered perceptrons.
Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido
2002-09-01
The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of implementable Boolean functions. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, α_c, at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.
Toda, S.; Stein, R.
2003-01-01
Two M ≈ 6 well-recorded strike-slip earthquakes struck just 4 km and 48 days apart in Kagoshima prefecture, Japan, in 1997, providing an opportunity to study earthquake interaction. Aftershocks are abundant where the Coulomb stress is calculated to have been increased by the first event, and they abruptly stop where the stress is dropped by the second event. This ability of the main shocks to toggle seismicity on and off argues that static stress changes play a major role in exciting aftershocks, whereas the dynamic Coulomb stresses, which should only promote seismicity, appear to play a secondary role. If true, the net stress changes from a sequence of earthquakes might be expected to govern the subsequent seismicity distribution. However, adding the stress changes from the two Kagoshima events does not fully capture the ensuing seismicity, such as its rate change, temporal decay, or migration away from the ends of the ruptures. We therefore implement a stress transfer model that incorporates rate/state friction, in which seismicity is treated as a sequence of independent nucleation events that are dependent on the fault slip, slip rate, and elapsed time since the last event. The model reproduces the temporal response of seismicity to successive stress changes, including toggling, decay, and aftershock migration. Nevertheless, the match of observed to predicted seismicity is quite imperfect, due perhaps to inadequate knowledge of several model parameters. However, to demonstrate the potential of this approach, we build a probabilistic forecast of larger earthquakes based on the expected rate of small aftershocks, taking advantage of the large statistical sample the small shocks afford. Not surprisingly, such probabilities are highly time- and location-dependent: during the first decade after the main shocks, the seismicity rate and the chance of successive large shocks are about an order of magnitude higher than the background rate and are concentrated exclusively in the stress triggering zones.
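A common closed form for the seismicity-rate response to a Coulomb stress step under rate/state friction (Dieterich, 1994), on which such stress transfer models build, can be sketched as follows; the parameter values are illustrative, not those calibrated for Kagoshima.

    import numpy as np

    def seismicity_rate(t, dtau, a_sigma, t_a, r_background=1.0):
        # Dieterich (1994) rate after a Coulomb stress step dtau at t = 0.
        # t_a = A*sigma / stressing_rate is the aftershock duration; rates
        # are relative to the background rate r_background.
        gamma = (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
        return r_background / gamma

    t = np.linspace(0.01, 10.0, 200)   # years, illustrative
    print(seismicity_rate(t, dtau=0.1, a_sigma=0.01, t_a=10.0)[:5])
    # A positive step (dtau > 0) elevates rates that decay over ~t_a;
    # a negative step (dtau < 0) suppresses seismicity (the toggling effect).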
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
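A minimal sketch of the stage-error part of such a propagation (ignoring the rating curve's own parametric and structural errors, and with hypothetical curve parameters and error magnitudes) might look as follows.

    import numpy as np

    rng = np.random.default_rng(0)

    def rating_curve(h, a=30.0, b=0.2, c=1.6):
        # Hypothetical power-law rating curve Q = a * (h - b)^c.
        return a * np.clip(h - b, 0.0, None) ** c

    h_obs = np.array([0.8, 1.1, 1.5, 2.2])   # observed stages (m), illustrative
    n_mc = 5000
    # Systematic error (e.g., gauge-to-staff calibration): one draw per series.
    sys_err = rng.normal(0.0, 0.01, size=(n_mc, 1))
    # Non-systematic error (resolution, precision, waves): one draw per reading.
    rand_err = rng.normal(0.0, 0.02, size=(n_mc, h_obs.size))

    q_samples = rating_curve(h_obs + sys_err + rand_err)
    q_med = np.median(q_samples, axis=0)
    q_lo, q_hi = np.percentile(q_samples, [2.5, 97.5], axis=0)
    print(np.c_[q_med, q_lo, q_hi])   # streamflow medians and 95% intervals

Systematic errors matter for long-term averages precisely because they do not cancel across readings, which is why the single per-series draw above is kept separate from the per-reading noise.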
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.
2009-01-01
We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters, α, of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
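A hedged sketch of the time-dependent probability computation under a Brownian passage time renewal model follows; the recurrence interval, elapsed time, and forecast window are illustrative.

    import numpy as np
    from scipy.integrate import quad

    def bpt_pdf(t, mu, alpha):
        # Brownian passage time density with mean mu and aperiodicity alpha.
        return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
               np.exp(-(t - mu) ** 2 / (2.0 * alpha**2 * mu * t))

    def conditional_prob(t_elapsed, horizon, mu, alpha):
        # P(rupture within `horizon` years | quiescent for t_elapsed years).
        num, _ = quad(bpt_pdf, t_elapsed, t_elapsed + horizon, args=(mu, alpha))
        den, _ = quad(bpt_pdf, t_elapsed, np.inf, args=(mu, alpha))
        return num / den

    # Illustrative: 1000-yr mean recurrence, 700 yr elapsed, 50-yr window.
    for alpha in (0.3, 0.5, 0.7):
        print(alpha, conditional_prob(700.0, 50.0, 1000.0, alpha))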
Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.
Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping
2018-01-01
Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of the possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing properties, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal the system matrix of the Boolean model in a computationally efficient way, whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small time series data sets and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of oxidative stress response. Conclusions: The combination of an efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
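The following sketch is not the STP/ILP formulation of the paper, but it illustrates the underlying principle: data consistency and prior knowledge (here, a positive-unateness/activator prior) jointly prune the space of candidate Boolean functions. The data and the choice of regulator are hypothetical.

    import itertools

    # Partial time-series constraints for one target gene with two candidate
    # regulators: (regulator_state, next_value) pairs; hypothetical data.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 0)]
    n = 2
    states = list(itertools.product((0, 1), repeat=n))

    def positively_unate_in(f, var):
        # Raising input `var` must never lower the output (activator prior).
        for s in states:
            if s[var] == 0 and f[s] > f[s[:var] + (1,) + s[var + 1:]]:
                return False
        return True

    consistent = []
    for outputs in itertools.product((0, 1), repeat=2 ** n):
        f = dict(zip(states, outputs))
        if all(f[s] == v for s, v in data):   # consistent with the time series
            consistent.append(f)

    with_prior = [f for f in consistent if positively_unate_in(f, 0)]
    print(len(consistent), "functions fit the data;",
          len(with_prior), "also satisfy the activator prior")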
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests that an error increases the likelihood of subsequent errors or that success increases the likelihood of subsequent success. Currently, the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' law and addresses a continuous range of dependence. Methods: Under the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount; maximum negative dependence is the smallest amount that two events can overlap. When the probability of two events overlapping is less than under independence, negative dependence occurs. For example, negative dependence is when an operator's failure to actuate Pump A increases his or her chance of actuating Pump B: the initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the second event is smaller than the first, the maximum dependence is less than 1, as defined by Bayes' law. Alternative dependence equations are therefore provided, along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probabilistic dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
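A small sketch contrasting THERP's discrete positive-dependence levels with the bounds implied by probability theory (the Fréchet-Hoeffding limits) follows; the event probabilities are illustrative.

    import numpy as np

    def therp_conditional(p_b):
        # THERP's five discrete levels for P(B|A), given unconditional P(B).
        return {
            "zero":     p_b,
            "low":      (1 + 19 * p_b) / 20,
            "moderate": (1 + 6 * p_b) / 7,
            "high":     (1 + p_b) / 2,
            "complete": 1.0,
        }

    def frechet_bounds(p_a, p_b):
        # Range of P(B|A) allowed by probability theory.
        joint_min = max(0.0, p_a + p_b - 1.0)   # maximum negative dependence
        joint_max = min(p_a, p_b)               # maximum positive dependence
        return joint_min / p_a, joint_max / p_a

    p_a, p_b = 1e-2, 1e-3
    print(therp_conditional(p_b))
    print(frechet_bounds(p_a, p_b))
    # With p_b < p_a the upper bound is p_b/p_a = 0.1 < 1: "complete
    # dependence" in the THERP sense is not even attainable here.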
Zeng, Yuehua
2018-01-01
The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault‐slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double‐truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault‐slip rates. We compute the seismic hazard following the methodology and logic‐tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA) using precomputed rates of exceedance from each fault as a Green’s function, we reduce the computer time by about 10^5‐fold and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault‐slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50‐yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip‐rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground‐motion level are 5.5% and 7.3% of the mean with the relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
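A hedged sketch of the precomputed-exceedance idea follows: the fault's exceedance-rate curve is treated as a Green's function that rescales linearly with slip rate, and slip rates are drawn from a double-truncated Gaussian. All numbers are hypothetical, and the linear rescaling is an assumption of this sketch.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical precomputed annual exceedance rates for one site and one
    # fault, evaluated at the preferred slip rate; ground-motion levels in g.
    gm_levels = np.array([0.1, 0.2, 0.4, 0.8])
    rate_preferred = np.array([2e-3, 8e-4, 2e-4, 3e-5])
    slip_pref, slip_sigma = 5.0, 1.0      # mm/yr, illustrative

    # Double-truncated (+/- 2 sigma) Gaussian slip-rate samples.
    draws = rng.normal(slip_pref, slip_sigma, size=20000)
    draws = draws[np.abs(draws - slip_pref) <= 2.0 * slip_sigma][:1000]

    # Event rates scale roughly linearly with slip rate, so the
    # precomputed curve rescales like a Green's function.
    curves = rate_preferred * (draws[:, None] / slip_pref)
    print(np.c_[gm_levels, rate_preferred, curves.mean(axis=0)])

Because the truncated Gaussian is symmetric about the preferred rate, the mean Monte Carlo curve nearly reproduces the preferred-rate curve, in line with the small average differences reported above; the spread across samples is what drives the larger near-fault uncertainties.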
Processing of probabilistic information in weight perception and motor prediction.
Trampenau, Leif; van Eimeren, Thilo; Kuhtz-Buschbeck, Johann
2017-02-01
We studied the effects of probabilistic cues, i.e., of information of limited certainty, in the context of an action task (GL: grip-lift) and of a perceptual task (WP: weight perception). Normal subjects (n = 22) saw four different probabilistic visual cues, each of which announced the likely weight of an object. In the GL task, the object was grasped and lifted with a pinch grip, and the peak force rates indicated that the grip and load forces were scaled predictively according to the probabilistic information. The WP task probed the expected heaviness associated with each probabilistic cue; the participants gradually adjusted the object's weight until its heaviness matched the expected weight for a given cue. Subjects were randomly assigned to two groups: one started with the GL task and the other with the WP task. The four different probabilistic cues influenced weight adjustments in the WP task and peak force rates in the GL task in a similar manner. The interpretation and utilization of the probabilistic information were critically influenced by the initial task. Participants who started with the WP task classified the four probabilistic cues into four distinct categories and applied these categories to the subsequent GL task. On the other hand, participants who started with the GL task applied three distinct categories to the four cues and retained this classification in the following WP task. The initial strategy, once established, determined how the probabilistic information was interpreted and implemented.
PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets
Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.
2016-01-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
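The attenuation problem PCAN addresses can be reproduced in a few lines (a hypothetical Poisson log-normal simulation, not the PCAN estimator itself): Pearson correlation computed on low counts understates the correlation of the natural parameters.

    import numpy as np

    rng = np.random.default_rng(2)
    n, rho = 500, 0.8                     # rho: correlation of natural parameters

    # Correlated Gaussian log-rates -> low-count Poisson observations.
    cov = 0.25 * np.array([[1.0, rho], [rho, 1.0]])
    log_rate = rng.multivariate_normal([-1.5, -1.5], cov, size=n)
    counts = rng.poisson(np.exp(log_rate))

    print("latent correlation:   ", np.corrcoef(log_rate.T)[0, 1])
    print("Pearson on raw counts:", np.corrcoef(counts.T)[0, 1])
    # The count-level estimate is attenuated toward zero; model-based
    # estimation targets the latent (natural-parameter) correlation instead.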
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities is constant over all directions of the sampling lines. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
Rosqvist, N H; Dollar, L H; Fourie, A B
2005-08-01
In this paper, we study and quantify pollutant concentrations after long-term leaching at relatively low flow rates and residual concentrations after heavy flushing of a 0.14 m3 municipal solid waste sample. Moreover, water flow and solute transport through preferential flow paths are studied by model interpretation of experimental break-through curves (BTCs), generated by tracer tests. In the study it was found that high concentrations of chloride remain after several pore volumes of water have percolated through the waste sample. The residual concentration was found to be considerably higher than can be predicted by degradation models. For model interpretations of the experimental BTCs, two probabilistic model approaches were applied, the transfer function model and the Lagrangian transport formulation. The experimental BTCs indicated the presence of preferential flow through the waste mass and the model interpretation of the BTCs suggested that between 19 and 41% of the total water content participated in the transport of solute through preferential flow paths. In the study, the occurrence of preferential flow was found to be dependent on the flow rate in the sense that a high flow rate enhances the preferential flow. However, to fully quantify the possible dependence between flow rate and preferential flow, experiments on a broader range of experimental conditions are suggested. The chloride washout curve obtained over the 4-year study period shows that as a consequence of the water flow in favoured flow paths, bypassing other parts of the solid waste body, the leachate quality may reflect only the flow paths and their surroundings. The results in this study thus show that in order to improve long-term prediction of the leachate quality and quantity the magnitude of the preferential water flow through a landfill must be taken into account.
The Processing of Extraposed Structures in English
Levy, Roger; Fedorenko, Evelina; Breen, Mara; Gibson, Ted
2012-01-01
In most languages, most of the syntactic dependency relations found in any given sentence are PROJECTIVE: the word-word dependencies in the sentence do not cross each other. Some syntactic dependency relations, however, are NON-PROJECTIVE: some of their word-word dependencies cross each other. Non-projective dependencies are both rarer and more computationally complex than projective dependencies; hence, it is of natural interest to investigate whether there are any processing costs specific to non-projective dependencies, and whether factors known to influence processing of projective dependencies also affect non-projective dependency processing. We report three self-paced reading studies, together with corpus and sentence completion studies, investigating the comprehension difficulty associated with the non-projective dependencies created by the extraposition of relative clauses in English. We find that extraposition over either verbs or prepositional phrases creates comprehension difficulty, and that this difficulty is consistent with probabilistic syntactic expectations estimated from corpora. Furthermore, we find that manipulating the expectation that a given noun will have a postmodifying relative clause can modulate and even neutralize the difficulty associated with extraposition. Our experiments rule out accounts based purely on derivational complexity and/or dependency locality in terms of linear positioning. Our results demonstrate that comprehenders maintain probabilistic syntactic expectations that persist beyond projective-dependency structures, and suggest that it may be possible to explain observed patterns of comprehension difficulty associated with extraposition entirely through probabilistic expectations. PMID:22035959
Interpolation of the Extended Boolean Retrieval Model.
ERIC Educational Resources Information Center
Zanger, Daniel Z.
2002-01-01
Presents an interpolation theorem for an extended Boolean information retrieval model. Results show that whenever two or more documents are similarly ranked at any two points for a query containing exactly two terms, then they are similarly ranked at all points in between; and that results can fail for queries with more than two terms. (Author/LRW)
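For context, the extended Boolean model scores documents with p-norms that interpolate between vector-space and strict Boolean behaviour (Salton et al.); a minimal sketch for a two-term query follows, with illustrative term weights and equal query-term weighting assumed.

    import numpy as np

    def or_score(weights, p):
        w = np.asarray(weights, dtype=float)
        return np.mean(w ** p) ** (1.0 / p)

    def and_score(weights, p):
        w = np.asarray(weights, dtype=float)
        return 1.0 - np.mean((1.0 - w) ** p) ** (1.0 / p)

    # Two documents with term weights for a two-term query.
    doc1, doc2 = (0.9, 0.2), (0.6, 0.5)
    for p in (1.0, 2.0, 5.0):
        print(p, or_score(doc1, p), or_score(doc2, p))
    # p = 1 reduces to an average (vector-like); as p grows, the scores
    # approach strict Boolean max/min behaviour, with rankings varying
    # smoothly in between, which is the interpolation property at issue.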
The Concept of the "Imploded Boolean Search": A Case Study with Undergraduate Chemistry Students
ERIC Educational Resources Information Center
Tomaszewski, Robert
2016-01-01
Critical thinking and analytical problem-solving skills in research involves using different search strategies. A proposed concept for an "Imploded Boolean Search" combines three unique identifiable field types to perform a search: keyword(s), numerical value(s), and a chemical structure or reaction. The object of this type of search is…
Boolean network inference from time series data incorporating prior biological knowledge.
Haider, Saad; Pal, Ranadip
2012-01-01
Numerous approaches exist for modeling of genetic regulatory networks (GRNs), but the low sampling rates often employed in biological studies prevent the inference of detailed models from experimental data. In this paper, we analyze the issues involved in estimating a model of a GRN from single cell line time series data with limited time points. We present an inference approach for a Boolean Network (BN) model of a GRN from limited transcriptomic or proteomic time series data based on prior biological knowledge of connectivity, constraints on attractor structure, and robust design. We applied our inference approach to 6-time-point transcriptomic data on a Human Mammary Epithelial Cell line (HMEC) after application of Epidermal Growth Factor (EGF) and generated a BN with a plausible biological structure satisfying the data. We further defined and applied a similarity measure to compare synthetic BNs and BNs generated through the proposed approach constructed from transitions of various paths of the synthetic BNs. We have also compared the performance of our algorithm with two existing BN inference algorithms. Through theoretical analysis and simulations, we showed the rarity of arriving at a BN with plausible biological structure from limited time series data when using random connectivity or data lacking structure. The framework, when applied to experimental data and data generated from synthetic BNs, was able to estimate BNs with high similarity scores. Comparison with existing BN inference algorithms showed the better performance of our proposed algorithm for limited time series data. The proposed framework can also be applied to optimize the connectivity of a GRN from experimental data when the prior biological knowledge on regulators is limited or not unique.
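One natural way to realize a similarity measure between two BNs, sketched below with hypothetical rules, is the fraction of states mapped to the same successor under synchronous update; the paper's own measure may differ in detail.

    import itertools

    def transition_map(rules, n):
        # rules: one function per node, each taking the full state tuple.
        return {s: tuple(int(f(s)) for f in rules)
                for s in itertools.product((0, 1), repeat=n)}

    def similarity(rules_a, rules_b, n):
        ta, tb = transition_map(rules_a, n), transition_map(rules_b, n)
        return sum(ta[s] == tb[s] for s in ta) / 2 ** n

    # Two hypothetical 3-node networks differing in one regulation (AND vs OR).
    net_a = [lambda s: s[1], lambda s: s[0] and s[2], lambda s: 1 - s[0]]
    net_b = [lambda s: s[1], lambda s: s[0] or s[2],  lambda s: 1 - s[0]]
    print(similarity(net_a, net_b, 3))   # -> 0.5: they agree on half the states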
Energy and criticality in random Boolean networks
NASA Astrophysics Data System (ADS)
Andrecut, M.; Kauffman, S. A.
2008-06-01
The central issue of research on the Random Boolean Networks (RBNs) model is the characterization of the critical transition between ordered and chaotic phases. Here, we discuss an approach based on the ‘energy’ associated with the unsatisfiability of the Boolean functions in the RBNs model, which provides an upper-bound estimate of the energy used in computation. We show that in the ordered phase the RBNs are in a ‘dissipative’ regime, performing mostly ‘downhill’ moves on the ‘energy’ landscape. Also, we show that in the disordered phase the RBNs have to ‘hill-climb’ on the ‘energy’ landscape in order to perform computation. The analytical results, obtained using Derrida's approximation method, are in complete agreement with numerical simulations.
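For context, the standard annealed (Derrida) approximation referenced above can be iterated in a few lines; K = 2 with unbiased functions (p = 0.5) is the critical case separating the ordered and chaotic phases.

    import numpy as np

    def derrida_map(d, K, p=0.5):
        # Annealed one-step map for the normalized Hamming distance d in an
        # RBN with in-degree K and output bias p: two differing inputs give
        # differing outputs with probability 2p(1-p).
        return 2.0 * p * (1.0 - p) * (1.0 - (1.0 - d) ** K)

    for K in (1, 2, 3):
        d = 0.01
        for _ in range(200):
            d = derrida_map(d, K)
        print(K, round(d, 4))   # -> 0 in the ordered phase, > 0 in the chaotic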
Optical programmable Boolean logic unit.
Chattopadhyay, Tanay
2011-11-10
Logic units are the building blocks of many important computational operations like arithmetic, multiplexing-demultiplexing, radix conversion, parity checking and generation, etc. Multifunctional logic operation is essential in this respect. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input, without changing the circuit design. This circuit has two outputs, one complementary to the other, so no loss of data can occur. The circuit is basically designed around a 2×2 polarization-independent optical crossbar switch. The performance of the proposed circuit has been evaluated through numerical simulations. The binary logical states (0,1) are represented by the absence of light (null) and the presence of light, respectively.
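Stripped of the optics, the programmability idea reduces to using a 4-bit program word as the truth table of any of the 16 two-input Boolean functions, with a complementary second output; a sketch follows.

    def programmable_gate(program, a, b):
        # program: 0..15, the 4-bit truth table of a two-input Boolean
        # function; bit (2a + b) holds the output for inputs (a, b).
        out = (program >> (2 * a + b)) & 1
        return out, 1 - out          # main and complementary outputs

    NOR = 0b0001                     # true only for a = b = 0
    XOR = 0b0110
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, programmable_gate(NOR, a, b), programmable_gate(XOR, a, b))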
Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R
2017-01-01
Mathematical models based on dynamical systems theory are well-suited tools for the integration of available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed on all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual factor-of-safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
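A minimal stress-strength interference sketch of the probabilistic method follows; the distributions are hypothetical and chosen only to contrast a single factor of safety with an explicit failure probability.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000

    # Hypothetical distributions for load-induced stress and material strength.
    stress   = rng.normal(300.0, 40.0, n)   # MPa
    strength = rng.normal(450.0, 50.0, n)   # MPa

    p_failure = np.mean(stress >= strength)
    print("estimated probability of failure:", p_failure)
    # Deterministic design would instead report a single factor of safety,
    # 450/300 = 1.5, which hides how parameter scatter drives the failure risk.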
Probabilistic grammatical model for helix‐helix contact site classification
2013-01-01
Background: Hidden Markov Models power many state-of-the-art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not convey directly information on medium- and long-range residue-residue interactions. This requires an expressive power of at least context-free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results: In this work, we present a probabilistic grammatical framework for problem-specific protein languages and apply it to the classification of transmembrane helix-helix pair configurations. The core of the model consists of a probabilistic context-free grammar, automatically inferred by a genetic algorithm from only a generic set of expert-based rules and positive training samples. The model was applied to produce sequence-based descriptors of four classes of transmembrane helix-helix contact site configurations. The highest performance of the classifiers reached an AUC-ROC of 0.70. The analysis of grammar parse trees revealed the ability to represent structural features of helix-helix contact sites. Conclusions: We demonstrated that our probabilistic context-free framework for the analysis of protein sequences outperforms the state of the art in the task of helix-helix contact site classification. However, this is achieved without necessarily requiring the modeling of long-range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human-readable. Thus they could provide biologically meaningful information for molecular biologists. PMID:24350601
Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks
NASA Technical Reports Server (NTRS)
Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris
2015-01-01
Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.
Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.
Pecevski, Dejan; Maass, Wolfgang
2016-01-01
Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
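For reference, the normative target that such trained networks approximate in Gaussian cue-combination settings is reliability-weighted averaging; a minimal sketch with illustrative cue values follows.

    import numpy as np

    def optimal_combination(x1, sigma1, x2, sigma2):
        # Bayes-optimal estimate for two conditionally independent Gaussian
        # cues: weight each cue by its inverse variance (its reliability).
        w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
        estimate = (w1 * x1 + w2 * x2) / (w1 + w2)
        sigma_posterior = (w1 + w2) ** -0.5
        return estimate, sigma_posterior

    print(optimal_combination(x1=2.0, sigma1=1.0, x2=3.0, sigma2=2.0))
    # A generic network trained with squared error on (cues -> target) pairs
    # converges toward this posterior mean, without hand-crafted operations.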
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
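The naive baseline that the paper improves upon can be sketched in a few lines: sample a random relation and take its reflexive-transitive closure. The closure step is precisely what biases such samples toward certain quasi-orders, which motivates the bias-corrected inductive algorithms described above.

    import itertools
    import random

    def random_quasi_order(n, p=0.3, seed=None):
        # Naive, biased baseline: random relation plus reflexive-transitive
        # closure (Warshall). The closure introduces the sampling bias that
        # the paper's inductive algorithms are designed to correct.
        rng = random.Random(seed)
        rel = [[i == j or rng.random() < p for j in range(n)] for i in range(n)]
        for k, i, j in itertools.product(range(n), repeat=3):
            rel[i][j] = rel[i][j] or (rel[i][k] and rel[k][j])
        return rel

    print(random_quasi_order(4, seed=7))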
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
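A minimal sketch of the iterative sampling at the heart of probabilistic bias analysis follows, for differential exposure misclassification in a case-control table; the data and the sensitivity/specificity priors are hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)

    # Observed exposed counts and group totals (hypothetical 2x2 table).
    e_case, n_case = 60, 200
    e_ctrl, n_ctrl = 40, 200

    def corrected_or(se_case, sp_case, se_ctrl, sp_ctrl):
        # Back-correct observed exposed counts: E* = Se*E + (1-Sp)*(N-E).
        a = (e_case - (1 - sp_case) * n_case) / (se_case + sp_case - 1)
        b = (e_ctrl - (1 - sp_ctrl) * n_ctrl) / (se_ctrl + sp_ctrl - 1)
        if not (0 < a < n_case and 0 < b < n_ctrl):
            return np.nan                      # incompatible draw, discard
        return (a / (n_case - a)) / (b / (n_ctrl - b))

    # Priors on Se/Sp, allowing differential misclassification by case status.
    draws = np.array([corrected_or(rng.uniform(0.75, 0.95), rng.uniform(0.90, 0.99),
                                   rng.uniform(0.85, 0.95), rng.uniform(0.90, 0.99))
                      for _ in range(5000)])
    print(np.nanpercentile(draws, [2.5, 50, 97.5]))   # adjusted OR interval

The resemblance to Bayesian posterior sampling discussed above is visible here: the Se/Sp draws act like prior draws, but no likelihood update is performed, which is one source of the divergence between the two approaches.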
Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E
2018-06-01
This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays and probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion.
Probabilistic classifiers with high-dimensional data
Kim, Kyung In; Simon, Richard
2011-01-01
For medical classification problems, it is often desirable to have a probability associated with each class. Probabilistic classifiers have received relatively little attention for small n, large p classification problems, despite their importance in medical decision making. In this paper, we introduce two criteria for the assessment of probabilistic classifiers, well-calibratedness and refinement, and develop corresponding evaluation measures. We evaluated several published high-dimensional probabilistic classifiers and developed two extensions of the Bayesian compound covariate classifier. Based on simulation studies and analysis of gene expression microarray data, we found that proper probabilistic classification is more difficult than deterministic classification. It is important to ensure that a probabilistic classifier is well calibrated, or at least not “anticonservative”, using the methods developed here. We provide this evaluation for several probabilistic classifiers and also evaluate their refinement as a function of sample size under weak and strong signal conditions. We also present a cross-validation method for evaluating the calibration and refinement of any probabilistic classifier on any data set. PMID:21087946
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
NASA Astrophysics Data System (ADS)
Albertson, J. D.
2015-12-01
Methane emissions from underground pipeline leaks remain an ongoing issue in the development of accurate methane emission inventories for the natural gas supply chain. Application of mobile methods during routine street surveys would help address this issue, but there are large uncertainties in current approaches. In this paper, we describe results from a series of near-source (< 30 m) controlled methane releases where an instrumented van was used to measure methane concentrations during both fixed location sampling and during mobile traverses immediately downwind of the source. The measurements were used to evaluate the application of EPA Method 33A for estimating methane emissions downwind of a source and also to test the application of a new probabilistic approach for estimating emission rates from mobile traverse data.
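The probabilistic approach itself is not detailed here, and the following is not EPA Method 33A; it is only a textbook Gaussian-plume point estimate of source rate from a near-source concentration measurement, the kind of baseline such methods refine. All values are illustrative.

    import numpy as np

    def source_rate(c_peak, u, sigma_y, sigma_z):
        # Invert a ground-level, centreline Gaussian plume for emission rate
        # Q: C = Q / (pi * u * sigma_y * sigma_z) for a ground-level point
        # source with the receptor on the plume centreline at ground level.
        return c_peak * np.pi * u * sigma_y * sigma_z

    c_excess = 1.3e-6    # kg/m^3 methane above background (hypothetical)
    print(source_rate(c_excess, u=2.0, sigma_y=2.5, sigma_z=1.5), "kg/s")

Near-source dispersion parameters are highly uncertain at < 30 m range, which is one reason point estimates like this scatter widely and a probabilistic treatment of traverse data is attractive.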
NASA Astrophysics Data System (ADS)
Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.
2016-02-01
A regional probabilistic survey of 20 randomly selected stations, at which water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampled parameters included the water column (temperature, salinity, dissolved oxygen, chlorophyll a, nutrients) and sediments (macroinvertebrates, chemistry, i.e., trace metals and hydrocarbons, and grain size). The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted, non-random designs were utilized to assess specific points of interest and cannot be used to make inferences about the distributions of environmental parameters. Owing to differences in the monitoring objectives of probabilistic and targeted designs, there has been limited assessment of whether combining the two approaches is beneficial. This study evaluates whether a combined approach using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data can provide a new perspective on better understanding the environmental status of these estuaries.
Designing Networks that are Capable of Self-Healing and Adapting
2017-04-01
Using tools from statistical mechanics, combinatorics, Boolean networks, and numerical simulations, and inspired by design principles from biological networks, we derive design principles for self-healing networks and their applications, and construct an all-possible-paths model for network adaptation.
Probabilistic modelling of flood events using the entropy copula
NASA Astrophysics Data System (ADS)
Li, Fan; Zheng, Qian
2016-11-01
The estimation of flood frequency is vital for flood control strategies and hydraulic structure design. Generating synthetic flood events according to the statistical properties of observations is one plausible method for analyzing flood frequency. Due to the statistical dependence among the flood event variables (i.e., the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, the copula method has been widely used for constructing multivariable dependence structures; however, the copula family must be chosen before application, and the choice process is sometimes rather subjective. The entropy copula, a new copula family employed in this research, provides a way to avoid this relatively subjective process by combining the theories of copula and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling of the flood events of two hydrological gauges, and a comparison of accuracy with popular copulas was made. The Gibbs sampling technique was applied for trivariate flood event simulation in order to mitigate the calculation difficulties of extending to three dimensions directly. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.
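As a stand-in for the entropy copula (whose construction is the paper's contribution), the following sketch simulates dependent trivariate flood events with a Gaussian copula and hypothetical gamma marginals, to show the general copula simulation pattern.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Hypothetical correlations among flood peak, volume, and duration.
    corr = np.array([[1.0, 0.7, 0.5],
                     [0.7, 1.0, 0.6],
                     [0.5, 0.6, 1.0]])
    z = rng.multivariate_normal(np.zeros(3), corr, size=10000)
    u = stats.norm.cdf(z)                 # dependent uniforms (the copula part)

    # Hypothetical fitted gamma marginals for peak, volume, duration.
    marginals = [stats.gamma(a=3.0, scale=50.0),
                 stats.gamma(a=2.0, scale=200.0),
                 stats.gamma(a=4.0, scale=2.0)]
    events = np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])
    print(events[:3])                     # synthetic (peak, volume, duration)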
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Andresen, Juan Carlos; Janzen, Katharina; Katzgraber, Helmut G.
2013-03-01
We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free graphs in a magnetic field. Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show, in agreement with analytical calculations, that the system exhibits a de Almeida-Thouless line. Furthermore, we study avalanches in the system at zero temperature to see if the system displays self-organized criticality. This would suggest that damage (avalanches) can spread across the whole system with nonzero probability, i.e., that Boolean decision problems on scale-free networks with competing interactions are fragile when not in thermal equilibrium.
Uchida, Y.; Takada, E.; Fujisaki, A.; Isobe, M.; Shinohara, K.; Tomita, H.; Kawarabayashi, J.; Iguchi, T.
2014-01-01
Neutron and γ-ray (n-γ) discrimination with a digital signal processing system has been used to measure the neutron emission profile in magnetic confinement fusion devices. However, the sampling rate must be set low to extend the measurement time because memory storage is limited, and time jitter degrades the discrimination quality at low sampling rates. As described in this paper, a new charge comparison method was developed. Furthermore, an automatic n-γ discrimination method was examined using a probabilistic approach. Analysis results were evaluated using the figure of merit, showing that the discrimination quality was improved. Automatic discrimination was applied using the EM algorithm and the k-means algorithm. PMID:25430297
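A hedged sketch of automatic discrimination on a charge-comparison feature, using a two-component Gaussian mixture fitted by EM, follows; the pulse-shape ratios are synthetic, not the paper's data.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    # Synthetic tail/total charge ratios: gammas cluster low, neutrons higher.
    psd = np.concatenate([rng.normal(0.15, 0.02, 4000),   # gamma-like
                          rng.normal(0.28, 0.03, 1000)])  # neutron-like

    gmm = GaussianMixture(n_components=2, random_state=0).fit(psd.reshape(-1, 1))
    labels = gmm.predict(psd.reshape(-1, 1))
    mu = gmm.means_.ravel()
    print("component means:", np.sort(mu))
    print("fraction labelled neutron-like:", np.mean(labels == np.argmax(mu)))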
Probability misjudgment, cognitive ability, and belief in the paranormal.
Musch, Jochen; Ehrenberg, Katja
2002-05-01
According to the probability misjudgment account of paranormal belief (Blackmore & Troscianko, 1985), believers in the paranormal tend to wrongly attribute remarkable coincidences to paranormal causes rather than chance. Previous studies have shown that belief in the paranormal is indeed positively related to error rates in probabilistic reasoning. General cognitive ability could account for a relationship between these two variables without assuming a causal role of probabilistic reasoning in the forming of paranormal beliefs, however. To test this alternative explanation, a belief in the paranormal scale (BPS) and a battery of probabilistic reasoning tasks were administered to 123 university students. Confirming previous findings, a significant correlation between BPS scores and error rates in probabilistic reasoning was observed. This relationship disappeared, however, when cognitive ability as measured by final examination grades was controlled for. Lower cognitive ability correlated substantially with belief in the paranormal. This finding suggests that differences in general cognitive performance rather than specific probabilistic reasoning skills provide the basis for paranormal beliefs.
Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...
NASA Astrophysics Data System (ADS)
Kotb, Amer
2015-05-01
The performance of an all-optical NOR gate is numerically simulated and investigated. The NOR Boolean function is realized by using a semiconductor optical amplifier (SOA) incorporated in Mach-Zehnder interferometer (MZI) arms and exploiting the nonlinear effect of two-photon absorption (TPA). If the input pulse intensities are adjusted to be high enough, the TPA-induced phase change can be larger than the regular gain-induced phase change and hence support ultrafast operation in the dual-rail switching mode. The numerical study is carried out by taking into account the effect of amplified spontaneous emission (ASE). The dependence of the output quality factor (Q-factor) on critical data signal and SOA parameters is examined and assessed. The obtained results confirm that the NOR gate implemented with the proposed scheme is capable of operating at a data rate of 250 Gb/s with logical correctness and a high output Q-factor.
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
Implementing neural nets with programmable logic
NASA Technical Reports Server (NTRS)
Vidal, Jacques J.
1988-01-01
Networks of Boolean programmable logic modules are presented as one purely digital class of artificial neural nets. The approach contrasts with the continuous analog framework usually suggested. Programmable logic networks are capable of handling many neural-net applications. They avoid some of the limitations of threshold logic networks and present distinct opportunities. The network nodes are called dynamically programmable logic modules. They can be implemented with digitally controlled demultiplexers. Each node performs a Boolean function of its inputs which can be dynamically assigned. The overall network is therefore a combinational circuit and its outputs are Boolean global functions of the network's input variables. The approach offers definite advantages for VLSI implementation, namely, a regular architecture with limited connectivity, simplicity of the control machinery, natural modularity, and the support of a mature technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera-Durón, R. R., E-mail: roberto.rivera@ipicyt.edu.mx; Campos-Cantón, E., E-mail: eric.campos@ipicyt.edu.mx; Campos-Cantón, I.
We present the design of an autonomous time-delay Boolean network realized with readily available electronic components. Through simulations and experiments that account for the detailed nonlinear response of each circuit element, we demonstrate that a network with five Boolean nodes displays complex behavior. Furthermore, we show that the dynamics of two identical networks display near-instantaneous synchronization to a periodic state when forced by a common periodic Boolean signal. A theoretical analysis of the network reveals the conditions under which complex behavior is expected in an individual network and the occurrence of synchronization in the forced networks. This research will enable future experiments on autonomous time-delay networks using readily available electronic components with dynamics on a slow enough time-scale so that inexpensive data collection systems can faithfully record the dynamics.
Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Gong, Xinwei; Socolar, Joshua
2009-03-01
Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs such as feed-forward loops and auto-repressive links that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando et al., Nature (London) 453(7197), 944-947, 2008.
Students’ difficulties in probabilistic problem-solving
NASA Astrophysics Data System (ADS)
Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.
2018-03-01
Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during problem solving. The research used a qualitative method with a case study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise the students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems. The data were analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems fall into three categories. The first relates to difficulties in understanding the probabilistic problem. The second concerns difficulties in choosing and using appropriate strategies for solving the problem. The third concerns difficulties with the computational process. These results indicate that students are not yet able to apply their knowledge and abilities to probabilistic problems. It is therefore important for mathematics teachers to plan probabilistic learning that can optimize students' probabilistic thinking ability.
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of the eight 12-bit generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
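As a software illustration of realizing a nonuniform distribution through conditional probabilities on zeros and ones, the sketch below draws each output bit from the probability that the remaining candidate values lie in the upper half of the current interval. It is only an analogue of the hardware pipeline described above; the function name and example distribution are invented for the illustration.

    import random

    def bitwise_sampler(pmf, n_bits=8):
        # Sample from an (unnormalized) pmf over [0, 2**n_bits) one bit at
        # a time: each bit is drawn from the conditional probability that
        # the value lies in the upper half of the remaining interval.
        value, lo, hi = 0, 0, len(pmf)
        for _ in range(n_bits):
            mid = (lo + hi) // 2
            total = sum(pmf[lo:hi])
            p_one = sum(pmf[mid:hi]) / total if total else 0.5
            bit = 1 if random.random() < p_one else 0
            value = (value << 1) | bit
            lo, hi = (mid, hi) if bit else (lo, mid)
        return value

    # Example: a decreasing (nonuniform) target distribution over 0..255.
    pmf = [256 - v for v in range(256)]
    print(bitwise_sampler(pmf))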
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
Current-induced modulation of backward spin-waves in metallic microstructures
NASA Astrophysics Data System (ADS)
Sato, Nana; Lee, Seo-Won; Lee, Kyung-Jin; Sekiguchi, Koji
2017-03-01
We performed propagating spin-wave spectroscopy on backward spin-waves in ferromagnetic metallic microstructures in the presence of an electric current. Even with a small current injection of 5 × 10^10 A m^-2 into the ferromagnetic microwires, the backward spin-waves exhibit a gigantic 200 MHz frequency shift and a 15% amplitude change, a modulation 60 times larger than in previous reports. Through systematic experiments measuring the dependence on the film thickness of the microwire, on the wave vector of the spin-wave, and on the magnitude of the bias field, we revealed that for backward spin-waves a distribution of internal magnetic field generated by the electric current efficiently modulates the frequency and amplitude of the spin-waves. The gigantic frequency and amplitude changes were reproduced by a micromagnetic simulation, which predicts that a current injection of 5 × 10^11 A m^-2 allows a 3 GHz frequency shift. The effective coupling between electric current and backward spin-waves has the potential to build up a logic control method which encodes signals into the phase and amplitude of spin-waves. Metallic magnonics cooperating with electronics could enable highly integrated magnonic circuits based on both Boolean and non-Boolean principles.
Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Lomuscio, Alessio
2004-01-01
We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into Boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for Boolean formulae.
Feedback Controller Design for the Synchronization of Boolean Control Networks.
Liu, Yang; Sun, Liangjie; Lu, Jianquan; Liang, Jinling
2016-09-01
This brief investigates the partial and complete synchronization of two Boolean control networks (BCNs). Necessary and sufficient conditions for partial and complete synchronization are established by the algebraic representations of logical dynamics. An algorithm is obtained to construct the feedback controller that guarantees the synchronization of master and slave BCNs. Two biological examples are provided to illustrate the effectiveness of the obtained results.
Computer Aided Instruction for a Course in Boolean Algebra and Logic Design. Final Report (Revised).
ERIC Educational Resources Information Center
Roy, Rob
The use of computers to prepare deficient college and graduate students for courses that build upon previously acquired information would solve the growing problem of professors who must spend up to one third of their class time in review of material. But examination of students who were taught Boolean Algebra and Logic Design by means of Computer…
Frontal and Parietal Contributions to Probabilistic Association Learning
Rushby, Jacqueline A.; Vercammen, Ans; Loo, Colleen; Short, Brooke
2011-01-01
Neuroimaging studies have shown both dorsolateral prefrontal (DLPFC) and inferior parietal cortex (iPARC) activation during probabilistic association learning. Whether these cortical brain regions are necessary for probabilistic association learning is presently unknown. Participants' ability to acquire probabilistic associations was assessed during disruptive 1 Hz repetitive transcranial magnetic stimulation (rTMS) of the left DLPFC, left iPARC, and sham using a crossover single-blind design. On subsequent sessions, performance improved relative to baseline except during DLPFC rTMS that disrupted the early acquisition beneficial effect of prior exposure. A second experiment examining rTMS effects on task-naive participants showed that neither DLPFC rTMS nor sham influenced naive acquisition of probabilistic associations. A third experiment examining consecutive administration of the probabilistic association learning test revealed early trial interference from previous exposure to different probability schedules. These experiments, showing disrupted acquisition of probabilistic associations by rTMS only during subsequent sessions with an intervening night's sleep, suggest that the DLPFC may facilitate early access to learned strategies or prior task-related memories via consolidation. Although neuroimaging studies implicate DLPFC and iPARC in probabilistic association learning, the present findings suggest that early acquisition of the probabilistic cue-outcome associations in task-naive participants is not dependent on either region. PMID:21216842
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
The role of linguistic experience in the processing of probabilistic information in production.
Gustafson, Erin; Goldrick, Matthew
2018-01-01
Speakers track the probability that a word will occur in a particular context and utilize this information during phonetic processing. For example, content words that have high probability within a discourse tend to be realized with reduced acoustic/articulatory properties. Such probabilistic information may influence L1 and L2 speech processing in distinct ways (reflecting differences in linguistic experience across groups and the overall difficulty of L2 speech processing). To examine this issue, L1 and L2 speakers performed a referential communication task, describing sequences of simple actions. The two groups of speakers showed similar effects of discourse-dependent probabilistic information on production, suggesting that L2 speakers can successfully track discourse-dependent probabilities and use such information to modulate phonetic processing.
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
NASA Astrophysics Data System (ADS)
Yu, Bo; Ning, Chao-lie; Li, Bing
2017-03-01
A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
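A minimal numerical sketch of this kind of analysis, assuming the standard Fick-type chloride profile with an ageing diffusion coefficient; the parameter values and distributions below are illustrative stand-ins, and crude Monte Carlo replaces the paper's first-order reliability method:

    import math, random

    def chloride(x_mm, t_yr, Cs, D28, age_fac):
        # C(x,t) = Cs * (1 - erf(x / (2*sqrt(D(t)*t)))), with the diffusion
        # coefficient ageing as D(t) = D28 * (28 days / t)^age_fac.
        t_days = t_yr * 365.0
        D = D28 * (28.0 / t_days) ** age_fac        # mm^2/day
        return Cs * (1.0 - math.erf(x_mm / (2.0 * math.sqrt(D * t_days))))

    def p_failure(t_yr, cover_mm=50.0, C_crit=0.6, n=20_000):
        # Monte Carlo estimate of P(chloride at rebar depth > threshold).
        fails = 0
        for _ in range(n):
            Cs = random.lognormvariate(math.log(3.5), 0.25)   # % binder
            D28 = random.lognormvariate(math.log(0.43), 0.3)  # mm^2/day
            af = random.gauss(0.4, 0.08)
            fails += chloride(cover_mm, t_yr, Cs, D28, af) > C_crit
        return fails / n

    print(p_failure(50))   # failure probability at a 50-year service life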
Deriving Laws from Ordering Relations
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.
2003-01-01
It took much effort in the early days of non-Euclidean geometry to break away from the mindset that all spaces are flat and that two distinct parallel lines do not cross. Up to that point, all that was known was Euclidean geometry, and it was difficult to imagine anything else. We have suffered a similar handicap brought on by the enormous relevance of Boolean algebra to the problems of our age: logic and set theory. Previously, I demonstrated that the algebra of questions is not Boolean, but rather is described by the free distributive algebra. To get to this stage took much effort, as many obstacles, most self-placed, had to be overcome. As Boolean algebras were all I had ever known, it was almost impossible for me to imagine working with an algebra where elements do not have complements. With this realization, it became very clear that the sum and product rules of probability theory at the most basic level had absolutely nothing to do with the Boolean algebra of logical statements. Instead, a measure of degree of inclusion can be invented for many different partially ordered sets, and the sum and product rules fall out of the associativity and distributivity of the algebra. To reinforce this very important idea, this paper will go over how these constructions are made, while focusing on the underlying assumptions. I will derive the sum and product rules for a distributive lattice in general and demonstrate how this leads to probability theory on the Boolean lattice and is related to the calculus of quantum mechanical amplitudes on the partially ordered set of experimental setups. I will also discuss the rules that can be derived from modular lattices and their relevance to the cross-ratio of projective geometry.
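As a concrete anchor for the constructions described above, the sum and product rules on a distributive lattice take the following form, writing v(x|t) for the degree to which x is included in t (standard notation for this line of work, not quoted from the paper):

    \[ v(x \vee y \mid t) = v(x \mid t) + v(y \mid t) - v(x \wedge y \mid t),
       \qquad
       v(x \wedge y \mid t) = v(x \mid y \wedge t)\, v(y \mid t). \]

Specializing the lattice to Boolean logical statements recovers ordinary probability theory, which is exactly the sense in which the rules owe nothing to Boolean structure per se.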
Identification of control targets in Boolean molecular network models via computational algebra.
Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard
2016-09-23
Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomial equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well-studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data are available online and our code in Macaulay2 and Matlab is available via http://www.ms.uky.edu/~dmu228/ControlAlg. This paper presents a novel method for the identification of intervention targets in Boolean network models. The results in this paper show that the proposed methods are useful and efficient for moderately large networks.
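The flavor of the algebraic encoding can be seen in a toy network over GF(2), where each update rule is a polynomial and a control value u is an extra indeterminate. The three-node network below is invented for illustration and brute-forces what the paper does with computational algebra in Macaulay2:

    from itertools import product

    # Toy 3-node Boolean network over GF(2). AND -> product, XOR -> sum,
    # NOT x -> 1 + x (mod 2); the control u can cancel one interaction.
    def step(x, u):
        x1, x2, x3 = x
        return ((x2 * x3) % 2,           # f1 = x2 AND x3
                (x1 + x3 + u * x3) % 2,  # f2: u = 1 deletes the x3 edge
                (1 + x1) % 2)            # f3 = NOT x1

    for u in (0, 1):
        fixed = [x for x in product((0, 1), repeat=3) if step(x, u) == x]
        print(f"u={u}: fixed points {fixed}")
    # u=0 admits no steady state; u=1 creates the steady state (0, 0, 1),
    # so the edge deletion acts as an intervention target.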
Lung Cancer Assistant: a hybrid clinical decision support application for lung cancer care.
Sesen, M Berkan; Peake, Michael D; Banares-Alcantara, Rene; Tse, Donald; Kadir, Timor; Stanley, Roz; Gleeson, Fergus; Brady, Michael
2014-09-06
Multidisciplinary team (MDT) meetings are becoming the model of care for cancer patients worldwide. While MDTs have improved the quality of cancer care, the meetings impose substantial time pressure on the members, who generally attend several such MDTs. We describe Lung Cancer Assistant (LCA), a clinical decision support (CDS) prototype designed to assist the experts in treatment selection decisions in lung cancer MDTs. A novel feature of LCA is its ability to provide rule-based and probabilistic decision support within a single platform. The guideline-based CDS is based on clinical guideline rules, while the probabilistic CDS is based on a Bayesian network trained on the English Lung Cancer Audit Database (LUCADA). We assess rule-based and probabilistic recommendations based on their concordance with the treatments recorded in LUCADA. Our results reveal that the guideline rule-based recommendations perform well in simulating the recorded treatments, with exact and partial concordance rates of 0.57 and 0.79, respectively. On the other hand, the exact and partial concordance rates achieved with probabilistic results are relatively poor, at 0.27 and 0.76. However, probabilistic decision support fulfils a complementary role in providing accurate survival estimations. Compared to recorded treatments, both CDS approaches promote higher resection rates and multimodality treatments.
The computational core and fixed point organization in Boolean networks
NASA Astrophysics Data System (ADS)
Correale, L.; Leone, M.; Pagnani, A.; Weigt, M.; Zecchina, R.
2006-03-01
In this paper, we analyse large random Boolean networks in terms of a constraint satisfaction problem. We first develop an algorithmic scheme which allows us to prune simple logical cascades and underdetermined variables, returning thereby the computational core of the network. Second, we apply the cavity method to analyse the number and organization of fixed points. We find in particular a phase transition between an easy and a complex regulatory phase, the latter being characterized by the existence of an exponential number of macroscopically separated fixed point clusters. The different techniques developed are reinterpreted as algorithms for the analysis of single Boolean networks, and they are applied in the analysis of and in silico experiments on the gene regulatory networks of baker's yeast (Saccharomyces cerevisiae) and the segment-polarity genes of the fruitfly Drosophila melanogaster.
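A stripped-down sketch of the cascade-pruning step, assuming a network given as per-node update functions and input lists (names invented): it propagates constants only when all of a node's inputs are fixed, a simplification of the paper's scheme, which also detects functions that become constant on partially fixed inputs. What survives the pruning is, roughly, the computational core.

    def computational_core(funcs, inputs, clamped):
        # Fix every node whose inputs are all already determined, i.e.
        # prune simple logical cascades; the unfixed remainder is the core.
        fixed = dict(clamped)
        changed = True
        while changed:
            changed = False
            for node, deps in inputs.items():
                if node not in fixed and all(d in fixed for d in deps):
                    fixed[node] = funcs[node](*(fixed[d] for d in deps))
                    changed = True
        return fixed, [n for n in inputs if n not in fixed]

    # Example: a cascade a -> b -> c, plus a feedback pair d <-> e.
    inputs = {"a": [], "b": ["a"], "c": ["b"], "d": ["e"], "e": ["d"]}
    funcs = {"a": lambda: 1, "b": lambda a: 1 - a, "c": lambda b: b,
             "d": lambda e: e, "e": lambda d: 1 - d}
    print(computational_core(funcs, inputs, {}))
    # -> ({'a': 1, 'b': 0, 'c': 0}, ['d', 'e'])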
Observability of Boolean multiplex control networks
NASA Astrophysics Data System (ADS)
Wu, Yuhu; Xu, Jingxue; Sun, Xi-Ming; Wang, Wei
2017-04-01
Boolean multiplex (multilevel) networks (BMNs) are currently receiving considerable attention as a theoretical framework for the modeling of biological systems and system-level analysis. Studying control-related problems in BMNs may not only provide new views into the intrinsic control in complex biological systems, but also enable us to develop methods for manipulating biological systems using exogenous inputs. In this article, the observability of Boolean multiplex control networks (BMCNs) is studied. First, the dynamical model and structure of BMCNs with control inputs and outputs are constructed. By using the semi-tensor product (STP) approach, the logical dynamics of BMCNs is converted into an equivalent algebraic representation. Then, the observability of BMCNs with two different kinds of control inputs is investigated by giving necessary and sufficient conditions. Finally, examples are given to illustrate the efficiency of the obtained theoretical results.
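For readers unfamiliar with the STP machinery, the sketch below implements the left semi-tensor product with numpy and uses it to evaluate one logical function in vector form. The structure matrix shown is the standard one for conjunction; the example is not taken from the paper.

    import numpy as np
    from math import lcm

    def stp(A, B):
        # Left semi-tensor product: (A kron I_{t/n}) @ (B kron I_{t/p}),
        # where t = lcm(cols(A), rows(B)).
        n, p = A.shape[1], B.shape[0]
        t = lcm(n, p)
        return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

    # Logical values as vectors: True -> [1,0]^T, False -> [0,1]^T.
    M_and = np.array([[1, 0, 0, 0],
                      [0, 1, 1, 1]])          # structure matrix of AND
    T = np.array([[1.0], [0.0]])
    F = np.array([[0.0], [1.0]])
    print(stp(stp(M_and, T), F))              # -> [[0.],[1.]], i.e. False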
Boolean network representation of contagion dynamics during a financial crisis
NASA Astrophysics Data System (ADS)
Caetano, Marco Antonio Leonel; Yoneyama, Takashi
2015-01-01
This work presents a network model for the representation of the evolution of certain patterns of economic behavior. More specifically, after representing the agents as points in a space in which each dimension is associated with a relevant economic variable, their relative "motions", which can be either stationary or discordant, are coded into a Boolean network. Patterns with stationary averages indicate maintenance of the status quo, whereas discordant patterns represent the aggregation of a new agent into the cluster or departure from former policies. The changing patterns can be embedded into a network representation, particularly using the concept of autocatalytic Boolean networks. As a case study, the economic tendencies of the BRIC countries plus Argentina were studied. Although Argentina is not included in the cluster formed by the BRIC countries, it tends to follow the BRIC members because of strong commercial ties.
Applying probabilistic well-performance parameters to assessments of shale-gas resources
Charpentier, Ronald R.; Cook, Troy
2010-01-01
In assessing continuous oil and gas resources, such as shale gas, it is important to describe not only the ultimately producible volumes, but also the expected well performance. This description is critical to any cost analysis or production scheduling. A probabilistic approach facilitates (1) the inclusion of variability in well performance within a continuous accumulation, and (2) the use of data from developed accumulations as analogs for the assessment of undeveloped accumulations. In assessing continuous oil and gas resources of the United States, the U.S. Geological Survey analyzed production data from many shale-gas accumulations. Analyses of four of these accumulations (the Barnett, Woodford, Fayetteville, and Haynesville shales) are presented here as examples of the variability of well performance. For example, the distribution of initial monthly production rates for Barnett vertical wells shows a noticeable change with time, first increasing because of improved completion practices, then decreasing from a combination of decreased reservoir pressure (in infill wells) and drilling in less productive areas. Within a partially developed accumulation, historical production data from that accumulation can be used to estimate production characteristics of undrilled areas. An understanding of the probabilistic relations between variables, such as between initial production and decline rates, can improve estimates of ultimate production. Time trends or spatial trends in production data can be clarified by plots and maps. The data can also be divided into subsets depending on well-drilling or well-completion techniques, such as vertical in relation to horizontal wells. For hypothetical or lightly developed accumulations, one can either make comparisons to a specific well-developed accumulation or to the entire range of available developed accumulations. Comparison of the distributions of initial monthly production rates of the four shale-gas accumulations that were studied shows substantial overlap. However, because of differences in decline rates among them, the resulting estimated ultimate recovery (EUR) distributions are considerably different.
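To make the link between well performance and EUR concrete, the sketch below pairs a standard hyperbolic (Arps) decline curve with sampled distributions of initial rate and decline rate. All numbers are invented, and the USGS's actual fitting procedure is not reproduced here.

    import math, random

    def arps_rate(qi, Di, b, t):
        # Hyperbolic decline: q(t) = qi / (1 + b*Di*t)^(1/b).
        return qi / (1.0 + b * Di * t) ** (1.0 / b)

    def eur(qi, Di, b, years=30, steps=360):
        # Numerically integrate the rate over the producing life.
        dt = years / steps
        return sum(arps_rate(qi, Di, b, k * dt) * dt for k in range(steps))

    # EUR distribution induced by variability in initial rate and decline.
    samples = sorted(
        eur(qi=random.lognormvariate(math.log(360_000), 0.6),  # Mcf/year
            Di=random.lognormvariate(math.log(0.8), 0.4),      # 1/year
            b=0.9)
        for _ in range(5_000))
    n = len(samples)
    print("P90/P50/P10 EUR (Mcf):",
          round(samples[n // 10]), round(samples[n // 2]),
          round(samples[9 * n // 10]))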
Probabilistic sizing of laminates with uncertainties
NASA Technical Reports Server (NTRS)
Shah, A. R.; Liaw, D. G.; Chamis, C. C.
1993-01-01
A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses
Myers, Risa B.; Herskovic, Jorge R.
2011-01-01
Proposal and execution of clinical trials, computation of quality measures, and discovery of correlations between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate, and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing's sensitivity and specificity both by conducting a "Simulated Expert Review", where a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a "Bayesian Chain", using Bayes' Theorem to calculate the probability of a patient having a condition after each visit. The second method is a "one-shot" approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes' Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our Bayesian framework. Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric. PMID:21986292
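A minimal sketch of the "Bayesian Chain" described above, treating billing as a noisy test and folding Bayes' Theorem over a patient's visits; the sensitivity, specificity, and prior below are placeholders, not values from the study:

    def bayes_update(prior, billed, sens, spec):
        # One visit: billing is a test for the condition with the given
        # sensitivity and specificity.
        like_pos = sens if billed else 1.0 - sens
        like_neg = (1.0 - spec) if billed else spec
        num = like_pos * prior
        return num / (num + like_neg * (1.0 - prior))

    p = 0.10                            # assumed prevalence (prior)
    for billed in [True, False, True]:  # billed-for-condition flag per visit
        p = bayes_update(p, billed, sens=0.85, spec=0.95)
    print(f"posterior P(condition) = {p:.3f}")

Thresholding or summing such posteriors across the population then yields the probabilistic patient count.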
Fast probabilistic file fingerprinting for big data
2013-01-01
Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
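The core trick, hashing a fixed number of pseudo-randomly chosen chunks instead of the whole file, fits in a few lines. This sketch only mimics the idea and does not reproduce the published pfff tool's format or collision guarantees.

    import hashlib, os, random

    def sampled_fingerprint(path, n_samples=64, chunk=16, seed=0):
        # Hash the file size plus n_samples small chunks drawn at
        # pseudo-random offsets; cost stays flat as the file grows.
        size = os.path.getsize(path)
        rng = random.Random(seed)        # fixed seed -> comparable prints
        offsets = sorted(rng.randrange(max(size - chunk, 1))
                         for _ in range(n_samples))
        h = hashlib.sha256(str(size).encode())
        with open(path, "rb") as f:
            for off in offsets:
                f.seek(off)
                h.update(f.read(chunk))
        return h.hexdigest()

Two files compared with the same seed and parameters yield equal fingerprints whenever all sampled chunks agree, which is where the probabilistic collision analysis comes in.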
Video rate morphological processor based on a redundant number representation
NASA Astrophysics Data System (ADS)
Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.
1992-03-01
This paper presents a video-rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology (the umbra transform and threshold decomposition) has prompted us to propose a novel technique which applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed-digit number representation (SDNR) with base 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional two's complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit-level systolic array. Individual processing units and small memory elements create a pipeline. The memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field-programmable gate arrays by Xilinx. This paper justifies the rationality of a new approach to logic design: the decomposition of Boolean functions instead of Boolean minimization.
A stochastic and dynamical view of pluripotency in mouse embryonic stem cells
Lee, Esther J.
2018-01-01
Pluripotent embryonic stem cells are of paramount importance for biomedical sciences because of their innate ability for self-renewal and differentiation into all major cell lines. The fateful decision to exit or remain in the pluripotent state is regulated by complex genetic regulatory networks. The rapid growth of single-cell sequencing data has greatly stimulated applications of statistical and machine learning methods for inferring topologies of pluripotency regulating genetic networks. The inferred network topologies, however, often only encode Boolean information while remaining silent about the roles of dynamics and molecular stochasticity inherent in gene expression. Herein we develop a framework for systematically extending Boolean-level network topologies into higher resolution models of networks which explicitly account for the promoter architectures and gene state switching dynamics. We show the framework to be useful for disentangling the various contributions that gene switching, external signaling, and network topology make to the global heterogeneity and dynamics of transcription factor populations. We find the pluripotent state of the network to be a steady state which is robust to global variations of gene switching rates which we argue are a good proxy for epigenetic states of individual promoters. The temporal dynamics of exiting the pluripotent state, on the other hand, is significantly influenced by the rates of genetic switching which makes cells more responsive to changes in extracellular signals. PMID:29451874
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2014-05-01
This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied to transformed variables. The post-processing model has two main components, the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model is used to estimate the full multivariate distribution of forecasts, i.e. the covariances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles that inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated on the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding-window climatology forecasts. It also handles variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in the original space using cross-validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding-window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity, lead-time-varying between-lead-time dependency, and lead-time-varying between-catchment dependency, are captured. Two criteria were used for evaluating the added value of the dependency model. The first was the energy score (ES), a multi-dimensional generalization of the continuous ranked probability score (CRPS). ES was calculated for all lead times and catchments together, for each catchment across all lead times, and for each lead time across all catchments. The second criterion was to use the CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that the ES was not very sensitive to a correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
How can we model selectively neutral density dependence in evolutionary games.
Argasinski, Krzysztof; Kozłowski, Jan
2008-03-01
The problem of density dependence appears in all approaches to the modelling of population dynamics. It is pertinent to classic models (e.g., Lotka-Volterra), and also to population genetics and game theoretical models related to the replicator dynamics. There is no density dependence in the classic formulation of replicator dynamics, which means that population size may grow to infinity. Therefore the question arises: How is unlimited population growth suppressed in frequency-dependent models? Two categories of solutions can be found in the literature. In the first, replicator dynamics is independent of background fitness. In the second type of solution, a multiplicative suppression coefficient is used, as in a logistic equation. Both approaches have disadvantages. The first one is incompatible with the methods of life history theory and basic probabilistic intuitions. The logistic type of suppression of the per capita growth rate stops trajectories of selection when population size reaches the maximal value (carrying capacity); hence this method does not satisfy selective neutrality. To overcome these difficulties, we must explicitly consider turnover of individuals dependent on the mortality rate. This new approach leads to two interesting predictions. First, the equilibrium value of population size is lower than carrying capacity and depends on the mortality rate. Second, although the phase portrait of selection trajectories is the same as in density-independent replicator dynamics, the pace of selection slows down when population size approaches equilibrium, and then remains constant and dependent on the rate of turnover of individuals.
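For orientation, the density-independent replicator equation under discussion is, in standard notation (not the authors'):

    \[ \dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right),
       \qquad
       \bar{f}(x) = \sum_j x_j f_j(x). \]

The logistic style of suppression criticized above multiplies the per capita growth rate by (1 - n/K), so every selection trajectory stalls as the population size n reaches the carrying capacity K; making mortality-driven turnover explicit avoids this and yields an equilibrium size below K.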
Modeling stochasticity and robustness in gene regulatory networks.
Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis
2009-06-15
Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
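The contrast between the two noise models can be sketched as follows. This is one simplified reading (the paper's SIF definition is richer than this), and the toggle-switch example is invented for illustration:

    import random

    def sin_step(funcs, state, p):
        # Stochasticity in nodes: each gene, with probability p, flips its
        # current value instead of following its update rule.
        return [1 - state[i] if random.random() < p else f(state)
                for i, f in enumerate(funcs)]

    def sif_step(funcs, state, p):
        # Stochasticity in functions (simplified): noise perturbs the
        # output of each regulatory function, not the state directly.
        return [f(state) ^ (random.random() < p) for f in funcs]

    toggle = [lambda s: 1 - s[1], lambda s: 1 - s[0]]  # mutual repression
    print(sin_step(toggle, [0, 1], 0.05), sif_step(toggle, [0, 1], 0.05))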
Discrete interference modeling via boolean algebra.
Beckhoff, Gerhard
2011-01-01
Two types of boolean functions are considered, the locus function of n variables and the interval function of ν = n - 1 variables. A 1-1 mapping is given that takes elements (cells) of the interval function to antidual pairs of elements in the locus function, and vice versa. A set of ν binary codewords representing the intervals is defined and used to generate the codewords of all genomic regions. Next, a diallelic three-point system is reviewed in the light of boolean functions, which leads to redefining complete interference by a logic function. Together with the upper bound of noninterference, already defined by a boolean function, it confines the region of interference. Extensions of these two functions to any finite number ν are straightforward, but have also been made in terms of variables taken from the inclusion-exclusion principle (expressing "at least" and "exactly equal to" a decimal integer). Two coefficients of coincidence for systems with more than three loci are defined and discussed, one using the average of several individual coefficients and the other taking as coefficient a real number between zero and one. Finally, by way of a malfunction of the mod-2 addition, it is shown that a four-point system may produce two different functions, one of which exhibits loss of a class of odd recombinants.
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Andresen, Juan Carlos; Moore, M. A.; Katzgraber, Helmut G.
2014-02-01
We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free networks in an external bias (magnetic field). Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First, we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show that the system has a spin-glass phase in a field, i.e., exhibits a de Almeida-Thouless line. Furthermore, we study avalanche distributions when the system is driven by a field at zero temperature to test if the system displays self-organized criticality. Numerical results suggest that avalanches (damage) can spread across the whole system with nonzero probability when the decay exponent of the interaction degree is less than or equal to 2, i.e., that Boolean decision problems on scale-free networks with competing interactions can be fragile when not in thermal equilibrium.
Modeling the effect of reward amount on probability discounting.
Myerson, Joel; Green, Leonard; Morris, Joshua
2011-03-01
The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
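In symbols, the hyperboloid form discussed above can be written as follows; the notation follows the standard discounting literature, and the exact parameterization of the exponent is an assumption for illustration:

    \[ V = \frac{A}{(1 + h\,\theta)^{s(A)}},
       \qquad
       \theta = \frac{1-p}{p},
       \qquad
       s(A) = c\,A^{b}, \]

where V is the subjective value of amount A available with probability p and θ is the odds against receiving it. The amount effect reported above is carried by the exponent s(A) growing with amount rather than by the rate parameter h, the reverse of the pattern found for delayed rewards.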
Probabilistic Causal Analysis for System Safety Risk Assessments in Commercial Air Transport
NASA Technical Reports Server (NTRS)
Luxhoj, James T.
2003-01-01
Aviation is one of the critical modes of our national transportation system. As such, it is essential that new technologies be continually developed to ensure that a safe mode of transportation becomes even safer in the future. The NASA Aviation Safety Program (AvSP) is managing the development of new technologies and interventions aimed at reducing the fatal aviation accident rate by a factor of 5 by year 2007 and by a factor of 10 by year 2022. A portfolio assessment is currently being conducted to determine the projected impact that the new technologies and/or interventions may have on reducing aviation safety system risk. This paper reports on advanced risk analytics that combine the use of a human error taxonomy, probabilistic Bayesian Belief Networks, and case-based scenarios to assess a relative risk intensity metric. A sample case is used for illustrative purposes.
On transitions in the behaviour of tabu search algorithm TabuCol for graph colouring
NASA Astrophysics Data System (ADS)
Chalupa, D.
2018-01-01
Even though tabu search is one of the most popular metaheuristic search strategies, its understanding in terms of behavioural transitions and parameter tuning is still very limited. In this paper, we present a theoretical and experimental study of TabuCol, a popular tabu search algorithm for graph colouring. We show that for some instances there are sharp transitions in the behaviour of TabuCol, depending on the value of the tabu tenure parameter. The location of this transition depends on graph structure and may also depend on graph size. This is further supported by an experimental study of success rate profiles, which we define as an empirical measure of these transitions. We study the success rate profiles for a range of graph colouring instances, from 2-colouring of trees and forests to several instances from the DIMACS benchmark. These reveal that TabuCol may exhibit a spectrum of different behaviours, ranging from simple transitions to highly complex probabilistic behaviour.
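For readers who have not seen it, a minimal TabuCol loop looks like the sketch below (no aspiration criterion, random tie handling). The tenure argument is the parameter whose transitions the paper studies; the details here are a generic reconstruction, not the paper's exact implementation.

    import random

    def tabucol(adj, k, tenure=7, max_iters=100_000):
        # Search for a proper k-colouring; adj is a list of neighbour lists.
        n = len(adj)
        col = [random.randrange(k) for _ in range(n)]
        tabu = {}                              # (vertex, colour) -> release iter
        conflicts = lambda v, c: sum(col[u] == c for u in adj[v])
        for it in range(max_iters):
            bad = [v for v in range(n) if conflicts(v, col[v]) > 0]
            if not bad:
                return col                     # proper colouring found
            v = random.choice(bad)
            moves = [(conflicts(v, c), c) for c in range(k)
                     if c != col[v] and tabu.get((v, c), -1) < it]
            if moves:
                _, c = min(moves)
                tabu[(v, col[v])] = it + tenure  # forbid undoing the move
                col[v] = c
        return None

    print(tabucol([[1], [0, 2], [1]], 2))      # 2-colour a path graph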
Learning Bayesian Networks from Correlated Data
NASA Astrophysics Data System (ADS)
Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola
2016-05-01
Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.
1987-06-11
illustrative examples throughout. The book seems adequate for a beginners' class if the instructor complements the book with his or her own material. Ada: An...point, boolean, character, and enumeration) are taught, and proper declaration of types and subtypes is fully covered. Flowcharts are used to design the...placed on accurately following the stated requirements and sample run. Normally, students have one week to complete each project. A flowchart showing the
Rajavel, Rajkumar; Thangarathinam, Mala
2015-01-01
Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the pre-request context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision-making model for optimizing negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework. PMID:26543899
Probabilistic model for quick detection of dissimilar binary images
NASA Astrophysics Data System (ADS)
Mustafa, Adnan A. Y.
2015-09-01
We present a quick method to detect dissimilar binary images. The method is based on a "probabilistic matching model" for image matching. The matching model is used to predict the probability of occurrence of distinct-dissimilar image pairs (completely different images) when matching one image to another. Based on this model, distinct-dissimilar images can be detected by matching only a few points between two images with high confidence, namely 11 points for a 99.9% successful detection rate. For image pairs that are dissimilar but not distinct-dissimilar, more points need to be mapped. The number of points required to attain a certain successful detection rate or confidence depends on the amount of similarity between the compared images. As this similarity increases, more points are required. For example, images that differ by 1% can be detected by mapping fewer than 70 points on average. More importantly, the model is image size invariant; so, images of any sizes will produce high confidence levels with a limited number of matched points. As a result, this method does not suffer from the image size handicap that impedes current methods. We report on extensive tests conducted on real images of different sizes.
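The arithmetic behind the quoted point counts can be sketched directly: if a single sampled point agrees between two dissimilar images with probability p, then n mapped points all agree by chance with probability p^n. The helper below computes the worst-case n for a given confidence; under the crude assumption p = 1/2 it gives 10 (the paper's own model gives 11), and its ~688 for a 1% difference is a worst-case bound, whereas the paper's ~70-point figure is an average-case result.

    from math import ceil, log

    def points_needed(p_match, confidence):
        # Smallest n with p_match**n <= 1 - confidence.
        return ceil(log(1.0 - confidence) / log(p_match))

    print(points_needed(0.5, 0.999))   # ~10: distinct-dissimilar binaries
    print(points_needed(0.99, 0.999))  # ~688: images differing by only 1%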
Dual Roles for Spike Signaling in Cortical Neural Populations
Ballard, Dana H.; Jehee, Janneke F. M.
2011-01-01
A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
NASA Technical Reports Server (NTRS)
Canfield, R. C.; Ricchiazzi, P. J.
1980-01-01
An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.
Bonawitz, Elizabeth; Denison, Stephanie; Griffiths, Thomas L; Gopnik, Alison
2014-10-01
Although probabilistic models of cognitive development have become increasingly prevalent, one challenge is to account for how children might cope with a potentially vast number of possible hypotheses. We propose that children might address this problem by 'sampling' hypotheses from a probability distribution. We discuss empirical results demonstrating signatures of sampling, which offer an explanation for the variability of children's responses. The sampling hypothesis provides an algorithmic account of how children might address computationally intractable problems and suggests a way to make sense of their 'noisy' behavior. Copyright © 2014 Elsevier Ltd. All rights reserved.
Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.
Haefner, Ralf M; Berkes, Pietro; Fiser, József
2016-05-04
We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Moayedi, Maryam; Foo, Yung Kuan; Chai Soh, Yeng
2011-03-01
The minimum-variance filtering problem in networked control systems, where both random measurement transmission delays and packet dropouts may occur, is investigated in this article. Instead of following the many existing results that solve the problem by using probabilistic approaches based on the probabilities of the uncertainties occurring between the sensor and the filter, we propose a non-probabilistic approach by time-stamping the measurement packets. Both single-measurement and multiple-measurement packets are studied. We also consider the case of burst arrivals, where more than one packet may arrive between the receiver's previous and current sampling times; the scenario where the control input is non-zero and subject to delays and packet dropouts is examined as well. It is shown that, in such a situation, the optimal state estimate would generally depend on the possible control input. Simulations are presented to demonstrate the performance of the various proposed filters.
Factors associated with falls among older adults living in institutions
2013-01-01
Background: Falls have an enormous impact on older adults. Yet, there is insufficient evidence regarding the effectiveness of preventive interventions in this setting. The objectives were to measure the frequency of falls and associated factors among older people living in institutions. Methods: Data were obtained from a survey on a probabilistic sample of residents aged ≥65 years, drawn in 1998-99 from institutions of Madrid (Spain). Residents, their caregivers, and facility physicians were interviewed. Fall rates were computed based on the number of physician-reported falls in the preceding 30 days. Adjusted rate ratios were computed using negative binomial regression models, including age, sex, cognitive status, functional dependence, number of diseases, and polypharmacy. Results: The final sample comprised 733 residents. The fall rate was 2.4 falls per person-year (95% confidence interval [CI], 2.04-2.82). The strongest risk factor was number of diseases, with an adjusted rate ratio (RR) of 1.32 (95% CI, 1.17-1.50) for each additional diagnosis. Other variables associated with falls were: urinary incontinence (RR = 2.56 [95% CI, 1.32-4.94]); antidepressant use (RR = 2.32 [95% CI, 1.22-4.40]); arrhythmias (RR = 2.00 [95% CI, 1.05-3.81]); and polypharmacy (RR = 1.07 [95% CI, 0.95-1.21], for each additional medication). The attributable fraction for number of diseases (with reference to those with ≤1 condition) was 84% (95% CI, 45-95%). Conclusions: Number of diseases was the main risk factor for falls in this population of institutionalized older adults. Other variables associated with falls, probably more amenable to preventive action, were urinary incontinence, antidepressants, arrhythmias, and polypharmacy. PMID:23320746
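A hedged sketch of the kind of analysis reported, using synthetic data (the study's data are not reproduced here) and statsmodels' negative binomial GLM with a person-time exposure offset; variable names and effect sizes are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 733  # mirrors the study's sample size

# Synthetic covariates: diseases count and two binary risk factors
diseases = rng.poisson(3, n)
incont = rng.random(n) < 0.3
antidep = rng.random(n) < 0.2
persontime = np.full(n, 30 / 365.25)  # 30 days of follow-up, in years

# Generate fall counts from rates roughly echoing the reported rate ratios
rate = 0.8 * 1.3**diseases * np.where(incont, 2.5, 1) * np.where(antidep, 2.3, 1)
falls = rng.poisson(rate * persontime)

X = sm.add_constant(np.column_stack([diseases, incont, antidep]).astype(float))
fit = sm.GLM(falls, X, family=sm.families.NegativeBinomial(alpha=1.0),
             exposure=persontime).fit()
print(np.exp(fit.params[1:]))  # adjusted rate ratios per covariate
```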
Exact Algorithms for Output Encoding, State Assignment and Four-Level Boolean Minimization
1989-10-01
APPROVED FOR PUBLIC DISTRIBUTION (DTIC). MIT VLSI Memo No. 89-569, October 1989. The abstract is heavily OCR-damaged; the recoverable fragments describe exact algorithms that minimize large functions exactly within a reasonable amount of CPU time when targeting two-level logic implementations, note that naive approaches require O(N!) minimizations, and state that the input encoding problem can be exactly solved using multiple-valued Boolean minimization.
Probabilistic population projections with migration uncertainty
Azose, Jonathan J.; Ševčíková, Hana; Raftery, Adrian E.
2016-01-01
We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations’ Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated. PMID:27217571
Huang, Wei Tao; Luo, Hong Qun; Li, Nian Bing
2014-05-06
The most serious, and yet unsolved, problem in constructing molecular computing devices is connecting all of the molecular events into a usable device. This report demonstrates the use of a Boolean logic tree for analyzing the chemical event network based on graphene, an organic dye, a thrombin aptamer, and the Fenton reaction, organizing and connecting these basic chemical events. This chemical event network can then be utilized to implement fluorescent combinatorial logic (including basic logic gates and complex integrated logic circuits) and fuzzy logic computing. On the basis of Boolean logic tree analysis and logic computing, these basic chemical events can be treated as programmable "words" and chemical interactions as "syntax" logic rules to construct a molecular search engine for performing intelligent molecular search queries. Our approach is helpful in developing advanced logic programs based on molecules for applications in biosensing, nanotechnology, and drug delivery.
A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells
Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.
2017-01-01
Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402
Tracking perturbations in Boolean networks with spectral methods
NASA Astrophysics Data System (ADS)
Kesseli, Juha; Rämö, Pauli; Yli-Harja, Olli
2005-08-01
In this paper we present a method for predicting the spread of perturbations in Boolean networks. The method is applicable to networks that have no regular topology. The prediction of perturbations can be performed easily by using a presented result which enables the efficient computation of the required iterative formulas. This result is based on abstract Fourier transform of the functions in the network. In this paper the method is applied to show the spread of perturbations in networks containing a distribution of functions found from biological data. The advances in the study of the spread of perturbations can directly be applied to enable ways of quantifying chaos in Boolean networks. Derrida plots over an arbitrary number of time steps can be computed and thus distributions of functions compared with each other with respect to the amount of order they create in random networks.
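The paper computes Derrida maps efficiently via an abstract Fourier transform; as a point of comparison, here is a brute-force Monte Carlo one-step Derrida map for random K-input Boolean networks (function names and parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

def derrida_point(n, k, rho0, trials=200):
    """One-step Derrida map for a random K-input Boolean network: start two
    states at normalized Hamming distance rho0 and return the mean normalized
    distance after one synchronous update, averaged over random networks."""
    out = 0.0
    for _ in range(trials):
        inputs = rng.integers(0, n, size=(n, k))     # random wiring
        tables = rng.integers(0, 2, size=(n, 2**k))  # random truth tables
        x = rng.integers(0, 2, size=n)
        y = x.copy()
        flip = rng.choice(n, size=max(1, int(rho0 * n)), replace=False)
        y[flip] ^= 1
        powers = 2 ** np.arange(k)
        xn = tables[np.arange(n), (x[inputs] * powers).sum(axis=1)]
        yn = tables[np.arange(n), (y[inputs] * powers).sum(axis=1)]
        out += np.mean(xn != yn)
    return out / trials

for rho in (0.01, 0.1, 0.3):
    print(rho, derrida_point(n=500, k=2, rho0=rho))
```

For unbiased random functions with K = 2 the analytical map is rho' = (1 - (1 - rho)^2)/2, which the Monte Carlo estimates should approach.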
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
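A minimal sketch of the two-stage idea behind probabilistic dominating set selection, on a synthetic scale-free graph; the degree-dependent probability used here is one hypothetical choice, not the optimal form derived in the paper.

```python
import math
import random
import networkx as nx

def probabilistic_dominating_set(G, prob):
    """Two-stage selection: include each node v with probability prob(v),
    then add any node left undominated (not selected and with no selected
    neighbor) so the result is guaranteed to dominate the graph."""
    D = {v for v in G if random.random() < prob(v)}
    for v in G:
        if v not in D and not any(u in D for u in G[v]):
            D.add(v)
    return D

G = nx.barabasi_albert_graph(5000, 3, seed=42)
dmax = max(d for _, d in G.degree())
# One hypothetical degree-dependent selection probability: keep high-degree
# nodes more often (the paper derives the optimal functional form).
D = probabilistic_dominating_set(
    G, lambda v: math.log(G.degree(v) + 1) / math.log(dmax + 1))
print(len(D), "of", G.number_of_nodes(), "nodes dominate the graph")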
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
A hypothesis for delayed dynamic earthquake triggering
Parsons, T.
2005-01-01
It's uncertain whether more near-field earthquakes are triggered by static or dynamic stress changes. This ratio matters because static earthquake interactions are increasingly incorporated into probabilistic forecasts. Recent studies were unable to demonstrate all predictions from the static-stress-change hypothesis, particularly seismicity rate reductions. However, current dynamic stress change hypotheses do not explain delayed earthquake triggering and Omori's law. Here I show numerically that if seismic waves can alter some frictional contacts in neighboring fault zones, then dynamic triggering might cause delayed triggering and an Omori-law response. The hypothesis depends on faults following a rate/state friction law, and on seismic waves changing the mean critical slip distance (Dc) at nucleation zones.
Relative multiplexing for minimising switching in linear-optical quantum computing
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Cable, Hugo; Mendoza, Gabriel J.; Shadbolt, Pete; Silverstone, Joshua W.; Carolan, Jacques; Thompson, Mark G.; O'Brien, Jeremy L.; Rudolph, Terry
2017-06-01
Many existing schemes for linear-optical quantum computing (LOQC) depend on multiplexing (MUX), which uses dynamic routing to enable near-deterministic gates and sources to be constructed using heralded, probabilistic primitives. MUXing accounts for the overwhelming majority of active switching demands in current LOQC architectures. In this manuscript we introduce relative multiplexing (RMUX), a general-purpose optimisation which can dramatically reduce the active switching requirements for MUX in LOQC, and thereby reduce hardware complexity and energy consumption, as well as relaxing demands on performance for various photonic components. We discuss the application of RMUX to the generation of entangled states from probabilistic single-photon sources, and argue that an order of magnitude improvement in the rate of generation of Bell states can be achieved. In addition, we apply RMUX to the proposal for percolation of a 3D cluster state by Gimeno-Segovia et al (2015 Phys. Rev. Lett. 115 020502), and we find that RMUX allows a 2.4× increase in loss tolerance for this architecture.
Time dependent variation of carrying capacity of prestressed precast beam
NASA Astrophysics Data System (ADS)
Le, Tuan D.; Konečný, Petr; Matečková, Pavlína
2018-04-01
The article deals with the evaluation of the time-dependent carrying capacity of a precast concrete element. Variation of the resistance is an inherent property of laboratory as well as in-situ members. Thus, specifying the highest possible resistance of a laboratory sample is important for the evaluation of laboratory experiments, given the loading capabilities of the test machine. The ultimate capacity is evaluated through the bending moment resistance of a simply supported prestressed concrete beam. A probabilistic assessment is applied: scatter of the random variables of concrete compressive strength and effective height of the cross section is considered. The Monte Carlo simulation technique is used to investigate the performance of the beam cross section as the tendons' positions and the compressive strength of concrete change.
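A minimal Monte Carlo sketch of the described assessment, with a rectangular-stress-block resistance model and illustrative distributions for the two scattered variables (concrete strength and effective depth); all numbers are assumptions, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical random variables (means and scatter chosen for illustration):
f_c = rng.normal(45e6, 4.5e6, n)   # concrete compressive strength [Pa]
d   = rng.normal(0.55, 0.01, n)    # effective depth of tendons [m]
A_p, f_p, b = 1.2e-3, 1.6e9, 0.4   # tendon area [m^2], stress [Pa], width [m]

# Rectangular stress block: M_R = A_p*f_p*(d - a/2), a = A_p*f_p/(0.85*f_c*b)
a = A_p * f_p / (0.85 * f_c * b)
M_R = A_p * f_p * (d - 0.5 * a)    # bending moment resistance [Nm]
print("mean resistance %.0f kNm, 5%% fractile %.0f kNm"
      % (M_R.mean() / 1e3, np.percentile(M_R, 5) / 1e3))
```

The upper tail of the simulated resistance distribution is what matters when checking that the test machine's loading capacity will not be exceeded.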
Spatiotemporal movement planning and rapid adaptation for manual interaction.
Huber, Markus; Kupferberg, Aleksandra; Lenz, Claus; Knoll, Alois; Brandt, Thomas; Glasauer, Stefan
2013-01-01
Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose we studied in several experiments a simple manual handover task concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer by different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot's joint configuration. Modeling the task was based on the assumption that the receiver's decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a-priori expectation and online estimation.
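A compact sketch of the modeling idea, assuming a Gaussian (Kalman-style) accumulation of evidence about the handover position, with a prior that sharpens over trials; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(21)

def handover_reaction_time(prior_mean, prior_var, obs_var=4.0, threshold=0.5,
                           true_pos=0.0, dt=0.01):
    """The receiver accumulates noisy observations of the deliverer's hand
    position and acts once the posterior SD of the handover-position estimate
    drops below a threshold; returns (reaction time, final estimate)."""
    mean, var, t = prior_mean, prior_var, 0.0
    while np.sqrt(var) > threshold:
        obs = true_pos + rng.normal(0, np.sqrt(obs_var))
        k = var / (var + obs_var)          # Kalman gain
        mean, var = mean + k * (obs - mean), (1 - k) * var
        t += dt
    return t, mean

# A sharper prior (more trials experienced) yields a shorter reaction time
for prior_var in (25.0, 4.0, 1.0):
    print(prior_var, handover_reaction_time(0.0, prior_var))
```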
Optimization of Contrast Detection Power with Probabilistic Behavioral Information
Cordes, Dietmar; Herzmann, Grit; Nandy, Rajesh; Curran, Tim
2012-01-01
Recent progress in the experimental design for event-related fMRI experiments made it possible to find the optimal stimulus sequence for maximum contrast detection power using a genetic algorithm. In this study, a novel algorithm is proposed for optimization of contrast detection power by including probabilistic behavioral information, based on pilot data, in the genetic algorithm. As a particular application, a recognition memory task is studied and the design matrix optimized for contrasts involving the familiarity of individual items (pictures of objects) and the recollection of qualitative information associated with the items (left/right orientation). Optimization of contrast efficiency is a complicated issue whenever subjects’ responses are not deterministic but probabilistic. Contrast efficiencies are not predictable unless behavioral responses are included in the design optimization. However, available software for design optimization does not include options for probabilistic behavioral constraints. If the anticipated behavioral responses are included in the optimization algorithm, the design is optimal for the assumed behavioral responses, and the resulting contrast efficiency is greater than what either a block design or a random design can achieve. Furthermore, improvements of contrast detection power depend strongly on the behavioral probabilities, the perceived randomness, and the contrast of interest. The present genetic algorithm can be applied to any case in which fMRI contrasts are dependent on probabilistic responses that can be estimated from pilot data. PMID:22326984
A Software Tool for Quantitative Seismicity Analysis - ZMAP
NASA Astrophysics Data System (ADS)
Wiemer, S.; Gerstenberger, M.
2001-12-01
Earthquake catalogs are probably the most basic product of seismology, and remain arguably the most useful for tectonic studies. Modern seismograph networks can locate up to 100,000 earthquakes annually, providing a continuous and sometimes overwhelming stream of data. ZMAP is a set of tools driven by a graphical user interface (GUI), designed to help seismologists analyze catalog data. ZMAP is primarily a research tool suited to the evaluation of catalog quality and to addressing specific hypotheses; however, it can also be useful in routine network operations. Examples of ZMAP features include catalog quality assessment (artifacts, completeness, explosion contamination), interactive data exploration, mapping transients in seismicity (rate changes, b-values, p-values), fractal dimension analysis and stress tensor inversions. Roughly 100 scientists worldwide have used the software at least occasionally, and about 30 peer-reviewed publications have made use of ZMAP. The ZMAP code is open source, written in Matlab by The MathWorks, commercial software widely used in the natural sciences. ZMAP was first published in 1994, and has continued to grow over the past 7 years. Recently, we released ZMAP v.6. The poster will introduce the features of ZMAP. We will specifically focus on ZMAP features related to time-dependent probabilistic hazard assessment. We are currently implementing a ZMAP-based system that computes probabilistic hazard maps, which combine the stationary background hazard as well as aftershock and foreshock hazard into a comprehensive time-dependent probabilistic hazard map. These maps will be displayed in near real time on the Internet. This poster is also intended as a forum for ZMAP users to provide feedback and discuss the future of ZMAP.
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
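For reference, the non-Bayesian point estimate that the paper starts from is just an SVD of the centered data matrix; a minimal sketch on a synthetic Brownian-motion-like process (the sampling error that the Bayesian treatment quantifies is ignored here).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic realizations of a Brownian-motion-like process (illustration only)
n_samples, n_points = 50, 200
X = np.cumsum(rng.standard_normal((n_samples, n_points)), axis=1)

# Empirical KLE via SVD of the centered data matrix: rows of Vt are the
# estimated basis functions, and s**2/(n-1) are the KLE eigenvalues.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = s**2 / (n_samples - 1)
print("variance captured by 5 modes: %.1f%%"
      % (100 * eigvals[:5].sum() / eigvals.sum()))
```

With only 50 samples of a 200-dimensional field, these eigenpairs carry substantial sampling error, which motivates placing a posterior (the matrix Bingham density) over the basis instead of using the point estimate alone.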
ASSESSING THE ECOLOGICAL CONDITION OF A COASTAL PLAIN WATERSHED USING A PROBABILISTIC SURVEY DESIGN
Using a probabilistic survey design, we assessed the ecological condition of the Florida (USA) portion of the Escambia River watershed using selected environmental and benthic macroinvertebrate data. Macroinvertebrates were sampled at 28 sites during July-August 1996, and 3414 i...
Hierarchical Bayesian Modeling of Fluid-Induced Seismicity
NASA Astrophysics Data System (ADS)
Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.
2017-11-01
In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated on six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We apply the framework to the Basel 2006 fluid-induced seismicity case study and show that the hierarchical Bayesian model offers a suitable way to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
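A minimal sketch of the rate model's core assumption (seismicity rate = background plus a term proportional to the injection rate), simulated as a nonhomogeneous Poisson process; the parameter values are hypothetical, and the full model's post-injection decay term is omitted.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_induced_catalog(flow_rate, dt, a=2e-3, mu=1e-4):
    """Event counts of a nonhomogeneous Poisson process whose rate is
    mu (background) plus a * injected flow rate; a and mu are hypothetical
    rate parameters, treated as random variables in the hierarchical model."""
    lam = mu + a * flow_rate        # events per unit time, per interval
    return rng.poisson(lam * dt)    # Poisson counts in each interval

t = np.arange(0, 100, 1.0)              # hours
flow = np.where(t < 60, 30.0, 0.0)      # L/s: injection for 60 h, then shut-in
counts = simulate_induced_catalog(flow, dt=1.0)
print("events during injection:", counts[t < 60].sum(),
      "after shut-in:", counts[t >= 60].sum())
```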
Emergence of diversity in homogeneous coupled Boolean networks
NASA Astrophysics Data System (ADS)
Kang, Chris; Aguilar, Boris; Shmulevich, Ilya
2018-05-01
The origin of multicellularity in metazoa is one of the fundamental questions of evolutionary biology. We have modeled the generic behaviors of gene regulatory networks in isogenic cells as stochastic nonlinear dynamical systems—coupled Boolean networks with perturbation. Model simulations under a variety of dynamical regimes suggest that the central characteristic of multicellularity, permanent spatial differentiation (diversification), indeed can arise. Additionally, we observe that diversification is more likely to occur near the critical regime of Lyapunov stability.
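A stripped-down illustration of perturbation-driven diversification: identical ("isogenic") copies of one random Boolean network, started in the same state, drift apart under small random perturbations. The intercellular coupling of the full model is omitted, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, p, cells, steps = 100, 2, 0.01, 10, 200

inputs = rng.integers(0, n, size=(n, k))      # shared wiring (isogenic cells)
tables = rng.integers(0, 2, size=(n, 2**k))   # shared truth tables
powers = 2 ** np.arange(k)

state = np.tile(rng.integers(0, 2, size=n), (cells, 1))  # identical start
for _ in range(steps):
    idx = (state[:, inputs] * powers).sum(axis=2)
    state = tables[np.arange(n), idx]                    # synchronous update
    state ^= (rng.random(state.shape) < p)               # random perturbation

# Mean pairwise Hamming distance between cells: persistent nonzero values
# indicate diversification of initially identical networks
diff = np.mean([np.mean(state[i] != state[j])
                for i in range(cells) for j in range(i + 1, cells)])
print("mean pairwise disagreement:", diff)
```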
Shamshirband, Shahaboddin; Banjanovic-Mehmedovic, Lejla; Bosankic, Ivan; Kasapovic, Suad; Abdul Wahab, Ainuddin Wahid Bin
2016-01-01
Intelligent Transportation Systems rely on understanding, predicting and affecting the interactions between vehicles. The goal of this paper is to choose a small subset from the larger set of recorded parameters so that the resulting regression model is simple, yet has good predictive ability for vehicle-agent speed relative to the vehicle intruder. The method of ANFIS (adaptive neuro-fuzzy inference system) was applied to the data resulting from these measurements. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of agent speed relative to the intruder. This process includes several ways to discover a subset of the total set of recorded parameters that shows good predictive capability. The ANFIS network was used to perform a variable search. Then, it was used to determine how nine parameters (intruder front sensors active (Boolean), intruder rear sensors active (Boolean), agent front sensors active (Boolean), agent rear sensors active (Boolean), RSSI signal intensity/strength (integer), elapsed time (seconds), distance between agent and intruder (m), angle of agent relative to intruder (°), and altitude difference between agent and intruder (m)) influence the prediction of agent speed relative to the intruder. The results indicated that the distance between the vehicle agent and the vehicle intruder (m) and the angle of the vehicle agent relative to the vehicle intruder (°) are the most influential parameters for vehicle-agent speed relative to the vehicle intruder.
Harris, Daniel R.; Henderson, Darren W.; Kavuluru, Ramakanth; Stromberg, Arnold J.; Johnson, Todd R.
2015-01-01
We present a custom, Boolean query generator utilizing common-table expressions (CTEs) that is capable of scaling with big datasets. The generator maps user-defined Boolean queries, such as those interactively created in clinical-research and general-purpose healthcare tools, into SQL. We demonstrate the effectiveness of this generator by integrating our work into the Informatics for Integrating Biology and the Bedside (i2b2) query tool and show that it is capable of scaling. Our custom generator replaces and outperforms the default query generator found within the Clinical Research Chart (CRC) cell of i2b2. In our experiments, sixteen different types of i2b2 queries were identified by varying four constraints: date, frequency, exclusion criteria, and whether selected concepts occurred in the same encounter. We generated non-trivial, random Boolean queries based on these 16 types; the corresponding SQL queries produced by both generators were compared by execution times. The CTE-based solution significantly outperformed the default query generator and provided a much more consistent response time across all query types (M=2.03, SD=6.64 vs. M=75.82, SD=238.88 seconds). Without costly hardware upgrades, we provide a scalable solution based on CTEs with very promising empirical results centered on performance gains. The evaluation methodology used for this provides a means of profiling clinical data warehouse performance. PMID:25192572
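A hedged sketch of the general idea of mapping a Boolean query tree onto chained CTEs; the schema (an i2b2-like fact table with patient_num and concept_cd columns) and the concept codes are assumptions for illustration, not the generator's actual output.

```python
def boolean_query_to_cte(tree, table="observation_fact", col="concept_cd"):
    """Map a Boolean expression tree over concept codes onto a chain of
    common-table expressions (CTEs), each materializing a set of patient
    numbers; AND becomes INTERSECT and OR becomes UNION."""
    ctes, counter = [], [0]

    def walk(node):
        name = f"q{counter[0]}"; counter[0] += 1
        if isinstance(node, str):  # leaf: a single concept code
            ctes.append(f"{name} AS (SELECT DISTINCT patient_num "
                        f"FROM {table} WHERE {col} = '{node}')")
        else:
            op, left, right = node
            l, r = walk(left), walk(right)
            combine = {"AND": "INTERSECT", "OR": "UNION"}[op]
            ctes.append(f"{name} AS (SELECT patient_num FROM {l} "
                        f"{combine} SELECT patient_num FROM {r})")
        return name

    top = walk(tree)
    return "WITH " + ",\n     ".join(ctes) + f"\nSELECT patient_num FROM {top};"

# (diabetes AND hypertension) OR statin exposure: hypothetical concept codes
print(boolean_query_to_cte(("OR", ("AND", "ICD:E11", "ICD:I10"), "RX:statin")))
```

Each subexpression becomes its own named, reusable result set, which is what lets the database plan and execute the whole Boolean query in one pass rather than through nested subqueries.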
Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.
2010-01-01
This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
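A toy instance of a Boolean-Volterra model with one second-order excitatory (AND) term and one inhibitory (AND_NOT) term; the term structure and lags are illustrative, not estimated from data.

```python
import numpy as np

rng = np.random.default_rng(9)
T, lags = 2000, 3
x = rng.random(T) < 0.2   # input point process (spikes as booleans)

def boolean_volterra(x, lags):
    """Toy second-order Boolean-Volterra model: the output fires when the
    input fired at lag 1 AND lag 2, unless it also fired at lag 3
    (an AND_NOT inhibitory term)."""
    y = np.zeros_like(x)
    for t in range(lags, len(x)):
        excite = x[t - 1] & x[t - 2]   # second-order AND term
        inhibit = x[t - 3]             # inhibitory lag
        y[t] = excite & ~inhibit       # AND_NOT combination
    return y

y = boolean_volterra(x, lags)
print("output spike rate:", y.mean())
```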
Phased-mission system analysis using Boolean algebraic methods
NASA Technical Reports Server (NTRS)
Somani, Arun K.; Trivedi, Kishor S.
1993-01-01
Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, the system configuration, and the success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state space explosion that commonly plagues Markov chain-based analysis. A phase algebra was developed to account for the effects of variable configurations and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate the use of our technique by means of an example and present numerical results to show the effects of mission phases on the system reliability.
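A minimal exact computation in the spirit of the approach, for permanent failures: enumerate the phase in which each component fails and check a per-phase Boolean success criterion (a component failing during a phase is conservatively treated as down for that whole phase). Structure and numbers are illustrative.

```python
from itertools import product

# Per-phase conditional survival probabilities for 3 components over 2 phases
# (hypothetical numbers); failures are permanent across phases.
r = [(0.95, 0.90), (0.99, 0.97), (0.90, 0.85)]

# Success criteria per phase as Boolean functions of "component still up":
# phase 1 needs c0 AND (c1 OR c2); phase 2 needs (c0 AND c1) OR c2.
criteria = [lambda u: u[0] and (u[1] or u[2]),
            lambda u: (u[0] and u[1]) or u[2]]

def mission_reliability(r, criteria):
    """Exact phased-mission reliability by enumerating, for each component,
    the phase in which it fails (index m means it survives all phases)."""
    n, m = len(r), len(criteria)
    total = 0.0
    for fail_phase in product(range(m + 1), repeat=n):
        p = 1.0
        for i, f in enumerate(fail_phase):
            for j in range(min(f, m)):
                p *= r[i][j]             # survived phases before failing
            if f < m:
                p *= 1.0 - r[i][f]       # failed during phase f
        ok = all(criteria[j]([f > j for f in fail_phase]) for j in range(m))
        total += p if ok else 0.0
    return total

print("mission reliability: %.4f" % mission_reliability(r, criteria))
```

Enumeration is exponential in the number of components; the phase-algebra machinery of the paper exists precisely to avoid this blow-up for realistic systems.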
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. A standard soft set can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. Meanwhile, these applications become worthless if the Boolean information system contains missing data due to error, security or mishandling. Little research has focused on handling partially incomplete soft sets, and none of the existing approaches achieves a high accuracy rate in predicting missing data. It has been shown that the data filling approach for incomplete soft sets (DFIS) has the best performance among all previous approaches. However, in reviewing DFIS, accuracy is still its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus more on the reliability of associations among parameters in the soft set. Experimental results on small datasets, four UCI benchmark datasets, and causality workbench lung cancer (LUCAP2) data show that ADFIS achieves better accuracy than DFIS.
Specialty functions singularity mechanics problems
NASA Technical Reports Server (NTRS)
Sarigul, Nesrin
1989-01-01
The focus is on the development of more accurate and efficient advanced methods for the solution of singular problems encountered in mechanics. At present, finite element methods in conjunction with special functions, Boolean sum and blending interpolations are being considered. In dealing with systems which contain a singularity, special finite elements are being formulated to be used in singular regions. Further, special transition elements are being formulated to couple the special element to the mesh that models the rest of the system, and to be used in conjunction with 1-D, 2-D and 3-D elements within the same mesh. Computational simulation with a least squares fit is being utilized to construct special elements if there is an unknown singularity in the system. A novel approach is taken in the formulation of the elements in that: (1) the material properties are modified to include time-, temperature-, coordinate- and stress-dependent behavior within the element; (2) material properties vary at nodal points of the elements; (3) a hidden-symbolic computation scheme is developed and utilized in formulating the elements; and (4) special functions and the Boolean sum are utilized in order to interpolate the field variables and their derivatives along the boundary of the elements. It may be noted that the proposed methods are also applicable to fluids and coupled problems.
Evaluation of properties over phylogenetic trees using stochastic logics.
Requeno, José Ignacio; Colom, José Manuel
2016-06-14
Model checking has recently been introduced as an integrated framework for extracting information from phylogenetic trees, using temporal logics as a querying language; these are extensions of modal logic that impose restrictions on a Boolean formula along a path of events. The phylogenetic tree is considered a transition system modeling evolution as a sequence of genomic mutations (we understand a mutation as any of the different ways DNA can be changed), while this kind of logic is suitable for traversing it in a strict and exhaustive way. Given a biological property that we wish to inspect over the phylogeny, the verifier returns true if the specification is satisfied, or a counterexample that falsifies it. However, this approach has only considered qualitative aspects of the phylogeny. In this paper, we remedy the limitations of the previous framework by including and handling quantitative information such as explicit time or probability. To this end, we apply current probabilistic continuous-time extensions of model checking to phylogenetics. We reinterpret a catalog of qualitative properties in a numerical way, and we also present new properties that could not be analyzed before. For instance, we obtain the likelihood of a tree topology according to a mutation model. As a case study, we analyze several phylogenies in order to obtain the maximum likelihood with the model checking tool PRISM. In addition, we have adapted the software to optimize the computation of maximum likelihoods. We have shown that probabilistic model checking is a competitive framework for describing and analyzing quantitative properties over phylogenetic trees. This formalism adds soundness and readability to the definition of models and specifications. Besides, the existence of model checking tools hides the underlying technology, sparing biologists the extension, upgrade, debugging and maintenance of a software tool. A set of benchmarks justifies the feasibility of our approach.
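The "likelihood of a tree topology according to a mutation model" can be illustrated outside PRISM with Felsenstein pruning on a toy two-state symmetric mutation model; the tree shape, branch lengths and tip states below are hypothetical.

```python
import numpy as np

def P(t, mu=1.0):
    """Transition matrix of a symmetric two-state mutation model after time t:
    the probability of staying in the same state is 0.5 + 0.5*exp(-2*mu*t)."""
    p_same = 0.5 + 0.5 * np.exp(-2 * mu * t)
    return np.array([[p_same, 1 - p_same], [1 - p_same, p_same]])

def leaf_like(state):
    L = np.zeros(2); L[state] = 1.0
    return L

def node_like(children):
    """Felsenstein pruning: children is a list of (partial likelihood, branch
    length); the node's partial likelihood is the product over children of
    P(t) applied to each child's partial likelihood."""
    L = np.ones(2)
    for child_L, t in children:
        L *= P(t) @ child_L
    return L

# Tree: root -> (leaf A, internal); internal -> (leaf B, leaf C)
tips = {"A": 0, "B": 1, "C": 1}  # observed character at each leaf
internal = node_like([(leaf_like(tips["B"]), 0.2), (leaf_like(tips["C"]), 0.3)])
root = node_like([(leaf_like(tips["A"]), 0.5), (internal, 0.1)])
likelihood = 0.5 * root.sum()    # uniform prior on the root state
print("tree likelihood:", likelihood)
```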
Constructing Sample Space with Combinatorial Reasoning: A Mixed Methods Study
ERIC Educational Resources Information Center
McGalliard, William A., III.
2012-01-01
Recent curricular developments suggest that students at all levels need to be statistically literate and able to efficiently and accurately make probabilistic decisions. Furthermore, statistical literacy is a requirement to being a well-informed citizen of society. Research also recognizes that the ability to reason probabilistically is supported…
A probabilistic watershed-based framework was developed to encompass wadeable streams within all three ecoregions of West Virginia, with the exclusion noted below. In Phase I of the project (year 2001), we developed and applied a probabilistic watershed-based sampling framework ...
As part of the National Coastal Assessment, the Environmental Monitoring and Assessment Program of EPA is conducting a three year evaluation of benthic habitat condition of California estuaries. In 1999, probabilistic sampling for a variety of biotic and abiotic condition indica...
Sample path analysis of contribution and reward in cooperative groups.
Toyoizumi, Hiroshi
2009-02-07
Explaining cooperative behavior is one of the major challenges in both biology and human society. The individual reward in a cooperative group depends on how the rewards are shared within the group. Thus, the group size dynamics of a cooperative group and the reward-allocation rule seem essential to evaluating the emergence of cooperative groups. We apply a sample path-based analysis, an extension of Little's formula, to general cooperative groups. We show that the expected reward is insensitive to the specific reward-allocation rule and to the probabilistic structure of the group dynamics, and that a simple productivity condition guarantees the expected reward to be larger than the average contribution. As an example, we examine social queues to see the insensitivity result in detail.
Poisson-Like Spiking in Circuits with Probabilistic Synapses
Moreno-Bote, Rubén
2014-01-01
Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
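The Fano factor constancy prediction is easy to see in a stripped-down model: binomial release at a probabilistic synapse gives release counts with Fano factor 1 - p regardless of the presynaptic rate. A minimal sketch, with an assumed release probability:

```python
import numpy as np

rng = np.random.default_rng(13)
p_release = 0.3  # assumed synaptic release probability

for rate in (5, 50, 500):  # presynaptic spikes per second, regular drive
    n_spikes = rate        # spikes arriving in a 1-second counting window
    # Releases per window under independent binomial transmission
    releases = rng.binomial(n_spikes, p_release, size=20000)
    print(rate, "Hz: Fano factor =", releases.var() / releases.mean())
```

All three rates yield a Fano factor near 1 - p = 0.7, illustrating the claimed constancy of synaptic conductance variability across the firing-rate continuum.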
ERIC Educational Resources Information Center
Tsitsipis, Georgios; Stamovlasis, Dimitrios; Papageorgiou, George
2012-01-01
In this study, the effect of 3 cognitive variables such as logical thinking, field dependence/field independence, and convergent/divergent thinking on some specific students' answers related to the particulate nature of matter was investigated by means of probabilistic models. Besides recording and tabulating the students' responses, a combination…
McClelland, James L.
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
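The claim about logistic/softmax units computing exact posteriors is easy to verify numerically: set each unit's bias to log P(h) and its evidence weights to log-likelihoods, and the softmax output equals Bayes' rule. A small sketch with hypothetical numbers:

```python
import numpy as np

priors = np.array([0.7, 0.2, 0.1])   # P(h) for three hypotheses (hypothetical)
like = np.array([[0.9, 0.1],         # P(e_j = 1 | h) for two binary features
                 [0.5, 0.5],
                 [0.1, 0.9]])
e = np.array([1, 0])                 # observed evidence

# Softmax unit: bias = log prior, net input adds log-likelihood per feature
bias = np.log(priors)
net = bias + np.where(e == 1, np.log(like), np.log(1 - like)).sum(axis=1)
posterior = np.exp(net - net.max())
posterior /= posterior.sum()         # softmax = normalized exponential

# Direct Bayes' rule for comparison
joint = priors * np.prod(np.where(e == 1, like, 1 - like), axis=1)
print(posterior, joint / joint.sum())  # the two vectors are identical
```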
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
Random Boolean networks for autoassociative memory: Optimization and sequential learning
NASA Astrophysics Data System (ADS)
Sherrington, D.; Wong, K. Y. M.
Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised: (i) optimization in the presence of noise and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.
Security analysis of boolean algebra based on Zhang-Wang digital signature scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Jinbin, E-mail: jbzheng518@163.com
2014-10-06
In 2005, Zhang and Wang proposed an improved signature scheme that uses neither a one-way hash function nor message redundancy. In this paper, we show through Boolean-algebra analysis (for example, of the bitwise exclusive-or) that this scheme has potential security weaknesses, and we point out, by analyzing the output of the corresponding assembly program segment, that the mapping between assembly instructions and machine code is in fact not one-to-one, which may cause security problems invisible to the software.
Realization of a quantum Hamiltonian Boolean logic gate on the Si(001):H surface.
Kolmer, Marek; Zuzak, Rafal; Dridi, Ghassen; Godlewski, Szymon; Joachim, Christian; Szymonski, Marek
2015-08-07
The design and construction of the first prototypical QHC (Quantum Hamiltonian Computing) atomic scale Boolean logic gate is reported using scanning tunnelling microscope (STM) tip-induced atom manipulation on an Si(001):H surface. The NOR/OR gate truth table was confirmed by dI/dU STS (Scanning Tunnelling Spectroscopy) tracking how the surface states of the QHC quantum circuit on the Si(001):H surface are shifted according to the input logical status.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
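A minimal sketch of Monte Carlo evaluation of a fault tree's top event (crude sampling rather than the paper's adaptive importance sampling, which becomes necessary when failures are much rarer); the tree structure and probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(17)
pf = np.array([1e-2, 5e-3, 2e-2, 1e-2])  # bottom-event probabilities (hypothetical)

def top_event(x):
    """Fault tree: TOP = (e0 AND e1) OR (e2 AND e3) (illustrative structure)."""
    return (x[:, 0] & x[:, 1]) | (x[:, 2] & x[:, 3])

# Crude Monte Carlo estimate of the top-event probability
n = 1_000_000
x = rng.random((n, 4)) < pf   # sample bottom-event occurrences
print("P(system failure) ~", top_event(x).mean())
```

For this tree the exact answer is p0*p1 + p2*p3 - p0*p1*p2*p3, about 2.5e-4, so a million crude samples still work; for top-event probabilities of 1e-8 or below, importance sampling along the dominant bottom events is what keeps the variance manageable.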
Holzgrefe, Henry; Ferber, Georg; Champeroux, Pascal; Gill, Michael; Honda, Masaki; Greiter-Wilke, Andrea; Baird, Theodore; Meyer, Olivier; Saulnier, Muriel
2014-01-01
In vivo models have been required to demonstrate relative cardiac safety, but model sensitivity has not been systematically investigated. Cross-species and human translation of repolarization delay, assessed as QT/QTc prolongation, has not been compared employing common methodologies across multiple species and sites. Therefore, the accurate translation of repolarization results within and between preclinical species, and to man, remains problematic. Six pharmaceutical companies entered into an informal consortium designed to collect high-resolution telemetered data in multiple species (dog; n=34, cynomolgus; n=37, minipig; n=12, marmoset; n=14, guinea pig; n=5, and man; n=57). All animals received vehicle and varying doses of moxifloxacin (3-100 mg/kg, p.o.) with telemetered ECGs (≥500 Hz) obtained for 20-24h post-dose. Individual probabilistic QT-RR relationships were derived for each subject. The rate-correction efficacies of the individual (QTca) and generic correction formulae (Bazett, Fridericia, and Van de Water) were objectively assessed as the mean squared slopes of the QTc-RR relationships. Normalized moxifloxacin QTca responses (Veh Δ%/μM) were derived for 1h centered on the moxifloxacin Tmax. All QT-RR ranges demonstrated probabilistic uncertainty; slopes varied distinctly by species where dog and human exhibited the lowest QT rate-dependence, which was much steeper in the cynomolgus and guinea pig. Incorporating probabilistic uncertainty, the normalized QTca-moxifloxacin responses were similarly conserved across all species, including man. The current results provide the first unambiguous evidence that all preclinical in vivo repolarization assays, when accurately modeled and evaluated, yield results that are consistent with the conservation of moxifloxacin-induced QT prolongation across all common preclinical species. Furthermore, these outcomes are directly transferable across all species including man. The consortium results indicate that the implementation of standardized QTc data presentation, QTc reference cycle lengths, and rate-correction coefficients can markedly improve the concordance of preclinical and clinical outcomes in most preclinical species. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total we obtained 1112 unique treatment plans with five plan parameters and disease information from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and techniques. The disease information includes disease site, and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and data with artificially induced anomalies. In the study, we randomly sampled data with anomalies in a specified anomaly space. We tested the approach with three groups of plan anomalies: improper concurrence of values of all five plan parameters, improper concurrence of values of any two out of five parameters, and all single plan parameter value anomalies. In total, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) to detect concurrence anomalies of five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39% respectively. To detect the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54% respectively. The computation time to detect a plan anomaly of each type in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests. The results suggest that this type of model could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical System.
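A heavily simplified sketch of the idea: learn Dirichlet-smoothed probabilities over plan-parameter/disease combinations and flag low-probability plans. Only a two-variable pairing is modeled here, and all codes, counts and the threshold are hypothetical.

```python
from collections import Counter

def train(plans, alpha=1.0):
    """Dirichlet-smoothed joint model over (disease_site, modality) pairs;
    alpha plays the role of the Dirichlet prior pseudo-count. The two-variable
    pairing is a simplification of the paper's full Bayesian network."""
    counts = Counter(plans)
    sites = {s for s, _ in plans}
    mods = {m for _, m in plans}
    total = len(plans) + alpha * len(sites) * len(mods)
    return counts, alpha, total

def prob(model, pair):
    counts, alpha, total = model
    return (counts[pair] + alpha) / total

history = [("prostate", "VMAT")] * 400 + [("breast", "3DCRT")] * 300 + \
          [("lung", "IMRT")] * 299 + [("prostate", "electron")]  # one oddball
model = train(history)
for pair in [("breast", "3DCRT"), ("prostate", "electron")]:
    flag = "anomalous" if prob(model, pair) < 0.01 else "ok"
    print(pair, flag, round(prob(model, pair), 5))
```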
All optical logic for optical pattern recognition and networking applications
NASA Astrophysics Data System (ADS)
Khoury, Jed
2017-05-01
In this paper, we propose architectures for the implementation of 16 two-input Boolean optical gates using an externally pumped phase-conjugate Michelson interferometer. Depending on the gate to be implemented, some require a single-stage interferometer and others a two-stage interferometer. The proposed optical gates can be used in several applications in optical networks including, but not limited to, all-optical packet-router switching and all-optical error detection. The optical logic gates can also be used in the recognition of noiseless rotation- and scale-invariant objects such as fingerprints for homeland security applications.
Exploiting Data Missingness in Bayesian Network Modeling
NASA Astrophysics Data System (ADS)
Rodrigues de Morais, Sérgio; Aussem, Alex
This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy Boolean variables, which represent the presence/absence of the respective random variable value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contribute to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.
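The augmentation itself is mechanically simple; a minimal sketch with an illustrative table (column names hypothetical), adding one dummy Boolean indicator per variable before any downstream BN learning:

```python
import pandas as pd

# Toy incomplete dataset; each feature gets a companion "_missing" Boolean
# variable so a downstream Bayesian network (or any classifier) can model
# the missingness mechanism itself rather than discarding it.
df = pd.DataFrame({"age": [34, None, 51, None], "bp": [120, 135, None, 110]})
for col in ["age", "bp"]:
    df[col + "_missing"] = df[col].isna()
    df[col] = df[col].fillna(df[col].mean())  # simple imputation placeholder
print(df)
```

The indicator columns carry the information about which values were absent, which is exactly what lets the learned network exploit informative (non-random) missingness for classification.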
Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan
2005-01-01
Analytical techniques have progressively become more sophisticated, and we can now consider the effect of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading--such as that found in a turbine engine hot-section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifing methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available, such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
Probabilistic thinking and death anxiety: a terror management based study.
Hayslip, Bert; Schuler, Eric R; Page, Kyle S; Carver, Kellye S
2014-01-01
Terror Management Theory has been utilized to understand how death can change behavioral outcomes and social dynamics. One area that is not well researched is why individuals willingly engage in risky behavior that could accelerate their mortality. One method of distancing a potential life threatening outcome when engaging in risky behaviors is through stacking probability in favor of the event not occurring, termed probabilistic thinking. The present study examines the creation and psychometric properties of the Probabilistic Thinking scale in a sample of young, middle aged, and older adults (n = 472). The scale demonstrated adequate internal consistency reliability for each of the four subscales, excellent overall internal consistency, and good construct validity regarding relationships with measures of death anxiety. Reliable age and gender effects in probabilistic thinking were also observed. The relationship of probabilistic thinking as part of a cultural buffer against death anxiety is discussed, as well as its implications for Terror Management research.
Boolean gates on actin filaments
NASA Astrophysics Data System (ADS)
Siccardi, Stefano; Tuszynski, Jack A.; Adamatzky, Andrew
2016-01-01
Actin is a globular protein which forms long polar filaments in the eukaryotic cytoskeleton. Actin networks play a key role in cell mechanics and cell motility. They have also been implicated in information transmission and processing, memory and learning in neuronal cells. The actin filaments have been shown to support propagation of voltage pulses. Here we apply a coupled nonlinear transmission line model of actin filaments to study interactions between voltage pulses. To represent digital information we assign a logical TRUTH value to the presence of a voltage pulse in a given location of the actin filament, and FALSE to the pulse's absence, so that information flows along the filament with pulse transmission. When two pulses, representing Boolean values of input variables, interact, they can facilitate or inhibit further propagation of each other. We exploit this phenomenon to construct Boolean logic gates and a one-bit half-adder with interacting voltage pulses. We discuss the implications of these findings for cellular processes and technological applications.
PyBoolNet: a Python package for the generation, analysis and visualization of Boolean networks.
Klarner, Hannes; Streck, Adam; Siebert, Heike
2017-03-01
The goal of this project is to provide a simple interface for working with Boolean networks. Emphasis is put on easy access to a large number of common tasks including the generation and manipulation of networks, attractor and basin computation, model checking and trap space computation, execution of established graph algorithms as well as graph drawing and layouts. PyBoolNet is a Python package for working with Boolean networks that supports simple access to model checking via NuSMV, standard graph algorithms via NetworkX and visualization via dot. In addition, state-of-the-art attractor computation exploiting Potassco ASP is implemented. The package is function-based and uses only native Python and NetworkX data types. https://github.com/hklarner/PyBoolNet. hannes.klarner@fu-berlin.de.
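For readers without the package at hand, the following self-contained sketch shows the kind of task PyBoolNet automates: computing the attractors of a small Boolean network under synchronous update. It is illustrative plain Python, not the PyBoolNet API, and the three-node network is invented.

```python
from itertools import product

# toy network: v0 = v1 AND v2, v1 = NOT v0, v2 = v0 OR v1
rules = [
    lambda s: s[1] and s[2],
    lambda s: not s[0],
    lambda s: s[0] or s[1],
]

def step(state):
    """One synchronous update of all nodes."""
    return tuple(int(f(state)) for f in rules)

def attractors(n):
    found = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        while state not in seen:          # iterate until the trajectory cycles
            seen[state] = len(seen)
            state = step(state)
        cycle_start = seen[state]
        cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= cycle_start]
        found.add(tuple(sorted(cycle)))   # canonical form to deduplicate
    return found

for att in attractors(3):
    print(att)                            # this toy network has one 5-cycle
```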
Discrete dynamic modeling of cellular signaling networks.
Albert, Réka; Wang, Rui-Sheng
2009-01-01
Understanding signal transduction in cellular systems is a central issue in systems biology. Numerous experiments from different laboratories generate an abundance of individual components and causal interactions mediating environmental and developmental signals. However, for many signal transduction systems there is insufficient information on the overall structure and the molecular mechanisms involved in the signaling network. Moreover, lack of kinetic and temporal information makes it difficult to construct quantitative models of signal transduction pathways. Discrete dynamic modeling, combined with network analysis, provides an effective way to integrate fragmentary knowledge of regulatory interactions into a predictive mathematical model which is able to describe the time evolution of the system without the requirement for kinetic parameters. This chapter introduces the fundamental concepts of discrete dynamic modeling, particularly focusing on Boolean dynamic models. We describe this method step-by-step in the context of cellular signaling networks. Several variants of Boolean dynamic models including threshold Boolean networks and piecewise linear systems are also covered, followed by two examples of successful application of discrete dynamic modeling in cell biology.
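One of the variants mentioned in this chapter, the threshold Boolean network, is compact enough to sketch directly: each node switches on when the weighted sum of its regulators reaches its threshold. The weights and thresholds below are arbitrary illustrative values.

```python
import numpy as np

W = np.array([[ 0,  1, -1],   # row i: weights of the inputs into node i
              [ 1,  0,  1],
              [-1,  1,  0]])
theta = np.array([1, 1, 0])   # per-node activation thresholds

def update(x):
    """Synchronous threshold update: x_i <- [sum_j W_ij * x_j >= theta_i]."""
    return (W @ x >= theta).astype(int)

x = np.array([1, 0, 1])
for t in range(5):
    print(t, x)               # this initial state settles into a period-2 cycle
    x = update(x)
```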
Origins of Chaos in Autonomous Boolean Networks
NASA Astrophysics Data System (ADS)
Socolar, Joshua; Cavalcante, Hugo; Gauthier, Daniel; Zhang, Rui
2010-03-01
Networks with nodes consisting of ideal Boolean logic gates are known to display either steady states, periodic behavior, or an ultraviolet catastrophe where the number of logic-transition events circulating in the network per unit time grows as a power-law. In an experiment, non-ideal behavior of the logic gates prevents the ultraviolet catastrophe and may lead to deterministic chaos. We identify certain non-ideal features of real logic gates that enable chaos in experimental networks. We find that short-pulse rejection and the asymmetry between the logic states tend to engender periodic behavior. On the other hand, a memory effect termed "degradation" can generate chaos. Our results strongly suggest that deterministic chaos can be expected in a large class of experimental Boolean-like networks. Such devices may find application in a variety of technologies requiring fast complex waveforms or flat power spectra. The non-ideal effects identified here also have implications for the statistics of attractors in large complex networks.
Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele
2018-02-01
Restricted Boltzmann machines are described by the Gibbs measure of a bipartite spin glass, which in turn can be seen as a generalized Hopfield network. This equivalence allows us to characterize the state of these systems in terms of their retrieval capabilities for pure states, at both low and high load. We study the paramagnetic-spin glass and the spin glass-retrieval phase transitions, as the pattern (i.e., weight) distribution and spin (i.e., unit) priors vary smoothly from Gaussian real variables to Boolean discrete variables. Our analysis shows that the presence of a retrieval phase is robust and not peculiar to the standard Hopfield model with Boolean patterns. The retrieval region becomes larger when the pattern entries and retrieval units get more peaked and, conversely, when the hidden units acquire a broader prior and therefore have a stronger response to high fields. Moreover, at low load retrieval always exists below some critical temperature, for every pattern distribution ranging from the Boolean to the Gaussian case.
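A toy numerical illustration of the retrieval phase discussed here: a Hopfield network with Hebbian couplings storing Boolean (±1) patterns recovers a stored pattern from a corrupted cue when the load is low. Network size, pattern count and corruption level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                                    # load P/N well below capacity
patterns = rng.choice([-1, 1], size=(P, N))      # stored Boolean patterns
W = (patterns.T @ patterns) / N                  # Hebbian couplings
np.fill_diagonal(W, 0)

state = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False) # corrupt 20% of the bits
state[flip] *= -1

for _ in range(5 * N):                           # asynchronous zero-temperature dynamics
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", (state @ patterns[0]) / N)  # ~1.0: retrieval
```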
Experimental Clocking of Nanomagnets with Strain for Ultralow Power Boolean Logic.
D'Souza, Noel; Salehi Fashami, Mohammad; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha
2016-02-10
Nanomagnetic implementations of Boolean logic have attracted attention because of their nonvolatility and the potential for unprecedented overall energy efficiency. Unfortunately, the large dissipative losses that occur when nanomagnets are switched with a magnetic field or spin-transfer torque severely compromise the energy efficiency. Recently, there have been experimental reports of utilizing the Spin Hall effect for switching magnets, and theoretical proposals for strain-induced switching of single-domain magnetostrictive nanomagnets, that might reduce the dissipative losses significantly. Here, we experimentally demonstrate, for the first time, that strain-induced switching of single-domain magnetostrictive nanomagnets of lateral dimensions ∼200 nm fabricated on a piezoelectric substrate can implement a nanomagnetic Boolean NOT gate and steer bit information unidirectionally in dipole-coupled nanomagnet chains. On the basis of the experimental results with bulk PMN-PT substrates, we estimate that the energy dissipation for logic operations in a reasonably scaled system using thin films will be a mere ∼1 aJ/bit.
ERIC Educational Resources Information Center
Vahabi, Mandana
2010-01-01
Objective: To test whether the format in which women receive probabilistic information about breast cancer and mammography affects their comprehension. Methods: A convenience sample of 180 women received pre-assembled randomized packages containing a breast health information brochure, with probabilities presented in either verbal or numeric…
Attention as Inference: Selection Is Probabilistic; Responses Are All-or-None Samples
ERIC Educational Resources Information Center
Vul, Edward; Hanus, Deborah; Kanwisher, Nancy
2009-01-01
Theories of probabilistic cognition postulate that internal representations are made up of multiple simultaneously held hypotheses, each with its own probability of being correct (henceforth, "probability distributions"). However, subjects make discrete responses and report the phenomenal contents of their mind to be all-or-none states rather than…
As part of the National Coastal Assessment, the Environmental Monitoring and Assessment Program of EPA is conducting a six year evaluation of benthic habitat condition for coastal waters of the western U.S. In 1999, probabilistic sampling for a range of biotic and abiotic conditi...
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
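A minimal numerical sketch of the Weibull approach referred to above: the two-parameter Weibull model gives the probability of failure at a given stress, and proof testing truncates the weak tail of the strength distribution. The characteristic strength and modulus below are invented, not values from the study.

```python
import math

sigma0, m = 300.0, 10.0   # MPa: characteristic strength and Weibull modulus (assumed)

def p_fail(s):
    """Two-parameter Weibull probability of failure at stress s."""
    return 1.0 - math.exp(-((s / sigma0) ** m))

def p_fail_after_proof(s, s_proof):
    """Conditional failure probability given survival of the proof load."""
    if s <= s_proof:
        return 0.0            # idealized model: all weaker parts were screened out
    return 1.0 - math.exp(-((s / sigma0) ** m - (s_proof / sigma0) ** m))

s = 220.0
print(p_fail(s))                      # as-manufactured part, ~4.4% failure risk
print(p_fail_after_proof(s, 240.0))   # survivor of a 240 MPa proof test: 0.0
```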
Failed rib region prediction in a human body model during crash events with precrash braking.
Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S
2018-02-28
The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case, where they were equal. The failed region patterns between models are similar; however, differences arise due to the stress relief from element elimination, which causes probabilistic failed regions to continue to rise after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.
2013-01-01
Introduction The prevalence of undernutrition, which is closely associated with socioeconomic and sanitation conditions, is often higher among indigenous than non-indigenous children in many countries. In Brazil, in spite of overall reductions in the prevalence of undernutrition in recent decades, the nutritional situation of indigenous children remains worrying. The First National Survey of Indigenous People’s Health and Nutrition in Brazil, conducted in 2008–2009, was the first study to evaluate a nationwide representative sample of indigenous peoples. This paper presents findings from this study on the nutritional status of indigenous children < 5 years of age in Brazil. Methods A multi-stage sampling was employed to obtain a representative sample of the indigenous population residing in villages in four Brazilian regions (North, Northeast, Central-West, and Southeast/South). Initially, a stratified probabilistic sampling was carried out for indigenous villages located in these regions. Households in sampled villages were selected by census or systematic sampling depending on the village population. The survey evaluated the health and nutritional status of children < 5 years, in addition to interviewing mothers or caretakers. Results Height and weight measurements were taken of 6,050 and 6,075 children, respectively. Prevalence rates of stunting, underweight, and wasting were 25.7%, 5.9%, and 1.3%, respectively. Even after controlling for confounding, the prevalence rates of underweight and stunting were higher among children in the North region, in low socioeconomic status households, in households with poorer sanitary conditions, with anemic mothers, with low birthweight, and who were hospitalized during the prior 6 months. A protective effect of breastfeeding for underweight was observed for children under 12 months. Conclusions The elevated rate of stunting observed in indigenous children approximates that of non-indigenous Brazilians four decades ago, before major health reforms greatly reduced its occurrence nationwide. Prevalence rates of undernutrition were associated with socioeconomic variables including income, household goods, schooling, and access to sanitation services, among other variables. Providing important baseline data for future comparison, these findings further suggest the relevance of social, economic, and environmental factors at different scales (local, regional, and national) for the nutritional status of indigenous peoples. PMID:23552397
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network, such as shortest paths and node connectivity. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
Rats bred for high alcohol drinking are more sensitive to delayed and probabilistic outcomes.
Wilhelm, C J; Mitchell, S H
2008-10-01
Alcoholics and heavy drinkers score higher on measures of impulsivity than nonalcoholics and light drinkers. This may be because of factors that predate drug exposure (e.g. genetics). This study examined the role of genetics by comparing impulsivity measures in ethanol-naive rats selectively bred based on their high [high alcohol drinking (HAD)] or low [low alcohol drinking (LAD)] consumption of ethanol. Replicates 1 and 2 of the HAD and LAD rats, developed by the University of Indiana Alcohol Research Center, completed two different discounting tasks. Delay discounting examines sensitivity to rewards that are delayed in time and is commonly used to assess 'choice' impulsivity. Probability discounting examines sensitivity to the uncertain delivery of rewards and has been used to assess risk taking and risk assessment. High alcohol drinking rats discounted delayed and probabilistic rewards more steeply than LAD rats. Discount rates associated with probabilistic and delayed rewards were weakly correlated, while bias was strongly correlated with discount rate in both delay and probability discounting. The results suggest that selective breeding for high alcohol consumption selects for animals that are more sensitive to delayed and probabilistic outcomes. Sensitivity to delayed or probabilistic outcomes may be predictive of future drinking in genetically predisposed individuals.
Spatiotemporal chaos of self-replicating spots in reaction-diffusion systems.
Wang, Hongli; Ouyang, Qi
2007-11-23
The statistical properties of self-replicating spots in the reaction-diffusion Gray-Scott model are analyzed. In the chaotic regime of the system, the spots that dominate the spatiotemporal chaos grow and divide in two or decay into the background randomly and continuously. The rates at which the spots are created and decay are observed to be linearly dependent on the number of spots in the system. We derive a probabilistic description of the spot dynamics based on the statistical independence of spots and thus propose a characterization of the spatiotemporal chaos dominated by replicating spots.
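For orientation, a minimal explicit-Euler integration of the Gray-Scott model is sketched below. The diffusion and feed/kill parameters are values commonly quoted for the spot-replication ("mitosis") regime, not those of the paper, and the grid size and seeding are arbitrary.

```python
import numpy as np

# u_t = Du*lap(u) - u*v^2 + F*(1-u),  v_t = Dv*lap(v) + u*v^2 - (F+k)*v
n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.0367, 0.0649   # assumed spot-replication regime

u = np.ones((n, n)); v = np.zeros((n, n))
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5       # seed a central perturbation
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def lap(a):
    """Periodic five-point Laplacian."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):                      # explicit Euler with dt = dx = 1
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

print("spot area fraction:", (v > 0.1).mean())   # crude proxy for spot coverage
```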
E-nose based rapid prediction of early mouldy grain using probabilistic neural networks
Ying, Xiaoguo; Liu, Wei; Hui, Guohua; Fu, Jun
2015-01-01
In this paper, a rapid prediction method for early mouldy grain using a probabilistic neural network (PNN) and an electronic nose (e-nose) was studied. E-nose responses to rice, red bean, and oat samples of different quality were measured and recorded. The e-nose data were analyzed using principal component analysis (PCA), a back-propagation (BP) network, and a PNN, respectively. Results indicated that PCA and the BP network could not clearly discriminate grain samples with different mould status and showed poor predictive accuracy. The PNN showed satisfactory ability to discriminate grain samples, with an accuracy of 93.75%. An e-nose combined with a PNN is effective for early mouldy grain prediction. PMID:25714125
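The probabilistic neural network named in this abstract is essentially a Parzen-window classifier: one Gaussian kernel per training sample, a class-averaged score, and a winner-take-all decision. A minimal sketch with synthetic stand-ins for e-nose feature vectors (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                                  # kernel width (smoothing parameter)

X0 = rng.normal(0.0, 0.4, size=(30, 4))     # class 0: "fresh" sensor responses
X1 = rng.normal(1.0, 0.4, size=(30, 4))     # class 1: "mouldy" sensor responses

def pnn_score(x, Xc):
    """Average Gaussian kernel response of class Xc at point x."""
    d2 = np.sum((Xc - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * sigma ** 2)))

def classify(x):
    return int(pnn_score(x, X1) > pnn_score(x, X0))

test = rng.normal(0.9, 0.4, size=4)
print("predicted class:", classify(test))    # 1 -> mouldy
```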
Analytical probabilistic modeling of RBE-weighted dose for ion therapy.
Wieser, H P; Hennig, P; Wahl, N; Bangert, M
2017-11-10
Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.
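The closed-form flavor of APM can be conveyed with a one-dimensional toy problem: for a Gaussian lateral profile and a Gaussian setup error, the expected dose is a Gaussian convolution and needs no scenario sampling. The profile width and uncertainty below are invented, and this is a sketch of the idea, not the paper's pencil-beam algorithm.

```python
import numpy as np

sigma, s = 5.0, 2.0           # beam width and setup uncertainty in mm (assumed)
x = np.linspace(-20, 20, 9)   # evaluation points

def profile(x, shift=0.0):
    """Unit-amplitude Gaussian lateral dose profile."""
    return np.exp(-(x - shift) ** 2 / (2 * sigma ** 2))

# closed form: E[profile(x - eps)] for eps ~ N(0, s^2) is again Gaussian,
# with widened variance sigma^2 + s^2 and reduced amplitude
exp_apm = sigma / np.sqrt(sigma**2 + s**2) * np.exp(-x**2 / (2 * (sigma**2 + s**2)))

# Monte Carlo reference (the scenario-sampling alternative)
eps = np.random.default_rng(0).normal(0.0, s, size=20000)
exp_mc = profile(x[:, None], eps[None, :]).mean(axis=1)

print(np.max(np.abs(exp_apm - exp_mc)))   # agreement up to MC noise
```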
A probabilistic and continuous model of protein conformational space for template-free modeling.
Zhao, Feng; Peng, Jian; Debartolo, Joe; Freed, Karl F; Sosnick, Tobin R; Xu, Jinbo
2010-06-01
One of the major challenges with protein template-free modeling is an efficient sampling algorithm that can explore a huge conformation space quickly. The popular fragment assembly method constructs a conformation by stringing together short fragments extracted from the Protein Data Bank (PDB). The discrete nature of this method may limit generated conformations to a subspace to which the native fold does not belong. Another worry is that a protein with a genuinely new fold may contain some fragments not in the PDB. This article presents a probabilistic model of protein conformational space to overcome the above two limitations. This probabilistic model employs directional statistics to model the distribution of backbone angles and second-order Conditional Random Fields (CRFs) to describe the sequence-angle relationship. Using this probabilistic model, we can sample protein conformations in a continuous space, as opposed to the widely used fragment assembly and lattice model methods that work in a discrete space. We show that when coupled with a simple energy function, this probabilistic method compares favorably with the fragment assembly method in the blind CASP8 evaluation, especially on alpha or small beta proteins. To our knowledge, this is the first probabilistic method that can search conformations in a continuous space and achieve favorable performance. Our method also generated three-dimensional (3D) models better than template-based methods for a couple of CASP8 hard targets. The method described in this article can also be applied to protein loop modeling, model refinement, and even RNA tertiary structure prediction.
Scale Dependence of Spatiotemporal Intermittence of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Siddani, Ravi K.
2011-01-01
It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate are described by a mixed distribution with a nonzero probability of having a sharp value zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential formula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero, as is sometimes supposed. We also give an explanation of the observed behavior in terms of a simple probabilistic model based on the premise that the rainfall process has an intrinsic memory.
ERIC Educational Resources Information Center
Agus, Mirian; Penna, Maria Pietronilla; Peró-Cebollero, Maribel; Guàrdia-Olmos, Joan
2016-01-01
Research on the graphical facilitation of probabilistic reasoning has been characterised by the effort expended to identify valid assessment tools. The authors developed an assessment instrument to compare reasoning performances when problems were presented in verbal-numerical and graphical-pictorial formats. A sample of undergraduate psychology…
Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams
1992-07-01
Ordered Binary Decision Diagrams (OBDDs) represent Boolean functions as directed acyclic graphs. They form a canonical representation, making testing of functional properties such as satisfiability and equivalence straightforward. To extract maximum performance, careful attention has been given to programming the memory management routines [Brace et al. 1990]. (Figure 1, a truth table and decision tree representation of a three-variable function, is omitted here.)
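To make the OBDD idea concrete, here is a from-scratch sketch (not from the report): hash-consed Shannon expansion under a fixed variable order, which yields the canonical shared-node DAG. The example function is the three-variable majority.

```python
nodes = {}                       # unique table: (var, lo, hi) -> node id
def mk(var, lo, hi):
    if lo == hi:                 # redundant test: skip the node entirely
        return lo
    return nodes.setdefault((var, lo, hi), len(nodes) + 2)  # ids 0/1 are terminals

def build(f, n, i=0, bits=()):
    """Build the OBDD of f : {0,1}^n -> {0,1} with variable order x0 < x1 < ..."""
    if i == n:
        return int(f(bits))                # terminal 0 or 1
    lo = build(f, n, i + 1, bits + (0,))   # Shannon cofactor with x_i = 0
    hi = build(f, n, i + 1, bits + (1,))   # Shannon cofactor with x_i = 1
    return mk(i, lo, hi)

root = build(lambda b: int(sum(b) >= 2), 3)   # 3-variable majority function
print("root node:", root, "| internal nodes:", len(nodes))  # 4 internal nodes
```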
NASA Astrophysics Data System (ADS)
Willemse, Tim A. C.
We introduce the concept of consistent correlations for parameterised Boolean equation systems (PBESs), motivated largely by the laborious proofs of correctness required for most manipulations in this setting. Consistent correlations focus on relating the equations that occur in PBESs, rather than their solutions. For a fragment of PBESs, consistent correlations are shown to coincide with a recently introduced form of bisimulation. Finally, we show that bisimilarity on processes induces consistent correlations on PBESs encoding model checking problems. We apply our theory to two example manipulations from the literature.
A Parallel Approach in Computing Correlation Immunity up to Six Variables
2015-03-10
their nonlinearity is divisible by 4. Let CI(n, k) (respectively, BCI(n, k)) be the number of exact order-k correlation immune (respectively, further balanced) n-variable Boolean functions. The notations CI(n, k, d), BCI(n, k, d) restrict the previous count to degree-d Boolean functions. Theorem 3. The following are true: (i) BCI(n, n, 0) = 0, CI(n, n, 0) = 2, and CI(n, k, 1) = BCI(n, k, 1) = 2·C(n, k+1) for 0 ≤ k ≤ n − 1. (ii) BCI(n, n − 2) = 2…
On Weak and Strong 2k-bent Boolean Functions
2016-01-01
In this paper we introduce a sequence of discrete Fourier transforms and define new versions of bent… An important tool in our analysis is the discrete Fourier transform, known in the Boolean functions literature as the Walsh, Hadamard, or Walsh-Hadamard transform, which is the function W_f : F_2^n → C defined by W_f(u) = 2^(−n/2) Σ_{x∈V_n} (−1)^(f(x)⊕u·x). Any f ∈ B_n can be…
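As a quick companion to the definition above, this sketch computes the Walsh-Hadamard spectrum of the four-variable inner-product function by brute force and checks bentness (|W_f(u)| = 1 for every u when n is even):

```python
from itertools import product

n = 4
def f(x):
    """Inner-product function x0*x1 XOR x2*x3, a classical bent function."""
    return (x[0] & x[1]) ^ (x[2] & x[3])

def walsh(u):
    """W_f(u) = 2^(-n/2) * sum_x (-1)^(f(x) XOR u.x)."""
    total = 0
    for x in product((0, 1), repeat=n):
        dot = sum(ui & xi for ui, xi in zip(u, x)) % 2
        total += (-1) ** (f(x) ^ dot)
    return total / 2 ** (n / 2)

spectrum = [walsh(u) for u in product((0, 1), repeat=n)]
print("bent:", all(abs(w) == 1.0 for w in spectrum))   # True
```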
Qubits and quantum Hamiltonian computing performances for operating a digital Boolean 1/2-adder
NASA Astrophysics Data System (ADS)
Dridi, Ghassen; Faizy Namarvar, Omid; Joachim, Christian
2018-04-01
Quantum Boolean (1+1)-digit 1/2-adders are designed with 3 qubits for the quantum computing (Qubits) approach and 4 quantum states for the quantum Hamiltonian computing (QHC) approach. Detailed analytical solutions are provided to analyse the time operation of these different 1/2-adder gates. QHC is more robust to noise than Qubits and requires about the same amount of energy for running its 1/2-adder logical operations. QHC is faster in time than Qubits, but its logical output measurement takes longer.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions and cutset generation. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios
NASA Astrophysics Data System (ADS)
McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.
2008-12-01
Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
NASA Astrophysics Data System (ADS)
Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.
2014-12-01
Streamflow uncertainty estimation has recently received considerable attention in the literature. A dynamic rating curve assessment method has been introduced (Morlot et al., 2014). This dynamic method allows a rating curve to be computed for each gauging and a continuous streamflow time series to be produced, together with the associated streamflow uncertainties. The streamflow uncertainty takes into account many sources of uncertainty (water level, rating curve interpolation and extrapolation, gauging aging, etc.) and yields an estimated distribution of streamflow for each day. In order to characterise streamflow uncertainty, a probabilistic framework has been applied to a large sample of hydrometric stations of the Division Technique Générale (DTG) of Électricité de France (EDF) hydrometric network (>250 stations) in France. A reliability diagram (Wilks, 1995) has been constructed for some stations, based on the streamflow distribution estimated for a given day and compared to a real streamflow observation estimated via a gauging. To build a reliability diagram, we computed the probability of non-exceedance of an observed streamflow (gauging), given the streamflow distribution. The reliability diagram then allows us to check that the distribution of probabilities of non-exceedance of the gaugings follows a uniform law (i.e., quantiles should be equiprobable). Given the shape of the reliability diagram, the probabilistic calibration is characterised (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies where reliability diagrams have different statistical properties for different periods. Compared to our knowledge of the river bed morphology dynamics of these hydrometric stations, we show how the reliability diagram gives us invaluable information on river bed movements, such as continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes. Hence, careful analysis of reliability diagrams allows us to reconcile statistics and long-term river bed morphology processes. This knowledge improves our real-time management of hydrometric stations, given a better characterisation of erosion/sedimentation processes and the stability of each hydrometric station's hydraulic control.
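A compact sketch of the reliability-diagram computation described here: map each gauging through that day's estimated streamflow distribution to a probability of non-exceedance, then check the resulting values for uniformity. The log-normal daily distributions and all numbers are assumptions made for the illustration.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# hypothetical per-gauging distribution parameters and observed flows
mu = rng.normal(3.0, 0.3, size=200)   # log-mean of each day's streamflow estimate
sd = np.full(200, 0.25)               # log-standard deviation of each estimate
obs = np.exp(rng.normal(mu, sd))      # gauged flows from a well-calibrated model

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# probability of non-exceedance of each gauging under its distribution
pit = np.array([normal_cdf((np.log(q) - m) / s) for q, m, s in zip(obs, mu, sd)])

# a flat histogram indicates good calibration; U-shape = underdispersion,
# hump = overdispersion, monotone slope = bias
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print(hist)
```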
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
NASA Astrophysics Data System (ADS)
Bertin, Daniel
2017-02-01
An innovative 3-D numerical model for the dynamics of volcanic ballistic projectiles is presented here. The model focuses on ellipsoidal particles and improves previous approaches by considering a horizontal wind field, virtual mass forces, and drag forces subject to variable shape-dependent drag coefficients. Modeling suggests that the projectile's launch velocity and ejection angle are first-order parameters influencing ballistic trajectories. The projectile's density and minor radius are second-order factors, whereas both the intermediate and major radii of the projectile are of third order. Comparing output parameters, assuming different input data, highlights the importance of considering a horizontal wind field and variable shape-dependent drag coefficients in ballistic modeling, which suggests that they should be included in every ballistic model. On the other hand, virtual mass forces should be discarded since they contribute almost nothing to ballistic trajectories. Simulation results were used to constrain some crucial input parameters (launch velocity, ejection angle, wind speed, and wind azimuth) of the block that formed the biggest and most distal ballistic impact crater during the 1984-1993 eruptive cycle of Lascar volcano, Northern Chile. Subsequently, up to 10^6 simulations were performed, with nine ejection parameters defined by a Latin-hypercube sampling approach. Simulation results were summarized as a quantitative probabilistic hazard map for ballistic projectiles. Transects were also done in order to depict aerial hazard zones based on the same probabilistic procedure. Both maps combined can be used as a hazard prevention tool for ground and aerial transit near unresting volcanoes.
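The Latin-hypercube step used to draw the ejection parameters can be written in a few lines: split each dimension into n equal strata, draw one point per stratum, and shuffle the pairing between dimensions. A minimal sketch on the unit hypercube (scaling to physical parameter ranges is left out):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in d dimensions, one per stratum in every dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # one point per stratum
    for j in range(d):
        rng.shuffle(u[:, j])                              # decouple the dimensions
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(8, 2, rng)            # 8 samples in [0,1)^2
print(np.sort(np.floor(samples * 8), axis=0))   # each stratum 0..7 appears once per column
```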
A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie
Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate parameters of the multivariable copula. The optimal copula model is chosen based on the Bayesian information criterion (BIC) from each copula family. Finally, the best condition-based cp-WPRF model is determined by predictive interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
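A toy illustration of the copula idea behind the cp-WPRF model: couple two ramp features with a Gaussian copula and sample one feature conditional on the other. The marginals here are simple parametric stand-ins rather than the paper's GMM fits, and the correlation and distribution parameters are invented.

```python
import numpy as np
from scipy import stats

rho = 0.7                                        # assumed copula correlation
mag_dist = stats.lognorm(s=0.5, scale=50.0)      # ramp magnitude marginal (MW)
dur_dist = stats.gamma(a=2.0, scale=1.5)         # ramp duration marginal (h)

rng = np.random.default_rng(0)

def sample_duration_given_magnitude(mag, size):
    """Conditional sampling through the Gaussian copula."""
    z_mag = stats.norm.ppf(mag_dist.cdf(mag))    # map the observation to z-space
    # conditional normal: mean rho*z, variance 1 - rho^2
    z_dur = rng.normal(rho * z_mag, np.sqrt(1 - rho**2), size=size)
    return dur_dist.ppf(stats.norm.cdf(z_dur))   # map back through the marginal

print(sample_duration_given_magnitude(mag=120.0, size=5))
```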
Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations
Zhang, Yi; Ren, Jinchang; Jiang, Jianmin
2015-01-01
Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization based nonparametric method in this context. Recently, it is found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions. PMID:26089862
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a "lazy" dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(−1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
Depintor, Jidiene Dylese Presecatan; Bracher, Eduardo Sawaya Botelho; Cabral, Dayane Maia Costa; Eluf-Neto, José
2016-01-01
Chronic spinal pain, especially low-back pain and neck pain, is a leading cause of years of life with disability. The aim of the present study was to estimate the prevalence of chronic spinal pain among individuals aged 15 years or older and to identify the factors associated with it. Cross-sectional epidemiological study on a sample of the population of the city of São Paulo. Participants were selected using random probabilistic sampling and data were collected via face-to-face interviews. The Hospital Anxiety and Depression Scale (HADS), EuroQol-5D, Alcohol Use Disorders Identification Test (AUDIT), Fagerström test for nicotine dependence and Brazilian economic classification criteria were used. A total of 826 participants were interviewed. The estimated prevalence of chronic spinal pain was 22% (95% confidence interval, CI: 19.3-25.0%). The factors independently associated with chronic spinal pain were: female sex, age 30 years or older, schooling level of four years or less, symptoms compatible with anxiety and high physical exertion during the main occupation. Quality of life and self-rated health scores were significantly worse among individuals with chronic spinal pain. The prevalence of chronic spinal pain in this segment of the population of São Paulo was 22.0%. The factors independently associated with chronic pain were: female sex, age 30 years or older, low education, symptoms compatible with anxiety and physical exertion during the main occupation.
Modeling and simulation of reliability of unmanned intelligent vehicles
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Dixit, Arati M.; Mustapha, Adam; Singh, Kuldip; Aggarwal, K. K.; Gerhart, Grant R.
2008-04-01
Unmanned ground vehicles have a large number of scientific, military and commercial applications. A convoy of such vehicles can exhibit collaboration and coordination. For the movement of such a convoy, it is important to predict the reliability of the system. A number of approaches are available in the literature that describe techniques for determining the reliability of a system. Graph-theoretic approaches are popular for determining terminal reliability and system reliability. In this paper we propose to exploit Fuzzy and Neuro-Fuzzy approaches for predicting the node and branch reliabilities of the system, while Boolean algebra approaches are used to determine terminal reliability and system reliability. Hence a combination of intelligent approaches (Fuzzy, Neuro-Fuzzy) and Boolean approaches is used to predict the overall system reliability of a convoy of vehicles. The node reliabilities may correspond to the collaboration of vehicles, while the branch reliabilities determine the terminal reliabilities between different nodes. An algorithm is proposed for determining the system reliabilities of a convoy of vehicles. Simulation of the overall system is proposed. Such simulation should be helpful to the commander in taking appropriate action depending on the predicted reliability in different terrain and environmental conditions. It is hoped that the results of this paper will lead to further techniques for maintaining a reliable convoy of vehicles in a battlefield.
Probabilistic Metrology Attains Macroscopic Cloning of Quantum Clocks
NASA Astrophysics Data System (ADS)
Gendra, B.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.; Chiribella, G.
2014-12-01
It has recently been shown that probabilistic protocols based on postselection boost the performance of quantum clock replication and phase estimation. Here we demonstrate that the improvements in these two tasks have to match exactly in the macroscopic limit where the number of clones grows to infinity, preserving the equivalence between asymptotic cloning and state estimation for arbitrary values of the success probability. Remarkably, the cloning fidelity depends critically on the number of rationally independent eigenvalues of the clock Hamiltonian. We also prove that probabilistic metrology can simulate cloning in the macroscopic limit for arbitrary sets of states when the performance of the simulation is measured by testing small groups of clones.
DISCOUNTING OF DELAYED AND PROBABILISTIC LOSSES OVER A WIDE RANGE OF AMOUNTS
Green, Leonard; Myerson, Joel; Oliveira, Luís; Chang, Seo Eun
2014-01-01
The present study examined delay and probability discounting of hypothetical monetary losses over a wide range of amounts (from $20 to $500,000) in order to determine how amount affects the parameters of the hyperboloid discounting function. In separate conditions, college students chose between immediate payments and larger, delayed payments and between certain payments and larger, probabilistic payments. The hyperboloid function accurately described both types of discounting, and amount of loss had little or no systematic effect on the degree of discounting. Importantly, the amount of loss also had little systematic effect on either the rate parameter or the exponent of the delay and probability discounting functions. The finding that the parameters of the hyperboloid function remain relatively constant across a wide range of amounts of delayed and probabilistic loss stands in contrast to the robust amount effects observed with delayed and probabilistic rewards. At the individual level, the degree to which delayed losses were discounted was uncorrelated with the degree to which probabilistic losses were discounted, and delay and probability loaded on two separate factors, similar to what is observed with delayed and probabilistic rewards. Taken together, these findings argue that although delay and probability discounting involve fundamentally different decision-making mechanisms, nevertheless the discounting of delayed and probabilistic losses share an insensitivity to amount that distinguishes it from the discounting of delayed and probabilistic gains. PMID:24745086
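The hyperboloid function referred to above has the form V = A/(1 + kX)^s, where X is the delay or the odds against receiving the outcome, k is the rate parameter, and s is the exponent. The following sketch fits k and s to invented indifference points by a crude grid search; it is an illustration of the functional form, not the study's analysis.

```python
import numpy as np

A = 100.0                                               # nominal amount
delays = np.array([7, 30, 90, 180, 365], dtype=float)   # days
indiff = np.array([80, 62, 45, 33, 22], dtype=float)    # hypothetical indifference points

def hyperboloid(X, k, s):
    """Green & Myerson-style hyperboloid discounting: V = A / (1 + k*X)**s."""
    return A / (1.0 + k * X) ** s

# brute-force least-squares grid search over (k, s)
best = min(
    ((k, s) for k in np.logspace(-4, 0, 200) for s in np.linspace(0.2, 2.0, 50)),
    key=lambda p: np.sum((hyperboloid(delays, *p) - indiff) ** 2),
)
print("fitted k=%.4f  s=%.2f" % best)
```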
The Emergence of Probabilistic Reasoning in Very Young Infants: Evidence from 4.5- and 6-Month-Olds
ERIC Educational Resources Information Center
Denison, Stephanie; Reed, Christie; Xu, Fei
2013-01-01
How do people make rich inferences from such sparse data? Recent research has explored this inferential ability by investigating probabilistic reasoning in infancy. For example, 8- and 11-month-old infants can make inferences from samples to populations and vice versa (Denison & Xu, 2010a; Xu & Denison, 2009; Xu & Garcia, 2008a). The…
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure over all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
Logic circuits from zero forcing.
Burgarth, Daniel; Giovannetti, Vittorio; Hogben, Leslie; Severini, Simone; Young, Michael
We design logic circuits based on the notion of zero forcing on graphs; each gate of the circuits is a gadget in which zero forcing is performed. We show that such circuits can evaluate every monotone Boolean function. By using two vertices to encode each logical bit, we obtain universal computation. We also highlight a phenomenon of "back forcing" as a property of each function. Such a phenomenon occurs in a circuit when the input of gates that have already been used at a given time step is further modified by a computation performed at a later stage. Finally, we show that zero forcing can also be used to implement reversible computation. The model introduced here provides a potentially new tool in the analysis of Boolean functions, with particular attention to monotonicity. Moreover, in light of applications of zero forcing in quantum mechanics, the link with Boolean functions may suggest new directions in quantum control theory and in the study of engineered quantum spin systems. It is an open technical problem to verify whether there is a link between zero forcing and computation with contact circuits.
Boolean dynamics of genetic regulatory networks inferred from microarray time series data
Martin, Shawn; Zhang, Zhaoduo; Martino, Anthony; ...
2007-01-31
Methods available for the inference of genetic regulatory networks strive to produce a single network, usually by optimizing some quantity to fit the experimental observations. In this paper we investigate the possibility that multiple networks can be inferred, all resulting in similar dynamics. This idea is motivated by theoretical work which suggests that biological networks are robust and adaptable to change, and that the overall behavior of a genetic regulatory network might be captured in terms of dynamical basins of attraction. We have developed and implemented a method for inferring genetic regulatory networks from time series microarray data. Our method first clusters and discretizes the gene expression data using k-means and support vector regression. We then enumerate Boolean activation–inhibition networks to match the discretized data. Finally, the dynamics of the Boolean networks are examined. We have tested our method on two immunology microarray datasets: an IL-2-stimulated T cell response dataset and a LPS-stimulated macrophage response dataset. In both cases, we discovered that many networks matched the data, and that most of these networks had similar dynamics.
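A minimal sketch (not the authors' code) of the two-step idea just described: binarize each gene's time series by 2-means clustering, then enumerate small activation–inhibition rules consistent with the observed transitions. The rule form (any activator on and no inhibitor on) and the omission of the support-vector-regression smoothing step are simplifying assumptions.

import itertools
import numpy as np
from sklearn.cluster import KMeans

def discretize(expr):
    # Binarize each gene's time series with 2-means: the higher-mean
    # cluster is taken as the 'on' state.
    binary = np.zeros_like(expr, dtype=int)
    for g in range(expr.shape[0]):
        km = KMeans(n_clusters=2, n_init=10).fit(expr[g].reshape(-1, 1))
        on = np.argmax(km.cluster_centers_.ravel())
        binary[g] = (km.labels_ == on).astype(int)
    return binary

def consistent_rules(binary, target, max_inputs=2):
    # Enumerate (activators, inhibitors) pairs reproducing every observed
    # transition of the target gene: next state = (any activator on) and
    # (no inhibitor on).
    genes, T = range(binary.shape[0]), binary.shape[1]
    rules = []
    for k in range(1, max_inputs + 1):
        for inputs in itertools.combinations(genes, k):
            for signs in itertools.product((1, -1), repeat=k):
                act = [g for g, s in zip(inputs, signs) if s > 0]
                inh = [g for g, s in zip(inputs, signs) if s < 0]
                ok = all(
                    binary[target, t + 1] ==
                    int((any(binary[a, t] for a in act) if act else True)
                        and not any(binary[i, t] for i in inh))
                    for t in range(T - 1)
                )
                if ok:
                    rules.append((act, inh))
    return rules

Collecting consistent_rules for every target gene yields the many equally consistent networks whose dynamics the paper then compares.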
What do we gain with Probabilistic Flood Loss Models?
NASA Astrophysics Data System (ADS)
Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.
2015-12-01
The reliability of flood loss models is a prerequisite for their practical usefulness. Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising means to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, with traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the predictive interval bounded by the 5%- and 95%-quantiles. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
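Under the metric definitions just described (my reading of them: the reliability band is the central interval between the 5% and 95% predictive quantiles), the evaluation step can be sketched as follows; names and array shapes are illustrative assumptions.

import numpy as np

def evaluate(y_obs, y_pred_samples):
    # y_obs: (n,) observed relative damage per building;
    # y_pred_samples: (n, m) posterior predictive draws per building.
    y_mean = y_pred_samples.mean(axis=1)
    bias = np.mean(y_mean - y_obs)                     # systematic deviation
    mae = np.mean(np.abs(y_mean - y_obs))              # precision
    lo = np.quantile(y_pred_samples, 0.05, axis=1)
    hi = np.quantile(y_pred_samples, 0.95, axis=1)
    coverage = np.mean((y_obs >= lo) & (y_obs <= hi))  # reliability
    return bias, mae, coverage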
ERIC Educational Resources Information Center
Denison, Stephanie; Trikutam, Pallavi; Xu, Fei
2014-01-01
A rich tradition in developmental psychology explores physical reasoning in infancy. However, no research to date has investigated whether infants can reason about physical objects that behave probabilistically, rather than deterministically. Physical events are often quite variable, in that similar-looking objects can be placed in similar…
Serrano-Ortega, Natalia; Frías-Osuna, Antonio; Recio-Gómez, Juan M; Del-Pino-Casado, Rafael
2015-11-01
To develop and validate a scale to measure caregiving dedication regarding activities of daily living in caregivers of dependent older people. Cross-sectional study. Primary Health Care (Andalusia, Spain). A probabilistic sample of 200 caregivers of older relatives from Córdoba, Spain. Content validation by experts, construct validity (by exploratory factor analysis), divergent validity and reliability (internal consistency, test-retest reliability and inter-observer reliability). Cronbach's alpha was 0.86. The intraclass correlation coefficient was 0.96 for test-retest reliability and 0.88 for inter-observer reliability. When the sample was divided into two groups according to perceived burden level (presence and absence), perceived burden differed significantly between the groups (P=.001). The factor analysis revealed only one factor, which explained 64% of the variance. The scale allows a suitable measure of caregiving dedication regarding activities of daily living in caregivers of older people: it allows quick, easy administration, is well accepted by caregivers, has acceptable psychometric results and includes the frequency of caregiving, the kind of attended need and the dependence level in each need. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.
Bell violation using entangled photons without the fair-sampling assumption.
Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton
2013-05-09
The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.
Emergence of spontaneous anticipatory hand movements in a probabilistic environment
Bruhn, Pernille
2013-01-01
In this article, we present a novel experimental approach to the study of anticipation in probabilistic cuing. We implemented a modified spatial cuing task in which participants made an anticipatory hand movement toward one of two probabilistic targets while the (x, y)-computer mouse coordinates of their hand movements were sampled. This approach allowed us to tap into anticipatory processes as they occurred, rather than just measuring their behavioral outcome through reaction time to the target. In different conditions, we varied the participants’ degree of certainty of the upcoming target position with probabilistic pre-cues. We found that participants initiated spontaneous anticipatory hand movements in all conditions, even when they had no information on the position of the upcoming target. However, participants’ hand position immediately before the target was affected by the degree of certainty concerning the target’s position. This modulation of anticipatory hand movements emerged rapidly in most participants as they encountered a constant probabilistic relation between a cue and an upcoming target position over the course of the experiment. Finally, we found individual differences in the way anticipatory behavior was modulated with an uncertain/neutral cue. Implications of these findings for probabilistic spatial cuing are discussed. PMID:23833694
Probabilistic Polling And Voting In The 2008 Presidential Election
Delavande, Adeline; Manski, Charles F.
2010-01-01
This article reports new empirical evidence on probabilistic polling, which asks persons to state in percent-chance terms the likelihood that they will vote and for whom. Before the 2008 presidential election, seven waves of probabilistic questions were administered biweekly to participants in the American Life Panel (ALP). Actual voting behavior was reported after the election. We find that responses to the verbal and probabilistic questions are well-aligned ordinally. Moreover, the probabilistic responses predict voting behavior beyond what is possible using verbal responses alone. The probabilistic responses have more predictive power in early August, and the verbal responses have more power in late October. However, throughout the sample period, one can predict voting behavior better using both types of responses than either one alone. Studying the longitudinal pattern of responses, we segment respondents into those who are consistently pro-Obama, consistently anti-Obama, and undecided/vacillators. Membership in the consistently pro- or anti-Obama group is an almost perfect predictor of actual voting behavior, while the undecided/vacillators group has more nuanced voting behavior. We find that treating the ALP as a panel improves predictive power: current and previous polling responses together provide more predictive power than do current responses alone. PMID:24683275
Term Dependence: Truncating the Bahadur Lazarsfeld Expansion.
ERIC Educational Resources Information Center
Losee, Robert M., Jr.
1994-01-01
Studies the performance of probabilistic information retrieval systems using differing statistical dependence assumptions when estimating the probabilities inherent in the retrieval model. Experimental results using the Bahadur Lazarsfeld expansion on the Cystic Fibrosis database are discussed that suggest that incorporating term dependence…
NASA Astrophysics Data System (ADS)
Gong, Weiwei; Zhou, Xu
2017-06-01
In computer science, the Boolean Satisfiability Problem (SAT) is the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. SAT was one of the first problems proven to be NP-complete, and it is fundamental to artificial intelligence, algorithm design and hardware design. This paper reviews the main SAT-solver algorithms of recent years, including serial SAT algorithms, parallel SAT algorithms, SAT algorithms based on GPUs, and SAT algorithms based on FPGAs. The development of SAT solving is analyzed comprehensively, and finally several possible directions for future work on the SAT problem are proposed.
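To make the problem concrete, here is a minimal DPLL-style serial solver of the sort the surveyed serial algorithms refine with learning and heuristics (the clause encoding as DIMACS-style signed integers is an assumption of this sketch):

def dpll(clauses, assignment=None):
    # clauses: list of lists of nonzero ints; negative = negated variable.
    assignment = dict(assignment or {})
    changed = True
    while changed:                       # unit propagation
        changed = False
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                 # clause already satisfied
            undecided = [l for l in c if abs(l) not in assignment]
            if not undecided:
                return None              # clause falsified: backtrack
            if len(undecided) == 1:
                l = undecided[0]
                assignment[abs(l)] = l > 0
                changed = True
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment                # all variables decided: satisfiable
    v = min(free)                        # naive branching heuristic
    for val in (True, False):
        result = dpll(clauses, {**assignment, v: val})
        if result is not None:
            return result
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(dpll([[1, -2], [2, 3], [-1, -3]]))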
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Jupp, David L. B.
1990-01-01
Geometric-optical discrete-element mathematical models for forest canopies have been developed using the Boolean logic and models of Serra. The geometric-optical approach is considered to be particularly well suited to describing the bidirectional reflectance of forest woodland canopies, where the concentration of leaf material within crowns and the resulting between-tree gaps make plane-parallel, radiative-transfer models inappropriate. The approach leads to invertible formulations, in which the spatial and directional variance provides the means for remote estimation of tree crown size, shape, and total cover from remotely sensed imagery.
A Parallel Approach in Computing Correlation Immunity up to Six Variables
2015-07-24
nonlinearity is divisible by 4. Let $CI(n,k)$ (respectively, $BCI(n,k)$) be the number of exactly order-$k$ correlation immune (respectively, furthermore balanced) $n$-variable Boolean functions. The notations $CI(n,k,d)$, $BCI(n,k,d)$ restrict the previous counts to degree-$d$ Boolean functions. Theorem 3. The following are true: (i) $BCI(n,n,0)=0$, $CI(n,n,0)=2$, $CI(n,k,1)=BCI(n,k,1)=2\binom{n}{k+1}$ for $0 \le k \le n-1$. (ii) $BCI(n,n-2)=2\binom{n}{n-1}$ …
Certification of ICI 1012 optical data storage tape
NASA Technical Reports Server (NTRS)
Howell, J. M.
1993-01-01
ICI has developed a unique and novel method of certifying a Terabyte optical tape. The tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors, called the Corrected Byte Error Rate, or CBER. We developed this probabilistic method because the error rate cannot be measured directly, for two reasons. First, written data are indelible, so one cannot employ write/read tests such as those used for magnetic tape. Second, the anticipated error rates would require impractically large samples to measure accurately; for example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data are retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
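The abstract does not state the mathematics, but a standard construction for a statistical upper limit of this kind is the one-sided binomial confidence bound; the sketch below assumes that reading. If n bytes are sampled and zero errors are observed, the upper bound at confidence level C on the byte error probability p solves (1 - p)^n = 1 - C, i.e. p_hi = 1 - (1 - C)^(1/n), roughly 3/n at 95% confidence (the "rule of three").

def upper_bound_zero_errors(n_bytes, confidence=0.95):
    # Largest error probability consistent with observing zero errors
    # in n_bytes independent trials, at the given one-sided confidence.
    return 1.0 - (1.0 - confidence) ** (1.0 / n_bytes)

print(upper_bound_zero_errors(10 ** 9))   # ~3.0e-09 for a gigabyte sample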
Quantum-like Modeling of Cognition
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2015-09-01
This paper begins with a historical review of the mutual influence of physics and psychology, from Freud's invention of psychic energy, inspired by Boltzmann's thermodynamics, to the enrichment that quantum physics gained from psychology through the notion of complementarity (an invention of Niels Bohr, who was inspired by William James); we also consider the resonance of the correspondence between Wolfgang Pauli and Carl Jung in both physics and psychology. We then turn to the problem of developing mathematical models for laws of thought, starting with Boolean logic and progressing towards the foundations of classical probability theory. Interestingly, the laws of classical logic and probability are routinely violated not only by quantum statistical phenomena but by cognitive phenomena as well. This is yet another common feature between quantum physics and psychology. In particular, cognitive data can exhibit a kind of probabilistic interference effect. This similarity with quantum physics convinced a multi-disciplinary group of scientists (physicists, psychologists, economists, sociologists) to apply the mathematical apparatus of quantum mechanics to the modeling of cognition. We illustrate this activity by considering a few concrete phenomena: the order and disjunction effects, recognition of ambiguous figures, and categorization-decision making. In Appendix 1 we briefly present the essentials of the theory of contextual probability and a method of representing contextual probabilities by complex probability amplitudes (a solution of the "inverse Born's problem") based on a quantum-like representation algorithm (QLRA).
Sailem, Heba; Bousgouni, Vicky; Cooper, Sam; Bakal, Chris
2014-01-22
One goal of cell biology is to understand how cells adopt different shapes in response to varying environmental and cellular conditions. Achieving a comprehensive understanding of the relationship between cell shape and environment requires a systems-level understanding of the signalling networks that respond to external cues and regulate the cytoskeleton. Classical biochemical and genetic approaches have identified thousands of individual components that contribute to cell shape, but it remains difficult to predict how cell shape is generated by the activity of these components using bottom-up approaches because of the complex nature of their interactions in space and time. Here, we describe the regulation of cellular shape by signalling systems using a top-down approach. We first exploit the shape diversity generated by systematic RNAi screening and comprehensively define the shape space a migratory cell explores. We suggest a simple Boolean model involving the activation of Rac and Rho GTPases in two compartments to explain the basis for all cell shapes in the dataset. Critically, we also generate a probabilistic graphical model to show how cells explore this space in a deterministic, rather than a stochastic, fashion. We validate the predictions made by our model using live-cell imaging. Our work explains how cross-talk between Rho and Rac can generate different cell shapes, and thus morphological heterogeneity, in genetically identical populations.
Residential water demand with endogenous pricing: The Canadian Case
NASA Astrophysics Data System (ADS)
Reynaud, Arnaud; Renzetti, Steven; Villeneuve, Michel
2005-11-01
In this paper, we show that the rate structure endogeneity may result in a misspecification of the residential water demand function. We propose to solve this endogeneity problem by estimating a probabilistic model describing how water rates are chosen by local communities. This model is estimated on a sample of Canadian local communities. We first show that the pricing structure choice reflects efficiency considerations, equity concerns, and, in some cases, a strategy of price discrimination across consumers by Canadian communities. Hence estimating the residential water demand without taking into account the pricing structures' endogeneity leads to a biased estimation of price and income elasticities. We also demonstrate that the pricing structure per se plays a significant role in influencing price responsiveness of Canadian residential consumers.
A generative, probabilistic model of local protein structure.
Boomsma, Wouter; Mardia, Kanti V; Taylor, Charles C; Ferkinghoff-Borg, Jesper; Krogh, Anders; Hamelryck, Thomas
2008-07-01
Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state. Our method represents a significant theoretical and practical improvement over the widely used fragment assembly technique by avoiding the drawbacks associated with a discrete and nonprobabilistic approach.
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
Pearce, Marcus T
2018-05-11
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception (expectation, emotion, memory, similarity, segmentation, and meter) can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
Wood, K.V.; Nichols, J.D.; Percival, H.F.; Hines, J.E.
1998-01-01
During 1991-1993, we conducted capture-recapture studies on pig frogs, Rana grylio, in seven study locations in northcentral Florida. Resulting data were used to test hypotheses about variation in survival probability over different size-sex classes of pig frogs. We developed multistate capture-recapture models for the resulting data and used them to estimate survival rates and frog abundance. Tests provided strong evidence of survival differences among size-sex classes, with adult females showing the highest survival probabilities. Adult males and juvenile frogs had lower survival rates that were similar to each other. Adult females were more abundant than adult males in most locations at most sampling occasions. We recommended probabilistic capture-recapture models in general, and multistate models in particular, for robust estimation of demographic parameters in amphibian populations.
Effects of delay and probability combinations on discounting in humans
Cox, David J.; Dallery, Jesse
2017-01-01
To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073
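To make the bisection mechanics concrete, here is a minimal sketch of a 5-trial adjusting-delay task (the delay ladder, helper names, and the half-amount framing are illustrative assumptions, not the published procedure):

def five_trial_ed50(choose_delayed, delays):
    # delays: sorted candidate delays (e.g., in days); choose_delayed(d) is
    # True if the participant prefers the full amount after delay d over
    # half the amount immediately. Each choice bisects the ladder.
    lo, hi = 0, len(delays) - 1
    idx = (lo + hi) // 2
    for _ in range(5):
        if choose_delayed(delays[idx]):
            lo = min(idx + 1, hi)    # still prefers delayed: longer delay
        else:
            hi = max(idx - 1, lo)    # switched to immediate: shorter delay
        idx = (lo + hi) // 2
    return delays[idx]

# A participant valuing money as V = A/(1 + kD) with k = 0.002/day has an
# ED50 of 1/k = 500 days; on this coarse ladder the procedure returns 256,
# the longest delay at which the delayed option is still preferred.
delays = [2 ** i for i in range(1, 12)]          # 2..2048 days
print(five_trial_ed50(lambda d: 1000 / (1 + 0.002 * d) > 500, delays))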
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
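A schematic Monte Carlo loop in the spirit of the PAIR description (every distribution, the damage-area scaling, and the population model below are simplified placeholders, not the model's actual terms; atmospheric entry and fragmentation are omitted):

import numpy as np

rng = np.random.default_rng(0)
N = 100_000
diam_m = rng.pareto(1.6, N) * 20 + 10            # asteroid diameter, m (assumed law)
density = rng.uniform(1500, 3500, N)             # bulk density, kg/m^3
speed = rng.normal(20e3, 4e3, N)                 # entry speed, m/s
mass = density * (np.pi / 6) * diam_m ** 3
energy_mt = 0.5 * mass * speed ** 2 / 4.184e15   # kinetic energy, Mt TNT
damage_km2 = 30.0 * energy_mt ** (2 / 3)         # toy damage-area scaling
pop_density = rng.lognormal(1.0, 1.5, N)         # persons/km^2 at impact site
affected = damage_km2 * pop_density
# Aggregate, e.g., smallest diameter whose scenarios exceed a tolerance:
tol = 1e4
print(diam_m[affected > tol].min() if (affected > tol).any() else None)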
Incorporating psychological influences in probabilistic cost analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kujawski, Edouard; Alvaro, Mariana; Edwards, William
2004-01-08
Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world "Money Allocated Is Money Spent" (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. Psychological influences such as overconfidence in assessing uncertainties and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies. Given the scope and magnitude of the cost-overrun problem, the benefits are likely to be significant.
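A sketch of the kind of simulation described, under my reading of the text (the Gaussian copula for the correlation structure, the MAIMS treatment as a per-element floor at the allocated budget, and all numbers are assumptions):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
loc = np.array([1.0, 2.0, 1.5])      # three-parameter Weibull: location ($M)
scale = np.array([0.8, 1.2, 0.9])    # scale ($M)
shape = np.array([1.5, 2.0, 1.8])    # shape
budget = np.array([1.6, 3.0, 2.2])   # allocated baseline budgets ($M)
rho, n, k = 0.5, 100_000, 3          # common correlation, draws, elements
cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)
z = rng.multivariate_normal(np.zeros(k), cov, size=n)
u = norm.cdf(z)                                       # Gaussian copula
cost = loc + scale * (-np.log1p(-u)) ** (1 / shape)   # Weibull inverse CDF
task_cost = np.maximum(cost, budget)   # MAIMS: underruns are spent anyway
total = task_cost.sum(axis=1)
print(total.mean(), np.quantile(total, 0.8))   # mean and 80th-percentile cost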
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2018-04-01
We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved in an optimization framework while still providing fully probabilistic results. Our mathematical model consists of seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.
Probabilistic population aging
2017-01-01
We merge two methodologies, prospective measures of population aging and probabilistic population forecasts. We compare the speed of change and variability in forecasts of the old age dependency ratio and the prospective old age dependency ratio as well as the same comparison for the median age and the prospective median age. While conventional measures of population aging are computed on the basis of the number of years people have already lived, prospective measures are computed also taking account of the expected number of years they have left to live. Those remaining life expectancies change over time and differ from place to place. We compare the probabilistic distributions of the conventional and prospective measures using examples from China, Germany, Iran, and the United States. The changes over time and the variability of the prospective indicators are smaller than those that are observed in the conventional ones. A wide variety of new results emerge from the combination of methodologies. For example, for Germany, Iran, and the United States the likelihood that the prospective median age of the population in 2098 will be lower than it is today is close to 100 percent. PMID:28636675
Modeling and controlling the two-phase dynamics of the p53 network: a Boolean network approach
NASA Astrophysics Data System (ADS)
Lin, Guo-Qiang; Ao, Bin; Chen, Jia-Wei; Wang, Wen-Xu; Di, Zeng-Ru
2014-12-01
Although much empirical evidence has demonstrated that p53 plays a key role in tumor suppression, the dynamics and function of the regulatory network centered on p53 have not yet been fully understood. Here, we develop a Boolean network model to reproduce the two-phase dynamics of the p53 network in response to DNA damage. In particular, we map the fates of cells into two types of Boolean attractors, and we find that the apoptosis attractor does not exist for minor DNA damage, reflecting that the cell is reparable. As the amount of DNA damage increases, the basin of the repair attractor shrinks, accompanied by the rising of the apoptosis attractor and the expansion of its basin, indicating that the cell becomes more irreparable with more DNA damage. For severe DNA damage, the repair attractor vanishes, and the apoptosis attractor dominates the state space, accounting for the exclusive fate of death. Based on the Boolean network model, we explore the significance of links, in terms of the sensitivity of the two-phase dynamics, to perturbing the weights of links and removing them. We find that the links are either critical or ordinary, rather than redundant. This implies that the p53 network is irreducible, but tolerant of small mutations at some ordinary links, and this can be interpreted with evolutionary theory. We further devised practical control schemes for steering the system into the apoptosis attractor in the presence of DNA damage by pinning the state of a single node or perturbing the weight of a single link. Our approach offers insights into understanding and controlling the p53 network, which is of paramount importance for medical treatment and genetic engineering.
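The style of analysis described, sweeping the synchronous Boolean state space to find attractors and measure their basins, can be illustrated with a toy three-node network (the update rules below are invented stand-ins, not the paper's p53 wiring):

from itertools import product

def step(state):
    a, b, c = state            # hypothetical rules, e.g. c represses a
    return (int(not c), a, int(a and not b))

def attractor_of(state):
    seen = []
    while state not in seen:   # follow the trajectory until it revisits
        seen.append(state)
        state = step(state)
    cycle = seen[seen.index(state):]
    return tuple(sorted(cycle))   # canonical label for the attractor

basins = {}
for s in product((0, 1), repeat=3):
    basins.setdefault(attractor_of(s), []).append(s)
for att, basin in basins.items():
    print("attractor", att, "basin size", len(basin))

In the paper's setting, the analogous sweep is repeated as the DNA-damage input varies, tracking how the repair and apoptosis basins grow and shrink.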
Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J
2009-01-01
Background: The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results: In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODE). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time-courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one. Conclusion: The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
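A compact sketch of the core construction described, multilinear interpolation of a Boolean update rule over the unit cube, with each species relaxing toward its interpolated target (the uniform time scale tau and the function names are assumed simplifications):

from itertools import product
import numpy as np

def multilinear(B, x):
    # Continuous extension: Bbar(x) = sum over corners c of B(c) *
    # prod_i x_i**c_i * (1 - x_i)**(1 - c_i).
    total = 0.0
    for corner in product((0, 1), repeat=len(x)):
        w = np.prod([xi if ci else 1 - xi for xi, ci in zip(x, corner)])
        total += B(*corner) * w
    return total

# Example: OR interpolates to x1 + x2 - x1*x2.
B_or = lambda x1, x2: int(x1 or x2)
print(multilinear(B_or, [0.3, 0.5]))   # 0.3 + 0.5 - 0.15 = 0.65

def rhs(x, rules, tau=1.0):
    # ODE right-hand side of the transformed model: each component
    # relaxes toward its interpolated Boolean target.
    return np.array([(multilinear(B, x) - xi) / tau
                     for B, xi in zip(rules, x)])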
Solving satisfiability problems using a novel microarray-based DNA computer.
Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning
2007-01-01
An algorithm based on a modified sticker model, together with an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause per step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-application equipment are required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing process, so the proposed method should be useful in dealing with large-scale problems.
A novel probabilistic framework for event-based speech recognition
NASA Astrophysics Data System (ADS)
Juneja, Amit; Espy-Wilson, Carol
2003-10-01
One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features-syllabic, sonorant and continuant-and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on a Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen
2009-03-01
The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and at an associated "modified-random" site on each stream, accessed via the road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites, evaluated against reference-condition thresholds, indicate that high proportions of the random sites (and, by inference, the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.
230Th/U ages Supporting Hanford Site-Wide Probabilistic Seismic Hazard Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paces, James B.
This product represents a USGS Administrative Report that discusses samples and methods used to conduct uranium-series isotope analyses and resulting ages and initial 234U/238U activity ratios of pedogenic cements developed in several different surfaces in the Hanford area middle to late Pleistocene. Samples were collected and dated to provide calibration of soil development in surface deposits that are being used in the Hanford Site-Wide probabilistic seismic hazard analysis conducted by AMEC. The report includes description of sample locations and physical characteristics, sample preparation, chemical processing and mass spectrometry, analytical results, and calculated ages for individual sites. Ages of innermost rinds on a number of samples from five sites in eastern Washington are consistent with a range of minimum depositional ages from 17 ka for cataclysmic flood deposits to greater than 500 ka for alluvium at several sites.
NASA Astrophysics Data System (ADS)
Furbish, D. J.; Roering, J. J.
2013-12-01
Recent discussions of local versus nonlocal sediment transport on hillslopes offer a lens for considering uncertainty in formulations of transport rates that are aimed at characterizing patchy, intermittent sediment motions in steeplands. Here we describe a general formulation for transport that is based on a convolution integral of the factors controlling the entrainment and disentrainment of sediment particles on a hillslope. In essence, such a formulation represents a 'flux' version of the Master equation, a general probabilistic (kinematic) formulation of mass conservation. As such, with the relevant physics invoked to represent entrainment and disentrainment, a nonlocal formulation quite happily accommodates local transport (and looks/behaves like a local formulation), as well as nonlocal transport, depending on the characteristic length scale of particle motions relative to the length scale at which the factors controlling particle transport are defined or measured. Nonetheless, nonlocal formulations of the sediment flux have mostly (but not entirely) outpaced experimental and field-based observations needed to inform the theory. At risk is bringing to bear a sophisticated mathematics that is not supported by our uncertain understanding of the processes involved. Experimental and field-based measurements of entrainment rates and particle travel distances are difficult to obtain, notably given the intermittency of many hillslope transport processes and the slow rates of change in hillslope morphology. A 'test' of a specific nonlocal formulation applied to hillslope evolution must therefore in part rest on consistency between measured hillslope configurations and predicted (i.e., modeled) hillslope configurations predicated on the proposed nonlocal formulation, assuming sufficient knowledge of initial and boundary conditions. On the other hand, because of its probabilistic basis, the formulation is in principle well suited to the task of describing transport relevant to geomorphic timescales -- in view of the stochastic nature of the transport processes occurring over these timescales and the uncertainty of our understanding of the physics involved. Moreover, in its basic form, the nonlocal formulation of the sediment flux is such that appropriate physics can be readily embedded within it as we learn more. And, the formulation is space-time averaged in a way that accommodates discontinuous (patchy, intermittent) sediment motions.
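The convolution itself is not written out in the abstract; a common form of such a formulation in the nonlocal hillslope transport literature (a paraphrase offered under that assumption, not a quotation from this work) is

q(x) = \int_0^\infty E(x - r) \, R(r; x - r) \, dr,

where q(x) is the sediment flux at position x, E is the volumetric entrainment rate, and R(r; x') is the probability that a particle entrained at x' travels a distance of at least r. Local, slope-dependent flux laws are recovered when R decays over distances short compared with the scale on which the controlling factors vary, which is the local/nonlocal distinction the abstract draws.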
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
Prescribed burning impact on forest soil properties--a Fuzzy Boolean Nets approach.
Castro, Ana C Meira; Paulo Carvalho, Joao; Ribeiro, S
2011-02-01
The Portuguese northern forests are often and severely affected by wildfires during the summer season. These occurrences significantly and negatively impact all ecosystems, namely soil, fauna and flora. In order to reduce the occurrence of natural wildfires, measures to control the availability of fuel mass are regularly implemented, mainly prescribed burnings and vegetation pruning. This work reports on the impact of a prescribed burning on several forest soil properties, namely pH, soil moisture, organic matter content and iron content, by monitoring the soil's self-recovery capabilities over a one-year span. The experiments were carried out in soil cover over a natural site of Andaluzitic schist in Gramelas, Caminha, Portugal, which had been kept free of prescribed burnings for a period of four years. Soil samples were collected from five plots at three different layers (0-3, 3-6 and 6-18) one day before the prescribed fire and at regular intervals afterwards. This paper presents an approach in which Fuzzy Boolean Nets (FBN) and fuzzy reasoning are used to extract qualitative knowledge regarding the effect of prescribed fire on soil properties. FBN were chosen due to the scarcity of available quantitative data. The results showed that soil properties were affected by the prescribed burning practice and were unable to recover their initial values within one year. Copyright © 2010 Elsevier Inc. All rights reserved.
On some methods for assessing earthquake predictions
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.; Peresan, A.
2017-09-01
A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the PG alarm-based version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.
Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses
Boyd, Oliver S.
2012-01-01
Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has a recurrence time of the mainshock; and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.
1983-09-01
…al. (1981) was conducted on the Copper City No. 2 tailings embankment dam near Miami, Arizona. Due to the extreme topographic relief in the area of the… mode of behavior and scale. This dependency is summarized in the factor R. For example, circular shear instability as in a copper porphyry slope…
Propagation of the velocity model uncertainties to the seismic event location
NASA Astrophysics Data System (ADS)
Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.
2015-01-01
Earthquake hypocentre locations are crucial in many domains of application (academic and industrial) as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location errors and to unreliable uncertainties is that velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the formulation of the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This enables us to obtain more reliable hypocentre locations, as well as their associated uncertainties, accounting for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.
Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew
2018-01-01
This study assessed the extent to which within-individual variation in schizotypy and paranormal belief influenced performance on probabilistic reasoning tasks. A convenience sample of 725 non-clinical adults completed measures assessing schizotypy (Oxford-Liverpool Inventory of Feelings and Experiences; O-Life brief), belief in the paranormal (Revised Paranormal Belief Scale; RPBS) and probabilistic reasoning (perception of randomness, conjunction fallacy, paranormal perception of randomness, and paranormal conjunction fallacy). Latent profile analysis (LPA) identified four distinct groups: class 1, low schizotypy and low paranormal belief (43.9% of sample); class 2, moderate schizotypy and moderate paranormal belief (18.2%); class 3, moderate schizotypy (high cognitive disorganization) and low paranormal belief (29%); and class 4, moderate schizotypy and high paranormal belief (8.9%). Identification of homogeneous classes provided a nuanced understanding of the relative contribution of schizotypy and paranormal belief to differences in probabilistic reasoning performance. Multivariate analysis of covariance revealed that groups with lower levels of paranormal belief (classes 1 and 3) performed significantly better on perception of randomness, but not conjunction problems. Schizotypy had only a negligible effect on performance. Further analysis indicated that framing perception of randomness and conjunction problems in a paranormal context facilitated performance for all groups but class 4. PMID:29434562
NASA Technical Reports Server (NTRS)
Warner, James E.; Zubair, Mohammad; Ranjan, Desh
2017-01-01
This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
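A minimal sketch of the baseline being accelerated, assuming a one-parameter crack size, a hypothetical polynomial surrogate standing in for the finite element strain response, and a Gaussian likelihood; the Metropolis-Hastings loop below is the generic sampler, not the authors' reformulated algorithms.

import numpy as np

rng = np.random.default_rng(1)

def surrogate_strain(a):          # hypothetical surrogate for the FEM response
    return 1.0 + 0.8 * a + 0.1 * a ** 2

a_true, sigma = 2.0, 0.05
y_obs = surrogate_strain(a_true) + rng.normal(0, sigma)

def log_post(a):                  # flat prior on [0, 5], Gaussian likelihood
    if not 0.0 <= a <= 5.0:
        return -np.inf
    return -0.5 * ((y_obs - surrogate_strain(a)) / sigma) ** 2

samples, a = [], 1.0
for _ in range(20000):            # Metropolis-Hastings random walk
    prop = a + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    samples.append(a)
print(f"posterior mean crack size: {np.mean(samples[5000:]):.2f}")

Because every likelihood evaluation costs only a surrogate call, chains like this can be run in parallel or vectorized, which is where the HPC speedups discussed above come from.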
A robust variable sampling time BLDC motor control design based upon μ-synthesis.
Hung, Chung-Wen; Yen, Jia-Yush
2013-01-01
The variable sampling rate system is encountered in many applications. When the speed information is derived from position marks along the trajectory, one obtains a speed-dependent sampling rate system. Conventional fixed or multi-sampling-rate system theory may not work in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller. The implementation on a BLDC motor demonstrates the effectiveness of the design approach.
Wang, Xiao; Gu, Jinghua; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua
2017-01-15
The advent of high-throughput DNA methylation profiling techniques has enabled the possibility of accurate identification of differentially methylated genes for cancer research. The large number of measured loci facilitates whole genome methylation study, yet posing great challenges for differential methylation detection due to the high variability in tumor samples. We have developed a novel probabilistic approach, Differential Methylation detection using a hierarchical Bayesian model exploiting Local Dependency (DM-BLD), to detect differentially methylated genes based on a Bayesian framework. The DM-BLD approach features a joint model to capture both the local dependency of measured loci and the dependency of methylation change in samples. Specifically, the local dependency is modeled by a Leroux conditional autoregressive structure; the dependency of methylation changes is modeled by a discrete Markov random field. A hierarchical Bayesian model is developed to fully take into account the local dependency for differential analysis, in which differential states are embedded as hidden variables. Simulation studies demonstrate that DM-BLD outperforms existing methods for differential methylation detection, particularly when the methylation change is moderate and the variability of methylation in samples is high. DM-BLD has been applied to breast cancer data to identify important methylated genes (such as polycomb target genes and genes involved in transcription factor activity) associated with breast cancer recurrence. A Matlab package of DM-BLD is available at http://www.cbil.ece.vt.edu/software.htm. Contact: Xuan@vt.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
From samples to populations in retinex models
NASA Astrophysics Data System (ADS)
Gianini, Gabriele
2017-05-01
Some spatial color algorithms, such as Brownian Milano retinex (MI-retinex) and random spray retinex (RSR), are based on sampling. In Brownian MI-retinex, memoryless random walks (MRWs) explore the neighborhood of a pixel and are then used to compute its output. Considering the relative redundancy and inefficiency of MRW exploration, the algorithm RSR replaced the walks by samples of points (the sprays). Recent works point to the fact that a mapping from the sampling formulation to the probabilistic formulation of the corresponding sampling process can offer useful insights into the models, at the same time featuring intrinsically noise-free outputs. The paper continues the development of this concept and shows that the population-based versions of RSR and Brownian MI-retinex can be used to obtain analytical expressions for the outputs of some test images. The comparison of the two analytic expressions from RSR and from Brownian MI-retinex demonstrates not only that the two outputs are, in general, different but also that they depend in a qualitatively different way upon the features of the image.
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
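The core resampling idea can be illustrated in a few lines: build a long synthetic rain series, then compare accumulations reconstructed from sparse regular samples against the full-series total. The gamma rain model and numbers below are placeholders, not the radar data of the study.

import numpy as np

rng = np.random.default_rng(2)
rain = rng.gamma(0.3, 2.0, size=30 * 24)     # synthetic hourly rainfall, 30 days
truth = rain.sum()

for dt in (1, 3, 6, 12):                     # regular sampling interval (hours)
    ests = [rain[off::dt].sum() * dt for off in range(dt)]
    err = np.std(ests) / truth * 100
    print(f"dt={dt:2d} h: relative sampling uncertainty ~ {err:.1f}%")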
Delavande, Adeline; Manski, Charles F
2010-01-01
This article reports new empirical evidence on probabilistic polling, which asks persons to state in percent-chance terms the likelihood that they will vote and for whom. Before the 2008 presidential election, seven waves of probabilistic questions were administered biweekly to participants in the American Life Panel (ALP). Actual voting behavior was reported after the election. We find that responses to the verbal and probabilistic questions are well-aligned ordinally. Moreover, the probabilistic responses predict voting behavior beyond what is possible using verbal responses alone. The probabilistic responses have more predictive power in early August, and the verbal responses have more power in late October. However, throughout the sample period, one can predict voting behavior better using both types of responses than either one alone. Studying the longitudinal pattern of responses, we segment respondents into those who are consistently pro-Obama, consistently anti-Obama, and undecided/vacillators. Membership in the consistently pro- or anti-Obama group is an almost perfect predictor of actual voting behavior, while the undecided/vacillators group has more nuanced voting behavior. We find that treating the ALP as a panel improves predictive power: current and previous polling responses together provide more predictive power than do current responses alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyadera, Takayuki; Imai, Hideki
This paper discusses the no-cloning theorem in a logico-algebraic approach. In this approach, an orthoalgebra is considered as a general structure for propositions in a physical theory. We prove that an orthoalgebra admits a cloning operation if and only if it is a Boolean algebra. That is, only classical theory admits the cloning of states. If unsharp propositions are to be included in the theory, then a notion of effect algebra is considered. We prove that an atomic Archimedean effect algebra admitting a cloning operation is a Boolean algebra. This paper also presents a partial result, indicating a relation between the cloning on effect algebras and hidden variables.
Diagnostic reasoning techniques for selective monitoring
NASA Technical Reports Server (NTRS)
Homem-De-mello, L. S.; Doyle, R. J.
1991-01-01
An architecture for using diagnostic reasoning techniques in selective monitoring is presented. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed. These will indicate what components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. Also, the system must be such that changes are slow enough to allow the computation.
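A toy version of the Bayes step described above, with invented priors and likelihoods: each component's failure probability is revised given that a Boolean assertion involving it was violated.

priors = {"pump": 0.01, "valve": 0.02, "sensor": 0.05}
# assumed likelihoods: P(assertion violated | failed) vs. P(violated | healthy)
p_viol_failed, p_viol_healthy = 0.95, 0.03

def posterior(prior):
    num = p_viol_failed * prior
    return num / (num + p_viol_healthy * (1.0 - prior))

for comp, prior in priors.items():
    print(f"{comp}: prior {prior:.3f} -> posterior {posterior(prior):.3f}")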
Nadkarni, P M
1997-08-01
Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
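A schematic (not CL's actual generator) of the Boolean-to-SQL translation idea, against a hypothetical terms(concept_id, term) table; wildcard patterns map to LIKE and the Boolean operators to set operations.

def term_sql(pattern):
    op = "LIKE" if "%" in pattern else "="
    return f"SELECT concept_id FROM terms WHERE term {op} '{pattern}'"

def and_sql(a, b):    # intersection of two concept sets
    return f"({a}) INTERSECT ({b})"

def or_sql(a, b):     # union of two concept sets
    return f"({a}) UNION ({b})"

print(and_sql(term_sql("heart%"), or_sql(term_sql("attack"), term_sql("infarction"))))

A real generator would also rewrite nested subexpressions to suit the server's query optimizer, which is the tuning the abstract alludes to.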
A comparison of Boolean-based retrieval to the WAIS system for retrieval of aeronautical information
NASA Technical Reports Server (NTRS)
Marchionini, Gary; Barlow, Diane
1994-01-01
An evaluation was conducted comparing an information retrieval system that uses a Boolean-based retrieval engine with an inverted file architecture against WAIS, which uses a vector-based engine. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database, which was mounted on a WAIS server and also available through Dialog File 108, which served as the Boolean-based system (BBS). High-recall and high-precision searches were done in the BBS, and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above the precision values for high-recall BBS searches and consistently below the precision values for high-precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than distributed by rank. Advantages and limitations of both types of systems are discussed.
Feedback topology and XOR-dynamics in Boolean networks with varying input structure
NASA Astrophysics Data System (ADS)
Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.
2009-08-01
We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with an XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
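The model class is easy to simulate. The sketch below (all parameters invented) builds a fixed in-degree k = 2 random graph in which a fraction γ of nodes receive inputs, applies parallel XOR updates, and looks for the attractor period; non-receiving nodes simply hold their state.

import numpy as np

rng = np.random.default_rng(3)
n, k, gamma = 50, 2, 0.7
receives = rng.random(n) < gamma                     # input-receiving nodes
inputs = np.array([rng.choice(n, k, replace=False) for _ in range(n)])

def step(state):
    new = state.copy()
    new[receives] = state[inputs[receives]].sum(axis=1) % 2   # XOR of the k inputs
    return new

state = rng.integers(0, 2, n)
seen = {}
for t in range(10000):
    key = state.tobytes()
    if key in seen:
        print("attractor period:", t - seen[key])
        break
    seen[key] = t
    state = step(state)
else:
    print("no recurrence within 10000 steps")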
Intrinsic noise and deviations from criticality in Boolean gene-regulatory networks
NASA Astrophysics Data System (ADS)
Villegas, Pablo; Ruiz-Franco, José; Hidalgo, Jorge; Muñoz, Miguel A.
2016-10-01
Gene regulatory networks can be successfully modeled as Boolean networks. A much discussed hypothesis says that such model networks reproduce empirical findings best if they are tuned to operate at criticality, i.e. at the borderline between their ordered and disordered phases. Critical networks have been argued to lead to a number of functional advantages such as maximal dynamical range, maximal sensitivity to environmental changes, as well as an excellent tradeoff between stability and flexibility. Here, we study the effect of noise within the context of Boolean networks trained to learn complex tasks under supervision. We verify that quasi-critical networks are the ones that learn in the fastest possible way (even for asynchronous updating rules) and that the larger the task complexity, the smaller the distance to criticality. On the other hand, when additional sources of intrinsic noise in the network states and/or in its wiring pattern are introduced, the optimally performing networks become clearly subcritical. These results suggest that in order to compensate for inherent stochasticity, regulatory and other types of biological networks might become subcritical rather than critical, all the more so if the task to be performed has limited complexity.
On the number of different dynamics in Boolean networks with deterministic update schedules.
Aracena, J; Demongeot, J; Fanchon, E; Montalva, M
2013-04-01
Deterministic Boolean networks are a type of discrete dynamical systems widely used in the modeling of genetic networks. The dynamics of such systems is characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of knowing the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient for guaranteeing distinct dynamics of a network. In [1] the authors define equivalence classes which have the property that all the update schedules of a given class yield the same dynamics. In order to determine the dynamics associated to a network, we develop an algorithm to efficiently enumerate the above equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well-known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
Boolean logic analysis for flow regime recognition of gas-liquid horizontal flow
NASA Astrophysics Data System (ADS)
Ramskill, Nicholas P.; Wang, Mi
2011-10-01
In order to develop a flowmeter for the accurate measurement of multiphase flows, it is of the utmost importance to correctly identify the flow regime present to enable the selection of the optimal method for metering. In this study, the horizontal flow of air and water in a pipeline was studied under a multitude of conditions using electrical resistance tomography, but the flow regimes that are presented in this paper have been limited to plug and bubble air-water flows. This study proposes a novel method for recognition of the prevalent flow regime using only a fraction of the data, thus rendering the analysis more efficient. By considering the average conductivity of five zones along the central axis of the tomogram, key features can be identified, thus enabling the recognition of the prevalent flow regime. Boolean logic and frequency spectrum analysis have been applied for flow regime recognition. Visualization of the flow using the reconstructed images provides a qualitative comparison between different flow regimes. Application of the Boolean logic scheme enables a quantitative comparison of the flow patterns, thus reducing the subjectivity in the identification of the prevalent flow regime.
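In the spirit of the zone-based scheme (the thresholds and rules here are invented, not the paper's calibration), a Boolean classifier over the five axial zone-mean conductivities might look like:

def classify(zones):                      # zones: five mean conductivities in [0, 1]
    low = [z < 0.5 for z in zones]        # True where the zone is gas-rich
    if all(low[:3]) and not any(low[3:]):
        return "plug flow"                # large coherent gas pockets (assumed signature)
    if any(low) and not all(low):
        return "bubble flow"              # dispersed gas bubbles (assumed signature)
    return "single-phase water"

print(classify([0.30, 0.35, 0.40, 0.90, 0.95]))   # -> plug flow
print(classify([0.90, 0.40, 0.90, 0.85, 0.90]))   # -> bubble flow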
Probabilistic assessment method of the non-monotonic dose-responses-Part I: Methodological approach.
Chevillotte, Grégoire; Bernard, Audrey; Varret, Clémence; Ballet, Pascal; Bodin, Laurent; Roudot, Alain-Claude
2017-08-01
More and more studies aim to characterize non-monotonic dose-response curves (NMDRCs). The greatest difficulty is to assess the statistical plausibility of NMDRCs from previously conducted dose-response studies. This difficulty is linked to the fact that these studies typically present (i) few tested doses, (ii) a low sample size per dose, and (iii) no available raw data. In this study, we propose a new methodological approach to probabilistically characterize NMDRCs. The methodology is composed of three main steps: (i) sampling from summary data to cover all the possibilities that may be presented by the responses measured at each dose, yielding a new raw database; (ii) statistical analysis of each sampled dose-response curve to characterize the slopes and their signs; and (iii) characterization of these dose-response curves according to the variation of the sign of the slope. This method can characterize all types of dose-response curves and can be applied both to continuous and to discrete data. The aim of this study is to present the general principle of this probabilistic method for assessing non-monotonic dose-response curves, and to present some results. Copyright © 2017 Elsevier Ltd. All rights reserved.
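Steps (i)-(iii) can be sketched as follows, with toy summary statistics: raw responses are resampled per dose from the reported mean and SD, a slope sign is computed for each dose segment, and curves whose slope changes sign are counted as non-monotonic.

import numpy as np

rng = np.random.default_rng(4)
doses = np.array([0.0, 1.0, 10.0, 100.0])
means = np.array([1.0, 1.6, 0.8, 1.4])        # per-dose summary means (toy)
sds, n = np.array([0.3, 0.3, 0.3, 0.3]), 8    # per-dose SD and sample size (toy)

non_monotonic = 0
trials = 2000
for _ in range(trials):
    resampled = [rng.normal(m, s, n).mean() for m, s in zip(means, sds)]
    slopes = np.sign(np.diff(resampled))
    non_monotonic += np.any(slopes[1:] != slopes[:-1])
print("P(non-monotonic) ~", non_monotonic / trials)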
A probabilistic QMRA of Salmonella in direct agricultural reuse of treated municipal wastewater.
Amha, Yamrot M; Kumaraswamy, Rajkumari; Ahmad, Farrukh
2015-01-01
Developing reliable quantitative microbial risk assessment (QMRA) procedures aids in setting recommendations on reuse applications of treated wastewater. In this study, a probabilistic QMRA to determine the risk of Salmonella infections resulting from the consumption of edible crops irrigated with treated wastewater was conducted. Quantitative polymerase chain reaction (qPCR) was used to enumerate Salmonella spp. in post-disinfected samples, where they showed concentrations ranging from 90 to 1,600 cells/100 mL. The results were used to construct probabilistic exposure models for the raw consumption of three vegetables (lettuce, cabbage, and cucumber) irrigated with treated wastewater, and to estimate the disease burden using Monte Carlo analysis. The results showed an elevated median disease burden when compared with the acceptable disease burden set by the World Health Organization, which is 10⁻⁶ disability-adjusted life years per person per year. Of the three vegetables considered, lettuce showed the highest risk of infection in all scenarios considered, while cucumber showed the lowest risk. The results for the Salmonella concentration obtained with qPCR were compared with the results for the Escherichia coli concentration for samples taken on the same sampling dates.
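A stripped-down version of such an exposure Monte Carlo is shown below; the water-residue distribution and the exponential dose-response parameter are placeholders (only the 90-1,600 cells/100 mL concentration range is taken from the abstract).

import numpy as np

rng = np.random.default_rng(5)
n = 100000
conc = rng.uniform(90, 1600, n) / 100.0          # Salmonella cells per mL
residue = rng.lognormal(np.log(0.11), 0.6, n)    # mL of water retained per serving (assumed)
dose = conc * residue                            # ingested cells per serving
r = 0.00752                                      # assumed exponential dose-response parameter
p_inf = 1.0 - np.exp(-r * dose)
print("median per-serving infection risk:", np.median(p_inf))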
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Data Auditor: Analyzing Data Quality Using Pattern Tableaux
NASA Astrophysics Data System (ADS)
Srivastava, Divesh
Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
Collective dynamics in heterogeneous networks of neuronal cellular automata
NASA Astrophysics Data System (ADS)
Manchanda, Kaustubh; Bose, Amitabha; Ramaswamy, Ramakrishna
2017-12-01
We examine the collective dynamics of heterogeneous random networks of model neuronal cellular automata. Each automaton has b active states, a single silent state and r - b - 1 refractory states, and can show 'spiking' or 'bursting' behavior, depending on the values of b. We show that phase transitions that occur in the dynamical activity can be related to phase transitions in the structure of Erdős-Rényi graphs as a function of edge probability. Different forms of heterogeneity allow distinct structural phase transitions to become relevant. We also show that the dynamics on the network can be described by a semi-annealed process and, as a result, can be related to the Boolean Lyapunov exponent.
Complex logic functions implemented with quantum dot bionanophotonic circuits.
Claussen, Jonathan C; Hildebrandt, Niko; Susumu, Kimihiro; Ancona, Mario G; Medintz, Igor L
2014-03-26
We combine quantum dots (QDs) with long-lifetime terbium complexes (Tb), a near-IR Alexa Fluor dye (A647), and self-assembling peptides to demonstrate combinatorial and sequential bionanophotonic logic devices that function by time-gated Förster resonance energy transfer (FRET). Upon excitation, the Tb-QD-A647 FRET-complex produces time-dependent photoluminescent signatures from multi-FRET pathways enabled by the capacitor-like behavior of the Tb. The unique photoluminescent signatures are manipulated by ratiometrically varying dye/Tb inputs and collection time. Fluorescent output is converted into Boolean logic states to create complex arithmetic circuits including the half-adder/half-subtractor, 2:1 multiplexer/1:2 demultiplexer, and a 3-digit, 16-combination keypad lock.
Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I.; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S.; Henry, Roland G.
2013-01-01
Introduction: Diffusion MRI tractography has been increasingly used to delineate white matter pathways in vivo for which the leading clinical application is presurgical mapping of eloquent regions. However, there is rare opportunity to quantify the accuracy or sensitivity of these approaches to delineate white matter fiber pathways in vivo due to the lack of a gold standard. Intraoperative electrical stimulation (IES) provides a gold standard for the location and existence of functional motor pathways that can be used to determine the accuracy and sensitivity of fiber tracking algorithms. In this study we used intraoperative stimulation from brain tumor patients as a gold standard to estimate the sensitivity and accuracy of diffusion tensor MRI (DTI) and q-ball models of diffusion with deterministic and probabilistic fiber tracking algorithms for delineation of motor pathways. Methods: We used preoperative high angular resolution diffusion MRI (HARDI) data (55 directions, b = 2000 s/mm²) acquired in a clinically feasible time frame from 12 patients who underwent a craniotomy for resection of a cerebral glioma. The corticospinal fiber tracts were delineated with DTI and q-ball models using deterministic and probabilistic algorithms. We used cortical and white matter IES sites as a gold standard for the presence and location of functional motor pathways. Sensitivity was defined as the true positive rate of delineating fiber pathways based on cortical IES stimulation sites. For accuracy and precision of the course of the fiber tracts, we measured the distance between the subcortical stimulation sites and the tractography result. Positive predictive rate of the delineated tracts was assessed by comparison of subcortical IES motor function (upper extremity, lower extremity, face) with the connection of the tractography pathway in the motor cortex. Results: We obtained 21 cortical and 8 subcortical IES sites from intraoperative mapping of motor pathways. Probabilistic q-ball had the best sensitivity (79%) as determined from cortical IES compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p < 0.001) and the probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites were increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). Discussion: This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods and has provided an objective standard for evaluation of different diffusion models and approaches to fiber tracking. The probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study.
Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES sites and the preoperative fiber tracks. The provided data show that probabilistic HARDI tractography is the most objective and reproducible analysis, but given the small sample and number of stimulation points, generalizations from our results should be made with caution. Indeed, our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and intra-operative management decisions. PMID:24273719
Automated Database Schema Design Using Mined Data Dependencies.
ERIC Educational Resources Information Center
Wong, S. K. M.; Butz, C. J.; Xiang, Y.
1998-01-01
Describes a bottom-up procedure for discovering multivalued dependencies in observed data without knowing a priori the relationships among the attributes. The proposed algorithm is an application of a technique designed for learning conditional independencies in probabilistic reasoning; a prototype system for automated database schema design has…
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; ...
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
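For intuition, a rejection-style Monte Carlo mixing calculation in the spirit of PMC might look like the following toy (three sources, two tracers, all compositions invented): candidate fraction vectors are kept when the mixed signature reproduces the sample within a tolerance.

import numpy as np

rng = np.random.default_rng(9)
src_mu = np.array([[5.0, 2.0], [15.0, 8.0], [-2.0, 20.0]])   # (d15N, d18O) per source
src_sd = 1.0
sample = np.array([7.0, 9.0])
tol = 0.5

kept = []
for _ in range(200000):
    f = rng.dirichlet([1.0, 1.0, 1.0])           # candidate mixing fractions
    src = rng.normal(src_mu, src_sd)             # draw uncertain source compositions
    if np.all(np.abs(f @ src - sample) < tol):
        kept.append(f)
print("accepted:", len(kept), "mean fractions:", np.mean(kept, axis=0).round(2))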
The value of prior knowledge in machine learning of complex network systems.
Ferranti, Dana; Krane, David; Craft, David
2017-11-15
Our overall goal is to develop machine-learning approaches based on genomics and other relevant accessible information for use in predicting how a patient will respond to a given proposed drug or treatment. Given the complexity of this problem, we begin by developing, testing and analyzing learning methods using data from simulated systems, which allows us access to a known ground truth. We examine the benefits of using prior system knowledge and investigate how learning accuracy depends on various system parameters as well as the amount of training data available. The simulations are based on Boolean networks (directed graphs with 0/1 node states and logical node update rules), which are the simplest computational systems that can mimic the dynamic behavior of cellular systems. Boolean networks can be generated and simulated at scale, have complex yet cyclical dynamics and as such provide a useful framework for developing machine-learning algorithms for modular and hierarchical networks such as biological systems in general and cancer in particular. We demonstrate that utilizing prior knowledge (in the form of network connectivity information), without detailed state equations, greatly increases the power of machine-learning algorithms to predict network steady-state node values ('phenotypes') and perturbation responses ('drug effects'). Links to codes and datasets here: https://gray.mgh.harvard.edu/people-directory/71-david-craft-phd. dcraft@broadinstitute.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
A Markov model of the Indus script
Rao, Rajesh P. N.; Yadav, Nisha; Vahia, Mayank N.; Joglekar, Hrishikesh; Adhikari, R.; Mahadevan, Iravatham
2009-01-01
Although no historical information exists about the Indus civilization (flourished ca. 2600–1900 B.C.), archaeologists have uncovered about 3,800 short samples of a script that was used throughout the civilization. The script remains undeciphered, despite a large number of attempts and claimed decipherments over the past 80 years. Here, we propose the use of probabilistic models to analyze the structure of the Indus script. The goal is to reveal, through probabilistic analysis, syntactic patterns that could point the way to eventual decipherment. We illustrate the approach using a simple Markov chain model to capture sequential dependencies between signs in the Indus script. The trained model allows new sample texts to be generated, revealing recurring patterns of signs that could potentially form functional subunits of a possible underlying language. The model also provides a quantitative way of testing whether a particular string belongs to the putative language as captured by the Markov model. Application of this test to Indus seals found in Mesopotamia and other sites in West Asia reveals that the script may have been used to express different content in these regions. Finally, we show how missing, ambiguous, or unreadable signs on damaged objects can be filled in with most likely predictions from the model. Taken together, our results indicate that the Indus script exhibits rich syntactic structure and the ability to represent diverse content, both of which are suggestive of a linguistic writing system rather than a nonlinguistic symbol system. PMID:19666571
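A miniature bigram version of such a Markov model (four invented sign sequences, additive smoothing) shows how a missing sign can be restored by maximizing the product of the transition probabilities around the gap.

from collections import Counter

texts = [["A", "B", "C"], ["A", "B", "D"], ["B", "C", "A"], ["A", "B", "C"]]
signs = sorted({s for t in texts for s in t})
bigrams = Counter((a, b) for t in texts for a, b in zip(t, t[1:]))
unigrams = Counter(s for t in texts for s in t)

def p(b, a, k=0.1):      # P(b | a) with additive smoothing
    return (bigrams[(a, b)] + k) / (unigrams[a] + k * len(signs))

# most likely restoration of the damaged text "A ? C"
best = max(signs, key=lambda s: p(s, "A") * p("C", s))
print("restored sign:", best)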
Sukumaran, Jeet; Knowles, L Lacey
2018-06-01
The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.
UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.
2012-01-01
UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.
An expert system design to diagnose cancer by using a new method reduced rule base.
Başçiftçi, Fatih; Avuçlu, Emre
2018-04-01
A Medical Expert System (MES) was developed which uses a Reduced Rule Base to diagnose cancer risk according to the symptoms in an individual. A total of 13 symptoms were used. With the new MES, the reduced rules are checked instead of all possibilities (2^13 = 8192 different combinations). By checking only the reduced rules, results are found more quickly. The method of two-level simplification of Boolean functions was used to obtain the Reduced Rule Base. Thanks to the application, developed with dynamic numbers of inputs and outputs on different platforms, anyone can easily test their own cancer risk. More accurate results were obtained by considering all the possibilities related to cancer. Thirteen different risk factors were used to determine the type of cancer. The truth table produced in our study has 13 inputs and 4 outputs. The Boolean Function Minimization method is used to obtain fewer cases by simplifying logical functions. Cancer can be diagnosed quickly by evaluating the four simplified output functions. Diagnosis made with the 4 output values obtained using the Reduced Rule Base was found to be quicker than diagnosis made by screening all 2^13 = 8192 possibilities. With the improved MES, more probabilities were added to the process and more accurate diagnostic results were obtained. As a result of the simplification process, a diagnosis speed gain of 100% was obtained for breast and renal cancer, and of 99% for cervical and lung cancer. With Boolean function minimization, a smaller number of rules is evaluated instead of a large number of rules. Reducing the number of rules allows the designed system to work more efficiently, saves time, and makes it easier to transfer the rules to the designed expert systems. Interfaces were developed on different software platforms to enable users to test the accuracy of the application. Anyone can assess their own cancer risk using the determinative risk factors, and is thereby more likely to beat cancer through early diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.
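The two-level minimization step itself is standard and can be reproduced with an off-the-shelf minimizer; the toy below uses sympy's SOPform on 3 symptoms rather than the paper's 13, with an invented set of high-risk minterms.

from sympy import symbols
from sympy.logic import SOPform

s1, s2, s3 = symbols("s1 s2 s3")
# truth-table rows (toy) where the rule base outputs "high risk"
minterms = [[1, 1, 0], [1, 1, 1], [1, 0, 1]]
print(SOPform([s1, s2, s3], minterms))    # e.g. (s1 & s2) | (s1 & s3)

Instead of checking all 2^3 rows, the reduced expression is evaluated directly, which is the kind of speed gain the abstract reports at 13 inputs.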
A model to assess the Mars Telecommunications Network relay robustness
NASA Technical Reports Server (NTRS)
Girerd, Andre R.; Meshkat, Leila; Edwards, Charles D., Jr.; Lee, Charles H.
2005-01-01
The relatively long mission durations and compatible radio protocols of current and projected Mars orbiters have enabled the gradual development of a heterogeneous constellation providing proximity communication services for surface assets. The current and forecasted capability of this evolving network has reached the point that designers of future surface missions consider complete dependence on it. Such designers, along with those architecting network requirements, have a need to understand the robustness of projected communication service. A model has been created to identify the robustness of the Mars Network as a function of surface location and time. Due to the decade-plus time horizon considered, the network will evolve, with emerging productive nodes and nodes that cease or fail to contribute. The model is a flexible framework to holistically process node information into measures of capability robustness that can be visualized for maximum understanding. Outputs from JPL's Telecom Orbit Analysis Simulation Tool (TOAST) provide global telecom performance parameters for current and projected orbiters. Probabilistic estimates of orbiter fuel life are derived from orbit keeping burn rates, forecasted maneuver tasking, and anomaly resolution budgets. Orbiter reliability is estimated probabilistically. A flexible scheduling framework accommodates the projected mission queue as well as potential alterations.
Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.
2013-01-01
Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
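A bare-bones probabilistic sensitivity analysis of a programme cost works as follows (the cost categories and gamma distributions below are invented): each input is drawn from a distribution fitted to the collected cost data, and the draws are propagated to the total.

import numpy as np

rng = np.random.default_rng(7)
n = 10000
staff = rng.gamma(20.0, 50.0, n)        # per-family staffing cost, assumed gamma
materials = rng.gamma(5.0, 30.0, n)
travel = rng.gamma(3.0, 25.0, n)
total = staff + materials + travel
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"mean total ${total.mean():.0f} (95% CI ${lo:.0f}-${hi:.0f})")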
Quantum Inference on Bayesian Networks
NASA Astrophysics Data System (ADS)
Yoder, Theodore; Low, Guang Hao; Chuang, Isaac
2014-03-01
Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speed up sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nmP(e)^-1), depending critically on P(e), the probability that the evidence occurs in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n 2^m P(e)^-1/2) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
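For reference, the classical baseline being sped up is ordinary rejection sampling; on a two-node network A -> B with invented probabilities it looks like this, and its cost blows up as P(e) shrinks because most samples are rejected.

import numpy as np

rng = np.random.default_rng(6)
p_a = 0.3
p_b_given_a = {0: 0.1, 1: 0.8}

kept = []
for _ in range(200000):
    a = rng.random() < p_a
    b = rng.random() < p_b_given_a[int(a)]
    if b:                                  # evidence: B = 1
        kept.append(a)
print("P(A=1 | B=1) ~", np.mean(kept))     # exact value: 0.24 / 0.31 ~ 0.774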
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Syed Ali, M; Vadivel, R; Saravanakumar, R
2018-06-01
This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms, using an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables satisfying certain probabilistic failures of every actuator, and a new type of distribution-based event-triggered fault model is proposed, which utilizes the effect of transmission delay. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump model framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed in this paper, which is the main purpose of our study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of the benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Multiscale/Multifunctional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A multilevel (multiscale/multifunctional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.
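As a schematic of what "fails at 10^9 cycles with probability 0.9967" means operationally, the toy Monte Carlo below pushes scatter in an assumed Basquin-type S-N law and in the applied stress through to a failure probability; all numbers are invented, not taken from the NASA analyses.

import numpy as np

rng = np.random.default_rng(8)
n = 200000
A = rng.lognormal(np.log(1.2e17), 0.5, n)   # S-N coefficient scatter (assumed)
m = 4.0                                     # S-N exponent (assumed)
stress = rng.normal(100.0, 8.0, n)          # applied stress in MPa (assumed)
cycles_to_failure = A * stress ** (-m)
print("P(failure before 1e9 cycles):", np.mean(cycles_to_failure < 1e9))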
Fusar-Poli, P; Schultze-Lutter, F
2016-02-01
Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When using CHR instruments for clinical purposes, the predicted outcome is only a probability; and, consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet, probabilistic reasoning makes considerable demands on the clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review the risk or cumulative incidence of psychosis, the person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes' theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
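A worked example of the basic measures (numbers invented, intended only as an illustration of the arithmetic): positive and negative predictive values and the positive likelihood ratio follow directly from sensitivity, specificity, and the baseline psychosis risk.

sens, spec, prev = 0.96, 0.47, 0.22   # illustrative values, not the paper's
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
lr_pos = sens / (1 - spec)
print(f"PPV={ppv:.2f}  NPV={npv:.2f}  LR+={lr_pos:.2f}")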
Tempelman, L A; Hammer, D A
1994-01-01
The physiological function of many cells is dependent on their ability to adhere via receptors to ligand-coated surfaces under fluid flow. We have developed a model experimental system to measure cell adhesion as a function of cell and surface chemistry and fluid flow. Using a parallel-plate flow chamber, we measured the binding of rat basophilic leukemia cells preincubated with anti-dinitrophenol IgE antibody to polyacrylamide gels covalently derivatized with 2,4-dinitrophenol. The rat basophilic leukemia cells' binding behavior is binary: cells are either adherent or continue to travel at their hydrodynamic velocity, and the transition between these two states is abrupt. The spatial location of adherent cells shows cells can adhere many cell diameters down the length of the gel, suggesting that adhesion is a probabilistic process. The majority of experiments were performed in the excess ligand limit in which adhesion depends strongly on the number of receptors but weakly on ligand density. Only 5-fold changes in IgE surface density or in shear rate were necessary to change adhesion from complete to indistinguishable from negative control. Adhesion showed a hyperbolic dependence on shear rate. By performing experiments with two IgE-antigen configurations in which the kinetic rates of receptor-ligand binding are different, we demonstrate that the forward rate of reaction of the receptor-ligand pair is more important than its thermodynamic affinity in the regulation of binding under hydrodynamic flow. In fact, adhesion increases with increasing receptor-ligand reaction rate or decreasing shear rate, and scales with a single dimensionless parameter which compares the relative rates of reaction to fluid shear. PMID:8038394
Effects of delay and probability combinations on discounting in humans.
Cox, David J; Dallery, Jesse
2016-10-01
To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n=212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n=98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.
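The half-value logic of the 5-trial task can be made concrete under Mazur's standard hyperbolic model V = A/(1 + kD), a common assumption in the discounting literature rather than something stated in the abstract above; a minimal sketch:

```python
# Under Mazur's hyperbolic model V = A / (1 + k*D), the delay D50 at which
# an amount loses half its value satisfies k*D50 = 1, so k = 1/D50.
# Probability discounting is handled analogously, with the odds against
# receipt, theta = (1 - p)/p, in place of delay. Values are illustrative.

def k_from_half_value_delay(d50: float) -> float:
    """Discount rate implied by the delay at which value halves."""
    return 1.0 / d50

def hyperbolic_value(amount: float, k: float, delay: float) -> float:
    """Subjective value of a delayed amount under hyperbolic discounting."""
    return amount / (1.0 + k * delay)

k = k_from_half_value_delay(10.0)         # $1000 worth $500 at 10 years -> k = 0.1/yr
print(hyperbolic_value(1000.0, k, 10.0))  # -> 500.0
```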
2012-08-01
growth rates, as well as the variability in the same, in the α+β titanium alloy Ti-6Al-2Sn-4Zr-6Mo (Ti-6-2-4-6) was studied at 260°C. A probabilistic...were obtained in a separate study on the effect of R on the small-crack growth regime in another α+β titanium alloy, Ti-6-4 [32]. Given that crack...microstructure of Ti-6Al-2Sn-4Zr-6Mo (Ti-6-2-4-6) at 260°C with particular emphasis on incorporating small-crack data into probabilistic life prediction
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log^2(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
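A minimal sketch of the underlying idea, pairwise convolution of probability vectors up a balanced binary tree, might look as follows; np.convolve stands in for the FFT-based convolution that the stated bounds assume.

```python
import numpy as np

# Sketch of the convolution-tree idea: the distribution of a sum of k
# independent count variables is obtained by convolving their probability
# vectors pairwise in a balanced binary tree. With FFT-based convolution
# at each level this yields the O(k log^2(k))-type runtime cited above;
# np.convolve is used here only for brevity.

def sum_distribution(pmfs):
    """PMF of the sum of independent variables given their PMFs."""
    layer = [np.asarray(p, dtype=float) for p in pmfs]
    while len(layer) > 1:                      # one tree level per pass
        nxt = [np.convolve(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:                     # odd element passes through
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Example: sum of four Bernoulli(0.5) variables -> Binomial(4, 0.5)
bern = [0.5, 0.5]
print(sum_distribution([bern] * 4))  # [0.0625 0.25 0.375 0.25 0.0625]
```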
NASA Astrophysics Data System (ADS)
Bezruczko, N.; Fatani, S. S.
2010-07-01
Social researchers commonly compute ordinal raw scores and ratings to quantify human aptitudes, attitudes, and abilities, but often without a clear understanding of their limitations for scientific knowledge. In this research, common ordinal measures were compared to higher-order linear (equal-interval) scale measures to clarify implications for objectivity, precision, ontological coherence, and meaningfulness. Raw score gains, residualized raw gains, and linear gains calculated with a Rasch model were compared between Time 1 and Time 2 for observations from two early childhood learning assessments. Comparisons show major inconsistencies between ratings and linear gains. When the gain distribution was dense and relatively compact and initial status was near the item mid-range, linear measures and ratings were indistinguishable. When Time 1 status was distributed more broadly and the magnitude of change was more variable, ratings were unrelated to linear gain, underscoring the problematic implications of ordinal measures. Surprisingly, residualized gain scores did not significantly improve ordinal measurement of change. In general, raw scores and ratings may be meaningful within specific samples for establishing order and high/low rank, but raw-score differences suffer from non-uniform units. Even the meaningfulness of sample comparisons, as well as derived proportions and percentages, is seriously affected by rank-order distortions, and such uses should be avoided.
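A small sketch of why equal raw gains are not equal-interval, using the log-odds (logit) transform that underlies Rasch-type linear measures; the proportions below are hypothetical, not data from the assessments above.

```python
import math

# Sketch of why raw-score gains are not equal-interval: under a Rasch-type
# model a person's measure is on a log-odds (logit) metric, so identical
# raw gains near the extremes of the scale correspond to much larger
# gains on the linear logit metric than the same gains at mid-range.

def logit(p: float) -> float:
    """Log-odds transform of a proportion (the linear metric)."""
    return math.log(p / (1.0 - p))

# A 10-point raw gain on a 100-item test, in two regions of the scale:
print(logit(0.60) - logit(0.50))  # mid-range gain: ~0.41 logits
print(logit(0.95) - logit(0.85))  # same raw gain near the top: ~1.21 logits
```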
Barcoding of live human PBMC for multiplexed mass cytometry
Mei, Henrik E.; Leipold, Michael D.; Schulz, Axel Ronald; Chester, Cariad; Maecker, Holden T.
2014-01-01
Mass cytometry is developing as a means of multiparametric single-cell analysis. Here, we present an approach to barcoding separate live human PBMC samples for combined preparation and acquisition on a CyTOF® instrument. Using six different anti-CD45 antibody (Ab) conjugates labeled with Pd104, Pd106, Pd108, Pd110, In113, and In115, respectively, we barcoded up to 20 samples with unique combinations of exactly three different CD45 Ab tags. Cell events carrying more than or fewer than three different tags were excluded from analyses during Boolean data deconvolution, allowing for precise sample assignment and the electronic removal of cell aggregates. Data from barcoded samples matched data from corresponding individually stained and acquired samples, at cell event recoveries similar to individual sample analyses. The approach greatly reduces technical noise, minimizes unwanted cell doublet events in mass cytometry data, and reduces wet work and antibody consumption. It also eliminates sample-to-sample carryover and the requirement of instrument cleaning between samples, thereby effectively reducing overall instrument runtime. Hence, CD45 barcoding facilitates accurate mass cytometric immunophenotyping studies, supporting biomarker discovery efforts, and should be applicable to fluorescence flow cytometry as well. PMID:25609839
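The barcoding arithmetic above (six tags taken exactly three at a time giving C(6,3) = 20 barcodes) and the exclusion rule used in Boolean deconvolution can be sketched as follows; the lookup logic is an illustration, not the authors' software.

```python
from itertools import combinations

# Sketch of the 3-of-6 barcoding scheme described above: six CD45 tags
# combined three at a time yield C(6,3) = 20 unique barcodes. Events
# positive for more or fewer than exactly three tags are rejected during
# Boolean deconvolution (e.g., cell aggregates and debris).

TAGS = ["Pd104", "Pd106", "Pd108", "Pd110", "In113", "In115"]

barcodes = list(combinations(TAGS, 3))
print(len(barcodes))            # -> 20 unique sample barcodes

def assign_sample(positive_tags, barcode_table):
    """Return the barcode index for an event, or None if it does not
    carry exactly three tags (doublet/debris exclusion)."""
    key = tuple(sorted(positive_tags))
    if len(key) != 3:
        return None
    lookup = {tuple(sorted(b)): i for i, b in enumerate(barcode_table)}
    return lookup.get(key)

print(assign_sample({"Pd104", "Pd108", "In115"}, barcodes))  # a valid event
print(assign_sample({"Pd104", "Pd108"}, barcodes))           # -> None
```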
Using Generic Data to Establish Dormancy Failure Rates
NASA Technical Reports Server (NTRS)
Reistle, Bruce
2014-01-01
Many hardware items are dormant prior to being operated. The dormant period might be especially long, for example during missions to the moon or Mars. In missions with long dormant periods the risk incurred during dormancy can exceed the active risk contribution. Probabilistic Risk Assessments (PRAs) need to account for the dormant risk contribution as well as the active contribution. A typical method for calculating a dormant failure rate is to multiply the active failure rate by a constant, the dormancy factor. For example, some practitioners use a heuristic and divide the active failure rate by 30 to obtain an estimate of the dormant failure rate. To obtain a more empirical estimate of the dormancy factor, this paper uses the recently updated database NPRD-2011 [1] to arrive at a set of distributions for the dormancy factor. The resulting dormancy factor distributions are significantly different depending on whether the item is electrical, mechanical, or electro-mechanical. Additionally, this paper will show that using a heuristic constant fails to capture the uncertainty of the possible dormancy factors.
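A minimal sketch of replacing the fixed heuristic with an uncertainty distribution over the dormancy factor follows; the lognormal form and its parameters are illustrative assumptions, not values derived from NPRD-2011.

```python
import numpy as np

# Sketch of propagating dormancy-factor uncertainty instead of using a
# fixed heuristic (e.g., "divide by 30"). The lognormal distribution and
# its parameters are assumptions for illustration only.

rng = np.random.default_rng(42)

active_rate = 1.0e-5            # active failure rate, failures/hour (assumed)
# Lognormal dormancy factor with median 30 and a wide spread:
dormancy_factor = rng.lognormal(mean=np.log(30.0), sigma=1.0, size=100_000)

dormant_rate = active_rate / dormancy_factor
print("heuristic point estimate:", active_rate / 30.0)
print("mean dormant rate:       ", dormant_rate.mean())
print("5th-95th percentile:     ", np.percentile(dormant_rate, [5, 95]))
```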
The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments
NASA Astrophysics Data System (ADS)
Chen, Fajing; Jiao, Meiyan; Chen, Jing
2013-04-01
Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecast, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
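For the normal-linear BPF mentioned above, the posterior is available in closed form from the standard linear-Gaussian Bayes update; a minimal sketch with illustrative numbers, not data from the Changsha/Wuhan experiment:

```python
# Minimal sketch of a normal-linear BPF: a Gaussian climatological prior
# on the predictand W, a linear-Gaussian likelihood for the deterministic
# forecast X given W, and the resulting Gaussian posterior. All numbers
# are illustrative assumptions.

def normal_linear_bpf(prior_mean, prior_var, a, b, resid_var, forecast):
    """Posterior N(mu, var) for W given forecast X = x, where the prior is
    W ~ N(prior_mean, prior_var) and X | W=w ~ N(a*w + b, resid_var)."""
    precision = 1.0 / prior_var + a * a / resid_var
    var = 1.0 / precision
    mu = var * (prior_mean / prior_var + a * (forecast - b) / resid_var)
    return mu, var

# Climatology: 2 degC mean, variance 16; forecast regression X = 0.9*W + 0.5
mu, var = normal_linear_bpf(2.0, 16.0, 0.9, 0.5, 4.0, forecast=-3.0)
print(mu, var)  # posterior mean pulls toward the bias-corrected forecast
```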
A probabilistic fatigue analysis of multiple site damage
NASA Technical Reports Server (NTRS)
Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.
1994-01-01
The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives for unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte-Carlo simulation explicitly describes the MSD panel by randomly selecting values from the stochastic variables and then grows the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
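A minimal sketch of this kind of Monte Carlo fatigue simulation, sampling an initial crack size and a Paris-law coefficient and then growing the crack deterministically, follows; all distributions, constants, and the crack geometry are illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

# Monte Carlo sketch: sample initial crack size and Paris coefficient per
# replicate, then grow the crack deterministically to a critical size.

rng = np.random.default_rng(0)

def cycles_to_failure(a0, C, m=3.0, dsigma=100.0, a_crit=0.025, da=1e-5):
    """Integrate da/dN = C*(dK)^m with dK = dsigma*sqrt(pi*a) (center crack)."""
    a, n = a0, 0.0
    while a < a_crit:
        dK = dsigma * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
        n += da / (C * dK**m)              # cycles spent growing by da
        a += da
    return n

a0 = rng.lognormal(np.log(5e-4), 0.3, size=200)   # initial crack sizes (m)
C = rng.lognormal(np.log(1e-11), 0.2, size=200)   # Paris coefficients

lives = [cycles_to_failure(a, c) for a, c in zip(a0, C)]
print(np.percentile(lives, [5, 50, 95]))          # spread of remaining lives
```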
Hybrid Packet-Pheromone-Based Probabilistic Routing for Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Kashkouli Nejad, Keyvan; Shawish, Ahmed; Jiang, Xiaohong; Horiguchi, Susumu
Ad-Hoc networks are collections of mobile nodes communicating over wireless media without any fixed infrastructure. Minimal configuration and quick deployment make Ad-Hoc networks suitable for emergency situations such as natural disasters or military conflicts. Current Ad-Hoc networks can support either high mobility or a high transmission rate, but not both at once, because they employ static approaches in their routing schemes. As network size, node mobility, and transmission rates continue to grow, however, the development of new adaptive and dynamic routing schemes has become crucial. In this paper we propose a new routing scheme that supports high transmission rates and high node mobility simultaneously in large Ad-Hoc networks, by combining a newly proposed packet-pheromone-based approach with the Hint-Based Probabilistic Protocol (HBPP) for congestion avoidance with dynamic path selection in the packet-forwarding process. Because it uses already-available feedback information, the proposed algorithm does not introduce any additional overhead. The extensive simulation-based analysis conducted in this paper indicates that the proposed algorithm offers low packet latency and achieves a significantly higher delivery probability than the Hint-Based Probabilistic Protocol (HBPP).
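A generic sketch of pheromone-weighted probabilistic next-hop selection, the kind of dynamic path choice combined with HBPP above, might look as follows; the update rule and constants are illustrative assumptions, not the paper's exact scheme.

```python
import random

# Generic pheromone-weighted probabilistic forwarding: pick a next hop
# with probability proportional to its pheromone level, then reinforce
# the chosen trail from delivery feedback while all trails evaporate.

def choose_next_hop(pheromone, rng=random):
    """Pick a neighbor with probability proportional to its pheromone."""
    total = sum(pheromone.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for node, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return node
    return node  # numerical edge case: return the last neighbor

def reinforce(pheromone, node, reward=0.5, evaporation=0.1):
    """Feedback update: evaporate all trails, reward the used hop."""
    for n in pheromone:
        pheromone[n] *= (1.0 - evaporation)
    pheromone[node] += reward

trails = {"B": 1.0, "C": 1.0, "D": 1.0}
hop = choose_next_hop(trails)
reinforce(trails, hop)          # e.g., after a delivery acknowledgment
print(hop, trails)
```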
Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2000-01-01
This report is the final technical report for Order No. C-78019-J, entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement is the probabilistic evaluation of the D-matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed from June 1, 1999 through September 3, 1999 are summarized, and the final version of the enhanced PCEMCAN code and revisions to the User's Manual are delivered with this report. The performed activities were discussed with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.
Boolean function applied to Mimosa pudica movements.
De Luccia, Thiago Paes de Barros; Friedman, Pedro
2011-09-01
The seismonastic or thigmonastic movements of Mimosa pudica L. result mostly from the rapid loss of water from swollen motor cells, causing a temporary collapse of the cells and quick curvature of the plant parts where these cells are located. For this reason the plant has been studied extensively since the 18th century, inviting the classical stimulus-response (action-reaction) comparison with animals. Mechanical and electrical stimuli were used to investigate the analogy of the mimosa branch with an artificial neuron model and to observe action potential propagation through the branch. Applying a Boolean function to the mimosa branch, in analogy with an artificial neuron model, is one of the distinctive features of our hypothesis.
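The artificial-neuron analogy can be made concrete with a McCulloch-Pitts unit, which computes exactly a Boolean threshold function of its binary inputs; the weights and threshold below are illustrative, not fitted to the plant data.

```python
# Sketch of the artificial-neuron analogy: a McCulloch-Pitts unit computes
# a Boolean threshold function of its binary inputs, firing (1) when the
# weighted stimulus sum reaches the threshold.

def threshold_neuron(inputs, weights, theta):
    """Boolean threshold function: 1 iff sum_i w_i * x_i >= theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

# With unit weights and theta = 2, the unit fires only when at least two
# of three stimuli (e.g., mechanical or electrical) are present:
for x in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(x, threshold_neuron(x, (1, 1, 1), 2))
```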
Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Zhong, Y. P.; Deng, Y. F.
2013-12-21
Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are computed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.
Questions Revisited: A Close Examination of Calculus of Inference and Inquiry
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Koga, Dennis (Technical Monitor)
2003-01-01
In this paper I examine more closely the way in which probability theory, the calculus of inference, is derived from the Boolean lattice structure of logical assertions ordered by implication. I demonstrate how the duality between the logical conjunction and disjunction in Boolean algebra is lost when deriving the probability calculus. In addition, I look more closely at the other lattice identities to verify that they are satisfied by the probability calculus. Last, I look toward developing the calculus of inquiry, demonstrating that there is a sum rule and a product rule for the relevance measure, as well as a Bayes' theorem. Current difficulties in deriving the complete inquiry calculus are also discussed.
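For reference, the standard probability-calculus identities of the kind the paper derives from the Boolean lattice can be written as follows; the notation (assertions x, y conditioned on a premise t) is assumed here, not quoted from the paper.

```latex
% Standard probability-calculus identities derived from the Boolean
% lattice of assertions ordered by implication (notation assumed):
\begin{align}
  p(x \vee y \mid t) &= p(x \mid t) + p(y \mid t) - p(x \wedge y \mid t)
    && \text{(sum rule)} \\
  p(x \wedge y \mid t) &= p(x \mid t)\, p(y \mid x \wedge t)
    && \text{(product rule)} \\
  p(x \mid y \wedge t) &= \frac{p(x \mid t)\, p(y \mid x \wedge t)}{p(y \mid t)}
    && \text{(Bayes' theorem)}
\end{align}
```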
Optical reversible programmable Boolean logic unit.
Chattopadhyay, Tanay
2012-07-20
Computing with reversibility is the only way to avoid the dissipation of energy associated with bit erasure, so a reversible microprocessor will be required for future computing. In this paper, a design for a simple all-optical reversible programmable processor is proposed using a polarizing beam splitter, liquid-crystal phase spatial light modulators, a half-wave plate, and plane mirrors. The circuit can perform 16 logical operations according to three programming inputs, and the inputs can be easily recovered from the outputs. It is named the "reversible programmable Boolean logic unit (RPBLU)." The logic unit is the basic building block of many complex computational operations, hence the significance of the design. Two orthogonal polarizations of light are defined as the two logical states.
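Reversibility in this sense means the gate is a bijection on its input space, so inputs are always recoverable from outputs; the Toffoli gate is a standard illustration of the principle (it is not the RPBLU's optical design).

```python
# Reversible logic illustration: the Toffoli (controlled-controlled-NOT)
# gate is a bijection on {0,1}^3, so inputs can always be recovered from
# outputs. Illustrative of the reversibility principle only.

def toffoli(a: int, b: int, c: int):
    """Flip c iff both controls a and b are 1; a bijection on {0,1}^3."""
    return a, b, c ^ (a & b)

# Applying the gate twice recovers the original inputs:
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits
print("Toffoli is its own inverse: inputs recoverable from outputs")
```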
A sparse matrix algorithm on the Boolean vector machine
NASA Technical Reports Server (NTRS)
Wagner, Robert A.; Patrick, Merrell L.
1988-01-01
VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), a large network of very small processors with equally small memories that operate in SIMD mode, use bit-serial arithmetic, and communicate via a cube-connected-cycles network. The BVM's bit-serial arithmetic and the small memories of its individual processors compromise the system's effectiveness on large numerical problems. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations per second for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.
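The kernel in question is the familiar sparse matrix-vector product; a conventional compressed-sparse-row (CSR) sketch of the y = Ax iteration follows, with the understanding that the BVM realizes the same kernel with bit-serial arithmetic across its processor network.

```python
import numpy as np

# Conventional CSR sketch of the sparse matrix-vector kernel y = A*x.

def csr_matvec(data, indices, indptr, x):
    """y = A*x for A stored as CSR (values, column indices, row pointers)."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 example: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
data    = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # [3. 3. 9.]
```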
Extending Clause Learning of SAT Solvers with Boolean Gröbner Bases
NASA Astrophysics Data System (ADS)
Zengler, Christoph; Küchlin, Wolfgang
We extend clause learning as performed by most modern SAT Solvers by integrating the computation of Boolean Gröbner bases into the conflict learning process. Instead of learning only one clause per conflict, we compute and learn additional binary clauses from a Gröbner basis of the current conflict. We used the Gröbner basis engine of the logic package Redlog contained in the computer algebra system Reduce to extend the SAT solver MiniSAT with Gröbner basis learning. Our approach shows a significant reduction of conflicts and a reduction of restarts and computation time on many hard problems from the SAT 2009 competition.
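The bridge between clauses and polynomials that Boolean Gröbner-basis methods rely on is the standard GF(2) encoding, sketched below; the paper's exact construction may differ.

```latex
% Standard GF(2) clause-to-polynomial encoding (illustrative): a clause
% maps to a polynomial that vanishes exactly on the clause's satisfying
% assignments, e.g.
\begin{equation*}
  x \lor \lnot y \;\longmapsto\; (1 + x)\,y = 0
  \quad \text{in } \mathbb{F}_2[x, y] / (x^2 + x,\; y^2 + y),
\end{equation*}
% so binary clauses learned from a conflict correspond to low-degree
% polynomials in a Groebner basis of the conflict's ideal.
```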
Multi-Scale/Multi-Functional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A multi-level (multi-scale/multi-functional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN program RANDOM2 is presented in the form of a user's manual. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Details of the theoretical background, input data instructions, and a sample problem illustrating the use of the program are included.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.
Ambiguity and Uncertainty in Probabilistic Inference.
1983-09-01
whether one was to judge the likelihood that the majority or minority position was true. In order to sample a wide range of values of n and p, 40... ...been demonstrated experimentally (Becker & Brownson, 1964; Yates & Zukowski, 1976). On the other hand, the process by which such second-order uncertainty
Probabilistic methods for rotordynamics analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
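A plain Monte Carlo sketch of the eigenvalue instability criterion follows; the paper's contribution is precisely to replace such brute-force sampling with fast probability integration and adaptive importance sampling, and the parameter distributions below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch of the eigenvalue instability criterion for a
# single-degree-of-freedom system m*x'' + c*x' + k*x = 0: the system is
# unstable if any eigenvalue of the first-order state matrix has a
# positive real part.

rng = np.random.default_rng(1)

def is_unstable(m, c, k):
    """True if the companion matrix has an eigenvalue with Re > 0."""
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    return bool(np.any(np.linalg.eigvals(A).real > 0.0))

n = 20_000
mass      = rng.normal(1.0, 0.05, n)    # kg
damping   = rng.normal(0.02, 0.05, n)   # N*s/m; can go negative -> unstable
stiffness = rng.normal(4.0, 0.2, n)     # N/m

p = np.mean([is_unstable(m, c, k) for m, c, k in zip(mass, damping, stiffness)])
print("estimated probability of instability:", p)
```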