Methods of information theory and algorithmic complexity for network biology.
Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper
2016-03-01
We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdős-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
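The compressibility measures surveyed above can be illustrated with a minimal sketch: upper-bounding the Kolmogorov complexity of a labeled graph by the lossless-compressed size of its adjacency matrix. This is an assumption-laden illustration, not the paper's method: zlib stands in for the uncomputable optimal compressor, the graphs are toy examples, and symmetry of the random graph is ignored for brevity.

```python
# Sketch: upper-bound the Kolmogorov complexity K(G) of a labeled graph by
# the zlib-compressed size of its adjacency matrix. zlib is only a stand-in
# for the optimal compressor, which is uncomputable.
import random
import zlib

def adjacency_bytes(adj):
    """Flatten an adjacency matrix (list of 0/1 rows) into a byte string."""
    return bytes(bit for row in adj for bit in row)

def compressed_size(adj):
    """Compressed size in bytes: an upper bound on K(G) up to a constant."""
    return len(zlib.compress(adjacency_bytes(adj), 9))

n = 64
# A complete graph is highly regular, so it should compress far better than
# a random (Erdős-Rényi-style, here directed for brevity) graph of equal size.
complete = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
rng = random.Random(0)
random_graph = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
```

The regular graph compresses to a near-constant size while the random graph's compressed size grows with its edge count, mirroring the entropy/complexity gap the abstract describes.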
Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
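The distance between binary strings mentioned above can be sketched via the normalized compression distance (NCD), a standard computable proxy for algorithmic information distance. This is an illustrative approximation, not Zurek's exact metric: zlib stands in for the ideal compressor and the sample strings are made up.

```python
# Sketch: normalized compression distance as a computable stand-in for the
# algorithmic information distance between binary strings.
import zlib

def C(s: bytes) -> int:
    """Compressed length: a computable upper bound on Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"0101" * 200                 # a highly regular string
b_near = b"0101" * 199 + b"0110"  # a near-identical variant
c_far = bytes(range(256)) * 3     # structurally unrelated data
```

Related strings should score a smaller distance than unrelated ones, since compressing their concatenation exploits the shared structure.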
NASA Astrophysics Data System (ADS)
Bamber, D.; Goodman, I. R.; Torrez, William C.; Nguyen, H. T.
2001-08-01
Conditional probability logics (CPL's), such as Adams', while producing many satisfactory results, do not agree with commonsense reasoning for a number of key entailment schemes, including transitivity and contraposition. Also, CPL's and bayesian techniques, often: (1) use restrictive independence/simplification assumptions; (2) lack a rationale behind choice of prior distribution; (3) require highly complex implementation calculations; (4) introduce ad hoc techniques. To address the above difficulties, a new CPL is being developed: CRANOF - Complexity Reducing Algorithm for Near Optimal Fusion -based upon three factors: (i) second order probability logic (SOPL), i.e., probability of probabilities within a bayesian framework; (ii) justified use of Dirichlet family priors, based on an extension of Lukacs' characterization theorem; and (iii) replacement of the theoretical optimal solution by a near optimal one where the complexity of computations is reduced significantly. A fundamental application of CRANOF to correlation and tracking is provided here through a generic example in a form similar to transitivity: two track histories are to be merged or left alone, based upon observed kinematic and non-kinematic attribute information and conditional probabilities connecting the observed data to the degrees of matching of attributes, as well as relating the matching of prescribed groups of attributes from each track history to the correlation level between the histories.
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
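The multiplicative weights update rule discussed above can be sketched in a few lines: each allele (expert) has its weight multiplied by a factor proportional to its payoff, then the weights are renormalized into a distribution. The learning rate and toy fitness values below are illustrative assumptions, not figures from the paper.

```python
# Sketch of one multiplicative weights update (MWU) round: reweight each
# option by (1 + eta * gain), then renormalize to a probability distribution.
def mwu_step(weights, gains, eta=0.1):
    """One MWU round over a list of weights and matching per-option gains."""
    updated = [w * (1.0 + eta * g) for w, g in zip(weights, gains)]
    total = sum(updated)
    return [w / total for w in updated]

# Three alleles with fixed per-round fitness gains (illustrative values):
# frequency mass shifts toward the fitter allele, while the distribution
# retains positive entropy rather than collapsing immediately.
freqs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    freqs = mwu_step(freqs, gains=[0.9, 0.5, 0.1])
```

After 50 rounds the fittest allele dominates but the others retain nonzero frequency, loosely echoing the abstract's point about MWU trading off cumulative fitness against entropy of the allele distribution.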
Algorithmic complexity and entanglement of quantum states.
Mora, Caterina E; Briegel, Hans J
2005-11-11
We define the algorithmic complexity of a quantum state relative to a given precision parameter, and give upper bounds for various examples of states. We also establish a connection between the entanglement of a quantum state and its algorithmic complexity.
Information complexity of neural networks.
Kon, M A; Plaskota, L
2000-04-01
This paper studies the question of lower bounds on the number of neurons and examples necessary to program a given task into feed-forward neural networks. We introduce the notion of information complexity of a network to complement that of neural complexity. Neural complexity deals with lower bounds for neural resources (numbers of neurons) needed by a network to perform a given task within a given tolerance. Information complexity measures lower bounds for the information (i.e., number of examples) needed about the desired input-output function. We study the interaction of the two complexities, and so lower bounds for the complexity of building and then programming feed-forward nets for given tasks. We show something unexpected a priori: the interaction of the two can be simply bounded, so that they can be studied essentially independently. We construct radial basis function (RBF) algorithms of order n^3 that are information-optimal, and give example applications.
A novel complex valued cuckoo search algorithm.
Zhou, Yongquan; Zheng, Hongqing
2013-01-01
To enrich the information carried by nest individuals, complex-valued encoding is introduced into cuckoo search (PCS): each gene of an individual is represented by a complex number, so a diploid swarm is structured as a sequence of complex values. The value of each independent variable of the objective function is determined by the modulus, and its sign by the angle. The position of a nest is thus divided into two parts, a real-part gene and an imaginary-part gene. The updating relations for the complex-valued swarm are presented. Six typical benchmark functions are tested, and the results are compared with cuckoo search based on real-valued encoding, verifying the usefulness of the proposed algorithm.
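The modulus/angle decoding described above can be sketched as follows. The exact interval mapping is an assumption for illustration, not the paper's formula: here the modulus (clipped to 1) sets the magnitude within the variable's range and the sign of the angle sets the sign around the midpoint.

```python
# Sketch: decode a complex-valued gene into a real decision variable.
# Modulus -> magnitude, angle -> sign. The clipping and interval mapping
# below are illustrative assumptions, not the paper's exact scheme.
import cmath

def decode(gene: complex, low: float, high: float) -> float:
    """Map a complex gene to a real variable in [low, high]."""
    rho = abs(gene)                 # modulus determines the magnitude
    theta = cmath.phase(gene)       # angle determines the sign
    sign = 1.0 if theta >= 0 else -1.0
    mid, half = (low + high) / 2, (high - low) / 2
    return mid + sign * min(rho, 1.0) * half
```

For example, conjugate genes decode to values of equal magnitude and opposite sign, which is how the angle carries sign information separately from the modulus.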
Systolic systems: algorithms and complexity
Chang, J.H.
1986-01-01
This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.
Sequence comparisons via algorithmic mutual information.
Milosavljević, A
1994-01-01
One of the main problems in DNA and protein sequence comparisons is to decide whether observed similarity of two sequences should be explained by their relatedness or by the mere presence of some shared internal structure, e.g., shared internal tandem repeats. The standard methods that are based on statistics or classical information theory can be used to discover either internal structure or mutual sequence similarity, but cannot take into account both. Consequently, currently used methods for sequence comparison employ "masking" techniques that simply eliminate sequences that exhibit internal repetitive structure prior to sequence comparisons. The "masking" approach precludes discovery of homologous sequences of moderate or low complexity, which abound at both DNA and protein levels. As a solution to this problem, we propose a general method that is based on algorithmic information theory and minimal length encoding. We show that algorithmic mutual information factors out the sequence similarity that is due to shared internal structure and thus enables discovery of truly related sequences. We extend the recently developed algorithmic significance method (Milosavljević & Jurka 1993) to show that significance depends exponentially on algorithmic mutual information.
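A computable stand-in for the algorithmic mutual information used above is the compression-based quantity I(x:y) ≈ C(x) + C(y) - C(xy), with a real compressor approximating the (uncomputable) minimal encoding. This sketch is illustrative only: zlib replaces the paper's encoding scheme and the sequences are toy data, but it shows how shared internal repeats inflate mutual information, which is exactly the effect the method must factor out.

```python
# Sketch: compression-based approximation of algorithmic mutual information,
# I(x:y) ~ C(x) + C(y) - C(xy), with zlib standing in for minimal encoding.
import zlib

def C(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def mutual_info(x: bytes, y: bytes) -> int:
    """Shared structure, in bytes: large when knowing x helps encode y."""
    return C(x) + C(y) - C(x + y)

repeat = b"ACGT" * 150                  # low-complexity tandem repeat
related = repeat[10:] + b"ACGTAA"       # shares the repeat structure
pseudo_random = bytes((i * 131 + 7) % 256 for i in range(600))
```

A sequence shares far more information with itself than a tandem repeat shares with unrelated data, and even two non-homologous repeats share some, which is why mutual similarity alone cannot separate relatedness from internal structure.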
Pinning impulsive control algorithms for complex network.
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-01
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Unlike most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
Efficiency of financial markets and algorithmic complexity
NASA Astrophysics Data System (ADS)
Giglio, R.; da Silva, S.; Gleria, Iram; Ranciaro, A.; Matsushita, R.; Figueiredo, A.
2010-09-01
In this work we are interested in the concept of market efficiency and its relationship with the algorithmic complexity theory. We employ a methodology based on the Lempel-Ziv index to analyze the relative efficiency of high-frequency data coming from the Brazilian stock market.
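The Lempel-Ziv index used in this kind of efficiency study can be sketched by counting the distinct phrases of an LZ78 parsing of a binary (e.g. return-sign) sequence and normalizing by the phrase count expected for a random string. The normalization and sample sequences below are illustrative assumptions, not the paper's data.

```python
# Sketch: a Lempel-Ziv-style complexity index for binary sequences.
# A perfectly predictable "market" scores low; a random one scores near 1.
import math
import random

def lz78_phrases(s: str) -> int:
    """Number of distinct phrases in the LZ78 incremental parsing of s."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def lz_index(s: str) -> float:
    """Phrase count normalized by ~n / log2(n), the random-string rate."""
    n = len(s)
    return lz78_phrases(s) * math.log2(n) / n

periodic = "01" * 500                                   # fully predictable
rng = random.Random(1)
noisy = "".join(rng.choice("01") for _ in range(1000))  # coin-flip sequence
```

Under this index the periodic sequence scores well below the coin-flip one, the kind of gap used to rank the relative efficiency of markets.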
Unifying Complexity and Information
NASA Astrophysics Data System (ADS)
Ke, Da-Guan
2013-04-01
Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally accepted complexity measure playing a fundamental role comparable to the Shannon entropy H in statistical mechanics. Superficially conflicting criteria of complexity measurement, i.e., complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability remain unclear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of the H for random processes, clearly redefining the boundary between the different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately resolves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics and real-world living organisms.
Information Complexity and Biology
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Bignone, Franco A.; Cecconi, Fabio; Politi, Antonio
Kolmogorov contributed directly to Biology in essentially three problems: the analysis of population dynamics (Lotka-Volterra equations), the reaction-diffusion formulation of gene spreading (FKPP equation), and some discussions about Mendel's laws. However, the widely recognized importance of his contribution arises from his work on algorithmic complexity. In fact, his limited direct intervention in Biology reflects the generally slow growth of mathematicians' interest in biological issues. From the early work of Vito Volterra on species competition, through the slow growth of dynamical systems theory, to contributions to the study of matter and the physiology of the nervous system, the first 50-60 years of the century witnessed important contributions, but as scattered, apparently uncorrelated pieces, often in branches far from Biology. Up to the 1940s it is hard to see even the initial loose build-up of a convergence toward those theories that would become mainstream research by the end of the century, connected by the study of biological systems per se.
Information communication on complex networks
NASA Astrophysics Data System (ADS)
Igarashi, Akito; Kawamoto, Hiroki; Maruyama, Takahiro; Morioka, Atsushi; Naganuma, Yuki
2013-02-01
Since communication networks such as the Internet, which is regarded as a complex network, have recently grown to a huge scale and a lot of data pass through them, the improvement of packet routing strategies for transport is one of the most significant themes in the study of computer networks. It is especially important to find routing strategies which can carry as much traffic as possible without congestion in complex networks. First, using neural networks, we introduce a strategy for packet routing on complex networks, where path lengths and queue lengths in nodes are taken into account within a framework of statistical physics. Secondly, instead of using shortest paths, we propose efficient paths which avoid hubs, nodes with very many degrees, on scale-free networks with a weight for each node. We improve the heuristic algorithm proposed by Danila et al., which optimizes routing properties step by step using the information of betweenness, the probability of paths passing through a node in all optimal paths defined according to a rule, and thereby mitigates congestion. We confirm that the new heuristic algorithm balances traffic on networks by achieving minimization of the maximum betweenness in a much smaller number of iteration steps. Finally, we model virus spreading and data transfer on peer-to-peer (P2P) networks. Using a mean-field approximation, we obtain an analytical formulation, emulate virus spreading on the network, and compare the results with those of simulation. Moreover, we investigate the mitigation of information traffic congestion in P2P networks.
Algorithmic complexity in the minority game
Mansilla
2000-10-01
In this paper, we present our approach to the study of the complexity of the Minority Game using tools from thermodynamics and statistical physics. Previous attempts were based on the behavior of volatility, an observable of financial markets. Our approach focuses on some properties of the binary stream of outcomes of the game. Physical complexity, a magnitude rooted in Kolmogorov-Chaitin theory, allows us to explain some properties of the collective behavior of the agents. The mutual information function, a measure related to Shannon's information entropy, was useful for observing a kind of phase transition when applied to the binary string of the whole history of the game.
Advanced Algorithms for Local Routing Strategy on Complex Networks
Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong
2016-01-01
Despite the significant improvement in network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need to acquire global information of the network, which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases by over ten-fold and the average transmission time 〈T〉 decreases by 70-90 percent, both of which are key physical quantities to assess the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502
C. elegans locomotion analysis using algorithmic information theory.
Skandari, Roghieh; Le Bihan, Nicolas; Manton, Jonathan H
2015-01-01
This article investigates the use of algorithmic information theory to analyse C. elegans datasets. The ability of complexity measures to detect similarity in animals' behaviours is demonstrated, and their strengths are compared to those of methods such as histograms. The introduced quantities are illustrated on two real two-dimensional C. elegans datasets to investigate thermotaxis and chemotaxis behaviours.
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing can adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned for the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than the other sparse matrix algorithms for information retrieval applications. Although the inverted index has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure has gained more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves a substantial efficiency gain over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target a Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
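The CSR layout and the matrix-vector product at the heart of the retrieval kernel can be sketched in software as follows. The tiny term-document matrix is illustrative; the paper's contribution is the parallel FPGA implementation of this kernel, not this sequential sketch.

```python
# Sketch: Compressed Sparse Row (CSR) storage and the sparse matrix-vector
# product used for query processing. Only nonzeros are stored and visited.
def to_csr(dense):
    """Build (values, col_indices, row_ptr) from a dense 2-D list."""
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                cols.append(j)
        row_ptr.append(len(values))     # row r's nonzeros: [row_ptr[r], row_ptr[r+1])
    return values, cols, row_ptr

def csr_matvec(values, cols, row_ptr, x):
    """y = A @ x using only the stored nonzeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        y.append(sum(values[k] * x[cols[k]]
                     for k in range(row_ptr[r], row_ptr[r + 1])))
    return y

# Toy term-document weight matrix and an all-ones query vector.
A = [[1, 0, 2],
     [0, 0, 3],
     [4, 5, 0]]
vals, cols, ptr = to_csr(A)
scores = csr_matvec(vals, cols, ptr, [1, 1, 1])
```

Each row's work is independent, which is what makes the kernel amenable to the hardware parallelization described in the abstract.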
Complexity of the Quantum Adiabatic Algorithm
NASA Technical Reports Server (NTRS)
Hen, Itay
2013-01-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.
Accessing complexity from genome information
NASA Astrophysics Data System (ADS)
Tenreiro Machado, J. A.
2012-06-01
This paper studies the information content of the chromosomes of 24 species. In a first phase, a scheme inspired by dynamical system state space representation is developed. For each chromosome the state space dynamical evolution is projected onto a two-dimensional chart. The plots are then analyzed and characterized from the perspective of fractal dimension. This information is integrated into two measures of the species' complexity addressing its average and variability. The results are in close accordance with phylogenetics, pointing to quantitative aspects of the species' genomic complexity.
Entropy, complexity, and spatial information
NASA Astrophysics Data System (ADS)
Batty, Michael; Morphet, Robin; Masucci, Paolo; Stanilov, Kiril
2014-10-01
We pose the central problem of defining a measure of complexity, specifically for spatial systems in general, city systems in particular. The measures we adopt are based on Shannon's (in Bell Syst Tech J 27:379-423, 623-656, 1948) definition of information. We introduce this measure and argue that increasing information is equivalent to increasing complexity, and we show that for spatial distributions, this involves a trade-off between the density of the distribution and the number of events that characterize it; as cities get bigger and are characterized by more events (more places or locations), information increases, all other things being equal. But sometimes the distribution changes at a faster rate than the number of events and thus information can decrease even if a city grows. We develop these ideas using various information measures. We first demonstrate their applicability to various distributions of population in London over the last 100 years, then to a wider region of London which is divided into bands of zones at increasing distances from the core, and finally to the evolution of the street system that characterizes the built-up area of London from 1786 to the present day. We conclude by arguing that we need to relate these measures to other measures of complexity, to choose a wider array of examples, and to extend the analysis to two-dimensional spatial systems.
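The Shannon measure underlying this discussion can be sketched directly: the entropy of a normalized spatial (population) distribution over zones. The zone counts below are made-up numbers for illustration; entropy rises as the same population spreads over more of the available events/zones.

```python
# Sketch: Shannon entropy (in bits) of a discrete spatial distribution,
# given raw counts per zone. Zero-count zones contribute nothing.
import math

def entropy(counts):
    """H = -sum p_i log2 p_i over zones with nonzero counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

concentrated = [970, 10, 10, 10]   # population piled into one zone
uniform = [250, 250, 250, 250]     # same population spread evenly
```

For four zones the uniform case attains the maximum of log2(4) = 2 bits, while the concentrated case scores well below it, the density-versus-events trade-off the abstract describes.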
A Simple Quality Triangulation Algorithm for Complex Geometries
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper presents a new and simple algorithm for quality triangulation in complex geometries. The proposed algorithm is based on an initial equilateral triangle mesh covering the whole domain. The mesh nodes close to the boundary edges satisfy the so-called non-encroaching criterion: the distance ...
Adaptive clustering algorithm for community detection in complex networks.
Ye, Zhenqing; Hu, Songnian; Yu, Jun
2008-10-01
Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce a different adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior, where vertices always travel toward their preferred neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process in which vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman fast algorithm) through intensive evaluation. Applications to three real-world networks demonstrate the ability of our algorithm to find communities consistent with the actual organization of the networks. PMID:18999501
Algorithmic complexity of real financial markets
NASA Astrophysics Data System (ADS)
Mansilla, R.
2001-12-01
A new approach to understanding the complex behavior of financial market indexes using tools from thermodynamics and statistical physics is developed. Physical complexity, a quantity rooted in Kolmogorov-Chaitin theory, is applied to binary sequences built up from real time series of financial market indexes. The study is based on NASDAQ and Mexican IPC data. Different behaviors of this quantity are shown when it is applied to intervals of the series placed before crashes and to intervals when no financial turbulence is observed. The connection between our results and the efficient market hypothesis is discussed.
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subjected to further processing with algorithms of a higher level. The language used is FORTRAN IV.
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, are developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
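In the spirit of the book's treatment of channel coding, the simplest error-correcting code can be sketched in a few lines: the threefold repetition code over a binary symmetric channel with majority-vote decoding. The flip probability, seed, and message length are illustrative choices, not examples from the book.

```python
# Sketch: 3x repetition code over a binary symmetric channel (BSC).
# Each bit is sent three times; the decoder takes a majority vote,
# reducing per-bit error from p to roughly 3p^2 at one-third the rate.
import random

def encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with prob p."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

def decode(bits):
    """Majority vote over each block of three received bits."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

rng = random.Random(42)
msg = [rng.randint(0, 1) for _ in range(1000)]
received = bsc(encode(msg), p=0.05, rng=rng)
decoded = decode(received)
errors = sum(m != d for m, d in zip(msg, decoded))
```

With p = 0.05 the raw channel corrupts about 50 of 1000 bits, while majority decoding typically leaves under a dozen errors, the rate-versus-reliability trade-off that the book's sparse-graph codes then improve on dramatically.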
Distributed learning automata-based algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza
2016-03-01
Community structure is an important and universal topological property of many complex networks, such as social and information networks. Detecting the communities of a network is a significant technique for understanding its structure and function. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updates to each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max cut and coverage.
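As an illustrative aside, the probability update at the heart of such learning-automata schemes can be sketched as follows. This is a generic linear reward-inaction (L_RI) automaton, not the paper's DLACD implementation; the learning rate and reward signal are assumptions of the example.

```python
# Sketch (not the paper's DLACD code): a linear reward-inaction learning
# automaton, the basic action-probability update such schemes build on.

def lri_update(probs, chosen, rewarded, a=0.1):
    """Linear reward-inaction (L_RI) update of action probabilities.

    On reward, the chosen action's probability moves toward 1 and the
    others shrink proportionally; on penalty, probabilities are unchanged.
    """
    if not rewarded:
        return probs[:]                      # L_RI ignores penalties
    new = []
    for i, p in enumerate(probs):
        if i == chosen:
            new.append(p + a * (1.0 - p))    # reinforce the chosen action
        else:
            new.append((1.0 - a) * p)        # shrink the rest
    return new

probs = [0.25, 0.25, 0.25, 0.25]
for _ in range(50):                          # repeatedly reward action 0
    probs = lri_update(probs, chosen=0, rewarded=True)
```

After repeated rewards the automaton's probability mass concentrates on the rewarded action while the distribution stays normalized.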
NASA Astrophysics Data System (ADS)
Zhang, Xian-Kun; Tian, Xue; Li, Ya-Nan; Song, Chen
2014-08-01
The label propagation algorithm (LPA) is a graph-based semi-supervised learning algorithm that can predict the information of unlabeled nodes from a few labeled nodes. It is also a community detection method in the field of complex networks. The algorithm is easy to implement, has low complexity, performs remarkably well, and is widely applied in various fields. However, the randomness of label propagation leads to poor robustness, and the classification result is unstable. This paper proposes an LPA based on the edge clustering coefficient. Each node in the network updates its label from the neighbor whose connecting edge has the highest edge clustering coefficient, rather than from a random neighbor, so that the random spread of labels is effectively restrained. The experimental results show that the LPA based on the edge clustering coefficient improves the stability and accuracy of the algorithm.
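The idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' code; the Radicchi-style edge clustering coefficient and the deterministic tie-break are assumptions of the sketch.

```python
# Illustrative sketch of label propagation guided by the edge clustering
# coefficient: copy the label of the neighbor joined by the edge with the
# highest coefficient instead of a random neighbor's label.
# Assumes Radicchi et al.'s coefficient: (common neighbors + 1) / min(deg-1).

def edge_clustering(adj, u, v):
    """Edge clustering coefficient of edge (u, v); inf when undefined."""
    z = len(adj[u] & adj[v])
    denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
    return (z + 1) / denom if denom > 0 else float("inf")

def propagate(adj, labels, sweeps=5):
    for _ in range(sweeps):
        for u in adj:
            if not adj[u]:
                continue
            # sorted() makes the tie-break deterministic for the demo
            best = max(sorted(adj[u]), key=lambda v: edge_clustering(adj, u, v))
            labels[u] = labels[best]
    return labels

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
labels = propagate(adj, {n: n for n in range(6)})
```

The bridge edge gets a low coefficient, so labels stay confined to their triangles and the two communities are recovered.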
NASA Astrophysics Data System (ADS)
Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng
2016-02-01
Complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in a massive computational load. To mitigate this problem, in this work we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm are demonstrated using both flow phantom and live animal experiments.
Biclustering Protein Complex Interactions with a Biclique Finding Algorithm
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L1 constraint to an Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
A New Algorithm to Optimize Maximal Information Coefficient
Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that ChiMIC maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, ChiMIC reaches the optimal partition much faster. Furthermore, MCN values based on MIC calculated by ChiMIC better capture the complexity of functional relationships, and the statistical power of MIC calculated by ChiMIC is higher than that of ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
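To make the underlying score concrete, the quantity MIC optimizes can be sketched as the mutual information of a grid laid over the paired data. This is a minimal illustration only; ChiMIC's actual contribution, the chi-square stopping rule for grid refinement, is not reproduced here, and the equal-width 4x4 grid is an assumption of the example.

```python
# Minimal sketch of the grid mutual information that MIC-style methods
# maximize over grid partitions (ChiMIC's chi-square termination rule
# is not reproduced).
import math
from collections import Counter

def grid_mi(xs, ys, kx, ky):
    """Mutual information (nats) of a kx-by-ky equal-width grid over (xs, ys)."""
    n = len(xs)
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)

    def cell(v, lo, hi, k):
        return min(int((v - lo) / (hi - lo) * k), k - 1) if hi > lo else 0

    joint = Counter((cell(x, xmin, xmax, kx), cell(y, ymin, ymax, ky))
                    for x, y in zip(xs, ys))
    px = Counter(i for i, _ in joint.elements())   # column marginals
    py = Counter(j for _, j in joint.elements())   # row marginals
    mi = 0.0
    for (i, j), c in joint.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[i] * py[j]))
    return mi

xs = [i / 99 for i in range(100)]
noiseless = grid_mi(xs, xs, 4, 4)          # y = x: strong dependence
flat = grid_mi(xs, [0.5] * 100, 4, 4)      # y constant: no dependence
```

On the 4x4 grid the identity relationship attains the maximal value log 4 while the independent case scores zero, which is the contrast the MIC normalization exploits.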
A cloud detection algorithm using edge detection and information entropy over urban area
NASA Astrophysics Data System (ADS)
Zheng, Hong; Wen, Tianxiao; Li, Zhen
2013-10-01
Aiming at detecting cloud interference over urban areas, this research proposes an algorithm that detects urban cloud regions by combining edge information extraction with information entropy, focusing on distinguishing complex surface features accurately so as to retain intact surface information. Firstly, image edge sharpening is applied. Secondly, a Canny edge detector and a closing operation are applied to extract and strengthen edge features. Thirdly, information entropy extraction is adopted to ensure cloud positional accuracy. Compared with traditional cloud detection methods, this algorithm efficiently protects the integrity of urban surface features and improves segmentation accuracy. Test results prove the effectiveness of the algorithm.
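The entropy step can be illustrated in isolation. This is a toy sketch under assumed 8-bit grayscale tiles, not the paper's pipeline: cloud regions tend to be smooth, so their tiles carry lower Shannon entropy than textured urban surface.

```python
# Toy sketch of the information-entropy step only (the sharpening and
# Canny steps of the abstract are assumed done): Shannon entropy of a
# grayscale tile separates smooth cloud from textured urban surface.
import math
from collections import Counter

def tile_entropy(pixels):
    """Shannon entropy (bits/pixel) of a list of 0-255 gray levels."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

flat_cloud = [200] * 64                         # uniform bright tile
urban = [(i * 37) % 256 for i in range(64)]     # highly varied tile
```

A uniform tile scores zero bits while a maximally varied 64-pixel tile scores log2(64) = 6 bits, so thresholding tile entropy flags candidate cloud regions.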
Algorithmic complexity theory and the relative efficiency of financial markets
NASA Astrophysics Data System (ADS)
Giglio, R.; Matsushita, R.; Figueiredo, A.; Gleria, I.; Da Silva, S.
2008-11-01
Financial economists usually assess market efficiency in absolute terms. This is to be viewed as a shortcoming. One way of dealing with the relative efficiency of markets is to resort to the efficiency interpretation provided by algorithmic complexity theory. We employ such an approach in order to rank 36 stock exchanges and 20 US dollar exchange rates in terms of their relative efficiency.
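A common compressibility proxy for this kind of ranking can be sketched as follows. The symbolization into a '0'/'1' string and the use of zlib are assumptions of the sketch, not necessarily the paper's exact estimator.

```python
# Sketch of a compressibility-based relative-efficiency ranking: the
# closer a symbolized return series is to incompressible, the closer it
# is to algorithmically random, i.e. the more "efficient" the market.
import random
import zlib

def compress_ratio(bits):
    """Compressed size over raw size of a '0'/'1' string."""
    raw = bits.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
coin = "".join(random.choice("01") for _ in range(4000))   # unpredictable
trend = "01" * 2000                                        # predictable
```

Markets can then be ranked by their ratio: the perfectly predictable series compresses almost entirely, while the coin-flip series stays far less compressible.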
Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding
Liu, Pengyu; Jia, Kebin
2013-01-01
A low-complexity saliency detection algorithm for perceptual video coding is proposed, in which low-level encoding information is adopted as the basis of visual perception analysis. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis process while achieving better saliency detection performance for videos, enabling fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit-rates or combined with other algorithms in fast video coding. PMID:24489495
Information dynamics algorithm for detecting communities in networks
NASA Astrophysics Data System (ADS)
Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro
2012-11-01
The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. domain-inspired. We have focused on a psychology- and social network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) algorithm [4] by considering a network's nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and fine tuning of community detectability with respect to prior knowledge of the data. Finally, we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
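The plain MCL core that this method builds on can be sketched in a few lines. This is an illustration of standard MCL only; the paper's memory/oblivion factor and agent framework are not reproduced, and the inflation parameter and iteration count are assumptions.

```python
# Illustrative sketch of the plain MCL core (expansion by matrix squaring,
# then inflation by entrywise powering and column renormalization) on a
# column-stochastic flow matrix; the paper's memory factor is omitted.

def mcl_step(m, r=2.0):
    n = len(m)
    # expansion: square the matrix, spreading random-walk flow
    sq = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    # inflation: raise entries to power r, then renormalize each column
    inf = [[sq[i][j] ** r for j in range(n)] for i in range(n)]
    for j in range(n):
        s = sum(inf[i][j] for i in range(n))
        for i in range(n):
            inf[i][j] /= s
    return inf

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
a = [[0.0] * n for _ in range(n)]
for u, v in edges:
    a[u][v] = a[v][u] = 1.0
for i in range(n):
    a[i][i] = 1.0                       # self-loops, as in standard MCL
for j in range(n):                      # make columns stochastic
    s = sum(a[i][j] for i in range(n))
    for i in range(n):
        a[i][j] /= s
for _ in range(20):
    a = mcl_step(a)

# After convergence, flow between the two triangles should vanish.
cross = max(a[i][j] for i in range(n) for j in range(n)
            if (i < 3) != (j < 3))
col_sums = [sum(a[i][j] for i in range(n)) for j in range(n)]
```

Inflation amplifies intra-community flow and suppresses the weak bridge flow, so the converged matrix splits into the two triangle clusters.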
Information filtering via weighted heat conduction algorithm
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
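The unweighted HC baseline that the paper modifies can be sketched as two rounds of neighborhood averaging on the bipartite graph. This is an illustrative sketch of the standard HC process only; the WHC edge weights are omitted and the tiny dataset is invented for the example.

```python
# Sketch of the unweighted heat-conduction (HC) baseline: heat starts on
# the target user's collected objects, each user takes the mean heat of
# their objects, then each object takes the mean heat of its users.
# The paper's WHC edge weights are not included.

def heat_conduction(user_items, target):
    """Score items for `target` by two averaging passes on the bipartite graph."""
    items = {i for its in user_items.values() for i in its}
    h = {i: 1.0 if i in user_items[target] else 0.0 for i in items}
    # object -> user: each user takes the mean heat of their items
    hu = {u: sum(h[i] for i in its) / len(its)
          for u, its in user_items.items()}
    # user -> object: each item takes the mean heat of its users
    scores = {}
    for i in items:
        users = [u for u, its in user_items.items() if i in its]
        scores[i] = sum(hu[u] for u in users) / len(users)
    return scores

user_items = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z", "w"}}
scores = heat_conduction(user_items, "a")
```

For user "a", the uncollected item "z" (reachable through the co-consumer "b") outscores the more distant "w", which is the ranking HC uses for recommendation.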
A Modified Tactile Brush Algorithm for Complex Touch Gestures
Ragan, Eric
2015-01-01
Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
Routine Discovery of Complex Genetic Models using Genetic Algorithms
Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.
2010-01-01
Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
Complexity measurement based on information theory and kolmogorov complexity.
Lui, Leong Ting; Terrazas, Germán; Zenil, Hector; Alexander, Cameron; Krasnogor, Natalio
2015-01-01
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.
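The two ingredients the article integrates can be contrasted with a small sketch: Shannon entropy is purely statistical, while Kolmogorov complexity is commonly approximated by compressed length. The article's actual combined measure is not reproduced here; the example only shows that the two ingredients can disagree.

```python
# Sketch contrasting Shannon entropy (statistical) with a compression-
# based stand-in for Kolmogorov complexity: a periodic string has maximal
# entropy rate over {0,1} yet is trivially compressible.
import math
import zlib
from collections import Counter

def shannon_bits(s):
    """Empirical entropy in bits/symbol of string s."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def compressed_len(s):
    """Length of the zlib-compressed string, a crude Kolmogorov proxy."""
    return len(zlib.compress(s.encode(), 9))

periodic = "01" * 500                     # maximal entropy rate, trivial pattern
irregular = bin(123456789 ** 17)[2:]      # deterministic but patternless bits
```

Both strings are near the 1 bit/symbol entropy ceiling, but only the periodic one collapses under compression, which is why measures that use entropy alone and measures that use compressibility alone rank objects differently.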
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
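The original direct-method SSA that the abstract takes as its starting point can be sketched compactly. This is the textbook Gillespie direct method on an invented toy model, not the paper's constant-complexity table/binning machinery.

```python
# Sketch of Gillespie's direct-method SSA (O(M) per step, the baseline
# the abstract improves on), applied to the toy reversible isomerization
# A <-> B with unit rate constants.
import random

def ssa(x, rates, stoich, t_end, rng):
    """Gillespie direct method. x: state list; rates(x) -> propensity list."""
    t = 0.0
    while True:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0.0:
            return x
        t += rng.expovariate(a0)            # exponential time to next event
        if t > t_end:
            return x
        r, acc = rng.random() * a0, 0.0     # pick a reaction channel
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                for i, d in stoich[j]:      # apply stoichiometric change
                    x[i] += d
                break

rng = random.Random(1)
# A <-> B with k1 = k2 = 1: 100 molecules equilibrate near a 50/50 split.
x = ssa([100, 0],
        rates=lambda x: [1.0 * x[0], 1.0 * x[1]],
        stoich=[[(0, -1), (1, +1)], [(1, -1), (0, +1)]],
        t_end=50.0, rng=rng)
```

Because each step scans every channel to pick the next reaction, the cost per step grows with the number of channels, which is exactly the scaling the binned table structure in the paper removes.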
Information content of ozone retrieval algorithms
NASA Technical Reports Server (NTRS)
Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.
1989-01-01
The algorithms that were used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.
A Generative Statistical Algorithm for Automatic Detection of Complex Postures
Amit, Yali; Biron, David
2015-01-01
This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a “big data” workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species. PMID:26439258
Current Algorithms for the Diagnosis of wide QRS Complex Tachycardias
Vereckei, András
2014-01-01
The differential diagnosis of a regular, monomorphic wide QRS complex tachycardia (WCT) mechanism represents a great diagnostic dilemma commonly encountered by the practicing physician, which has important implications for acute arrhythmia management, further work-up, prognosis and chronic management as well. This comprehensive review discusses the causes and differential diagnosis of WCT, and since the ECG remains the cornerstone of WCT differential diagnosis, focuses on the application and diagnostic value of different ECG criteria and algorithms in this setting and also provides a practical clinical approach to patients with WCTs. PMID:24827795
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Toward the quality evaluation of complex information systems
NASA Astrophysics Data System (ADS)
Todoran, Ion-George; Lecornu, Laurent; Khenchaf, Ali; Le Caillec, Jean-Marc
2014-06-01
Recent technological developments allow gathering huge amounts of data from different types of sensors, social networks, intelligence reports, distributed databases, etc. Data quantity and heterogeneity have forced information systems to evolve: nowadays they rely on complex information processing techniques at multiple processing stages. Unfortunately, possessing large quantities of data and being able to implement complex algorithms do not guarantee that the extracted information will be of good quality. Decision-makers need good quality information in the process of decision-making. We insist that for a decision-maker both the information and its quality, viewed as meta-information, are of great importance. A system that does not present the quality of its information to the user risks being used incorrectly or, in more dramatic cases, not being used at all. In the literature, especially in organizational management and information retrieval, some information quality evaluation methodologies can be found, but none of them allow information quality evaluation in complex and changing environments. We propose a new information quality methodology capable of estimating information quality dynamically as the data and/or the information system's internals change. Our methodology instantaneously updates the quality of the system's output. To capture how information quality changes through the system, we introduce the notion of a quality transfer function. It is the analogue of a signal processing transfer function, but operating at the quality level: it describes the influence of a processing module on information quality. We also present two different views of information quality: a global one, characterizing the entire system, and a local one, for each processing module.
Integrating a priori information in edge-linking algorithms
NASA Astrophysics Data System (ADS)
Farag, Aly A.; Cao, Yu; Yeap, Yuen-Pin
1992-09-01
This research presents an approach for integrating a priori information into the path metric of the LINK algorithm. The zero-crossing contours of the Laplacian-of-Gaussian (∇²G) operator are taken as a gross estimate of the boundaries in the image. This estimate of the boundaries is used to define the swath of important information and to provide a distance measure for edge localization. During the linking process, a priori information plays important roles in (1) dramatically reducing the search space, because the actual path lies within ±2σf of the prototype contours (σf is the standard deviation of the Gaussian kernel used in the edge enhancement step); (2) breaking ties when the search metrics give uncertain information; and (3) selecting the set of goal nodes for the search algorithm. We show that the integration of a priori information in the LINK algorithm provides faster and more accurate edge linking.
Binarization algorithm for document image with complex background
NASA Astrophysics Data System (ADS)
Miao, Shaojun; Lu, Tongwei; Min, Feng
2015-12-01
The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to complex backgrounds or varying light in text images, binarization is a very difficult problem. This paper presents an improved binarization algorithm, which can be divided into several steps. First, an approximation of the background is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of lighting and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average threshold value, and the edges are detected. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, a window size is calculated from the final stroke width, and a local threshold estimation approach binarizes the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex backgrounds and varying light.
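The final local-threshold step can be illustrated on its own. This sketch assumes steps one through four are done and uses a generic Niblack-style threshold T = mean + k*std over a window whose size would, in the paper, come from the estimated stroke width; the image, window size, and k value are invented for the example.

```python
# Sketch of the local-threshold step only: a Niblack-style threshold
# T = mean + k*std over a small window (the window size would come from
# the estimated stroke width). 1 = ink, 0 = background.
import math

def local_binarize(img, w, k=-0.2):
    """img: 2D list of 0-255 grays; w: odd window size; returns a 0/1 map."""
    h, wid = len(img), len(img[0])
    r = w // 2
    out = [[0] * wid for _ in range(h)]
    for y in range(h):
        for x in range(wid):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(wid, x + r + 1))]
            m = sum(vals) / len(vals)
            sd = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            out[y][x] = 1 if img[y][x] < m + k * sd else 0
    return out

# A dark stroke (gray 40) crossing an uneven background (160 vs 220):
# a single global threshold would struggle, but the local one does not.
img = [[160 if x < 4 else 220 for x in range(8)] for _ in range(8)]
for x in range(8):
    img[4][x] = 40
mask = local_binarize(img, w=3)
```

The stroke is recovered across both background levels, which is the point of estimating the threshold locally rather than globally.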
Genetic algorithms applied to nonlinear and complex domains
Barash, D; Woodin, A E
1999-06-01
The dissertation, titled ''Genetic Algorithms Applied to Nonlinear and Complex Domains'', describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems in which many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrodinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems, comparing GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation should give a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.
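The selection-crossover-mutation loop common to all such GAs can be sketched on a standard toy objective. This is an illustrative minimal GA on the one-max problem, not the dissertation's laser-pulse optimizer; all parameters are assumptions of the example.

```python
# Minimal genetic algorithm sketch: maximize the number of ones in a
# bitstring via tournament selection, one-point crossover, and bit-flip
# mutation (the generic loop behind GA applications like those above).
import random

def ga_onemax(n_bits=24, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    fit = lambda b: sum(b)                  # fitness: count of one-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fit)
            p2 = max(rng.sample(pop, 3), key=fit)
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):         # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fit)

best = ga_onemax()
```

Selection pressure drives the population toward the all-ones optimum without any gradient information, which is what makes the same loop applicable to rugged, nonlinear objectives like pulse-shape optimization.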
Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.
ERIC Educational Resources Information Center
Petry, Frederick E.; And Others
1993-01-01
Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures. PMID:27411128
The guitar chord-generating algorithm based on complex network
NASA Astrophysics Data System (ADS)
Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais
2016-02-01
This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all networks are summarized. By analyzing the diverse chord networks, the accompaniment regulations and features are shown, with which chords can be generated automatically. Secondly, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed separately with the random walk algorithm. Thirdly, the musical motif is considered for generating chords, with which bad chord progressions can be revised. This method makes the accompaniments sound more melodious. Finally, a popular song is chosen for generating chords, and the newly generated accompaniment sounds better than the composers' original.
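The random-walk step over a chord network can be sketched directly: each node is a chord, each weighted edge counts how often one chord follows another, and generation is a weighted walk. The network below is illustrative made-up data, not the paper's corpus of six singers.

```python
import random

# Hypothetical chord-transition network: edge weights are counts of how often
# one chord follows another in a corpus of guitar tablatures (illustrative).
chord_net = {
    "C":  {"G": 5, "Am": 3, "F": 2},
    "G":  {"C": 4, "Em": 2, "Am": 1},
    "Am": {"F": 3, "G": 2, "C": 1},
    "F":  {"C": 4, "G": 3},
    "Em": {"Am": 2, "C": 1},
}

def generate_progression(start, length, net, seed=None):
    """Weighted random walk over the chord network."""
    rng = random.Random(seed)
    chord, progression = start, [start]
    for _ in range(length - 1):
        neighbours = net[chord]
        chord = rng.choices(list(neighbours), weights=list(neighbours.values()))[0]
        progression.append(chord)
    return progression

progression = generate_progression("C", 8, chord_net, seed=1)
```

The paper's two-tiered design would use one such network for verses and another for choruses, with a motif-based revision pass afterwards; both refinements are omitted here.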
Recording information on protein complexes in an information management system.
Savitsky, Marc; Diprose, Jonathan M; Morris, Chris; Griffiths, Susanne L; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S; Blake, Richard; Stuart, David I; Esnouf, Robert M
2011-08-01
The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. Like most LIMS, the underlying PiMS data model originally had no support for protein-protein complexes. To support the SPINE2-Complexes project the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with SPINE2-Complexes Target Tracker application is also described. PMID:21605682
Local algorithm for computing complex travel time based on the complex eikonal equation
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing
2016-04-01
The traditional algorithm for computing the complex travel time, e.g., dynamic ray tracing method, is based on the paraxial ray approximation, which exploits the second-order Taylor expansion. Consequently, the computed results are strongly dependent on the width of the ray tube and, in regions with dramatic velocity variations, it is difficult for the method to account for the velocity variations. When solving the complex eikonal equation, the paraxial ray approximation can be avoided and no second-order Taylor expansion is required. However, this process is time consuming. In this case, we may replace the global computation of the whole model with local computation by taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray. To satisfy this condition, the central ray should be taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve the curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno technology for obtaining the imaginary slowness used to compute the complex travel time. The numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm for the complex travel time by comparing the results with those from the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.
Informational properties of neural nets performing algorithmic and logical tasks.
Ritz, B M; Hofacker, G L
1996-06-01
It is argued that the genetic information necessary to encode an algorithmic neural processor tutoring an otherwise randomly connected biological neural net is represented by the entropy of the analogous minimal Turing machine. Such a near-minimal machine is constructed performing the whole range of bivalent propositional logic in n variables. Neural nets computing the same task are presented; their informational entropy can be gauged with reference to the analogous Turing machine. It is also shown that nets with one hidden layer can be trained to perform algorithms solving propositional logic by error back-propagation. PMID:8672562
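The abstract's last claim, that a one-hidden-layer net can learn propositional logic by error back-propagation, can be illustrated on XOR, a bivalent function of n = 2 variables. This is a generic textbook sketch (4 hidden units, sigmoid activations, batch gradient descent), not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # truth table of XOR

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # one hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # output error signal
    d_h = (d_out @ W2.T) * h * (1 - h)            # back-propagated error
    W2 -= h.T @ d_out;  b2 -= d_out.sum(0)        # gradient steps (lr = 1)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(0)

print((out > 0.5).astype(int).ravel())            # learned truth table
```

Any of the 2^(2^n) bivalent functions in n variables can be trained the same way by swapping the target column `y`.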
Improving the trust algorithm of information in semantic web
NASA Astrophysics Data System (ADS)
Wan, Zong-bao; Min, Jiang
2012-01-01
With the rapid development of computer networks, and especially with the introduction of the Semantic Web, trust computation in networks has become an important part of current research on computer system theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, we define semantic trust as having two parts: the content trust of the information and the trust between nodes. By calculating the content trust of the information and the trust between nodes, we obtain the final credibility value of information in the Semantic Web. We also improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can compute the trust of information more accurately.
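The two-part decomposition can be sketched as a simple convex combination of a content-trust score and an averaged node-trust score. The weighting `alpha` and the averaging rule are illustrative assumptions, not the paper's calibration.

```python
def information_trust(content_trust, node_trusts, alpha=0.6):
    """Combine the content trust of a piece of information with the trust of
    the nodes asserting it into one credibility score in [0, 1].
    `alpha` weights content trust against node trust (hypothetical choice)."""
    node_part = sum(node_trusts) / len(node_trusts)
    return alpha * content_trust + (1 - alpha) * node_part

# Information rated 0.8 on content, asserted by nodes trusted 0.9 and 0.5:
score = information_trust(0.8, [0.9, 0.5])   # ≈ 0.76
```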
Dynamic information routing in complex networks
NASA Astrophysics Data System (ADS)
Kirst, Christoph; Timme, Marc; Battaglia, Demian
2016-04-01
Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how such systems may specifically communicate and dynamically route information is not well understood. Here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks. Switching between collective dynamics induces flexible reorganization of information sharing and routing patterns, as quantified by delayed mutual information and transfer entropy measures between activities of a network's units. We demonstrate the power of this mechanism specifically for oscillatory dynamics and analyse how individual unit properties, the network topology and external inputs co-act to systematically organize information routing. For multi-scale, modular architectures, we resolve routing patterns at all levels. Interestingly, local interventions within one sub-network may remotely determine nonlocal network-wide communication. These results help understanding and designing information routing patterns across systems where collective dynamics co-occurs with a communication function.
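The delayed mutual information used here to quantify routing can be estimated from two activity traces with a plain 2-D histogram. This is a generic estimator on synthetic data, assuming nothing about the paper's network models; bin count and series length are arbitrary choices.

```python
import numpy as np

def delayed_mutual_information(x, y, delay, bins=8):
    """Mutual information (bits) between x(t) and y(t + delay),
    estimated from a 2-D histogram of the paired samples."""
    if delay > 0:
        x, y = x[:-delay], y[delay:]
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

# y is a noisy copy of x shifted by 5 steps, so the delayed MI peaks near 5:
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 5) + 0.1 * rng.normal(size=2000)
mi_0, mi_5 = (delayed_mutual_information(x, y, d) for d in (0, 5))
```

Scanning the delay and reading off where such curves peak, unit by unit, is the kind of measurement the paper uses to resolve who sends information to whom, and when.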
Imaging for dismantlement verification: information management and analysis algorithms
Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.
2010-09-01
The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.
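The reduction from a full image to a yes/no attribute can be sketched as follows. The feature (a bright-object area test) and both thresholds are hypothetical stand-ins for the paper's three algorithms; the point is only that the boolean, not the image or the derived features, crosses the information barrier.

```python
import numpy as np

def attribute_present(image, area_threshold, intensity_threshold):
    """Reduce a radiograph to one yes/no attribute: is there an object of at
    least `area_threshold` bright pixels? The image and intermediate features
    stay behind the information barrier; only the boolean is released.
    (Illustrative sketch; feature and thresholds are hypothetical.)"""
    mask = image > intensity_threshold
    return bool(mask.sum() >= area_threshold)

# Synthetic 64x64 "radiograph" with one bright 10x10 object:
img = np.zeros((64, 64))
img[20:30, 20:30] = 1.0
print(attribute_present(img, area_threshold=50, intensity_threshold=0.5))  # True
```

Note the caution the paper raises: even scalar features such as perimeter, area, and intensity can leak shape, size, and material information, which is why only the final boolean should ever be emitted.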
Algorithmic complexity of growth hormone release in humans.
Prank, K; Wagner, M; Brabant, G
1997-01-01
Most hormones are secreted in a pulsatile rather than a constant manner. This temporal pattern of pulsatile hormone release plays an important role in the regulation of cellular function and structure. In healthy humans, growth hormone (GH) secretion is characterized by distinct pulses, whereas patients bearing a GH-producing tumor accompanied by excessive secretion (acromegaly) exhibit a highly irregular pattern of GH release. It has been hypothesized that this highly disorderly pattern of GH release in acromegaly arises from random events in the GH-producing tumor under decreased normal control of GH secretion. Using a context-free grammar complexity measure (algorithmic complexity) in conjunction with random surrogate data sets, we demonstrate that the temporal pattern of GH release in acromegaly is not significantly different from a variety of stochastic processes. In contrast, normal subjects clearly exhibit deterministic structure in their temporal patterns of GH secretion. Our results support the hypothesis that GH release in acromegaly is due to random events in the GH-producing tumorous cells, which might become independent of hypothalamic regulation.
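The surrogate-data logic can be sketched generically: symbolize the series, compute a complexity measure, and ask what fraction of shuffled surrogates (same values, destroyed temporal order) are at least as simple. Here an LZ78-style phrase count stands in for the paper's context-free grammar measure; the series, symbolization and surrogate count are all illustrative.

```python
import random

def lz_complexity(bits):
    """Number of distinct phrases in an LZ78-style parse of a binary string;
    a simple stand-in for the paper's context-free grammar complexity."""
    seen, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def surrogate_test(series, n_surrogates=200, seed=0):
    """Fraction of shuffled surrogates at least as simple as the original;
    small values indicate deterministic temporal structure, as reported
    for GH release in healthy subjects."""
    rng = random.Random(seed)
    mean = sum(series) / len(series)
    symbols = "".join("1" if v > mean else "0" for v in series)
    c_orig = lz_complexity(symbols)
    hits = 0
    for _ in range(n_surrogates):
        shuffled = list(symbols)
        rng.shuffle(shuffled)
        hits += lz_complexity("".join(shuffled)) <= c_orig
    return hits / n_surrogates

# A strictly pulsatile (periodic) toy series is far simpler than its surrogates:
p_value = surrogate_test([0, 1] * 100)
```

A genuinely random series would instead give a fraction near 0.5, which is the acromegaly-like outcome in the paper's framing.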
Approach to complex upper extremity injury: an algorithm.
Ng, Zhi Yang; Askari, Morad; Chim, Harvey
2015-02-01
Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems, with the potential for systemic compromise due to the underlying mechanism of injury and its sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes, with the primary goals of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, and intractable pain to, most devastatingly, limb amputation. In this article, the authors present an algorithmic approach to complex injuries of the upper extremity, with due consideration for the various reconstructive modalities and the timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098
Exploiting Complexity Information for Brain Activation Detection
Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui
2016-01-01
We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. The hypothesis is that voxel complexity is modulated by pertinent cognitive tasks and changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical test, the Wilcoxon signed-rank test, to evaluate the difference in complexity between them. The results are compared with those of the well-known general-linear-model-based Statistical Parametric Mapping package (SPM12), and a marked difference is observed: the data-driven SampEn method detects changes in brain complexity between the two experimental conditions by evaluating only the complexity of the sequential fMRI data itself. Larger and smaller SampEn values carry different meanings, and the neutral-blank design produces higher predictability than the threat-neutral design. Complexity information can be considered a complement to existing fMRI analysis strategies, and it may help improve the understanding of human brain function from a different perspective. PMID:27045838
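Sample entropy itself is a standard, well-defined quantity and can be sketched compactly. This is a generic SampEn(m, r) implementation on toy signals, not the paper's fMRI pipeline; `m = 2` and `r = 0.2·SD` are the conventional defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    subsequences close for m points (Chebyshev distance <= r) stay close
    for m + 1 points. Lower values mean a more regular, predictable signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_pairs(length):
        t = np.array([x[i:i + length] for i in range(len(x) - m)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(t)) // 2      # exclude self-matches
    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))   # predictable signal
noise = rng.normal(size=300)                         # white noise
# sample_entropy(regular) is much smaller than sample_entropy(noise).
```

Applying this voxel-wise to two paradigms and comparing the resulting maps with a Wilcoxon signed-rank test is the shape of the analysis the abstract describes.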
Crossover Improvement for the Genetic Algorithm in Information Retrieval.
ERIC Educational Resources Information Center
Vrajitoru, Dana
1998-01-01
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
Presentation Media, Information Complexity, and Learning Outcomes
ERIC Educational Resources Information Center
Andres, Hayward P.; Petersen, Candice
2002-01-01
Cognitive processing limitations restrict the number of complex information items held and processed in human working memory. To overcome such limitations, a verbal working memory channel is used to construct an if-then proposition representation of facts and a visual working memory channel is used to construct a visual imagery of geometric…
Darwinian demons, evolutionary complexity, and information maximization
NASA Astrophysics Data System (ADS)
Krakauer, David C.
2011-09-01
Natural selection is shown to be an extended instance of a Maxwell's demon device. A demonic selection principle is introduced that states that organisms cannot exceed the complexity of their selective environment. Thermodynamic constraints on error repair impose a fundamental limit to the rate that information can be transferred from the environment (via the selective demon) to the genome. Evolved mechanisms of learning and inference can overcome this limitation, but remain subject to the same fundamental constraint, such that plastic behaviors cannot exceed the complexity of reward signals. A natural measure of evolutionary complexity is provided by mutual information, and niche construction activity—the organismal contribution to the construction of selection pressures—might in principle lead to its increase, bounded by thermodynamic free energy required for error correction.
A Motion Detection Algorithm Using Local Phase Information.
Lazar, Aurel A; Ukani, Nikul H; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm.
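The first building block, the temporal change of local phase, can be sketched in 1-D: measure the phase of a complex Gabor response at a fixed location in two frames, and a nonzero phase difference signals local motion. This toy omits the paper's Volterra-kernel formulation, the FFT-based implementation, and the Radon-transform thresholding stage; the filter parameters are illustrative.

```python
import numpy as np

def local_phase(signal, x0, sigma=8.0, freq=0.1):
    """Phase of a complex Gabor filter response centered at position x0."""
    n = np.arange(len(signal))
    gabor = (np.exp(-((n - x0) ** 2) / (2 * sigma ** 2))
             * np.exp(2j * np.pi * freq * (n - x0)))
    return np.angle(np.sum(signal * gabor))

# Two frames of a 1-D scene: a Gaussian blob that shifts by 2 pixels.
n = np.arange(256)
frame0 = np.exp(-((n - 100) ** 2) / 50.0)
frame1 = np.exp(-((n - 102) ** 2) / 50.0)
dphase = local_phase(frame1, 100) - local_phase(frame0, 100)
# Nonzero temporal phase change at x = 100 indicates local motion there;
# a static region gives a phase change of approximately zero.
```

This is the Reichardt-detector-like intuition the abstract invokes: motion shows up as systematic phase drift, before any segmentation step.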
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
Amirfattahi, Rassoul
2013-10-01
Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms, but retain the simplicity of radix-2. By defining a new concept, the twiddle-factor template, we propose in this paper a method for the exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2 and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly less computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2^3 algorithm is slightly higher than in radix-2^2, the number of real multiplications for radix-2^3 is lower. This is because of twiddle factors of a special form, which need fewer real multiplications and occur more frequently in the radix-2^3 algorithm.
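The kind of count being analyzed can be made concrete with a recursive radix-2 decimation-in-time FFT that tallies its twiddle-factor multiplications. This naive count of (n/2)·log2(n) does not exclude trivial twiddles such as W^0 = 1, so it is an upper bound, not the paper's exact template-based count.

```python
import cmath

def fft_radix2(x, counter):
    """Recursive radix-2 DIT FFT; counter[0] tallies every complex
    multiplication by a twiddle factor (trivial W^0 = 1 included)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2], counter)
    odd = fft_radix2(x[1::2], counter)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor W_n^k
        t = w * odd[k]
        counter[0] += 1                          # one twiddle multiplication
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

count = [0]
X = fft_radix2([1.0 + 0j] * 8, count)
# For n = 8 the naive tally is (n/2)*log2(n) = 12 twiddle multiplications.
```

Exact counts of the kind the paper computes would subtract the multiplications by W^0 and by other special-form twiddles, which is where radix-2^2 and radix-2^3 gain their advantage.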
An Iterative Decoding Algorithm for Fusion of Multimodal Information
NASA Astrophysics Data System (ADS)
Shivappa, Shankar T.; Rao, Bhaskar D.; Trivedi, Mohan M.
2007-12-01
Human activity analysis in an intelligent space is typically based on multimodal informational cues. Use of multiple modalities offers many advantages, but information fusion from different sources is a problem that must be addressed. In this paper, we propose an iterative algorithm to fuse information from multimodal sources. We draw inspiration from the theory of turbo codes, and draw an analogy between the redundant parity bits of the constituent codes of a turbo code and the information from different sensors in a multimodal system. A hidden Markov model is used to model the sequence of observations of each individual modality. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities, and this procedure is repeated until a convergence criterion is met. The resulting iterative algorithm is shown to have lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.
Bramble, J M
1989-02-01
Data compression increases the number of images that can be stored on magnetic disks or tape and reduces the time required for transmission of images between stations. Two algorithms for data compression are compared in application to computed tomographic (CT) images. The first, an information-preserving algorithm combining differential and Huffman encoding, allows reconstruction of the original image. A second algorithm alters the image in a clinically acceptable manner. This second algorithm combines two processes: the suppression of data outside of the head or body and the combination of differential and Huffman encoding. Because the final image is not an exact copy, the second algorithm is information losing. Application of the information-preserving algorithm can double or triple the number of CT images that can be stored on hard disk or magnetic tape. This algorithm may also double or triple the speed with which images may be transmitted. The information-losing algorithm can increase storage or transmission speed by a factor of five. The computation time on this system is excessive, but dedicated hardware is available to allow efficient implementation.
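The differential-plus-Huffman combination can be sketched in Python (an illustrative toy, not the authors' implementation; the sample pixel row and helper names are invented for the example):

```python
import heapq
from collections import Counter

def delta_encode(pixels):
    """Differential step: keep the first value, then successive
    differences. Neighboring CT pixels are correlated, so the
    differences cluster near zero and compress well."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) for a sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # (count, tiebreaker, partial code table) triples on a min-heap
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]
```

On a short correlated row the delta stream is dominated by 0 and ±1, so the Huffman code for the differences needs far fewer bits than raw fixed-width storage.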
Retaining local image information in gamut mapping algorithms.
Zolliker, Peter; Simon, Klaus
2007-03-01
Our topic is the potential of combining global gamut mapping with spatial methods to retain perceived local image information in gamut mapping algorithms. The main goal is to recover the original local contrast between neighboring pixels in addition to the usual optimization of preserving lightness, saturation, and global contrast. Special emphasis is placed on avoiding artifacts introduced by the gamut mapping algorithm itself. We present an unsharp masking technique based on an edge-preserving smoothing algorithm that avoids halo artifacts. The good performance of the presented approach is verified by a psycho-visual experiment using newspaper printing as a representative small-destination-gamut application. Furthermore, the improved mapping properties are documented with local mapping histograms. PMID:17357727
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Pande, S.
2009-12-01
Pattern analysis deals with the automatic detection of patterns in data, and a variety of algorithms is available for the purpose. These algorithms, commonly called Artificial Intelligence (AI) or data-driven algorithms, have lately been applied to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question arises of which one is the “best”. Some algorithms may be preferred because of lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam’s razor principle, that a simpler classifier performs better. Popper has argued, however, that Occam’s razor is without operational value because there is no clear measure or criterion for simplicity. Examples of measures that can be used for this purpose are the so-called algorithmic complexity, also known as Kolmogorov complexity or Kolmogorov (algorithmic) entropy; the Bayesian information criterion; and the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should also be noted that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with it. Thus, there is a compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML), of which support vector machines and relevance vector machines are the most commonly known algorithms. We present some applications of such algorithms for distributed hydrologic modeling; and introduce an example of how the complexity measure
Fast and stable algorithms for computing the principal square root of a complex matrix
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.
1987-01-01
This note presents recursive algorithms that are rapidly convergent and more stable for finding the principal square root of a complex matrix. The developed algorithms are also utilized to derive fast and stable matrix sign algorithms, which are useful in developing applications to control system problems.
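As a flavor of this family of recursions, here is a minimal Python sketch of the classical Denman-Beavers iteration for the principal matrix square root (a standard, quadratically convergent recursion of this kind, not necessarily the note's specific algorithm; the 2x2 helpers are invented to keep the example self-contained):

```python
def mat2_mul(A, B):
    """Product of two 2x2 matrices stored as nested tuples."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def mat2_inv(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a*d - b*c
    return ((d/det, -b/det), (-c/det, a/det))

def sqrtm_denman_beavers(A, iters=30):
    """Principal square root of a 2x2 complex matrix via
    Denman-Beavers: Y_{k+1} = (Y_k + Z_k^{-1})/2,
    Z_{k+1} = (Z_k + Y_k^{-1})/2, with Y_0 = A, Z_0 = I.
    Y_k -> A^{1/2} and Z_k -> A^{-1/2}."""
    Y, Z = A, ((1 + 0j, 0j), (0j, 1 + 0j))
    for _ in range(iters):
        Yi, Zi = mat2_inv(Y), mat2_inv(Z)
        Y = tuple(tuple((y + z) / 2 for y, z in zip(ry, rz))
                  for ry, rz in zip(Y, Zi))
        Z = tuple(tuple((z + y) / 2 for z, y in zip(rz, ry))
                  for rz, ry in zip(Z, Yi))
    return Y
```

For A = [[5, 4], [4, 5]] (eigenvalues 9 and 1) the iteration converges to [[2, 1], [1, 2]], whose square recovers A.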
Information, complexity and efficiency: The automobile model
Allenby, B. |
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Hyperbolic mapping of complex networks based on community information
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Li, Qingguang; Jin, Fengdong; Xiong, Wei; Wu, Yao
2016-08-01
To improve hyperbolic mapping methods both in accuracy and in running time, a novel mapping method called Community and Hyperbolic Mapping (CHM) is proposed in this paper based on community information. First, an index called Community Intimacy (CI) is presented to measure the adjacency relationship between communities, based on which a community ordering algorithm is introduced. According to the proposed Community-Sector hypothesis, which supposes that most nodes of one community gather in the same sector of hyperbolic space, CHM maps the ordered communities into hyperbolic space, and the angular coordinates of nodes are randomly initialized within the sector they belong to. All network nodes are thus mapped to hyperbolic space, and the initialized angular coordinates can then be optimized by employing the information of all nodes, which greatly improves the algorithm's precision. By applying the proposed dual-layer angle sampling method in the optimization procedure, CHM reduces the time complexity to O(n^2). Experiments show that our algorithm outperforms the state-of-the-art methods.
A Novel Complex Networks Clustering Algorithm Based on the Core Influence of Nodes
Dai, Bin; Xie, Zhongyu
2014-01-01
In complex networks, cluster structure, identified by the heterogeneity of nodes, has become a common and important topological property. Network clustering methods are thus significant for the study of complex networks. Currently, many typical clustering algorithms have weaknesses such as inaccuracy and slow convergence. In this paper, we propose a clustering algorithm based on calculating the core influence of nodes. The clustering process is a simulation of the process of cluster formation in sociology. The algorithm detects the nodes with core influence through their betweenness centrality, and builds the cluster's core structure by discriminant functions. Next, the algorithm obtains the final cluster structure by clustering the rest of the nodes in the network with an optimization method. Experiments on different datasets show that the clustering accuracy of this algorithm is superior to that of the classical Fast-Newman algorithm. It clusters faster and plays a positive role in precisely revealing the real cluster structure of complex networks. PMID:24741359
Maximizing information exchange between complex networks
NASA Astrophysics Data System (ADS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
modern research overarching all of the traditional scientific disciplines. The transportation networks of planes, highways and railroads; the economic networks of global finance and stock markets; the social networks of terrorism, governments, businesses and churches; the physical networks of telephones, the Internet, earthquakes and global warming and the biological networks of gene regulation, the human body, clusters of neurons and food webs, share a number of apparently universal properties as the networks become increasingly complex. Ubiquitous aspects of such complex networks are the appearance of non-stationary and non-ergodic statistical processes and inverse power-law statistical distributions. Herein we review the traditional dynamical and phase-space methods for modeling such networks as their complexity increases and focus on the limitations of these procedures in explaining complex networks. Of course we will not be able to review the entire nascent field of network science, so we limit ourselves to a review of how certain complexity barriers have been surmounted using newly applied theoretical concepts such as aging, renewal, non-ergodic statistics and the fractional calculus. One emphasis of this review is information transport between complex networks, which requires a fundamental change in perception that we express as a transition from the familiar stochastic resonance to the new concept of complexity matching.
Informational analysis involving application of complex information system
NASA Astrophysics Data System (ADS)
Ciupak, Clébia; Vanti, Adolfo Alberto; Balloni, Antonio José; Espin, Rafael
The aim of the present research is to perform an informational analysis for internal audit involving the application of a complex information system based on fuzzy logic. It has been applied in internal audit involving the integration of the accounting field into the information systems field. Technological advancements can improve the work performed by internal audit. Thus we aim to find, in complex information systems, priorities for the internal audit work of a high-importance private institution of higher education. The applied method is quali-quantitative: from the definition of strategic linguistic variables it was possible to transform them into quantitative ones via matrix intersection. By means of a case study, in which data were collected via an interview with the Administrative Pro-Rector, who takes part in the elaboration of the strategic planning of the institution, it was possible to infer which points must be prioritized in the internal audit work. We emphasize that the priorities were identified when processed in a system (of academic use). From the study we conclude that, starting from these information systems, audit can identify priorities for its work program. Along with the plans and strategic objectives of the enterprise, the internal auditor can define operational procedures that work toward the attainment of the organization's objectives.
NASA Astrophysics Data System (ADS)
Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang
2012-12-01
In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. Proper users' precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low-complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power together as the new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has a closed-form solution with an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. In the first part, the algorithm iterates with Householder transformations to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one-dimensional optimization subproblems. From any initial point, this algorithm obtains precoders and decoders with low leakage interference in short time. In the second part, to exploit the advantage of MAMA, it continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that per iteration both proposed algorithms generally have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve performance similar to the MSP algorithm in less execution time, and outperform the existing alternating minimization algorithm in terms of sum rate. Besides, from the viewpoint of convergence rate, simulation results show that MAMA converges fastest toward a given sum rate value, while the hybrid algorithm converges fastest in eliminating interference.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
Three subsets of sequence complexity and their relevance to biopolymeric information
Abel, David L; Trevors, Jack T
2005-01-01
Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction. Random and Ordered Sequence Complexities lie at opposite ends of the same bi-directional sequence complexity vector. Randomness in sequence space is defined by a lack of Kolmogorov algorithmic compressibility. A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order. Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). PMID:16095527
Optical tomographic memories: algorithms for the efficient information readout
NASA Astrophysics Data System (ADS)
Pantelic, Dejan V.
1990-07-01
Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES Tomographic principles can be used to store and reconstruct information artificially stored in a bulk of a photosensitive medium. The information is stored by changing some characteristic of the memory material (e.g., refractive index). Radiation from two independent light sources (e.g., lasers) is focused inside the memory material. In this way the intensity of the light is above threshold only at the localized point where the light rays intersect. By scanning the material, the information can be stored in binary or n-ary format. Once the information is stored, it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem: here a lot of a priori information is present regarding the positions of the bits of information, the profile representing a single bit, and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for information readout, together with simulation results, will be presented. Special attention will be given to noise considerations. Two different
Patent information - towards simplicity or complexity?
NASA Astrophysics Data System (ADS)
Shenton, Written By Kathleen; Norton, Peter; Onodera, Translated By Natsuo
Since the advent of online services, the ability to search for and find chemical patent information has improved immeasurably. Recently, the integration of a multitude of files (through file merging as well as cross-file/simultaneous searches), 'intelligent' interfaces and optical technology for large amounts of data appear to achieve greater simplicity and convenience in the retrieval of patent information. In spite of this progress, there is a more essential problem that increases complexity: the tendency to expand indefinitely the range of claims for chemical substances through ultra-generic descriptions of structure (overuse of optional substituents, variable divalent groups, repeating groups, etc.) and long listings of prophetic examples. Not only does this tendency worry producers and searchers of patent databases, but it also stands in the way of truly worthy inventions in the future.
Contrast enhancement algorithm considering surrounding information by illumination image
NASA Astrophysics Data System (ADS)
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2014-09-01
We propose a contrast enhancement algorithm considering surrounding information by illumination image. Conventional contrast enhancement techniques can be classified as a retinex-based method and a tone mapping function-based method. However, many retinex methods suffer from high-computational costs or halo artifacts. To cope with these problems, efficient edge-preserving smoothing methods have been researched. Tone mapping function-based methods are limited in terms of enhancement since they are applied without considering surrounding information. To solve these problems, we estimate an illumination image with local adaptive smoothness, and then utilize it as surrounding information. The local adaptive smoothness is calculated by using illumination image properties and an edge-adaptive filter based on the just noticeable difference model. Additionally, we employ a resizing method instead of a blur kernel to reduce the computational cost of illumination estimation. The estimated illumination image is incorporated with the tone mapping function to address the limitations of the tone mapping function-based method. With this approach, the amount of local contrast enhancement is increased. Experimental results show that the proposed algorithm enhances both global and local contrasts and produces better performance in objective evaluation metrics while preventing a halo artifact.
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Padé expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
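As a compact modern illustration of the kind of approximation surveyed, here is a Python sketch of the Lanczos approximation for the complex gamma function, a descendant of the Stirling-type expansions discussed (not Kuki's Algorithm 421 itself), using the widely tabulated g = 7, n = 9 coefficient set:

```python
import cmath
import math

# Widely tabulated Lanczos coefficients for g = 7, n = 9
_G = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for complex z via the Lanczos approximation,
    with the reflection formula handling Re(z) < 0.5."""
    if z.real < 0.5:
        # Gamma(z) * Gamma(1 - z) = pi / sin(pi z)
        return cmath.pi / (cmath.sin(cmath.pi * z) * cgamma(1 - z))
    z -= 1
    a = _G[0]
    for i in range(1, 9):
        a += _G[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * a
```

The approximation is accurate to roughly machine precision over the half-plane, e.g. cgamma(5) ≈ 4! = 24 and cgamma(1/2) ≈ √π.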
On the Time Complexity of Dijkstra's Three-State Mutual Exclusion Algorithm
NASA Astrophysics Data System (ADS)
Kimoto, Masahiro; Tsuchiya, Tatsuhiro; Kikuno, Tohru
In this letter we give a lower bound on the worst-case time complexity of Dijkstra's three-state mutual exclusion algorithm by specifying a concrete behavior of the algorithm. We also show that our result is more accurate than the best previously known bound.
Determination of multifractal dimensions of complex networks by means of the sandbox algorithm
NASA Astrophysics Data System (ADS)
Liu, Jin-Long; Yu, Zu-Guo; Anh, Vo
2015-02-01
Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A 159, 155-166 (1989)), for MFA of complex networks. First, we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning algorithm proposed by Furuya and Yakubo (Phys. Rev. E 84, 036118 (2011)), and the improved box-counting algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp. 2014, P02020 (2014)) by calculating the mass exponents τ(q) of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks. The comparison results show that the SB algorithm is the most effective and feasible algorithm to calculate the mass exponents τ(q) and to explore the multifractal behavior of complex networks. Then, we apply the SB algorithm to study the multifractal property of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, that of small-world networks is not obvious, and it almost does not exist in random networks.
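The sandbox idea can be sketched in Python (an illustrative toy on a ring network with invented helper names, not the authors' code): pick random centers, measure the sandbox mass M(r) within graph radius r, and estimate the mass exponent τ(q) from the slope of log⟨M(r)^(q-1)⟩ versus log(r/d), where d is the network diameter.

```python
import random
from collections import deque
from math import log

def bfs_dist(adj, src):
    """Shortest-path (hop) distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def sandbox_tau(adj, q, radii, n_centers=50, seed=0):
    """Estimate tau(q) by the sandbox method: average M(r)^(q-1)
    over random centers, then fit the log-log slope."""
    rng = random.Random(seed)
    nodes = list(adj)
    dists = [bfs_dist(adj, rng.choice(nodes)) for _ in range(n_centers)]
    d = max(max(dv.values()) for dv in dists)  # (sampled) diameter
    xs, ys = [], []
    for r in radii:
        m = sum(sum(1 for v in dv.values() if v <= r) ** (q - 1)
                for dv in dists) / n_centers
        xs.append(log(r / d))
        ys.append(log(m))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # least-squares slope = tau(q) estimate

# A ring of 200 nodes is effectively one-dimensional: M(r) = 2r + 1,
# so tau(q) should come out close to q - 1 (here ~1 for q = 2).
ring = {i: [(i - 1) % 200, (i + 1) % 200] for i in range(200)}
```

On the scale-free and small-world models discussed in the abstract, the interesting behavior is the q-dependence of the fitted slope; the ring merely checks the machinery against a known dimension.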
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
Reality Check Algorithm for Complex Sources in Early Warning
NASA Astrophysics Data System (ADS)
Karakus, G.; Heaton, T. H.
2013-12-01
In almost all currently operating earthquake early warning (EEW) systems, presently available seismic data are used to predict future shaking. In most cases, location and magnitude are estimated. We are developing an algorithm to test the goodness of that prediction in real time. We monitor envelopes of acceleration, velocity, and displacement; if they deviate significantly from the envelope predicted by Cua's envelope ground motion prediction equations (GMPEs), then we declare an overfit (perhaps a false alarm) or an underfit (possibly a larger event has just occurred). This algorithm is designed to provide a robust measure and to work as quickly as possible in real time. We monitor the logarithm of the ratio between the envelopes of the ongoing observed event and the predicted envelopes of channels of ground motion of the Virtual Seismologist (VS) (Cua, G. and Heaton, T.). We then recursively filter this result with a simple running median (a de-spiking operator) to minimize the effect of a single high value. Depending on the filtered value, we make a decision: if the value is large enough (e.g., >1), we declare that a larger event is in progress; similarly, if it is small enough (e.g., <-1), we declare a false alarm. We design the algorithm to work over a wide range of amplitude scales; that is, it should work for both small and large events.
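The log-ratio monitoring with running-median de-spiking can be sketched in Python (a schematic toy with invented function names and the ±1 thresholds from the abstract; real envelopes would come from streaming seismic channels):

```python
import math
import statistics

def monitor(observed, predicted, window=5, hi=1.0, lo=-1.0):
    """Per-sample check of log10(observed / predicted) envelope
    amplitude, de-spiked with a running median so a single outlying
    sample cannot trigger a declaration."""
    ratios, decisions = [], []
    for obs, pred in zip(observed, predicted):
        ratios.append(math.log10(obs / pred))
        r = statistics.median(ratios[-window:])  # running median
        if r > hi:
            decisions.append("larger event in progress")
        elif r < lo:
            decisions.append("false alarm")
        else:
            decisions.append("consistent with prediction")
    return decisions
```

A single 100x spike against a flat prediction is absorbed by the median, while a sustained 100x amplitude drives the filtered log-ratio above the threshold and flags a larger event.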
Information Technology in Complex Health Services
Southon, Frank Charles Gray; Sauer, Chris; Dampney, Christopher Noel Grant (Kit)
1997-01-01
Objective: To identify impediments to the successful transfer and implementation of packaged information systems through large, divisionalized health services. Design: A case analysis of the failure of an implementation of a critical application in the Public Health System of the State of New South Wales, Australia, was carried out. This application had been proven in the United States environment. Measurements: Interviews involving over 60 staff at all levels of the service were undertaken by a team of three. The interviews were recorded and analyzed for key themes, and the results were shared and compared to enable a continuing critical assessment. Results: Two components of the transfer of the system were considered: the transfer from a different environment, and the diffusion throughout a large, divisionalized organization. The analyses were based on the Scott-Morton organizational fit framework. In relation to the first, it was found that there was a lack of fit in the business environments and strategies, organizational structures and strategy-structure pairing as well as the management process-roles pairing. The diffusion process experienced problems because of the lack of fit in the strategy-structure, strategy-structure-management processes, and strategy-structure-role relationships. Conclusion: The large-scale developments of integrated health services present great challenges to the efficient and reliable implementation of information technology, especially in large, divisionalized organizations. There is a need to take a more sophisticated approach to understanding the complexities of organizational factors than has traditionally been the case. PMID:9067877
Ravari, Alireza Norouzzadeh; Taghirad, Hamid D
2014-10-01
In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image as mobile robot observations. In contrast to high-dimensional feature-based representations, in this model, the dimension of the sensor measurements' representations is reduced. Considering the loop closure detection as a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than original sensor observations. Exploiting the algorithmic information theory, the representation is developed such that it has the geometrically transformation invariant property in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity based representations of image models. Finally, a distinctive property of normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods.
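The normalized compression distance at the heart of this approach can be sketched with a real compressor standing in for the uncomputable Kolmogorov complexity (a generic zlib-based illustration, not the authors' image-model pipeline):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    normalized information distance based on Kolmogorov complexity,
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length under a real compressor."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Similar inputs yield values near 0 (the concatenation compresses almost as well as one copy), while unrelated inputs yield values near 1; in the paper's setting the inputs would be sparse image-model representations rather than raw byte strings.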
Measurement and Information Extraction in Complex Dynamics Quantum Computation
NASA Astrophysics Data System (ADS)
Casati, Giulio; Montangero, Simone
Quantum information processing has several different applications: some of them can be performed by controlling only a few qubits simultaneously (e.g. quantum teleportation or quantum cryptography) [1]. Usually, the transmission of large amounts of information is performed by repeating several times the scheme implemented for few qubits. However, to exploit the advantages of quantum computation, the simultaneous control of many qubits is unavoidable [2]. This situation increases the experimental difficulties of quantum computing: maintaining quantum coherence in a large quantum system is a difficult task. Indeed, a quantum computer is a many-body complex system, and decoherence, due to the interaction with the external world, will eventually corrupt any quantum computation. Moreover, internal static imperfections can lead to quantum chaos in the quantum register, thus destroying computer operability [3]. Indeed, as has been shown in [4], a critical imperfection strength exists above which the quantum register thermalizes and quantum computation becomes impossible. We showed such effects on a quantum computer performing an efficient algorithm to simulate complex quantum dynamics [5,6].
An information-bearing seed for nucleating algorithmic self-assembly.
Barish, Robert D; Schulman, Rebecca; Rothemund, Paul W K; Winfree, Erik
2009-04-14
Self-assembly creates natural mineral, chemical, and biological structures of great complexity. Often, the same starting materials have the potential to form an infinite variety of distinct structures; information in a seed molecule can determine which form is grown as well as where and when. These phenomena can be exploited to program the growth of complex supramolecular structures, as demonstrated by the algorithmic self-assembly of DNA tiles. However, the lack of effective seeds has limited the reliability and yield of algorithmic crystals. Here, we present a programmable DNA origami seed that can display up to 32 distinct binding sites and demonstrate the use of seeds to nucleate three types of algorithmic crystals. In the simplest case, the starting materials are a set of tiles that can form crystalline ribbons of any width; the seed directs assembly of a chosen width with >90% yield. Increased structural diversity is obtained by using tiles that copy a binary string from layer to layer; the seed specifies the initial string and triggers growth under near-optimal conditions where the bit copying error rate is <0.2%. Increased structural complexity is achieved by using tiles that generate a binary counting pattern; the seed specifies the initial value for the counter. Self-assembly proceeds in a one-pot annealing reaction involving up to 300 DNA strands containing >17 kb of sequence information. In sum, this work demonstrates how DNA origami seeds enable the easy, high-yield, low-error-rate growth of algorithmic crystals as a route toward programmable bottom-up fabrication.
An algorithm for automatic reduction of complex signal flow graphs
NASA Technical Reports Server (NTRS)
Young, K. R.; Hoberock, L. L.; Thompson, J. G.
1976-01-01
A computer algorithm is developed that provides efficient means to compute transmittances directly from a signal flow graph or a block diagram. Signal flow graphs are cast as directed graphs described by adjacency matrices. Nonsearch computation, designed for compilers without symbolic capability, is used to identify all arcs that are members of simple cycles for use with Mason's gain formula. The routine does not require the visual acumen of an interpreter to reduce the topology of the graph, and it is particularly useful for analyzing control systems described for computer analyses by means of interactive graphics.
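The quantities the routine above identifies, arcs belonging to simple cycles of the directed graph, can be illustrated with a small depth-first enumeration over an adjacency matrix. Note this is a toy search-based sketch; the paper's contribution is precisely a non-search method, so the code only shows what is being computed, not how the paper computes it:

```python
def simple_cycles(adj):
    """Enumerate the simple cycles of a small directed graph given as a
    0/1 adjacency matrix, by depth-first search.  Each cycle is reported
    once, rotated so that its smallest vertex comes first."""
    n = len(adj)
    cycles = set()

    def dfs(start, node, path):
        for nxt in range(n):
            if not adj[node][nxt]:
                continue
            if nxt == start:
                i = path.index(min(path))          # canonical rotation
                cycles.add(tuple(path[i:] + path[:i]))
            elif nxt > start and nxt not in path:  # each cycle found from
                dfs(start, nxt, path + [nxt])      # its smallest vertex

    for s in range(n):
        dfs(s, s, [s])
    return sorted(cycles)

# Edges: 0->1, 1->2, 2->0, 2->1.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 0]]
print(simple_cycles(A))  # [(0, 1, 2), (1, 2)]
```

The arcs feeding Mason's gain formula are then exactly the edges appearing in some reported cycle.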
Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi
2016-01-01
The field of complex network clustering has been gaining considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures, the evolutionary algorithm is carried out within the membrane structures, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, kernel J-means and ratio cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising. PMID:27670156
An Algorithm Combining for Objective Prediction with Subjective Forecast Information
NASA Astrophysics Data System (ADS)
Choi, JunTae; Kim, SooHyun
2016-04-01
As direct or post-processed output from numerical weather prediction (NWP) models has begun to show acceptable performance compared with the predictions of human forecasters, many national weather centers have become interested in automatic forecasting systems based on NWP products alone, without intervention from human forecasters. The Korea Meteorological Administration (KMA) is now developing an automatic forecasting system for dry variables. The forecasts are automatically generated from NWP predictions using a post processing model (MOS). However, MOS cannot always produce acceptable predictions, and sometimes its predictions are rejected by human forecasters. In such cases, a human forecaster should manually modify the prediction consistently at points surrounding their corrections, using some kind of smart tool to incorporate the forecaster's opinion. This study introduces an algorithm to revise MOS predictions by adding a forecaster's subjective forecast information at neighbouring points. A statistical relation between two forecast points - a neighbouring point and a dependent point - was derived for the difference between a MOS prediction and that of a human forecaster. If the MOS prediction at a neighbouring point is updated by a human forecaster, the value at a dependent point is modified using a statistical relationship based on linear regression, with parameters obtained from a one-year dataset of MOS predictions and official forecast data issued by KMA. The best sets of neighbouring points and dependent point are statistically selected. According to verification, the RMSE of temperature predictions produced by the new algorithm was slightly lower than that of the original MOS predictions, and close to the RMSE of subjective forecasts. For wind speed and relative humidity, the new algorithm outperformed human forecasters.
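The statistical relation between a neighbouring point and a dependent point can be sketched as a one-predictor least-squares fit on historical (correction, difference) pairs. The function names and the additive update below are illustrative assumptions, not KMA's implementation:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def propagate_correction(mos_dependent, delta_neighbour, a, b):
    """Shift the dependent point's MOS value by the regression-predicted
    difference implied by the forecaster's correction nearby."""
    return mos_dependent + (a * delta_neighbour + b)

# Toy history of (neighbour correction, dependent-point difference) pairs.
a, b = fit_line([0.5, 1.0, 2.0, -1.0], [0.3, 0.7, 1.3, -0.5])
print(round(propagate_correction(20.0, 1.5, a, b), 2))
```

In the paper's setting, the regression parameters would come from the one-year dataset of MOS predictions and official KMA forecasts, with the best neighbour/dependent pairs selected statistically.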
A low computational complexity algorithm for ECG signal compression.
Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; López-Ferreras, Francisco; Bravo-Santos, Angel; Martínez-Muñoz, Damián
2004-09-01
In this work, a filter bank-based algorithm for electrocardiogram (ECG) signal compression is proposed. The new coder consists of three different stages. In the first one, the subband decomposition stage, we compare the performance of a nearly perfect reconstruction (N-PR) cosine-modulated filter bank with the wavelet packet (WP) technique. Both schemes use the same coding algorithm, thus permitting an effective comparison. The target of the comparison is the quality of the reconstructed signal, which must remain within predetermined accuracy limits. We employ the most widely used quality criterion for the compressed ECG: the percentage root-mean-square difference (PRD). It is complemented by means of the maximum amplitude error (MAX). The tests have been done for the 12 principal cardiac leads, and the amount of compression is evaluated by means of the mean number of bits per sample (MBPS) and the compression ratio (CR). The implementation cost of both the filter bank and the WP technique has also been studied. The results show that the N-PR cosine-modulated filter bank method outperforms the WP technique in both quality and efficiency. PMID:15271283
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.
On the complexity of classical and quantum algorithms for numerical problems in quantum mechanics
NASA Astrophysics Data System (ADS)
Bessen, Arvid J.
Our understanding of complex quantum mechanical processes is limited by our inability to solve the equations that govern them except for simple cases. Numerical simulation of quantum systems appears to be our best option to understand, design and improve quantum systems. It turns out, however, that computational problems in quantum mechanics are notoriously difficult to treat numerically. The computational time that is required often scales exponentially with the size of the problem. One of the most radical approaches for treating quantum problems was proposed by Feynman in 1982 [46]: he suggested that quantum mechanics itself showed a promising way to simulate quantum physics. This idea, the so-called quantum computer, showed its potential convincingly in one important regime with the development of Shor's integer factorization algorithm, which improves exponentially on the best known classical algorithm. In this thesis we explore six different computational problems from quantum mechanics, study their computational complexity and try to find ways to remedy them. In the first problem we investigate the reasons behind the improved performance of Shor's and similar algorithms. We show that the key quantum part in Shor's algorithm, the quantum phase estimation algorithm, achieves its good performance through the use of power queries, and we give lower bounds for all phase estimation algorithms that use power queries that match the known upper bounds. Our research indicates that problems that allow the use of power queries will achieve similar exponential improvements over classical algorithms. We then apply our lower bound technique for power queries to the Sturm-Liouville eigenvalue problem and show matching lower bounds to the upper bounds of Papageorgiou and Wozniakowski [85]. It seems to be very difficult, though, to find nontrivial instances of the Sturm-Liouville problem for which power queries can be simulated efficiently. A quantum computer differs from a
Unwinding the hairball graph: Pruning algorithms for weighted complex networks.
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
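The marginal-likelihood idea, scoring each integer edge weight against a configuration-model-like null, can be sketched with a binomial survival function. The null probability p = ki*kj / (2*T**2) used here is a simplified stand-in for the paper's exact derivation:

```python
from math import comb

def edge_significance(w, ki, kj, T):
    """One-sided p-value of an integer edge weight w between nodes of
    strengths ki, kj in a graph with T unit events: under a simplified
    configuration-model null, each event independently joins i and j
    with probability p = ki*kj / (2*T**2).  (A sketch of the marginal
    likelihood filter idea, not the paper's exact formula.)"""
    p = ki * kj / (2 * T ** 2)
    return sum(comb(T, k) * p ** k * (1 - p) ** (T - k)
               for k in range(w, T + 1))

# A heavier edge between the same pair of nodes is more significant
# (smaller p-value), so it survives pruning at a tighter threshold.
print(edge_significance(5, 10, 10, 100) < edge_significance(1, 10, 10, 100))  # True
```

Pruning then keeps only edges whose p-value falls below a chosen significance level, rather than thresholding on raw weight.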
NASA Astrophysics Data System (ADS)
Sahu, Swagatika; Mohanty, Saumendra; Srivastav, Richa
2013-01-01
Orthogonal frequency division multiplexing (OFDM) is an emerging multi-carrier modulation scheme, which has been adopted for several wireless standards such as IEEE 802.11a and HiperLAN2. A well-known problem of OFDM is its sensitivity to frequency offset between the transmitted and received carrier frequencies. In an OFDM system, carrier frequency offsets (CFOs) between the transmitter and the receiver destroy the orthogonality between carriers and degrade the system performance significantly. The main problem with frequency offset is that it introduces interference among the multiplicity of carriers in the OFDM signal. The conventional algorithms given by P. Moose and Schmidl describe how the carrier frequency offset of an OFDM system can be estimated using training sequences. Simulation results show that the improved carrier frequency offset estimation algorithm, which uses a complex training sequence for frequency offset estimation, performs better than the conventional P. Moose and Schmidl algorithms: it effectively improves the frequency estimation accuracy and provides a wide acquisition range for the carrier frequency offset with low complexity. This paper presents BER comparisons of the conventional and improved algorithms for different real and complex modulation schemes, considering random carrier offsets, and also reports the BER performance of the improved algorithm under different CFOs for different real and complex modulation schemes.
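The repeated-training-symbol idea behind Moose-style CFO estimation can be sketched in a few lines: the offset appears as a constant phase rotation between two identical halves of the training sequence. This is a textbook sketch under ideal-channel assumptions, not the paper's improved algorithm:

```python
import cmath
import math

def moose_cfo(rx, N):
    """Moose-style CFO estimate from two identical training halves of
    length N: the offset shows up as a constant phase rotation between
    r[n] and r[n+N].  Returns the offset in units of 1/(N*Ts)."""
    corr = sum(rx[n + N] * rx[n].conjugate() for n in range(N))
    return cmath.phase(corr) / (2 * math.pi)

# Synthetic check: a repeated symbol rotated by a known offset eps.
N, eps = 64, 0.12
base = [cmath.exp(2j * math.pi * 0.05 * n) for n in range(N)] * 2
rx = [s * cmath.exp(2j * math.pi * eps * n / N) for n, s in enumerate(base)]
print(round(moose_cfo(rx, N), 3))  # ≈ 0.12
```

Because the phase argument wraps, this basic form only resolves offsets below half the subcarrier spacing, which is the acquisition-range limitation the abstract alludes to.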
On Distribution Reduction and Algorithm Implementation in Inconsistent Ordered Information Systems
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program is implemented based on the algorithm. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems. PMID:25258721
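The dominance matrix underlying the reduction algorithm can be illustrated directly; here D[i][j] records whether object i weakly dominates object j on every attribute (a minimal sketch, assuming all attributes are gain-type criteria):

```python
def dominance_matrix(table):
    """D[i][j] = 1 iff object i's value is >= object j's on every
    attribute, i.e. i weakly dominates j (all criteria gain-type)."""
    n = len(table)
    return [[int(all(a >= b for a, b in zip(table[i], table[j])))
             for j in range(n)] for i in range(n)]

# Three objects evaluated on two criteria.
objs = [(3, 2), (2, 2), (1, 3)]
print(dominance_matrix(objs))  # [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
```

Reduct computation then works on such matrices, checking which attribute subsets preserve the dominance (and hence distribution) structure.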
Combining algorithms in automatic detection of QRS complexes in ECG signals.
Meyer, Carsten; Fernández Gavela, José; Harris, Matthew
2006-07-01
QRS complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that allow balancing the contributions of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
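A weighted fusion of two detectors' candidate R-peak lists can be sketched as tolerance-window voting; the weights and threshold below are illustrative placeholders for the paper's data-driven estimates:

```python
def combine_detections(peaks_a, peaks_b, w_a=0.6, w_b=0.4, tol=3, thresh=0.5):
    """Fuse two R-peak candidate lists (sample indices) by weighted
    voting: a candidate is kept if the summed weights of detectors that
    saw a peak within +/-tol samples reach thresh.  Candidates closer
    than tol to an already accepted peak are treated as duplicates."""
    candidates = sorted(set(peaks_a) | set(peaks_b))
    fused = []
    for c in candidates:
        score = 0.0
        if any(abs(c - p) <= tol for p in peaks_a):
            score += w_a
        if any(abs(c - p) <= tol for p in peaks_b):
            score += w_b
        if score >= thresh and not (fused and c - fused[-1] <= tol):
            fused.append(c)
    return fused

# Detector A misses the beat near 700; detector B misses the one at 502.
print(combine_detections([100, 300, 502], [101, 300, 700]))
```

With these weights, a peak seen only by the stronger detector survives, while one seen only by the weaker detector is rejected, mirroring the idea of balancing the two algorithms' contributions.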
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Zaneveld, Jesse R. R.; Thurber, Rebecca L. V.
2014-01-01
Complex symbioses between animal or plant hosts and their associated microbiotas can involve thousands of species and millions of genes. Because of the number of interacting partners, it is often impractical to study all organisms or genes in these host-microbe symbioses individually. Yet new phylogenetic predictive methods can use the wealth of accumulated data on diverse model organisms to make inferences into the properties of less well-studied species and gene families. Predictive functional profiling methods use evolutionary models based on the properties of studied relatives to put bounds on the likely characteristics of an organism or gene that has not yet been studied in detail. These techniques have been applied to predict diverse features of host-associated microbial communities ranging from the enzymatic function of uncharacterized genes to the gene content of uncultured microorganisms. We consider these phylogenetically informed predictive techniques from disparate fields as examples of a general class of algorithms for Hidden State Prediction (HSP), and argue that HSP methods have broad value in predicting organismal traits in a variety of contexts, including the study of complex host-microbe symbioses. PMID:25202302
Wan, Xinwang; Liang, Juan
2013-01-01
This article introduces a biologically inspired localization algorithm for a mobile robot using two microphones. The proposed algorithm has two steps. First, the coarse azimuth angle of the sound source is estimated by a cross-correlation algorithm based on the interaural time difference. Then, the accurate azimuth angle is obtained by a cross-channel algorithm based on head-related impulse responses. The proposed algorithm has lower computational complexity than the cross-channel algorithm. Experimental results illustrate that the localization performance of the proposed algorithm is better than those of the cross-correlation and cross-channel algorithms. PMID:23298016
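The coarse ITD step can be sketched as picking the integer lag that maximizes the cross-correlation of the two channels and inverting the far-field model sin(theta) = ITD*c/d. The microphone spacing, sampling rate, and signal below are assumed illustrative values, not the paper's setup:

```python
import math

def coarse_azimuth(left, right, fs, mic_dist=0.2, c=343.0):
    """Coarse azimuth (degrees) from the interaural time difference:
    find the integer lag maximizing the cross-correlation of the two
    channels, then invert the far-field model sin(theta) = ITD*c/d."""
    max_lag = int(fs * mic_dist / c)     # physically possible lags only

    def xcorr(lag):
        if lag >= 0:
            pairs = zip(left[lag:], right)
        else:
            pairs = zip(left, right[-lag:])
        return sum(l * r for l, r in pairs)

    best = max(range(-max_lag, max_lag + 1), key=xcorr)
    itd = best / fs
    s = max(-1.0, min(1.0, itd * c / mic_dist))
    return math.degrees(math.asin(s))

# Synthetic check: the right channel lags the left by 2 samples at 8 kHz.
left = [0.0] * 100
right = [0.0] * 100
left[50], left[51] = 1.0, 0.5
right[52], right[53] = 1.0, 0.5
print(round(coarse_azimuth(left, right, 8000), 1))  # ≈ -25.4 degrees
```

The cross-channel HRIR refinement in the paper would then search only near this coarse estimate, which is where the complexity saving comes from.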
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
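Hockney's two-parameter communication model referenced above is compact enough to state directly; the function names and numeric values are illustrative:

```python
def hockney_time(n_bytes, latency_s, bandwidth_Bps):
    """Hockney's two-parameter model of a message transfer:
    time = startup latency + size / asymptotic bandwidth."""
    return latency_s + n_bytes / bandwidth_Bps

def half_performance_length(latency_s, bandwidth_Bps):
    """Message size n_1/2 at which the achieved bandwidth is half the
    asymptotic bandwidth, the model's classic figure of merit."""
    return latency_s * bandwidth_Bps

# Illustrative numbers: 1 us startup latency, 1 GB/s link.
t = hockney_time(1_000_000, 1e-6, 1e9)
print(t)  # ≈ 1.001 ms for a 1 MB message
```

Generalizations of this model, as in the paper, let one compare algorithms by how much of their runtime is startup-dominated (many small messages) versus bandwidth-dominated (few large ones).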
An improved label propagation algorithm using average node energy in complex networks
NASA Astrophysics Data System (ADS)
Peng, Hao; Zhao, Dandan; Li, Lin; Lu, Jianfeng; Han, Jianmin; Wu, Songyang
2016-10-01
Detecting overlapping community structure can give significant insight into the structural and functional properties of complex networks. In this Letter, we propose an improved label propagation algorithm (LPA) to uncover overlapping community structure. After mapping nodes to random variables, the algorithm calculates the variance of each node and the proposed average node energy. Nodes whose variances are less than a tunable threshold are regarded as bridge nodes, and changing the given threshold can uncover latent bridge nodes. Simulation results on real-world and artificial networks show that the improved algorithm is efficient in revealing overlapping community structures.
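For context, plain (non-overlapping) label propagation, the baseline the paper extends, can be sketched as follows; the deterministic tie-break is an assumption made here so the toy is reproducible, where standard LPA breaks ties at random:

```python
from collections import Counter

def label_propagation(adj, iters=20):
    """Plain label propagation: every node repeatedly adopts the most
    frequent label among its neighbours.  Ties keep the current label
    if it is among the winners, else take the largest winning label."""
    labels = {v: v for v in adj}
    for _ in range(iters):
        changed = False
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            m = max(counts.values())
            best = {lab for lab, c in counts.items() if c == m}
            if labels[v] not in best:
                labels[v] = max(best)
                changed = True
        if not changed:
            break
    return labels

# Two triangles joined by a single bridge edge (2-3).
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
     3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(G))
```

The paper's extension identifies low-variance bridge nodes (like node 2 or 3 here) and lets them carry multiple labels, which is what yields overlapping communities.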
Novel algorithm by low complexity filter on retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Rostampour, Samad
2011-10-01
This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetic disease, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase consists of applying a new filter that produces a suitable output: it shows vessels in dark color on a white background and makes a good contrast between vessels and background. Its complexity is very low, and extraneous image content is eliminated. The second phase, processing, uses a Bayesian method, a supervised classification approach that uses the mean and variance of pixel intensities to calculate probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a sample external to the DRIVE database exhibiting retinopathy, and perfect results were obtained.
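The supervised Bayesian step, labelling each pixel by per-class Gaussian likelihoods built from the mean and variance of intensities, can be sketched as follows; the class statistics are invented for illustration and assume uniform priors:

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify_pixels(pixels, stats):
    """Two-class Bayesian pixel labelling from per-class intensity mean
    and variance; with uniform priors this is maximum likelihood."""
    return [max(stats, key=lambda c: gaussian_pdf(x, *stats[c]))
            for x in pixels]

# Illustrative statistics: vessels dark, background bright.
stats = {"vessel": (40.0, 15.0 ** 2), "background": (200.0, 25.0 ** 2)}
print(classify_pixels([35, 60, 190, 130], stats))
```

In the actual pipeline, the intensities would come from the filtered (preprocessed) image, where the vessel/background contrast is already enhanced.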
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, etc., which makes it very appealing for real-time implementation.
NASA Astrophysics Data System (ADS)
Zhao, GuoDong; Wu, Yan; Ren, YuanFang; Zhu, Ming
2013-01-01
Community structure is an important feature of many real-world networks and can help us better understand the structure and function of complex networks. In recent years, many algorithms have been proposed to detect community structure in complex networks. In this paper, we detect potential community beams, whose link strengths are greater than those of surrounding links, and propose the minimum coupling distance (MCD) between community beams. Based on MCD, we put forward a heuristic optimization algorithm (EAMCD) for the modularity density function to weld these community beams into community frames, which are seen as the core part of a community. Using the principle of random walk, we then assign the remaining nodes to community frames to form communities. At last, we merge small community frame fragments using a local greedy strategy for the general modularity density function. Real-world and synthetic networks are used to demonstrate the effectiveness of our algorithm in detecting communities in complex networks.
Jiang, Shouyong; Yang, Shengxiang
2016-02-01
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrade the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
An Introduction to Genetic Algorithms and to Their Use in Information Retrieval.
ERIC Educational Resources Information Center
Jones, Gareth; And Others
1994-01-01
Genetic algorithms, a class of nondeterministic algorithms in which the role of chance makes the precise nature of a solution impossible to guarantee, seem to be well suited to combinatorial-optimization problems in information retrieval. Provides an introduction to techniques and characteristics of genetic algorithms and illustrates their…
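The basic machinery of a genetic algorithm (selection, crossover, mutation) can be shown on the OneMax toy problem; an IR application would replace the fitness function with, say, a retrieval-effectiveness score over candidate term weights. All parameter values here are illustrative:

```python
import random

def onemax_ga(n_bits=20, pop_size=30, generations=60, seed=1):
    """Minimal generational GA on OneMax (maximize the number of 1-bits):
    tournament selection, one-point crossover, occasional bit-flip
    mutation.  Illustrates the mechanics only."""
    rng = random.Random(seed)
    fitness = sum                                   # count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                                 # 3-way tournament
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                  # bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = onemax_ga()
print(sum(best))
```

Because chance drives selection, crossover, and mutation, two runs with different seeds may return different solutions, which is exactly the nondeterminism the abstract describes.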
NASA Astrophysics Data System (ADS)
Chen, Lei; Li, Dehua; Yang, Jie
2007-12-01
Constructing a virtual international strategy environment requires many kinds of information, covering the economy, politics, the military, diplomacy, culture, science, etc. It is therefore very important to build a high-efficiency management system for automatic information extraction, classification, recombination, and analysis as the foundation and a component of a military strategy hall. This paper first uses an improved Boost algorithm to classify the initial information obtained, and then uses a strategy intelligence extraction algorithm to extract strategy intelligence from the initial information to help strategists analyze it.
Fisher Information and Complexity Measure of Generalized Morse Potential Model
NASA Astrophysics Data System (ADS)
Onate, C. A.; Idiodi, J. O. A.
2016-09-01
The spreading of the quantum-mechanical probability distribution density of the three-dimensional system is quantitatively determined by means of the local information-theoretic quantities of Shannon information and information energy in both position and momentum spaces. The complexity measure, which is equivalent to the Cramér-Rao uncertainty product, is determined. We have obtained the information content stored, the concentration of the quantum system, and the complexity measure numerically for n = 0, 1, 2 and 3, respectively.
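The discrete analogue of the Shannon information measure used above is a one-liner; the toy distributions below merely illustrate that a more spread-out density carries higher entropy, the qualitative fact behind using it to quantify spreading:

```python
import math

def shannon_entropy(probs):
    """Discrete Shannon entropy S = -sum p_i ln p_i (in nats), the
    discretized analogue of the position-space integral."""
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform = [0.25] * 4                 # maximally spread distribution
peaked = [0.97, 0.01, 0.01, 0.01]    # highly concentrated distribution
print(shannon_entropy(uniform) > shannon_entropy(peaked))  # True
```

For the Morse-potential densities of the paper, the probabilities would come from discretizing |psi(r)|^2 on a grid, in position and momentum space separately.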
Do the Visual Complexity Algorithms Match the Generalization Process in Geographical Displays?
NASA Astrophysics Data System (ADS)
Brychtová, A.; Çöltekin, A.; Pászto, V.
2016-06-01
In this study, we first develop the hypothesis that existing quantitative visual complexity measures will overall reflect the level of cartographic generalization, and we test this hypothesis. Specifically, we first selected common geovisualization types (i.e., cartographic maps, hybrid maps, satellite images and shaded relief maps) and retrieved examples as provided by Google Maps, OpenStreetMap and SchweizMobil by swisstopo. The selected geovisualizations vary in cartographic design choices, scene contents and levels of generalization. Following this, we applied one of Rosenholtz et al.'s (2007) visual clutter algorithms to obtain quantitative visual complexity scores for screenshots of the selected maps. We hypothesized that visual complexity should be constant across generalization levels; however, the algorithm suggested that the complexity of small-scale displays (less detailed) is higher than that of large-scale displays (more detailed). We also observed vast differences in visual complexity among map providers, which we attribute to their varying approaches to the cartographic design and generalization process. Our efforts will contribute towards creating recommendations as to how visual complexity algorithms could be optimized for cartographic products, and eventually be utilized as part of the cartographic design process to assess visual complexity.
NASA Astrophysics Data System (ADS)
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve problems that are considered inefficient in classical image information concealing, and research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, we use the Con-Steg algorithm to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can secure the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on the quantum Fourier transform. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
Quantifying networks complexity from information geometry viewpoint
Felice, Domenico; Mancini, Stefano; Pettini, Marco
2014-04-15
We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Then, in analogy with the microcanonical definition of entropy in statistical mechanics, we introduce an entropic measure of network complexity. We prove that it is invariant under network isomorphism. Finally, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.
Can complexity science inform physician leadership development?
Grady, Colleen Marie
2016-07-01
Purpose The purpose of this paper is to describe research that examined physician leadership development using complexity science principles. Design/methodology/approach Intensive interviewing of 21 participants and document review provided data regarding physician leadership development in health-care organizations using five principles of complexity science (connectivity, interdependence, feedback, exploration-of-the-space-of-possibilities and co-evolution), which were grouped in three areas of inquiry (relationships between agents, patterns of behaviour and enabling functions). Findings Physician leaders are viewed as critical in the transformation of healthcare and in improving patient outcomes, and yet significant challenges exist that limit their development. Leadership in health care continues to be associated with traditional, linear models, which are incongruent with the behaviour of a complex system, such as health care. Physician leadership development remains a low priority for most health-care organizations, although physicians admit to being limited in their capacity to lead. This research was based on five principles of complexity science and used grounded theory methodology to understand how the behaviours of a complex system can provide data regarding leadership development for physicians. The study demonstrated that there is a strong association between physician leadership and patient outcomes and that organizations play a primary role in supporting the development of physician leaders. Findings indicate that a physician's relationship with their patient and their capacity for innovation can be extended as catalytic behaviours in a complex system. The findings also identified limiting factors that impact physicians who choose to lead, such as reimbursement models that do not place value on leadership and medical education that provides minimal opportunity for leadership skill development. Practical Implications This research provides practical
Co-allocation model for complex equipment project risk based on gray information
NASA Astrophysics Data System (ADS)
Zhi-geng, Fang; Jin-yu, Sun
2013-10-01
Since a complex equipment project is a multi-level collaborative development network system in which milestones connect with each other according to the logical relationships between different levels, we can decompose the complex equipment project into several multi-level milestones. This paper designs several connecting nodes of collaborative milestones and establishes a new co-allocation model for complex equipment project risk based on gray information. The comprehensive trial phase of a large aircraft development project is taken as an example to prove the effectiveness and feasibility of the above models and algorithms, which provide new analysis methods and research ideas.
Algorithmic complexity for short binary strings applied to psychology: a primer.
Gauvrit, Nicolas; Zenil, Hector; Delahaye, Jean-Paul; Soler-Toscano, Fernando
2014-09-01
As human randomness production has come to be more closely studied and used to assess executive functions (especially inhibition), many normative measures for assessing the degree to which a sequence is randomlike have been suggested. However, each of these measures focuses on one feature of randomness, leading researchers to have to use multiple measures. Although algorithmic complexity has been suggested as a means for overcoming this inconvenience, it has never been used, because standard Kolmogorov complexity is inapplicable to short strings (e.g., of length l ≤ 50), due to both computational and theoretical limitations. Here, we describe a novel technique (the coding theorem method) based on the calculation of a universal distribution, which yields an objective and universal measure of algorithmic complexity for short strings that approximates Kolmogorov-Chaitin complexity.
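The computational limitation mentioned above is easy to demonstrate: if one uses a general-purpose compressor as a stand-in for Kolmogorov complexity (a common practical approximation, and our assumption here, not the coding theorem method itself), the compressor's fixed overhead swamps any structure in short strings:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed representation, a crude
    upper-bound proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

# Short strings: header/checksum overhead dominates, so the proxy
# cannot separate a periodic string from a random-looking one.
short_regular = b"0101010101"
short_random = b"0110100110"

# Long strings: the proxy behaves as expected and the periodic
# string compresses far below the random one.
rng = random.Random(0)
long_random = bytes(rng.getrandbits(8) for _ in range(4096))
long_regular = b"01" * 2048
```

This is exactly the regime (length l <= 50) where the coding theorem method is needed instead.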
Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm
He, Xiaoqi; Zheng, Zizhao; Hu, Chao
2015-01-01
The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which, one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the antinoise ability of the algorithm are compared with those of the Levenberg–Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher “denoise” capacity, with a larger range of initial guess values. PMID:25914561
Fast algorithm for minutiae matching based on multiple-ridge information
NASA Astrophysics Data System (ADS)
Wang, Guoyou; Hu, Jing
2001-09-01
Autonomous real-time fingerprint verification, i.e., judging whether or not two fingerprints come from the same finger, is an important and difficult problem in AFIS (Automated Fingerprint Identification Systems). In addition to nonlinear deformation, two fingerprints from the same finger may also appear dissimilar due to translation or rotation; all of these factors increase the dissimilarities and can lead to misjudgment, so the correct verification rate depends strongly on the degree of deformation. In this paper, we present a new, fast and simple algorithm for fingerprint matching, derived from Chang et al.'s method, to solve the problem of optimal matching between two fingerprints under nonlinear deformation. The proposed algorithm uses not only the feature points of fingerprints but also information from multiple ridges to reduce the computational complexity of fingerprint verification. Experiments with a number of fingerprint images have shown that this algorithm is more efficient than existing methods owing to the reduced number of search operations.
NASA Astrophysics Data System (ADS)
Debiane, L.; Ivorra, B.; Mohammadi, B.; Nicoud, F.; Poinsot, T.; Ern, A.; Pitsch, H.
2006-02-01
Controlling flame shapes and emissions is a major objective for all combustion engineers. Considering the complexity of reacting flows, novel optimization methods are required: this paper explores the application of control theory for partial differential equations to combustion. Both flame temperature and pollutant levels are optimized in a laminar Bunsen burner computed with complex chemistry using a recursive semi-deterministic global optimization algorithm. In order to keep the computational time low, the optimization procedure is coupled with mesh adaptation and incomplete gradient techniques.
NASA Astrophysics Data System (ADS)
Chatfield, David C.; Reeves, Melissa S.; Truhlar, Donald G.; Duneczky, Csilla; Schwenke, David W.
1992-12-01
Complex dense matrices corresponding to the D + H2 and O + HD reactions were solved using a complex generalized minimal residual (GMRes) algorithm described by Saad and Schultz (1986) and Saad (1990). To provide a test case with a different structure, the H + H2 system was also considered. It is shown that the computational effort for solutions with the GMRes algorithm depends on the dimension of the linear system, the total energy of the scattering problem, and the accuracy criterion. In several cases with dimensions in the range 1110-5632, the GMRes algorithm outperformed the LAPACK direct solver, with speedups for the linear equation solution as large as a factor of 23.
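A minimal sketch of the same idea, solving a complex linear system with a GMRES iterative solver; this uses SciPy's implementation on a synthetic diagonally dominant matrix, not the quantum-scattering matrices of the paper:

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
# Synthetic complex system; a strong diagonal keeps it well
# conditioned, so unpreconditioned GMRES converges quickly.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + n * np.eye(n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

x, info = gmres(A, b)  # info == 0 means the tolerance was met
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

For ill-conditioned physical systems a preconditioner is usually required, which is where iterative solvers earn (or lose) their advantage over direct LAPACK factorizations.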
NASA Technical Reports Server (NTRS)
Chatfield, David C.; Reeves, Melissa S.; Truhlar, Donald G.; Duneczky, Csilla; Schwenke, David W.
1992-01-01
Complex dense matrices corresponding to the D + H2 and O + HD reactions were solved using a complex generalized minimal residual (GMRes) algorithm described by Saad and Schultz (1986) and Saad (1990). To provide a test case with a different structure, the H + H2 system was also considered. It is shown that the computational effort for solutions with the GMRes algorithm depends on the dimension of the linear system, the total energy of the scattering problem, and the accuracy criterion. In several cases with dimensions in the range 1110-5632, the GMRes algorithm outperformed the LAPACK direct solver, with speedups for the linear equation solution as large as a factor of 23.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph theoretic model called ATAMM which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
Complex Dynamics in Information Sharing Networks
NASA Astrophysics Data System (ADS)
Cronin, Bruce
This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six-year period. The efficiency of such an implementation is a key business problem in IT systems of this type. Data from usage logs provide the basis for analysis of the dynamic evolution of social networks around the repository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power-law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case study demonstrating the utility of the broad methodological approach.
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks.
Shaw, F Z; Chen, R F; Tsao, H W; Yen, C T
1999-11-15
This study introduces algorithmic complexity as a measure of characteristics of brain function. The EEG of the rat was recorded with implanted electrodes. The normalized complexity value was relatively independent of data length, and it was simpler and faster to compute than other non-linear indices. The complexity index revealed significant differences among awake, asleep, and anesthetized states. It may be useful for tracking short-term and long-term changes in brain function, such as anesthesia depth, drug effects, or sleep-wakefulness.
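The abstract does not specify which complexity estimator was used; a common concrete choice for EEG work is the Lempel-Ziv (LZ76) phrase count, normalized so the value becomes roughly length-independent. The sketch below makes that assumption (Kaspar-Schuster formulation):

```python
import random
from math import log2

def lz76_complexity(s: str) -> int:
    """Number of distinct phrases in the Lempel-Ziv (1976) parsing,
    computed with the Kaspar-Schuster scanning algorithm."""
    n = len(s)
    i, k, l = 0, 1, 1
    k_max, c = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k_max, k)
            i += 1
            if i == l:          # no earlier match extends: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def normalized_complexity(s: str) -> float:
    """Normalize by n / log2(n) so random binary sequences
    approach 1 regardless of record length."""
    n = len(s)
    return lz76_complexity(s) * log2(n) / n

periodic = "01" * 128
rng = random.Random(1)
rand_bits = "".join(rng.choice("01") for _ in range(256))
```

A regular (e.g., synchronized or anesthetized) signal, once symbolized, yields far fewer phrases than a desynchronized one of the same length.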
A novel low-complexity post-processing algorithm for precise QRS localization.
Fonseca, Pedro; Aarts, Ronald M; Foussier, Jérôme; Long, Xi
2014-01-01
Precise localization of QRS complexes is an essential step in the analysis of small transient changes in instant heart rate and before signal averaging in QRS morphological analysis. Most localization algorithms reported in literature are either not robust to artifacts, depend on the sampling rate of the ECG recordings or are too computationally expensive for real-time applications, especially in low-power embedded devices. This paper proposes a localization algorithm based on the intersection of tangents fitted to the slopes of R waves detected by any QRS detector. Despite having a lower complexity, this algorithm achieves comparable trigger jitter to more complex localization methods without requiring the data to first be upsampled. It also achieves high localization precision regardless of which QRS detector is used as input. It is robust to clipping artifacts and to noise, achieving an average localization error below 2 ms and a trigger jitter below 1 ms on recordings where no additional artifacts were added, and below 8 ms for recordings where the signal was severely degraded. Finally, it increases the accuracy of template-based false positive rejection, allowing nearly all mock false positives added to a set of QRS detections to be removed at the cost of a very small decrease in sensitivity. The localization algorithm proposed is particularly well-suited for implementation in embedded, low-power devices for real-time applications. PMID:26034664
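The tangent-intersection idea can be illustrated on a synthetic triangular R wave; the sample offsets used to pick slope points below are our assumption, not the paper's exact fitting windows:

```python
import numpy as np

def qrs_peak_by_tangents(t, x, r_idx):
    """Refine an R-peak location by intersecting straight lines fitted
    to the upslope and downslope around a coarse detection r_idx."""
    up = slice(r_idx - 4, r_idx - 1)      # samples on the rising edge
    down = slice(r_idx + 2, r_idx + 5)    # samples on the falling edge
    a1, b1 = np.polyfit(t[up], x[up], 1)  # slope, intercept
    a2, b2 = np.polyfit(t[down], x[down], 1)
    return (b2 - b1) / (a1 - a2)          # abscissa of the intersection

# Synthetic triangular R wave whose true apex lies between samples.
fs = 250.0
t = np.arange(0, 1, 1 / fs)
apex = 0.5012                             # deliberately off the grid
x = np.maximum(0.0, 1 - np.abs(t - apex) / 0.05)
coarse = int(np.argmax(x))
refined = qrs_peak_by_tangents(t, x, coarse)
```

The refined estimate recovers the sub-sample apex exactly for this piecewise-linear wave, while the coarse sample-grid estimate is off by up to half a sampling interval; this is the mechanism that avoids upsampling.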
Pitschner, H F; Berkowitsch, A
2001-01-01
Symbolic dynamics as a non-linear method and computation of the normalized algorithmic complexity (C alpha) were applied to basket-catheter mapping of atrial fibrillation (AF) in the right human atrium. The resulting different degrees of organisation of AF were compared with the conventional classification of Wells. The short-time temporal and spatial distribution of C alpha during AF, and the effects of propafenone on this distribution, were investigated in 30 patients. C alpha was calculated for a moving window, and the generated C alpha was analyzed within 10 minutes before and after administration of propafenone. The inter-regional C alpha distribution was statistically analyzed. Inter-regional C alpha differences were found in all patients (p < 0.001). The right atrium could be divided into high- and low-complexity areas according to individual patterns. A significant C alpha increase in the cranio-caudal direction was confirmed inter-individually (p < 0.01). The administration of propafenone enlarged the areas of low complexity.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the previously proposed two-step reconstruction algorithm, which is based on the geometrical information of the ROI (region of interest) and considers the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but the software implementation is also easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm through simulation results obtained with the MATLAB k-Wave toolbox.
On the impact of communication complexity on the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D. B.; Van Rosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared-memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel non-shared-memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
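Hockney's model referred to above characterizes an operation by two parameters, an asymptotic rate r_inf and a half-performance length n_half (the problem size at which half the peak rate is achieved); a minimal sketch:

```python
def hockney_time(n, r_inf, n_half):
    """Time to process n items under Hockney's two-parameter model:
    t(n) = t0 + n / r_inf, rewritten with n_half = t0 * r_inf."""
    return (n + n_half) / r_inf

def efficiency(n, n_half):
    """Fraction of the asymptotic rate achieved for problem size n."""
    return n / (n + n_half)
```

The startup term hockney_time(0, r_inf, n_half) = n_half / r_inf is the fixed latency; algorithms whose messages are short relative to n_half are latency-bound, which is the design pressure the paper analyzes.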
Face detection in complex background based on Adaboost algorithm and YCbCr skin color model
NASA Astrophysics Data System (ADS)
Ge, Wei; Han, Chunling; Quan, Wei
2015-12-01
Face detection is a fundamental and important research theme in pattern recognition and computer vision, and remarkable results have been achieved. Among existing methods, statistics-based methods hold a dominant position. In this paper, the Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin-color model detection and Adaboost is investigated: the skin detection method is used to validate the detection results obtained by the Adaboost algorithm, which overcomes Adaboost's false-detection problem. Experimental results show that nearly all non-face areas are removed and the detection rate is improved.
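A minimal sketch of the YCbCr skin test used for validation; the BT.601 conversion is standard, but the exact Cb/Cr thresholds vary across papers and are an assumption here:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Fixed-threshold skin test in the CbCr plane. The ranges are a
    commonly used choice, not the paper's calibrated values."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Because Y (luma) is discarded, the test is largely insensitive to illumination, which is why it is a cheap second-stage filter for Adaboost candidates.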
An algorithm and a fortran program (chemequil-2) for calculation of complex equilibria.
Tripathi, V S
1986-12-01
A computer program, CHEMEQUIL-2 (CHEMical EQUILibrium), based on interfacing an iterative algorithm with the Newton-Raphson method, for calculating equilibrium compositions in aqueous mixtures of metals and ligands, is described. The program is also capable of simulating acid-base titrations. It has been compared with MINIQUAD, COMPLEX and MINEQL with respect to execution time and memory requirements. As a result of algorithm development and program design, CHEMEQUIL-2 offers considerable savings in both execution time (by 1-2 orders of magnitude) and memory requirements, especially for large problems, compared to these programs. The computational efficiency of CHEMEQUIL-2 makes it well suited for use in hydrogeochemical transport models.
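The Newton-Raphson core of such equilibrium solvers can be illustrated on a one-species toy problem (a sketch only, not CHEMEQUIL-2's multicomponent algorithm):

```python
def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Plain Newton-Raphson root finder."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy speciation problem: dissociation HA <=> H+ + A- with
# Ka = x^2 / (C - x), total concentration C, dissociated amount x.
Ka, C = 1.8e-5, 0.1
f = lambda x: x * x - Ka * (C - x)       # mass-action residual
dfdx = lambda x: 2 * x + Ka
x = newton(f, dfdx, x0=(Ka * C) ** 0.5)  # weak-acid starting estimate
```

Real codes solve a coupled system of such mass-action and mass-balance residuals, with the Jacobian assembled over all components; the convergence behaviour from a good starting estimate is the same.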
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best-performing low-complexity algorithm, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
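A common low-complexity choice in this family is the max-log approximation, which replaces the exact LLR's log-sum over constellation points with two minimum distances; a sketch on Gray-mapped QPSK (the mapping and the sign convention are our assumptions, and the paper's candidate algorithms differ in detail):

```python
def maxlog_llrs(y, constellation, bits_per_symbol, noise_var):
    """Max-log LLR of each bit for a received complex symbol y.
    constellation maps bit tuples to complex points. With this sign
    convention, a positive LLR favours bit value 0."""
    llrs = []
    for b in range(bits_per_symbol):
        d0 = min(abs(y - p) ** 2
                 for bits, p in constellation.items() if bits[b] == 0)
        d1 = min(abs(y - p) ** 2
                 for bits, p in constellation.items() if bits[b] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# Gray-mapped QPSK: one bit decides the real axis, one the imaginary.
qpsk = {(0, 0): 1 + 1j, (0, 1): 1 - 1j,
        (1, 1): -1 - 1j, (1, 0): -1 + 1j}
```

For higher-order QAM the enumeration is what hardware designs simplify further, e.g., by exploiting the rectangular Gray structure to decouple the I and Q axes.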
Lee, S. H.; van der Werf, J. H. J.
2016-01-01
Summary: We have developed an algorithm for genetic analysis of complex traits using genome-wide SNPs in a linear mixed model framework. Compared to current standard REML software based on the mixed model equation, our method is substantially faster. The advantage is largest when there is only a single genetic covariance structure. The method is particularly useful for multivariate analysis, including multi-trait models and random regression models for studying reaction norms. We applied our proposed method to publicly available mice and human data and discuss the advantages and limitations. Availability and implementation: MTG2 is available in https://sites.google.com/site/honglee0707/mtg2. Contact: hong.lee@une.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755623
Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.
Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S
2013-01-01
The use of Geographic Information Systems has increased considerably since the eighties and nineties. As one of their most demanding applications we can mention shortest paths search. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms. This algorithm is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, the main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work, is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path, applying the proposed algorithm, Dijkstra's algorithm and A* algorithm, are compared. This comparison shows that, by applying the approach proposed, it is possible to obtain the optimal path in a similar or even in less time than when using heuristic algorithms.
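For reference, the two baseline algorithms compared above can be sketched as follows; with an admissible heuristic, A* returns the same optimal cost as Dijkstra while typically expanding fewer nodes, which is the optimality property the reduced-graph approach also claims to preserve (the toy graph and heuristic values are ours):

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: dict node -> list of (neighbour, weight).
    Returns the cost of the cheapest src -> dst path."""
    pq = [(0, src)]
    dist = {src: 0}
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def a_star(graph, src, dst, h):
    """A* search; h must be admissible (never overestimate)."""
    pq = [(h[src], 0, src)]
    best = {src: 0}
    while pq:
        _, g, u = heapq.heappop(pq)
        if u == dst:
            return g
        for v, w in graph.get(u, []):
            ng = g + w
            if ng < best.get(v, float("inf")):
                best[v] = ng
                heapq.heappush(pq, (ng + h[v], ng, v))
    return float("inf")

# Toy graph with heuristic values bounded by the true distances.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)],
         "c": [("d", 1)], "d": []}
h = {"a": 3, "b": 2, "c": 1, "d": 0}
```

In a road network the heuristic is typically straight-line distance; the reduced-graph idea instead shrinks the search space before either algorithm runs.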
Marucci, Evandro A.; Neves, Leandro A.; Valêncio, Carlo R.; Pinto, Alex R.; Cansian, Adriano M.; de Souza, Rogeria C. G.; Shiyou, Yang; Machado, José M.
2014-01-01
With the advance of genomic researches, the number of sequences involved in comparative methods has grown immensely. Among them are methods for similarity calculation, which are used by many bioinformatics applications. Due to the huge amount of data, the union of low-complexity methods with the use of parallel computing is becoming desirable. The k-mers counting is a very efficient method with good biological results. In this work, the development of a parallel algorithm for multiple-sequence similarity calculation using the k-mers counting method is proposed. Tests show that the algorithm presents very good scalability and a nearly linear speedup. For 14 nodes, a 12x speedup was obtained. This algorithm can be used in the parallelization of some multiple sequence alignment tools, such as MAFFT and MUSCLE. PMID:25140318
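One simple instance of a k-mer counting similarity is cosine similarity between k-mer count vectors; the abstract does not specify the authors' exact score, so this is an illustrative stand-in:

```python
from collections import Counter
from math import sqrt

def kmer_counts(seq, k=3):
    """Count every overlapping k-mer in the sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_similarity(a, b, k=3):
    """Cosine similarity between the k-mer count vectors of two
    sequences: 1.0 for identical sequences, 0.0 for disjoint k-mers."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    dot = sum(ca[m] * cb[m] for m in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)
```

Because counting is independent per sequence and per pair, the all-pairs similarity matrix parallelizes trivially across nodes, which is what enables the near-linear speedup reported.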
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
NASA Technical Reports Server (NTRS)
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
Infrared image non-rigid registration based on regional information entropy demons algorithm
NASA Astrophysics Data System (ADS)
Lu, Chaoliang; Ma, Lihua; Yu, Ming; Cui, Shumin; Wu, Qingrong
2015-02-01
Infrared imaging fault detection, which is treated as an ideal, non-contact, non-destructive testing method, is applied to circuit board fault detection. Infrared images obtained by a handheld infrared camera with a wide-angle lens contain both rigid and non-rigid deformations. To solve this problem, a new demons algorithm based on regional information entropy was proposed. The new method overcomes the shortcoming of the traditional demons algorithm, namely its sensitivity to intensity. First, the information entropy image was obtained by computing the regional information entropy of the image. Then, the deformation between the two images was calculated in the same way as in the demons algorithm. Experimental results demonstrated that the proposed algorithm has better robustness than the traditional demons algorithm when registering images with inconsistent intensity. Achieving accurate registration between intensity-inconsistent infrared images provides strong support for temperature contrast.
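The regional entropy feature can be sketched directly; the window radius and the pure-Python layout are our choices. The key property is that the entropy map is unchanged by a uniform intensity shift, which is what makes the entropy-driven demons forces robust to intensity inconsistency:

```python
from math import log2

def shannon_entropy(values):
    """Entropy (bits) of the grey-level histogram of a region."""
    n = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return -sum(c / n * log2(c / n) for c in counts.values())

def regional_entropy(img, r=1):
    """Replace each pixel by the entropy of its (2r+1)^2 neighbourhood
    (clipped at the borders), yielding an intensity-invariant image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            region = [img[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = shannon_entropy(region)
    return out
```

Flat regions map to zero entropy, textured regions to high entropy, and adding a constant to every pixel leaves the map unchanged.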
A novel algorithm for simplification of complex gene classifiers in cancer.
Wilson, Raphael A; Teng, Ling; Bachmeyer, Karen M; Bissonnette, Mei Lin Z; Husain, Aliya N; Parham, David M; Triche, Timothy J; Wing, Michele R; Gastier-Foster, Julie M; Barr, Frederic G; Hawkins, Douglas S; Anderson, James R; Skapek, Stephen X; Volchenboum, Samuel L
2013-09-15
The clinical application of complex molecular classifiers as diagnostic or prognostic tools has been limited by the time and cost needed to apply them to patients. Using an existing 50-gene expression signature known to separate two molecular subtypes of the pediatric cancer rhabdomyosarcoma, we show that an exhaustive iterative search algorithm can distill this complex classifier down to two or three features with equal discrimination. We validated the two-gene signatures using three separate and distinct datasets, including one that uses degraded RNA extracted from formalin-fixed, paraffin-embedded material. Finally, to show the generalizability of our algorithm, we applied it to a lung cancer dataset to find minimal gene signatures that can distinguish survival. Our approach can easily be generalized and coupled to existing technical platforms to facilitate the discovery of simplified signatures that are ready for routine clinical use.
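The exhaustive pair search can be illustrated with a minimal sketch (ours, not the authors' implementation; the sign rule and toy data are invented for the example): every feature pair is scored by how well a simple comparison of the two expression values separates the two classes.

```python
from itertools import combinations

def best_pair(expr, labels):
    """Exhaustively score every feature pair (i, j) by how well the
    sign rule 'expr[i] > expr[j]' separates two classes."""
    n_feat = len(expr[0])
    best, best_acc = None, -1.0
    for i, j in combinations(range(n_feat), 2):
        hits = sum((row[i] > row[j]) == lab
                   for row, lab in zip(expr, labels))
        acc = max(hits, len(labels) - hits) / len(labels)  # either polarity
        if acc > best_acc:
            best, best_acc = (i, j), acc
    return best, best_acc

# Toy data: features 0 and 1 separate the classes perfectly.
expr = [[5, 1, 2, 2], [6, 0, 3, 3], [1, 5, 2, 2], [0, 6, 3, 3]]
labels = [True, True, False, False]
```

The search is quadratic in the number of genes, which is why distilling a 50-gene signature down to a two-gene comparison remains computationally trivial.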
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
Freeberg, Todd M
2006-07-01
One hypothesis to explain variation in vocal communication in animal species is that the complexity of the social group influences the group's vocal complexity. This social-complexity hypothesis for communication is also central to recent arguments regarding the origins of human language, but experimental tests of the hypothesis are lacking. This study investigated whether group size, a fundamental component of social complexity, influences the complexity of a call functioning in the social organization of Carolina chickadees, Poecile carolinensis. In unmanipulated field settings, calls of individuals in larger groups had greater complexity (more information) than calls of individuals in smaller groups. In aviary settings manipulating group size, individuals in larger groups used calls with greater complexity than individuals in smaller groups. These results indicate that social complexity can influence communicative complexity in this species. PMID:16866738
A Survey of Stemming Algorithms in Information Retrieval
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angélica; Imbert, Ricardo; Ramírez, Jaime
2014-01-01
Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…
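As a rough illustration of the suffix-stripping family of stemmers such surveys cover, here is a toy heuristic (not any specific published algorithm; the suffix list and minimum stem length are arbitrary):

```python
def stem(word, suffixes=("ing", "ed", "ly", "es", "s")):
    """Strip the longest matching suffix, keeping a stem of >= 3 letters.
    A crude sketch of the idea behind Porter-style stemmers."""
    for suf in sorted(suffixes, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word
```

Mapping "retrieving", "retrieves" and "retrieved" to a common stem is what lets an index conflate morphological variants of a query term.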
ERIC Educational Resources Information Center
Chen, Hsinchun
1995-01-01
Presents an overview of artificial-intelligence-based inductive learning techniques and their use in information science research. Three methods are discussed: the connectionist Hopfield network; the symbolic ID3/ID5R; evolution-based genetic algorithms. The knowledge representations and algorithms of these methods are examined in the context of…
Information geometric algorithm for estimating switching probabilities in space-varying HMM.
Nascimento, Jacinto C; Barão, Miguel; Marques, Jorge S; Lemos, João M
2014-12-01
This paper proposes an iterative natural gradient algorithm to perform the optimization of switching probabilities in a space-varying hidden Markov model, in the context of human activity recognition in long-range surveillance. The proposed method is a version of the gradient method, developed under an information geometric viewpoint, where the usual Euclidean metric is replaced by a Riemannian metric on the space of transition probabilities. It is shown that the change in metric provides advantages over more traditional approaches, namely: 1) it turns the original constrained optimization into an unconstrained optimization problem; 2) the optimization behaves asymptotically as a Newton method and yields faster convergence than other methods for the same computational complexity; and 3) the natural gradient vector is an actual contravariant vector on the space of probability distributions for which an interpretation as the steepest descent direction is formally correct. Experiments on synthetic and real-world problems, focused on human activity recognition in long-range surveillance settings, show that the proposed methodology compares favorably with the state-of-the-art algorithms developed for the same purpose.
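The "constrained becomes unconstrained" property can be illustrated with a toy sketch (ours, not the paper's algorithm or application). On the probability simplex, the discrete-time natural-gradient step under the Fisher-Rao metric takes a multiplicative, exponentiated-gradient form; renormalisation keeps the iterate a valid probability vector, so no projection is ever needed. The example minimises KL(p||q) for an arbitrary target q:

```python
import math

def natural_gradient_step(p, grad, eta=0.5):
    """Exponentiated-gradient update: the discrete-time natural-gradient
    step under the Fisher-Rao metric on the probability simplex. The
    renormalisation keeps p a valid distribution, so the constrained
    problem needs no projection."""
    w = [pi * math.exp(-eta * g) for pi, g in zip(p, grad)]
    z = sum(w)
    return [wi / z for wi in w]

# Minimise KL(p || q); the gradient w.r.t. p_i is log(p_i / q_i) + 1.
q = [0.7, 0.2, 0.1]
p = [1.0 / 3.0] * 3
for _ in range(200):
    grad = [math.log(pi / qi) + 1.0 for pi, qi in zip(p, q)]
    p = natural_gradient_step(p, grad)
```

A Euclidean gradient step on the same problem would leave the simplex and require a projection at every iteration; the multiplicative form stays on it by construction.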
Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.
Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector
2016-03-01
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology. PMID:25761393
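The coding theorem method behind ACSS can be illustrated schematically (the frequency values below are made up for the example; the real tables in the R package come from exhaustive enumeration of small Turing machines):

```python
import math

# Illustrative output-frequency table D(s): the fraction of halting short
# programs (on some fixed reference machine) producing string s. The real
# ACSS tables come from running huge sets of small Turing machines; these
# three numbers are invented for the example.
D = {"0000": 0.025, "0101": 0.010, "0110": 0.008}

def acss_estimate(s, freq=D):
    """Coding theorem method: K(s) is approximated by -log2(m(s)), with
    the algorithmic probability m(s) read off the frequency table."""
    return -math.log2(freq[s])
```

Strings produced by more programs (higher frequency) get lower complexity estimates, which is exactly the coding-theorem relation the method exploits for short strings where compression-based estimators fail.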
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The project team made significant progress on modeling and algorithmic approaches to the hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions: to small volumes and to the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed, leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids were extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
Measurement of Information-Based Complexity in Listening.
ERIC Educational Resources Information Center
Bishop, Walton B.
When people say that what they hear is "over their heads," they are describing a severe information-based complexity (I-BC) problem. They cannot understand what is said because some of the information needed is missing, contaminated, and/or costly to obtain. Students often face these I-BC problems, and teachers often exacerbate them. Yet listeners…
Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit
Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph
2015-09-01
The RISMC approach is developing an advanced set of methodologies and algorithms to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes coupled with stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. First, the space of possible solutions (a.k.a. the issue space, or response surface) can be sampled only very sparsely, which precludes fully analyzing the impact of uncertainties on the system dynamics. Second, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how adaptive sampling can be performed using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.
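A stripped-down sketch of the adaptive-sampling idea (ours, not RAVEN code; the one-dimensional "simulator", the 0.2 neighbourhood and the 3x boost are invented stand-ins): candidates far from existing samples are preferred, with extra weight where nearby samples disagree, i.e. near the risk-significant boundary.

```python
import random

def simulator(x):
    """Stand-in for an expensive system code: 'failure' beyond a threshold."""
    return 1.0 if x > 0.7 else 0.0

def adaptive_sample(n_init=4, n_adapt=8, n_cand=200, seed=1):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_init)]
    ys = [simulator(x) for x in xs]
    for _ in range(n_adapt):
        def score(c):
            # distance to nearest existing sample (exploration) ...
            d = min(abs(c - x) for x in xs)
            # ... boosted where nearby outcomes disagree (risk-significant)
            near = {y for x, y in zip(xs, ys) if abs(c - x) < 0.2}
            return d * (3.0 if len(near) > 1 else 1.0)
        best = max((rng.random() for _ in range(n_cand)), key=score)
        xs.append(best)
        ys.append(simulator(best))
    return xs, ys

xs, ys = adaptive_sample()
```

Replacing the distance heuristic with a proper surrogate model (e.g. a Gaussian process) gives the "smart" samplers the report describes; the structure of the loop is the same.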
A novel approach to characterize information radiation in complex networks
NASA Astrophysics Data System (ADS)
Wang, Xiaoyang; Wang, Ying; Zhu, Lin; Li, Chao
2016-06-01
The traditional research on information dissemination is mostly based on virus spreading models in which information spreads with some probability; this does not match reality well, because the information we receive is always more or less than what was sent. In order to quantitatively describe variations in the amount of information during the spreading process, this article proposes a safety information radiation model on the basis of communication theory, combined with relevant theories of complex networks. This model comprehensively considers the various influence factors when safety information radiates through the network, and introduces several concepts from the communication theory perspective, such as the radiation gain function, receiving gain function, information retaining capacity and information second reception capacity, to describe the safety information radiation process between nodes and dynamically investigate the states of network nodes. On a micro level, this article analyzes the influence of various initial conditions and parameters on safety information radiation through simulations of the new model. The simulations reveal that this novel approach can reflect the variation of safety information quantity at each node in the complex network, and that the scale-free network has better "radiation explosive power", while the small-world network has better "radiation staying power". The results also show that it is efficient to improve the overall performance of network security by selecting nodes with high degrees as the information source, refining and simplifying the information, increasing the information second reception capacity and decreasing the noise. In summary, this article lays the foundation for further research on the interactions of information and energy between internal components within complex systems.
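A toy realisation of the radiation idea (our sketch; the gain and noise parameters are invented, not the model's calibrated values) shows how the amount received differs from the amount sent:

```python
def radiate(adj, info, alpha=0.6, beta=0.8, noise=0.05, steps=3):
    """Each step, every node radiates a gain-scaled share of its
    information to its neighbours; receivers apply their own gain, the
    channel subtracts noise, and a retaining capacity of 1.0 caps totals.
    Received quantities therefore differ from what was sent."""
    for _ in range(steps):
        nxt = dict(info)
        for u, nbrs in adj.items():
            for v in nbrs:
                gain = max(0.0, alpha * info[u] * beta - noise)
                nxt[v] = min(1.0, nxt[v] + gain)
        info = nxt
    return info

# Source "s" radiates to "a" and "b"; "a" also relays to "b".
adj = {"s": ["a", "b"], "a": ["b"], "b": []}
out = radiate(adj, {"s": 1.0, "a": 0.0, "b": 0.0})
```

Unlike a probabilistic infection model, the state here is a continuous information quantity, so attenuation, amplification and saturation can all be represented.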
NASA Astrophysics Data System (ADS)
Yan, Menglong; Blaschke, Thomas; Tang, Hongzhao; Xiao, Chenchao; Sun, Xian; Zhang, Daobing; Fu, Kun
2016-03-01
Airborne laser scanning (ALS) is a technique used to obtain Digital Surface Models (DSM) and Digital Terrain Models (DTM) efficiently, and filtering is the key procedure used to derive a DTM from point clouds. Generating seed points is an initial step for most filtering algorithms, whereas existing algorithms usually define a regular window size to generate seed points. This may lead to an inadequate density of seed points, and further introduce type I errors, especially in steep terrain and forested areas. In this study, we propose the use of object-based analysis to derive surface complexity information from ALS datasets, which can then be used to improve seed point generation. We assume that an area is complex if it is composed of many small objects, with no buildings within the area. Using these assumptions, we propose and implement a new segmentation algorithm based on a grid index, which we call the Edge and Slope Restricted Region Growing (ESRGG) algorithm. Surface complexity information is obtained by statistical analysis of the number of objects derived by segmentation in each area. Then, for complex areas, a smaller window size is defined to generate seed points. Experimental results show that the proposed algorithm greatly improves the filtering results in complex areas, especially in steep terrain and forested areas.
NASA Technical Reports Server (NTRS)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four-Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient (in terms of wall-clock time) and scalable parallel implementations of the algorithms.
An ant colony based algorithm for overlapping community detection in complex networks
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di
2015-06-01
Community detection is of great importance for understanding the structure and function of networks. Overlap is a significant feature of networks, and overlapping community detection has attracted increasing attention. Many algorithms have been presented to detect overlapping communities. In this paper, we present an ant colony based overlapping community detection algorithm which mainly includes ants' location initialization, ants' movement and post-processing phases. An ants' location initialization strategy is designed to identify the initial location of ants and initialize the label list stored in each node. During the ants' movement phase, all ants move according to the transition probability matrix, and a new heuristic information computation approach is defined to measure the similarity between two nodes. Every node keeps a label list through the cooperation made by ants until a termination criterion is reached. A post-processing phase is executed on the label list to obtain the final overlapping community structure naturally. We illustrate the capability of our algorithm by making experiments on both synthetic networks and real-world networks. The results demonstrate that our algorithm achieves better performance in finding overlapping communities and overlapping nodes in synthetic and real-world datasets compared with state-of-the-art algorithms.
Information Center Complex publications and presentations, 1971-1980
Gill, A.B.; Hawthorne, S.W.
1981-08-01
This indexed bibliography lists publications and presentations of the Information Center Complex, Information Division, Oak Ridge National Laboratory, from 1971 through 1980. The 659 entries cover such topics as toxicology, air and water pollution, management and transportation of hazardous wastes, energy resources and conservation, and information science. Publications range in length from 1 page to 3502 pages and include topical reports, books, journal articles, fact sheets, and newsletters. Author, title, and group indexes are provided. Annual updates are planned.
Holledge gauge failure testing using concurrent information processing algorithm
Weeks, G.E.; Daniel, W.E.; Edwards, R.E.; Jannarone, R.J.; Joshi, S.N.; Palakodety, S.S.; Qian, D.
1996-04-11
For several decades, computerized information processing systems and human information processing models have developed with a good deal of mutual influence. Any comprehensive psychology text in this decade uses terms that originated in the computer industry, such as "cache" and "memory", to describe human information processing. Likewise, many engineers today are using "artificial intelligence" and "artificial neural network" computing tools that originated as models of human thought to solve industrial problems. This paper concerns a recently developed human information processing model, called "concurrent information processing" (CIP), and a related set of computing tools for solving industrial problems. The problem of focus is adaptive gauge monitoring; the application is pneumatic pressure repeaters (Holledge gauges) used to measure liquid level and density in the Defense Waste Processing Facility and the Integrated DWPF Melter System.
Using multiple perspectives to suppress information and complexity
Kelsey, R.L. |; Webster, R.B.; Hartley, R.T.
1998-09-01
Dissemination of battlespace information involves getting information to particular warfighters that is both useful and in a form that facilitates the tasks of those particular warfighters. There are two issues which motivate this problem of dissemination. The first issue deals with disseminating pertinent information to a particular warfighter. This can be thought of as information suppression. The second issue deals with facilitating the use of the information by tailoring the computer interface to the specific tasks of an individual warfighter. This can be thought of as interface complexity suppression. This paper presents a framework for suppressing information using an object-based knowledge representation methodology. This methodology has the ability to represent knowledge and information in multiple perspectives. Information can be suppressed by creating a perspective specific to an individual warfighter. In this way, only the information pertinent and useful to a warfighter is made available to that warfighter. Information is not removed, lost, or changed, but spread among multiple perspectives. Interface complexity is managed in a similar manner. Rather than have one generalized computer interface to access all information, the computer interface can be divided into interface elements. Interface elements can then be selected and arranged into a perspective-specific interface. This is done in a manner to facilitate completion of tasks contained in that perspective. A basic battlespace domain containing ground and air elements and associated warfighters is used to exercise the methodology.
NASA Astrophysics Data System (ADS)
Khorasanizade, Sh.; Sousa, J. M. M.
2016-03-01
A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). Boundaries are formed in this algorithm with chains of lines obtained from the decomposition of two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed at steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, has been carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithms, namely in cases involving complex geometries and moving bodies.
Accessible information for people with complex communication needs.
Owens, Janet S
2006-09-01
Information can be empowering if it is accessible. While a number of known information access barriers have been reported for the broader group of people with disabilities, specific information issues for people with complex communication needs have not been previously reported. In this consumer-focused study, the accessibility of information design and dissemination practices were discussed by 17 people with complex communication needs; by eight parents, advocates, therapists, and agency representatives in focus groups; and by seven individuals in individual interviews. Participants explored issues and made recommendations for content, including language, visual and audio supports; print accessibility; physical access; and human support for information access. Consumer-generated accessibility guidelines were an outcome of this study.
NASA Astrophysics Data System (ADS)
Schwenk, Kurt; Huber, Felix
2015-10-01
Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 × 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
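The underlying two-pass scheme can be sketched in plain sequential code (the paper's contribution is the pipelined FPGA realisation, not this software version): provisional labels plus a union-find equivalence table in the first pass, and resolution into a dense label mask in the second.

```python
def ccl_two_pass(img):
    """Classic two-pass, 4-connectivity CCL producing a label mask."""
    h, w = len(img), len(img[0])
    parent = []  # union-find over provisional labels (0-based)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    lab = [[0] * w for _ in range(h)]
    # Pass 1: assign provisional labels, record equivalences on merges.
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = lab[y - 1][x] if y else 0
            left = lab[y][x - 1] if x else 0
            if up and left:
                a, b = find(up - 1), find(left - 1)
                parent[max(a, b)] = min(a, b)
                lab[y][x] = min(a, b) + 1
            elif up or left:
                lab[y][x] = up or left
            else:
                parent.append(len(parent))  # fresh label
                lab[y][x] = len(parent)
    # Pass 2: resolve equivalences into dense final labels.
    remap = {}
    for y in range(h):
        for x in range(w):
            if lab[y][x]:
                r = find(lab[y][x] - 1)
                lab[y][x] = remap.setdefault(r, len(remap) + 1)
    return lab
```

The merge step is the part that resists parallelisation: a U-shaped object only reveals that its two arms are one component on the closing row, which is why the hardware design needs its stop-and-go pipeline.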
An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network
Wang, Zhixiao; Zhao, Ya; Chen, Zhaotong; Niu, Qiang
2014-01-01
Topology potential theory is a new community detection theory on complex network, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore node difference and assume that all nodes have the same mass. This hypothesis leads to inaccuracy of topology potential calculation and then decreases the precision of community detection. Inspired by the idea of PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method can effectively reflect its importance and influence in complex network. The more important the node is, the bigger its mass is. Simulation experiment results showed that, after taking node mass into consideration, the topology potential of node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise. PMID:24600319
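The mass-then-potential pipeline can be sketched as follows (our reconstruction under simplifying assumptions: plain PageRank for mass, hop-count distances, and an arbitrary sigma; the paper's exact formulas may differ):

```python
import math
from collections import deque

def pagerank(adj, d=0.85, iters=50):
    """Plain power-iteration PageRank; the score serves as node mass."""
    n = len(adj)
    pr = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1 - d) / n for v in adj}
        for u, nbrs in adj.items():
            if nbrs:
                share = d * pr[u] / len(nbrs)
                for v in nbrs:
                    nxt[v] += share
        pr = nxt
    return pr

def topology_potential(adj, mass, sigma=1.0):
    """phi(v) = sum_u mass(u) * exp(-(dist(u, v) / sigma)^2),
    with dist taken as BFS hop counts."""
    phi = {}
    for v in adj:
        dist, q = {v: 0}, deque([v])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        phi[v] = sum(mass[u] * math.exp(-(h / sigma) ** 2)
                     for u, h in dist.items())
    return phi

# Star graph: the hub is the most important node, so it gets the most mass.
star = {"h": ["a", "b", "c"], "a": ["h"], "b": ["h"], "c": ["h"]}
mass = pagerank(star)
phi = topology_potential(star, mass)
```

With uniform masses the hub and leaves would be harder to distinguish; importance-weighted masses sharpen the potential peaks from which communities are grown.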
Dynamics of information diffusion and its applications on complex networks
NASA Astrophysics Data System (ADS)
Zhang, Zi-Ke; Liu, Chuang; Zhan, Xiu-Xiu; Lu, Xin; Zhang, Chu-Xu; Zhang, Yi-Cheng
2016-09-01
The ongoing rapid expansion of the World Wide Web (WWW) greatly increases the effective transmission of information from heterogeneous individuals to various systems. Extensive research on information diffusion has been conducted by a broad range of communities including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and empirical studies, unification and comparison of different theories and approaches are lacking, which impedes further advances. In this article, we review recent developments in information diffusion and discuss the major challenges. We compare and evaluate available models and algorithms to respectively investigate their physical roles and optimization designs. Potential impacts and future directions are discussed. We emphasize that information diffusion has great scientific depth and combines diverse research fields, which makes it interesting for physicists as well as interdisciplinary researchers.
NASA Astrophysics Data System (ADS)
Jo, Sunhwan; Jiang, Wei
2015-12-01
Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas but achieves higher sampling efficiency than the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 for quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is written in the NAMD Tcl script interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 is within a communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for a free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity
Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.
2013-01-01
Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of
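One of the evaluated estimators, the rescaled range (R/S) method, can be sketched in simplified form (our sketch; the window sizes and dyadic scheme are arbitrary choices, and real fMRI use would need the paper's preprocessing):

```python
import math
import random

def hurst_rs(x, min_chunk=8):
    """Rescaled-range Hurst estimate: slope of log(R/S) versus log(n)
    over dyadic window sizes n."""
    pts = []
    n = min_chunk
    while n <= len(x) // 2:
        rs = []
        for s0 in range(0, len(x) - n + 1, n):
            seg = x[s0:s0 + n]
            m = sum(seg) / n
            dev = [v - m for v in seg]
            cum, c = [], 0.0
            for dv in dev:           # cumulative deviation profile
                c += dv
                cum.append(c)
            r = max(cum) - min(cum)  # range of the profile
            sd = math.sqrt(sum(dv * dv for dv in dev) / n)
            if sd > 0:
                rs.append(r / sd)
        if rs:
            pts.append((math.log(n), math.log(sum(rs) / len(rs))))
        n *= 2
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((a - mx) * (b - my) for a, b in pts)
    den = sum((a - mx) ** 2 for a, _ in pts)
    return num / den  # regression slope = Hurst exponent estimate

trend = [float(i) for i in range(1024)]
noise = [random.Random(0).random() for _ in range(1024)]
noise = [random.Random(0).random() for _ in range(1024)]
```

A persistent (trending) series yields an exponent near 1, uncorrelated noise near 0.5; the paper's point is that on short, noisy fMRI series the choice among such estimators materially changes the result.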
Research on non rigid registration algorithm of DCE-MRI based on mutual information and optical flow
NASA Astrophysics Data System (ADS)
Yu, Shihua; Wang, Rui; Wang, Kaiyu; Xi, Mengmeng; Zheng, Jiashuo; Liu, Hui
2015-07-01
Image matching plays a very important role in medical imaging, and two registration methods, one based on mutual information and one based on optical flow, are particularly effective. Experimental results show that each has prominent advantages: the mutual-information method handles overall displacement well, while the optical-flow method is very sensitive to small deformations. In the breast DCE-MRI images studied in this paper, there is not only overall deformation caused by patient motion but also small nonrigid deformation caused by respiration. In view of this situation, a single registration algorithm cannot meet the needs of such complex cases. After a comprehensive analysis of the advantages and disadvantages of the two methods, this paper proposes a registration algorithm combining mutual information with the optical flow field. It applies the subtraction image of the reference image and the floating image as the main criterion for evaluating the registration effect, with the mutual information between image sequence values as an auxiliary criterion. In tests, the algorithm obtained better accuracy and reliability on breast DCE-MRI image sequences.
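The mutual-information criterion itself (not the authors' registration pipeline) can be sketched with a minimal histogram-based estimator for two grey-level images:

```python
import math

def mutual_information(img_a, img_b, bins=8):
    """Histogram-based mutual information (bits) between two equally sized
    grey-level images, given as flat lists of values in [0, 256)."""
    n = len(img_a)
    joint, pa, pb = {}, {}, {}
    for va, vb in zip(img_a, img_b):
        ia, ib = int(va) * bins // 256, int(vb) * bins // 256
        joint[(ia, ib)] = joint.get((ia, ib), 0) + 1
        pa[ia] = pa.get(ia, 0) + 1
        pb[ib] = pb.get(ib, 0) + 1
    mi = 0.0
    for (ia, ib), c in joint.items():
        # c/n is the joint probability; c*n/(pa*pb) is its ratio to the
        # product of the marginals
        mi += c / n * math.log2(c * n / (pa[ia] * pb[ib]))
    return mi

img = [(i * 7) % 256 for i in range(4096)]     # synthetic reference image
print(mutual_information(img, img))            # → 3.0 (identical images: MI = bin entropy)
print(mutual_information(img, [0] * 4096))     # → 0.0 (constant image shares no information)
```

In intensity-based registration, the transform that maximizes this quantity is taken as the best alignment.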
Kim, Jinkwon; Shin, Hangsik
2016-01-01
The purpose of this research is to develop an intuitive and robust real-time QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on dual criteria: the amplitude and the duration of the QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation, and thresholding, without complex or computationally expensive operations such as a wavelet transform. QRS detection performance was evaluated using both the MIT-BIH arrhythmia database and the AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictive value (PPV) were 99.85% and 99.86%, respectively; by database, SE and PPV were 99.90% and 99.91% for MIT-BIH and 99.84% and 99.84% for AHA. The result of a noisy-environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for additional de-noising or back-searching. PMID:26943949
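The dual amplitude/duration idea can be pictured with a toy detector on a synthetic signal (a deliberate simplification, not the published algorithm; the spike shape and thresholds below are illustrative):

```python
def detect_qrs(sig, fs, amp_thresh, min_dur=0.06, max_dur=0.12):
    """Toy dual-criteria QRS detector: a beat is a run of samples whose
    slope magnitude exceeds amp_thresh (amplitude criterion), kept only if
    the run's duration lies in the physiological QRS width window
    (duration criterion)."""
    diff = [abs(sig[i + 1] - sig[i]) for i in range(len(sig) - 1)]
    beats, start = [], None
    for i, d in enumerate(diff + [0.0]):        # sentinel closes a final run
        if d > amp_thresh:
            if start is None:
                start = i
        elif start is not None:
            if min_dur <= (i - start) / fs <= max_dur:
                beats.append((start + i) // 2)  # index of the run's centre
            start = None
    return beats

# synthetic ECG at 250 Hz: 80 ms triangular QRS-like spikes, one per second
fs = 250
sig = [0.0] * (fs * 5)
for beat in range(5):
    centre = beat * fs + 125
    for k in range(-10, 11):                    # 21 samples ≈ 84 ms wide
        sig[centre + k] = 1.0 - abs(k) / 10.0
print(len(detect_qrs(sig, fs, amp_thresh=0.05)))  # → 5
```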
Development and evaluation of a predictive algorithm for telerobotic task complexity
NASA Technical Reports Server (NTRS)
Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.
1993-01-01
There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
Manikandan, P; Ramyachitra, D; Banupriya, D
2016-04-15
Proteins carry out their functional activity by interacting with other proteins to form protein complexes, which play an important role in cellular organization and function. To understand higher-order protein organization, detecting overlapping complexes is an important step toward unveiling the functional and evolutionary mechanisms behind biological networks. Most clustering algorithms consider neither weighted nor overlapping complexes. In this research, a Prorank-based Fuzzy algorithm is proposed to find overlapping protein complexes. The Fuzzy detection algorithm is incorporated into the Prorank algorithm after the ranking step to find the overlapping community. The proposed algorithm executes iteratively to compute the probability of robust clusters. The proposed and existing algorithms were tested on different datasets: protein-protein interaction (PPI-D1, PPI-D2, Collins, DIP, Krogan Core and Krogan-Extended), gene expression (GSE7645, GSE22269, GSE26923), pathway (Meiosis, MAPK, Cell Cycle), and phenotype (Yeast Heterogeneous and Yeast Homogeneous) datasets. The experimental results show that the proposed algorithm predicts protein complexes with better accuracy than other state-of-the-art algorithms. PMID:26809099
Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen
2012-10-01
This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree built on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm, which consists of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithms were validated by a series of experiments. The accuracy of the walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively, and the accuracy of the walking distance estimation was 96.42% over the walking distances tested.
Deconvolution of complex spectra into components by the bee swarm algorithm
NASA Astrophysics Data System (ADS)
Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh
2016-05-01
The bee swarm algorithm is adapted to the problem of deconvolving complex spectral contours into components. A correspondence is drawn between biological concepts relating to the behaviour of bees in a colony and mathematical concepts relating to the quality of the obtained solutions (mean square error, random solutions in each iteration). Model experiments, carried out on a signal representing a sum of three Lorentzian contours of varying intensity and half-width, confirm the efficiency of the proposed approach.
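One way to picture the approach (a loose sketch, not the authors' algorithm): candidate parameter sets are bee "sites", scout bees sample at random, employed bees refine the best site in a shrinking neighbourhood, and the mean square error plays the role of nectar quality. All parameter choices below are illustrative, and a single Lorentzian target is fitted for brevity:

```python
import random

def lorentz(x, a, x0, w):
    """Lorentzian contour with amplitude a, centre x0 and half-width w."""
    return a * w * w / ((x - x0) ** 2 + w * w)

def mse(params, xs, ys):
    """Mean square error between the summed contours and the data."""
    n_peaks = len(params) // 3
    err = 0.0
    for x, y in zip(xs, ys):
        model = sum(lorentz(x, *params[3 * i:3 * i + 3]) for i in range(n_peaks))
        err += (model - y) ** 2
    return err / len(xs)

def bee_swarm_fit(xs, ys, n_peaks, iters=120, scouts=24, seed=1):
    """Toy bee-swarm search: random scouts plus neighbourhood refinement."""
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)

    def rand_sol():
        sol = []
        for _ in range(n_peaks):
            sol += [rng.uniform(0, 2),        # amplitude
                    rng.uniform(lo, hi),      # centre
                    rng.uniform(0.1, 5.0)]    # half-width
        return sol

    best = min((rand_sol() for _ in range(scouts)), key=lambda p: mse(p, xs, ys))
    radius = 1.0
    for _ in range(iters):
        # employed bees search near the best site; fresh scouts keep exploring
        candidates = [[g + rng.uniform(-radius, radius) for g in best]
                      for _ in range(scouts)]
        candidates += [rand_sol() for _ in range(scouts // 3)]
        trial = min(candidates, key=lambda p: mse(p, xs, ys))
        if mse(trial, xs, ys) < mse(best, xs, ys):
            best = trial
        radius *= 0.97                        # shrink the search patch
    return best

xs = [0.1 * i for i in range(101)]            # wavenumber axis, 0 .. 10
ys = [lorentz(x, 1.0, 3.0, 1.0) for x in xs]  # single-contour target signal
fitted = bee_swarm_fit(xs, ys, n_peaks=1)
```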
A Lip Extraction Algorithm by Using Color Information Considering Obscurity
NASA Astrophysics Data System (ADS)
Shirasawa, Yoichi; Nishida, Makoto
This paper proposes a method for extracting lip shape and location from sequential facial images using color information. The proposed method needs no prior information about position or shape, and requires no special conditions such as lipstick or controlled lighting. Psychometric quantities, the metric hue angle, the metric hue difference and the rectangular coordinates defined in the CIE 1976 L*a*b* color space, are used for the extraction. The method employs fuzzy reasoning to handle obscurity in the image data, such as shade on the face. The experimental results indicate the effectiveness of the proposed method: the lip position was estimated in 100 percent of the facial image data, and the lip shape was extracted in about 94 percent.
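The psychometric quantities mentioned are standard CIE 1976 L*a*b* constructs; a compact sketch of the conversion and the metric hue angle (D65 white point assumed, standard sRGB matrices; not the authors' code) might look like this:

```python
import math

def srgb_to_lab(r, g, b):
    """8-bit sRGB to CIE 1976 L*a*b*, D65 white point."""
    def lin(u):                       # undo the sRGB gamma
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear sRGB -> CIE XYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):                         # CIE Lab companding function
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def metric_hue_angle(a, b):
    """CIE metric hue angle h_ab in degrees, in [0, 360)."""
    return math.degrees(math.atan2(b, a)) % 360.0

L, a, b = srgb_to_lab(255, 0, 0)      # saturated red
print(round(metric_hue_angle(a, b), 1))
```

Lip pixels cluster in a characteristic region of this hue angle, which is what makes the quantity useful as a fuzzy membership input.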
Representing Uncertain Geographical Information with Algorithmic Map Caricatures
NASA Astrophysics Data System (ADS)
Brunsdon, Chris
2016-04-01
A great deal of geographical information - including the results of data analysis - is imprecise in some way. For example, the results of geostatistical interpolation should consist not only of point estimates of the value of some quantity at points in space, but also of confidence intervals or standard errors for these estimates. Similarly, mappings of contour lines derived from such interpolations will also be characterised by uncertainty. However, most computerized cartography tools are designed to provide 'crisp' representations of geographical information, such as sharply drawn lines or clearly delineated areas. In this talk, the use of 'fuzzy' or 'sketchy' cartographic tools will be demonstrated - where maps have a hand-drawn appearance and the degree of 'roughness' and other related characteristics can be used to convey the degree of uncertainty associated with the estimated quantities being mapped. The tools used to do this are available as an R package, which will be described in the talk.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects: they have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria, as well as data transfer issues. In order to couple such dissimilar codes, some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed, in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues, there exist operational issues such as platform stability and resource management.
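The grid-to-grid translation step can be pictured with a minimal bilinear regridder (an illustrative sketch; a real coupler must also handle unit conversion and conservation):

```python
def bilinear(grid, x, y):
    """Sample a regular grid (rows of equal length, unit spacing) at the
    fractional coordinates (x, y) by bilinear interpolation."""
    i = min(int(x), len(grid[0]) - 2)
    j = min(int(y), len(grid) - 2)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * grid[j][i] + fx * (1 - fy) * grid[j][i + 1]
            + (1 - fx) * fy * grid[j + 1][i] + fx * fy * grid[j + 1][i + 1])

def regrid(src, new_rows, new_cols):
    """Translate a field from one regular grid to another, the way a
    coupler between two models must."""
    sr, sc = len(src) - 1, len(src[0]) - 1
    return [[bilinear(src, c * sc / (new_cols - 1), r * sr / (new_rows - 1))
             for c in range(new_cols)] for r in range(new_rows)]

coarse = [[0.0, 1.0],
          [2.0, 3.0]]                 # a 2x2 field from the 'source' model
fine = regrid(coarse, 3, 3)           # resampled onto a 3x3 'target' grid
print(fine[1][1])  # → 1.5 (value at the centre of the coarse cell)
```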
Simple algorithm for computing the communication complexity of quantum communication processes
NASA Astrophysics Data System (ADS)
Hansen, A.; Montina, A.; Wolf, S.
2016-04-01
A two-party quantum communication process with classical inputs and outcomes can be simulated by replacing the quantum channel with a classical one. The minimal amount of classical communication required to reproduce the statistics of the quantum process is called its communication complexity. In the case of many instances simulated in parallel, the minimal communication cost per instance is called the asymptotic communication complexity. Previously, we reduced the computation of the asymptotic communication complexity to a convex minimization problem. In most cases, the objective function does not have an explicit analytic form, as the function is defined as the maximum over an infinite set of convex functions. Therefore, the overall problem takes the form of a minimax problem and cannot directly be solved by standard optimization methods. In this paper, we introduce a simple algorithm to compute the asymptotic communication complexity. For some special cases with an analytic objective function one can employ available convex-optimization libraries. In the tested cases our method turned out to be notably faster. Finally, using our method we obtain 1.238 bits as a lower bound on the asymptotic communication complexity of a noiseless quantum channel with the capacity of 1 qubit. This improves the previous bound of 1.208 bits.
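A 1-D toy of the underlying structure (illustrative, far simpler than the paper's setting): a pointwise maximum of convex functions is itself convex, so the minimax objective can be minimized numerically even when it has no analytic form:

```python
def minimize_max(funcs, lo, hi, tol=1e-9):
    """Minimise F(x) = max_i f_i(x) on [lo, hi] by ternary search. Since a
    pointwise maximum of convex functions is convex (hence unimodal on an
    interval), the search converges without an explicit formula for F."""
    F = lambda x: max(f(x) for f in funcs)
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if F(m1) < F(m2):
            hi = m2          # minimum cannot lie in (m2, hi]
        else:
            lo = m1          # minimum cannot lie in [lo, m1)
    return (lo + hi) / 2

# F(x) = max((x-1)^2, (x+1)^2) = (|x| + 1)^2 is minimised at x = 0,
# where the two branches cross
x_star = minimize_max([lambda x: (x - 1) ** 2, lambda x: (x + 1) ** 2], -5.0, 5.0)
print(abs(x_star) < 1e-6)  # → True
```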
Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm
Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin
2015-01-01
With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, these data are commonly used to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of the classification models and thus reduce their efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and prevent local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10 and the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071
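The basic genetic-algorithm feature-selection loop that INGA builds on can be sketched as follows (a generic GA over feature bitmasks with a toy fitness; the niche-culling mechanism and the informative-feature set are not from the paper):

```python
import random

N_FEATURES = 10
INFORMATIVE = {0, 3, 7}   # hypothetical ground-truth informative features

def fitness(mask):
    """Toy objective standing in for classifier accuracy: reward informative
    features, penalise every selected dimension."""
    chosen = {i for i in range(N_FEATURES) if mask >> i & 1}
    return 2.0 * len(chosen & INFORMATIVE) - 0.5 * len(chosen)

def ga_select(pop_size=30, gens=40, seed=7):
    """Plain GA over feature bitmasks: tournament selection, one-point
    crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [rng.getrandbits(N_FEATURES) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)   # binary tournaments
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, N_FEATURES)          # one-point crossover
            child = (p1 & ((1 << cut) - 1)) | (p2 & ~((1 << cut) - 1))
            if rng.random() < 0.2:                      # bit-flip mutation
                child ^= 1 << rng.randrange(N_FEATURES)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best_mask = ga_select()
selected = sorted(i for i in range(N_FEATURES) if best_mask >> i & 1)
```

INGA additionally maintains niches so that distinct good subsets survive side by side instead of the population collapsing onto one local optimum.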
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
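The Galerkin projection idea can be illustrated in miniature: project the new right-hand side onto the span of previous solutions to obtain a better initial iterate before GMRES begins. In this contrived example the new b happens to lie in that span, so the projected guess is (numerically) exact; in general it only shrinks the initial residual:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def galerkin_guess(A, prev_solutions, b):
    """Initial guess for A x = b via a Galerkin projection onto the span of
    solutions from earlier right-hand sides: with V = [v], solve the 1x1
    reduced system (v^T A v) y = v^T b and return x0 = y v."""
    v = prev_solutions[0]
    y = dot(v, b) / dot(v, matvec(A, v))
    return [y * vi for vi in v]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 2.0]]
x1 = [1.0, 1.0, 0.0]            # known solution of A x = (5, 4, 0)
b2 = [5.5, 4.4, 0.0]            # nearby right-hand side for the next solve

x0 = galerkin_guess(A, [x1], b2)

def residual(x):
    return sum((bi - ri) ** 2 for bi, ri in zip(b2, matvec(A, x))) ** 0.5

print(residual(x0) < residual([0.0, 0.0, 0.0]))  # → True
```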
NASA Astrophysics Data System (ADS)
Cocco, S.; Monasson, R.
2001-08-01
The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists in knowing whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, as the Davis-Putnam-Loveland-Logemann (DPLL) algorithm, perform a systematic search for a solution, through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational efforts, required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture for the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
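The DPLL search described above, in miniature (unit propagation plus branching; no clause learning or branching heuristics):

```python
def dpll(clauses):
    """DPLL satisfiability search: repeated unit propagation, then branching
    on the first literal of the first clause. Clauses are lists of nonzero
    ints, with -v meaning NOT v. Returns True iff satisfiable."""
    units = [c[0] for c in clauses if len(c) == 1]
    while units:
        lit = units[0]
        remaining = []
        for c in clauses:
            if lit in c:
                continue                      # clause already satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return False                  # empty clause: contradiction
            remaining.append(reduced)
        clauses = remaining
        units = [c[0] for c in clauses if len(c) == 1]
    if not clauses:
        return True                           # every clause satisfied
    lit = clauses[0][0]                       # branch: one trial per truth value
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

print(dpll([[1, 2, -3], [-1, 3], [2, 3]]), dpll([[1], [-1]]))  # → True False
```

Each recursive branch corresponds to a node of the search tree whose size the paper analyses; hard instances are exactly those where many branches must be explored before the tree is closed.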
A multi-agent genetic algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Li, Zhangtao; Liu, Jing
2016-05-01
Complex networks are widely used to represent many practical systems in the domains of biology and sociology, and community structure is one of the most important network attributes, having received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best known quality functions for measuring the quality of a network's communities. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, encoded as a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators is designed, namely a split-and-merge-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation, and a self-learning operator, to increase the modularity value. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. Systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms, and can detect communities with high speed, accuracy and stability.
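The quality function being optimized, Newman-Girvan modularity Q, can be computed directly from an edge list:

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q for a partition of an undirected graph,
    given as an edge list and a node -> community mapping."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # observed fraction of intra-community edges
    e_in = sum(1 for u, v in edges if community[u] == community[v]) / m
    # expected fraction under the degree-preserving null model
    comm_deg = {}
    for node, d in deg.items():
        comm_deg[community[node]] = comm_deg.get(community[node], 0) + d
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return e_in - expected

# two triangles joined by one bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
split = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
lumped = {n: 'a' for n in range(6)}
print(round(modularity(edges, split), 3))   # → 0.357
print(round(modularity(edges, lumped), 3))  # → 0.0
```

Algorithms like MAGA-Net search over such partitions for the one maximizing Q.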
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
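The ordering problem can be sketched by counting feedback couplings (outputs fed back to earlier processes) for each candidate ordering; the brute-force enumeration shown here is what a genetic algorithm replaces at realistic problem sizes. The process names and couplings are hypothetical:

```python
from itertools import permutations

def feedbacks(order, couplings):
    """Count feedback couplings: data fed from a later process back to an
    earlier one, which is what forces design-cycle iteration."""
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for src, dst in couplings if pos[src] > pos[dst])

# hypothetical design processes and data couplings (src feeds dst)
couplings = [('aero', 'struct'), ('struct', 'aero'),
             ('aero', 'control'), ('struct', 'control'),
             ('control', 'perf'), ('perf', 'aero')]

best_order = min(permutations(['aero', 'struct', 'control', 'perf']),
                 key=lambda o: feedbacks(o, couplings))
print(feedbacks(best_order, couplings))  # → 2
```

The two edge-disjoint cycles in this coupling graph force at least two feedbacks in any ordering, so the minimum found is provably optimal for this toy case.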
PREFACE: Complex Networks: from Biology to Information Technology
NASA Astrophysics Data System (ADS)
Barrat, A.; Boccaletti, S.; Caldarelli, G.; Chessa, A.; Latora, V.; Motter, A. E.
2008-06-01
The field of complex networks is one of the most active areas in contemporary statistical physics. Ten years after seminal work initiated the modern study of networks, interest in the field is in fact still growing, as indicated by the ever increasing number of publications in network science. The reason for such a resounding success is most likely the simplicity and broad significance of the approach that, through graph theory, allows researchers to address a variety of different complex systems within a common framework. This special issue comprises a selection of contributions presented at the workshop 'Complex Networks: from Biology to Information Technology' held in July 2007 in Pula (Cagliari), Italy as a satellite of the general conference STATPHYS23. The contributions cover a wide range of problems that are currently among the most important questions in the area of complex networks and that are likely to stimulate future research. The issue is organised into four sections. The first two sections describe 'methods' to study the structure and the dynamics of complex networks, respectively. After this methodological part, the issue proceeds with a section on applications to biological systems. The issue closes with a section concentrating on applications to the study of social and technological networks. The first section, entitled Methods: The Structure, consists of six contributions focused on the characterisation and analysis of structural properties of complex networks: The paper Motif-based communities in complex networks by Arenas et al is a study of the occurrence of characteristic small subgraphs in complex networks. These subgraphs, known as motifs, are used to define general classes of nodes and their communities by extending the mathematical expression of the Newman-Girvan modularity. The same line of research, aimed at characterising network structure through the analysis of particular subgraphs, is explored by Bianconi and Gulbahce in Algorithm
The algorithmic complexity of multichannel EEGs is sensitive to changes in behavior.
Watanabe, T A A; Cellucci, C J; Kohegyi, E; Bashore, T R; Josiassen, R C; Greenbaun, N N; Rapp, P E
2003-01-01
Symbolic measures of complexity provide a quantitative characterization of the sequential structure of symbol sequences. Promising results from the application of these methods to the analysis of electroencephalographic (EEG) and event-related brain potential (ERP) activity have been reported. Symbolic measures used thus far have two limitations, however. First, because the value of complexity increases with the length of the message, it is difficult to compare signals of different epoch lengths. Second, these symbolic measures do not generalize easily to the multichannel case. We address these issues in studies in which both single and multichannel EEGs were analyzed using measures of signal complexity and algorithmic redundancy, the latter being defined as a sequence-sensitive generalization of Shannon's redundancy. Using a binary partition of EEG activity about the median, redundancy was shown to be insensitive to the size of the data set while being sensitive to changes in the subject's behavioral state (eyes open vs. eyes closed). The covariance complexity, calculated from the singular value spectrum of a multichannel signal, was also found to be sensitive to changes in behavioral state. Statistical separations between the eyes open and eyes closed conditions were found to decrease following removal of the 8- to 12-Hz content in the EEG, but still remained statistically significant. Use of symbolic measures in multivariate signal classification is described.
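A simple sequence-sensitive redundancy in the spirit described (binary partition about the median, then block entropies; a simplified variant, not the paper's exact estimator):

```python
import math
from collections import Counter

def binarize_about_median(x):
    """Binary partition of a signal about its median."""
    med = sorted(x)[len(x) // 2]
    return [1 if v > med else 0 for v in x]

def block_entropy(sym, n):
    """Shannon entropy (bits) of the overlapping length-n blocks."""
    blocks = Counter(tuple(sym[i:i + n]) for i in range(len(sym) - n + 1))
    total = sum(blocks.values())
    return -sum(c / total * math.log2(c / total) for c in blocks.values())

def redundancy(sym, n=3):
    """1 - H_n / n for a binary alphabet: near 0 for structureless
    sequences, approaching 1 for highly ordered ones."""
    return 1.0 - block_entropy(sym, n) / n

periodic = [0, 1] * 200                    # perfectly ordered binary sequence
print(round(redundancy(periodic), 3))      # → 0.667
```

Because block counts stabilize quickly, such measures are far less sensitive to epoch length than raw complexity values, which is the property the study exploits.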
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER whenever the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual-mode constant modulus algorithm, in terms of both convergence performance and SER performance for nonlinear equalization.
Algorithmic information theory and the hidden variable question
NASA Technical Reports Server (NTRS)
Fuchs, Christopher
1992-01-01
The admissibility of certain nonlocal hidden-variable theories is explained via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exist a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l, r). From the associated probability measure, the Shannon entropies H_n and H̄_n for the strings l_n and the pairs (l_n, r_n) may be formed. It is shown that such a theory requires |H̄_n - H_n| to be bounded, contrasting the quantum mechanical prediction that it grows with n.
Deciphering the Minimal Algorithm for Development and Information-genesis
NASA Astrophysics Data System (ADS)
Li, Zhiyuan; Tang, Chao; Li, Hao
During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a ``standard worm'' in silico: from the single zygotic cell to about the 500-cell stage, the lineage, position, cell-cell contacts and gene expression dynamics were quantified for each cell. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules capable of driving a dynamic model that recapitulates the early fate decisions during C. elegans development. We further formalized C. elegans embryogenesis in the language of information genesis. Analysis of the data and model uncovered the global landscape of development in cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as essential trade-offs in development, and demonstrated general strategies for building multicellular organisms.
Informational Complexity and Functional Activity of RNA Structures
Carothers, James M.; Oestreich, Stephanie C.; Davis, Jonathan H.
2004-01-01
Very little is known about the distribution of functional DNA, RNA, and protein molecules in sequence space. The question of how the number and complexity of distinct solutions to a particular biochemical problem varies with activity is an important aspect of this general problem. Here we present a comparison of the structures and activities of eleven distinct GTP-binding RNAs (aptamers). By experimentally measuring the amount of information required to specify each optimal binding structure, we show that defining a structure capable of 10-fold tighter binding requires approximately 10 additional bits of information. This increase in information content is equivalent to specifying the identity of five additional nucleotide positions and corresponds to an ∼1000-fold decrease in abundance in a sample of random sequences. We observe a similar relationship between structural complexity and activity in a comparison of two catalytic RNAs (ribozyme ligases), raising the possibility of a general relationship between the complexity of RNA structures and their functional activity. Describing how information varies with activity in other heteropolymers, both biological and synthetic, may lead to an objective means of comparing their functional properties. This approach could be useful in predicting the functional utility of novel heteropolymers. PMID:15099096
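The quantitative claims above line up as simple arithmetic, assuming the standard 2 bits per fully specified nucleotide and abundance in random pools falling as 2^-bits:

```python
# Back-of-envelope check of the abstract's figures (assumptions: 2 bits per
# fully specified nucleotide position; abundance in random pools ~ 2**-bits).
extra_bits = 10                      # information per 10-fold tighter binding
extra_positions = extra_bits / 2     # nucleotide positions: 10 / 2 = 5
abundance_factor = 2 ** extra_bits   # 1024, i.e. the reported ~1000-fold drop

print(extra_positions, abundance_factor)  # 5.0 1024
```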
NASA Astrophysics Data System (ADS)
Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu
2016-09-01
The genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands reaches the optimal solution more rapidly, effectively improves measurement accuracy, and reduces the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established in order to assess the feasibility of genetic-algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to these 12 bands as variables, a model for the SPAD values of the corn leaves was established by PLS, with modeling results of r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the experience-based bands. The results suggest that the genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by PLS, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
Information processing using a single dynamical node as complex system
Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.
2011-01-01
Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
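The single-node-with-delayed-feedback architecture can be sketched in a few lines. This is a toy discrete-time version with invented parameters eta (feedback strength) and gamma (input scaling); the paper's implementation is an electronic analog system, but the time-multiplexing idea is the same: one nonlinear node plays the role of many "virtual" nodes spread along the delay line.

```python
import numpy as np

def delay_reservoir(u, n_virtual=50, eta=0.5, gamma=0.05, seed=0):
    """Toy single-node reservoir with delayed feedback (time multiplexing).

    Each scalar input u[t] is spread over n_virtual 'virtual nodes' by a
    random mask; the node state depends on its value one full delay loop
    earlier. Parameters eta and gamma are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)  # random input mask
    x = np.zeros(n_virtual)                         # states around the loop
    states = []
    for ut in u:
        x = np.tanh(eta * x + gamma * mask * ut)    # nonlinear node update
        states.append(x.copy())
    return np.array(states)                         # (T, n_virtual) features

states = delay_reservoir(np.sin(np.linspace(0, 6, 100)))
print(states.shape)  # (100, 50): 100 time steps x 50 virtual-node features
```

The resulting feature matrix would then be read out linearly (e.g. ridge regression), as is usual in reservoir computing.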
Link Prediction in Complex Networks: A Mutual Information Perspective
Tan, Fei; Xia, Yongxiang; Zhu, Boyao
2014-01-01
Topological properties of networks have recently been widely applied to the link-prediction problem. Common Neighbors, for example, is a natural yet efficient framework, and many variants of it have been proposed to further boost the discriminative resolution of candidate links. In this paper, we reexamine the role of network topology in predicting missing links from the perspective of information theory, and present a practical approach based on the mutual information of network structures. It not only improves prediction accuracy substantially, but also has reasonable computational complexity. PMID:25207920
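A minimal sketch of the information-theoretic flavor of such scores, using the self-information of common neighbors: rarer (low-degree) shared neighbors contribute more bits. This is a simplified stand-in, not the paper's exact estimator, and the toy graph is invented.

```python
import math

def neighbors(edges):
    """Adjacency as a dict of neighbor sets."""
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
    return nbrs

def info_score(nbrs, x, y, n_nodes):
    """Score candidate link (x, y) by the summed self-information of the
    event 'z links to a random node' over each common neighbor z."""
    score = 0.0
    for z in nbrs.get(x, set()) & nbrs.get(y, set()):
        p = len(nbrs[z]) / (n_nodes - 1)  # prob. z links to a random node
        score += -math.log2(p)            # self-information of that event
    return score

# Toy graph: a 4-node cluster plus an unrelated edge 4-5.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (4, 5)]
nbrs = neighbors(edges)
score_03 = info_score(nbrs, 0, 3, 6)  # candidate link 0-3 via neighbors 1, 2
print(score_03)  # ~1.474 bits
```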
Leliaert, Frederik; Verbruggen, Heroen; Wysor, Brian; De Clerck, Olivier
2009-10-01
DNA-based taxonomy provides a convenient and reliable tool for species delimitation, especially in organisms in which morphological discrimination is difficult or impossible, such as many algal taxa. A group with a long history of confusing species circumscriptions is the morphologically plastic Boodlea complex, comprising the marine green algal genera Boodlea, Cladophoropsis, Phyllodictyon and Struveopsis. In this study, we elucidate species boundaries in the Boodlea complex by analysing nrDNA internal transcribed spacer sequences from 175 specimens collected from a wide geographical range. Algorithmic methods of sequence-based species delineation were applied, including statistical parsimony network analysis and a maximum likelihood approach that uses a mixed Yule-coalescent model and detects species boundaries based on differences in branching rates at the level of species and populations. Sequence analyses resulted in the recognition of 13 phylogenetic species, although we failed to detect sharp species boundaries, possibly as a result of incomplete reproductive isolation. We found considerable conflict between traditional and phylogenetic species definitions. Identical morphological forms were distributed in different clades (cryptic diversity), and at the same time most of the phylogenetic species contained a mixture of different morphologies (indicating intraspecific morphological variation). Sampling outside the morphological range of the Boodlea complex revealed that the enigmatic, sponge-associated Cladophoropsis (Spongocladia) vaucheriiformis also falls within the Boodlea complex. Given the observed evolutionary complexity and nomenclatural problems associated with establishing a Linnaean taxonomy for this group, we propose to discard provisionally the misleading morphospecies and genus names, and refer to clade numbers within a single genus, Boodlea.
BiCAMWI: A Genetic-Based Biclustering Algorithm for Detecting Dynamic Protein Complexes
Lakizadeh, Amir; Jalili, Saeed
2016-01-01
Considering the roles of protein complexes in many biological processes in the cell, detection of protein complexes from available protein-protein interaction (PPI) networks is a key challenge in the post-genome era. Despite the high dynamicity of cellular systems and the dynamic interactions between proteins in a cell, most computational methods have focused on static networks, which cannot represent the inherent dynamicity of protein interactions. Recently, some researchers have tried to exploit the dynamicity of PPI networks by constructing a set of dynamic PPI subnetworks corresponding to each time-point (column) in gene expression data. However, many genes participate in multiple biological processes, and cellular processes are not necessarily related to every sample but may be relevant only for a subset of samples. It is therefore more informative to explore each subnetwork based on a subset of genes and conditions (i.e., biclusters) in gene expression data. Here, we present a new method, called BiCAMWI, that employs dynamicity in detecting protein complexes. The preprocessing phase of the proposed method is based on a novel genetic algorithm that extracts sets of genes that are co-regulated under some conditions from input gene expression data; each extracted gene set is called a bicluster. In the detection phase, based on the biclusters, dynamic PPI subnetworks are extracted from the input static PPI network. Protein complexes are identified by applying a detection method to each dynamic PPI subnetwork and aggregating the results. Experimental results confirm that BiCAMWI effectively models the dynamicity inherent in static PPI networks and achieves significantly better results than state-of-the-art methods. We therefore suggest BiCAMWI as a more reliable method for protein complex detection. PMID:27462706
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Li, Bin
2016-03-01
Detecting communities is a challenging task in network analysis, and solving the community detection problem with evolutionary algorithms has been a hot topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, negative ratio association and ratio cut, are minimized; these two functions can break through the modularity limitation. In the proposed algorithm, the nest-location updating strategy and abandon operator of the cuckoo search are redefined in discrete form, and a local search strategy and a clone operator are proposed to obtain an optimal initial population. Experimental results on synthetic and real-world networks show that the proposed algorithm outperforms other algorithms and can discover higher-quality community structure without prior information.
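The two objectives can be written down directly. A hedged sketch on an invented graph of two triangles joined by a single edge (adjacency as a dict of neighbor sets; the paper's discrete cuckoo search machinery is not reproduced here):

```python
def links(nbrs, A, B):
    """Number of ordered pairs (i, j) with i in A, j in B, and i-j an edge."""
    return sum(1 for i in A for j in B if j in nbrs[i])

def ratio_cut(nbrs, comms):
    """Sum over communities of external links divided by community size."""
    nodes = set(nbrs)
    return sum(links(nbrs, C, nodes - set(C)) / len(C) for C in comms)

def neg_ratio_assoc(nbrs, comms):
    """Negative sum over communities of internal links over community size."""
    return -sum(links(nbrs, C, C) / len(C) for C in comms)

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
nbrs = {v: set() for v in range(6)}
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

good = [[0, 1, 2], [3, 4, 5]]
print(ratio_cut(nbrs, good), neg_ratio_assoc(nbrs, good))
```

Minimizing both objectives favors partitions with few external and many internal links; for this partition the ratio cut is 2/3 and the negative ratio association is -4.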
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
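A drastically simplified version of the penetration-rate idea can be computed directly (an invented uniform-cost model, not the paper's general derivation): bins are examined in decreasing order of match probability and the search stops at the hit.

```python
def penetration_rate(bin_probs):
    """Expected fraction of the database examined when hypothesis bins are
    searched in decreasing order of match probability and the search stops
    once the true match is found. bin_probs must sum to 1."""
    ranked = sorted(bin_probs, reverse=True)
    n = len(ranked)
    return sum(p * (k + 1) for k, p in enumerate(ranked)) / n

flat = penetration_rate([0.25, 0.25, 0.25, 0.25])  # no prioritization
skewed = penetration_rate([0.70, 0.20, 0.05, 0.05])  # informative ranking
print(flat, skewed)  # 0.625 0.3625 -- ranking by probability pays off
```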
A gridless Euler/Navier-Stokes solution algorithm for complex two-dimensional applications
NASA Technical Reports Server (NTRS)
Batina, John T.
1992-01-01
The development of a gridless computational fluid dynamics (CFD) method for the solution of the two-dimensional Euler and Navier-Stokes equations is described. The method uses only clouds of points and does not require that the points be connected to form a grid as is necessary in conventional CFD algorithms. The gridless CFD approach appears to resolve the problems and inefficiencies encountered with structured or unstructured grid methods. As a result, the method offers the greatest potential for accurately and efficiently solving viscous flows about complex aircraft configurations. The method is described in detail, and calculations are presented for standard Euler and Navier-Stokes cases to assess the accuracy and efficiency of the capability.
A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries
Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P
2003-12-15
We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.
A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.
ERIC Educational Resources Information Center
Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind
1999-01-01
Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...
Integrated computational and conceptual solutions for complex environmental information management
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2016-06-01
This paper presents recent results on the integration of computational and conceptual solutions for the complex case of environmental information management. The major goal of creating and developing long-term multi-disciplinary knowledge resources with conceptual and computational support was achieved by implementing and integrating several key components: long-term knowledge resources providing the required structures for universal knowledge creation, documentation, and preservation; universal multi-disciplinary and multi-lingual conceptual knowledge and classification, in particular references to the Universal Decimal Classification (UDC); sustainable workflows for environmental information management; and computational support for dynamical use, processing, and advanced scientific computing with Integrated Information and Computing System (IICS) components and High End Computing (HEC) resources.
Encoding techniques for complex information structures in connectionist systems
NASA Technical Reports Server (NTRS)
Barnden, John; Srinivas, Kankanahalli
1990-01-01
Two general information encoding techniques called relative position encoding and pattern similarity association are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short term information processing of the sort needed in common sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. The particular, simple forms are discussed that the relative position encoding and pattern similarity association techniques take in the author's own connectionist system, called Conposit, in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.
Bayesian Case-deletion Model Complexity and Information Criterion
Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia
2015-01-01
We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578
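For context, the Deviance Information Criterion (DIC) that BCIC is compared against has the standard form (the exact BCMC and BCIC formulas are given in the paper itself):

```latex
D(\theta) = -2 \log p(y \mid \theta), \qquad
\mathrm{DIC} = \overline{D(\theta)} + p_D, \qquad
p_D = \overline{D(\theta)} - D(\bar{\theta}),
```

where \(\overline{D(\theta)}\) is the posterior mean deviance and \(\bar{\theta}\) the posterior mean of the parameters; \(p_D\) plays the role of the effective number of parameters, the quantity that BCMC measures through case deletion.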
Evolutionary ultimatum game on complex networks under incomplete information
NASA Astrophysics Data System (ADS)
Bo, Xianyu; Yang, Jianmei
2010-03-01
This paper studies the evolutionary ultimatum game on networks when agents have incomplete information about the strategies of their neighborhood agents. Our model assumes that agents may initially display low-fairness behavior and therefore may have to learn and develop their own strategies in this unknown environment. The Genetic Algorithm Learning Classifier System (GALCS) is used in the model as the agents' strategy learning rule. Aside from the Watts-Strogatz (WS) small-world network and its variations, the present paper also extends the spatial ultimatum game to the Barabási-Albert (BA) scale-free network. Simulation results show that the fairness level achieved is lower than in situations where agents have complete information about other agents' strategies. The results also show that fairness behavior will always emerge regardless of the distribution of the initial strategies. If the strategies are randomly distributed on the network, then the long-term fairness levels achieved are very close given unchanged learning parameters; neighborhood size also has little effect on the fairness level attained. The simulation results further imply that WS small-world and BA scale-free networks have different effects on the spatial ultimatum game: under incomplete information, the WS small-world network and its variations favor the emergence of fairness behavior slightly more than the BA network, where agents are heterogeneously structured.
NASA Astrophysics Data System (ADS)
Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen
Traditional genetic algorithms (GAs) suffer from premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to handle complex task scheduling optimization.
NASA Astrophysics Data System (ADS)
Zhang, Chun; Fei, Shu-Min; Zhou, Xing-Peng
2012-12-01
In this paper, we explore techniques for tracking a group of targets with correlated motions in a wireless sensor network. Since a group of targets moves collectively and is restricted within a limited region, it is not worth consuming the scarce resources of sensors to compute the trajectory of each single target. Hence, the problem is modeled as tracking a geographical continuous region covered by all targets. A tracking algorithm is proposed to estimate the region covered by the target group in each sampling period. Based on the locations of sensors and azimuthal angle-of-arrival (AOA) information, the estimated region covering all the group members is obtained. Algorithm analysis provides the fundamental limits to the accuracy of localizing a target group. Simulation results show that the proposed algorithm is superior to the existing hull algorithm, with an estimation error between 10% and 40% of the hull algorithm's at a similar density of sensors. When the density of sensors increases, the localization accuracy of the proposed algorithm improves dramatically.
Combining spatial and spectral information to improve crop/weed discrimination algorithms
NASA Astrophysics Data System (ADS)
Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.
2012-01-01
Reducing herbicide spraying is an important key to environmentally and economically improved weed management. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We previously developed spatial algorithms that detect the crop rows to discriminate crop from weeds; these algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information can detect intra-row weeds but generally needs a prior learning process. We propose a method based on both spatial and spectral information to enhance the discrimination and overcome the limitations of each algorithm: the classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images derived from the SimAField model was combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method, showing an important enhancement from 86% weed detection to more than 95%.
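The coupling described above — geometric row detection labels the easy pixels, and those labels train a spectral classifier that then handles intra-row pixels — can be sketched as follows. This uses a nearest-centroid stand-in with invented two-band reflectance values; the paper does not prescribe this particular classifier.

```python
import numpy as np

def nearest_centroid_train(spectra, labels):
    """Train a minimal spectral classifier from the labels produced by the
    spatial (crop-row) step: one mean spectrum per class."""
    classes = sorted(set(labels))
    lab = np.array(labels)
    return {c: spectra[lab == c].mean(axis=0) for c in classes}

def nearest_centroid_predict(centroids, spectrum):
    """Assign a pixel to the class with the closest mean spectrum."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

# Inter-row pixels labeled by geometry alone (invented 2-band reflectances):
spectra = np.array([[0.20, 0.60], [0.25, 0.55],   # on detected crop rows
                    [0.70, 0.30], [0.65, 0.35]])  # between rows -> weed
labels = ["crop", "crop", "weed", "weed"]
centroids = nearest_centroid_train(spectra, labels)

# The trained spectral model can now classify an intra-row pixel,
# which the purely spatial algorithm could not reach:
pred = nearest_centroid_predict(centroids, np.array([0.68, 0.32]))
print(pred)  # weed
```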
Information search and decision making: effects of age and complexity on strategy use.
Queen, Tara L; Hess, Thomas M; Ennis, Gilda E; Dowd, Keith; Grühn, Daniel
2012-12-01
The impact of task complexity on information search strategy and decision quality was examined in a sample of 135 young, middle-aged, and older adults. We were particularly interested in the competing roles of fluid cognitive ability and domain knowledge and experience, with the former being a negative influence and the latter being a positive influence on older adults' performance. Participants utilized 2 decision matrices, which varied in complexity, regarding a consumer purchase. Using process tracing software and an algorithm developed to assess decision strategy, we recorded search behavior, strategy selection, and final decision. Contrary to expectations, older adults were not more likely than the younger age groups to engage in information-minimizing search behaviors in response to increases in task complexity. Similarly, adults of all ages used comparable decision strategies and adapted their strategies to the demands of the task. We also examined decision outcomes in relation to participants' preferences. Overall, it seems that older adults utilize simpler sets of information primarily reflecting the most valued attributes in making their choice. The results of this study suggest that older adults are adaptive in their approach to decision making and that this ability may benefit from accrued knowledge and experience.
Statistical physics of networks, information and complex systems
Ecke, Robert E
2009-01-01
In this project we explore the mathematical methods and concepts of statistical physics that are finding abundant applications across the scientific and technological spectrum, from soft condensed matter systems and bioinformatics to economic and social systems. Our approach exploits the considerable similarity of concepts between statistical physics and computer science, allowing for a powerful multi-disciplinary approach that draws its strength from cross-fertilization and multiple interactions of researchers with different backgrounds. The work on this project takes advantage of the newly appreciated connection between computer science and statistics and addresses important problems in data storage, decoding, optimization, the information processing properties of the brain, the interface between quantum and classical information science, the verification of large software programs, modeling of complex systems including disease epidemiology, resource distribution issues, and the nature of highly fluctuating complex systems. Common themes that the project has been emphasizing are (i) neural computation, (ii) network theory and its applications, and (iii) a statistical physics approach to information theory. The project's efforts focus on the general problem of optimization and variational techniques, algorithm development, and information theoretic approaches to quantum systems. These efforts are responsible for fruitful collaborations and the nucleation of science efforts that span multiple divisions such as EES, CCS, D, T, ISR and P. This project supports the DOE mission in Energy Security and Nuclear Non-Proliferation by developing novel information science tools for communication, sensing, and interacting complex networks such as the internet or energy distribution system. The work also supports programs in Threat Reduction and Homeland Security.
Daneshmand, Hadi; Gomez-Rodriguez, Manuel; Song, Le; Schölkopf, Bernhard
2015-01-01
Information spreads across social and technological networks, but often the network structures are hidden from us and we only observe the traces left by the diffusion processes, called cascades. Can we recover the hidden network structures from these observed cascades? What kind of cascades and how many cascades do we need? Are there some network structures which are more difficult than others to recover? Can we design efficient inference algorithms with provable guarantees? Despite the increasing availability of cascade-data and methods for inferring networks from these data, a thorough theoretical understanding of the above questions remains largely unexplored in the literature. In this paper, we investigate the network structure inference problem for a general family of continuous-time diffusion models using an ℓ1-regularized likelihood maximization framework. We show that, as long as the cascade sampling process satisfies a natural incoherence condition, our framework can recover the correct network structure with high probability if we observe O(d^3 log N) cascades, where d is the maximum number of parents of a node and N is the total number of nodes. Moreover, we develop a simple and efficient soft-thresholding inference algorithm, which we use to illustrate the consequences of our theoretical results, and show that our framework outperforms other alternatives in practice. PMID:25932466
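The soft-thresholding operator at the core of such ℓ1-based inference is a one-liner: it shrinks every coefficient toward zero and zeroes the small ones, producing the sparse parent sets. This is a generic sketch of the operator itself, with invented values, not the paper's full inference algorithm.

```python
import numpy as np

def soft_threshold(w, lam):
    """Soft-thresholding operator used in l1-regularized estimation:
    shrink each coefficient by lam, setting small ones exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# Hypothetical edge-weight estimates for one node's candidate parents:
w = np.array([0.9, -0.3, 0.05, -1.2])
shrunk = soft_threshold(w, 0.4)
print(shrunk)  # [ 0.5  0.   0.  -0.8] -- only two parents survive
```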
The algorithmic complexity of neural spike trains increases during focal seizures.
Rapp, P E; Zimmerman, I D; Vining, E P; Cohen, N; Albano, A M; Jiménez-Montaño, M A
1994-08-01
The interspike interval spike trains of spontaneously active cortical neurons can display nonrandom internal structure. The degree of nonrandom structure can be quantified and was found to decrease during focal epileptic seizures. Greater statistical discrimination between the two physiological conditions (normal vs seizure) was obtained with measurements of context-free grammar complexity than by measures of the distribution of the interspike intervals such as the mean interval, its standard deviation, skewness, or kurtosis. An examination of fixed epoch data sets showed that two factors contribute to the complexity: the firing rate and the internal structure of the spike train. However, calculations with randomly shuffled surrogates of the original data sets showed that the complexity is not completely determined by the firing rate. The sequence-sensitive structure of the spike train is a significant contributor. By combining complexity measurements with statistically related surrogate data sets, it is possible to classify neurons according to the dynamical structure of their spike trains. This classification could not have been made on the basis of conventional distribution-determined measures. Computations with more sophisticated kinds of surrogate data show that the structure observed using complexity measures cannot be attributed to linearly correlated noise or to linearly correlated noise transformed by a static monotonic nonlinearity. The patterns in spike trains appear to reflect genuine nonlinear structure. The limitations of these results are also discussed. The results presented in this article do not, of themselves, establish the presence of a fine-structure encoding of neural information.
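A related, easily computed complexity measure — Lempel-Ziv (1976) phrase counting, rather than the context-free grammar complexity used in the paper — illustrates how sequence structure, not just symbol frequencies, drives such scores: a periodic symbol sequence scores low while a structured aperiodic one scores higher, even at identical symbol counts.

```python
def lz_complexity(s):
    """Lempel-Ziv (1976) complexity: the number of phrases produced when
    scanning s left to right, each phrase being the shortest prefix of the
    remainder that has not occurred earlier in the text."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # grow the phrase while s[i:i+k] can be copied from earlier text
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

print(lz_complexity("0001101001000101"))  # 6 (classic Lempel-Ziv example)
print(lz_complexity("0101010101010101"))  # 3: periodicity keeps complexity low
```

Comparing such scores against randomly shuffled surrogates, as the paper does, separates the contribution of symbol ordering from that of firing rate.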
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent component analysis (ICA), one of the blind source separation methods, can extract unknown source signals from received signals alone. This is accomplished by finding statistical independence among signal mixtures, and has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification in complex structures. In this study, a simple iterative variant of conventional ICA is proposed to mitigate these problems. To extract more stable source signals in a valid order, the proposed method iterates and reorders the extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
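The reordering step — matching separated components to near-source reference measurements by correlation magnitude — can be sketched as follows. This is a greedy stand-in for the paper's iterative procedure, run on synthetic signals invented for the example.

```python
import numpy as np

def reorder_by_reference(separated, references):
    """Greedily assign each reference the unused separated signal with the
    largest |correlation|; return the separated signals in reference order.
    A sketch of the reordering idea, not the full iterative ICA algorithm."""
    order, used = [], set()
    for ref in references:
        corrs = [abs(np.corrcoef(ref, s)[0, 1]) if i not in used else -1.0
                 for i, s in enumerate(separated)]
        best = int(np.argmax(corrs))
        used.add(best)
        order.append(best)
    return [separated[i] for i in order]

t = np.linspace(0.0, 1.0, 200)
s1 = np.sin(2 * np.pi * 5 * t)             # source 1: 5 Hz tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))    # source 2: 3 Hz square wave
rng = np.random.default_rng(0)
refs = [s1 + 0.1 * rng.standard_normal(200),  # noisy near-source measurements
        s2 + 0.1 * rng.standard_normal(200)]

recovered = reorder_by_reference([s2, s1], refs)  # separated in wrong order
print(np.allclose(recovered[0], s1), np.allclose(recovered[1], s2))  # True True
```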
VS-APPLE: A Virtual Screening Algorithm Using Promiscuous Protein-Ligand Complexes.
Okuno, Tatsuya; Kato, Koya; Terada, Tomoki P; Sasai, Masaki; Chikenji, George
2015-06-22
As the number of structurally resolved protein-ligand complexes increases, the ligand-binding pockets of many proteins have been found to accommodate multiple different compounds. Effective use of these structural data is important for developing virtual screening (VS) methods that identify bioactive compounds. Here, we introduce a VS method, VS-APPLE (Virtual Screening Algorithm using Promiscuous Protein-Ligand complExes), based on promiscuous protein-ligand binding structures. In VS-APPLE, multiple ligands bound to a pocket are combined into a query template for screening. Both the structural match between a test compound and the multiple-ligand template and the possible collisions between the test compound and the target protein are evaluated by an efficient geometric hashing method. The performance of VS-APPLE was examined on a filtered, clustered version of the Directory of Useful Decoys data set. In Area Under the Curve analyses of this data set, VS-APPLE outperformed several popular screening programs. Judging from the performance of VS-APPLE, the structural data of promiscuous protein-ligand bindings could be further analyzed and exploited for developing VS methods.
Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.
Bae, Juhee; Watson, Benjamin
2014-12-01
In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver viewers those cues using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator. PMID:26356911
Quantum-information processing in disordered and complex quantum systems
Sen, Aditi; Sen, Ujjwal; Ahufinger, Veronica; Briegel, Hans J.; Sanpera, Anna; Lewenstein, Maciej
2006-12-15
We study quantum information processing in complex disordered many body systems that can be implemented by using lattices of ultracold atomic gases and trapped ions. We demonstrate, first in the short range case, the generation of entanglement and the local realization of quantum gates in a disordered magnetic model describing a quantum spin glass. We show that in this case it is possible to achieve fidelities of quantum gates higher than in the classical case. Complex systems with long range interactions, such as ions chains or dipolar atomic gases, can be used to model neural network Hamiltonians. For such systems, where both long range interactions and disorder appear, it is possible to generate long range bipartite entanglement. We provide an efficient analytical method to calculate the time evolution of a given initial state, which in turn allows us to calculate its quantum correlations.
Analyzing complex networks evolution through Information Theory quantifiers
NASA Astrophysics Data System (ADS)
Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martín Gómez
2011-01-01
A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Niño/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
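The first quantifier mentioned, the square root of the Jensen-Shannon divergence, is straightforward to compute from two probability distributions (e.g., network degree distributions at two points in the evolution); this is a generic sketch, not the paper's code:

```python
import numpy as np

def jensen_shannon_distance(p, q, base=2):
    """Square root of the Jensen-Shannon divergence: a true metric between
    two probability distributions, bounded by 1 when base = 2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # 0 * log 0 contributes nothing
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

uniform = np.ones(4) / 4
peaked = np.array([0.97, 0.01, 0.01, 0.01])
print(jensen_shannon_distance(uniform, uniform))  # → 0.0
print(jensen_shannon_distance(uniform, peaked))   # strictly between 0 and 1
```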
Baldwin, C; Eliassi-Rad, T; Abdulla, G; Critchlow, T
2003-04-16
As scientific data sets grow exponentially in size, the need for scalable algorithms that heuristically partition the data increases. In this paper, we describe the three-step evolution of a hierarchical partitioning algorithm for large-scale spatio-temporal scientific data sets generated by massive simulations. The first version of our algorithm uses a simple top-down partitioning technique, which divides the data by using a four-way bisection of the spatio-temporal space. The shortcomings of this algorithm lead to the second version of our partitioning algorithm, which uses a bottom-up approach. In this version, a partition hierarchy is constructed by systematically agglomerating the underlying Cartesian grid that is placed on the data. Finally, the third version of our algorithm utilizes the intrinsic topology of the data given in the original scientific problem to build the partition hierarchy in a bottom-up fashion. Specifically, the topology is used to heuristically agglomerate the data at each level of the partition hierarchy. Despite the growing complexity in our algorithms, the third version of our algorithm builds partition hierarchies in less time and is able to build trees for larger size data sets as compared to the previous two versions.
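A minimal sketch of the first, top-down version of such a partitioner, reduced to 2-D points for brevity (the paper bisects four-way in spatio-temporal space; the depth and size cutoffs here are illustrative assumptions):

```python
def bisect4(points, depth=0, max_depth=2, min_size=4):
    """Top-down partitioning: recursively split a point set four ways at the
    midpoints of its x and y extents (a simplified 2-D analogue of a
    four-way spatio-temporal bisection)."""
    if depth == max_depth or len(points) <= min_size:
        return points  # leaf: an actual partition cell
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx = (min(xs) + max(xs)) / 2.0
    my = (min(ys) + max(ys)) / 2.0
    quads = [[], [], [], []]
    for p in points:
        quads[(p[0] > mx) * 2 + (p[1] > my)].append(p)
    return [bisect4(q, depth + 1, max_depth, min_size) for q in quads]

grid = [(x, y) for x in range(8) for y in range(8)]
tree = bisect4(grid, max_depth=1)
print([len(q) for q in tree])  # → [16, 16, 16, 16]
```

The shortcoming this version exhibits on real data, cells that ignore how the data actually cluster, is what motivates the bottom-up agglomerative versions described above.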
NASA Astrophysics Data System (ADS)
Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido
2015-12-01
The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks.
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.
2010-07-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. Accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET at limited temporal and spatial scales. This paper extends these modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover, in concert with time-varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was used to produce robust estimates of 24 h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and seasonal vegetation cover compound the ET calculations. The SEBTA was validated against measured data at the ground level; the consistency index reached 0.92 and the correlation coefficient was 0.87.
Kinetics of the Dynamical Information Shannon Entropy for Complex Systems
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Yulmetyeva, D. G.
1999-08-01
Kinetic behaviour of the dynamical information Shannon entropy is discussed for complex systems: physical systems with non-Markovian properties and memory in the correlation approximation, and biological and physiological systems with sequences of Markovian and non-Markovian random noises. For stochastic processes, a description of the information entropy in terms of normalized time correlation functions is given. The influence and important role of two mutually dependent channels of entropy change are discussed: correlation (creation or generation of correlations) and anti-correlation (decay or annihilation of correlations). The method developed here is also used in the analysis of density fluctuations in liquid cesium obtained from slow neutron scattering data, the fractal kinetics of long-range fluctuations in short-time human memory, and the chaotic dynamics of R-R intervals of the human ECG.
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
Calculating partial expected value of perfect information via Monte Carlo sampling algorithms.
Brennan, Alan; Kharroubi, Samer; O'hagan, Anthony; Chilcott, Jim
2007-01-01
Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition shows 2 nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate 1) the bias due to maximization and the inaccuracy of shortcut algorithms, 2) the case when correlated variables are present, and 3) the case when there is nonlinearity in net benefit functions. If relatively small correlation or nonlinearity is present, then the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended both for greater understanding of decision uncertainty and for analyzing research priorities. PMID:17761960
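The nested two-loop estimator described above can be sketched on a deliberately simple model; the linear net-benefit function and standard-normal priors are illustrative assumptions, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(d, theta_i, theta_c):
    # toy model (an assumption): decision d=1 pays theta_i + theta_c, d=0 pays 0
    return d * (theta_i + theta_c)

N_OUTER, N_INNER = 2000, 2000

# baseline: value of the best decision under full parameter uncertainty
ti = rng.standard_normal(N_OUTER * N_INNER)
tc = rng.standard_normal(N_OUTER * N_INNER)
baseline = max(np.mean(net_benefit(0, ti, tc)), np.mean(net_benefit(1, ti, tc)))

# nested loops: the outer loop samples the parameter of interest; the inner
# loop averages over the remaining uncertain parameter before maximizing
outer_vals = []
for theta_i in rng.standard_normal(N_OUTER):
    tc_in = rng.standard_normal(N_INNER)
    outer_vals.append(max(np.mean(net_benefit(0, theta_i, tc_in)),
                          np.mean(net_benefit(1, theta_i, tc_in))))

evpi_partial = np.mean(outer_vals) - baseline
print(evpi_partial)  # close to the analytic value 1/sqrt(2*pi) ≈ 0.399 here
```

The maximization inside the outer loop is exactly where the upward bias discussed in the abstract enters: with a small inner sample, noise in the inner means inflates the expected maximum.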
FctClus: A Fast Clustering Algorithm for Heterogeneous Information Networks.
Yang, Jing; Chen, Limin; Zhang, Jianpei
2015-01-01
It is important to cluster heterogeneous information networks. A fast clustering algorithm based on an approximate commute time embedding for heterogeneous information networks with a star network schema is proposed in this paper by utilizing the sparsity of heterogeneous information networks. First, a heterogeneous information network is transformed into multiple compatible bipartite graphs from a compatibility point of view. Second, the approximate commute time embedding of each bipartite graph is computed using random mapping and a linear time solver. All of the indicator subsets in each embedding simultaneously determine the target dataset. Finally, a general model is formulated by these indicator subsets, and a fast algorithm is derived by simultaneously clustering all of the indicator subsets using the sum of the weighted distances for all indicators for an identical target object. The proposed fast algorithm, FctClus, is shown to be efficient and generalizable and exhibits high clustering accuracy and fast computation speed based on theoretical analysis and experimental verification. PMID:26090857
ERIC Educational Resources Information Center
de Leeuw, L.
Sixty-four fifth- and sixth-grade pupils were taught number series extrapolation by either an algorithmic, fully prescribed problem-solving method or a heuristic, less prescribed method. The trained problems were within categories of two degrees of complexity. There were 16 subjects in each cell of the 2 by 2 design used. Aptitude Treatment…
NASA Astrophysics Data System (ADS)
Liao, Yen-Che; Kao, Honn; Rosenberger, Andreas; Hsu, Shu-Kun; Huang, Bor-Shouh
2012-06-01
Conventional earthquake location methods depend critically on the correct identification of seismic phases and their arrival times from seismograms. Accurate phase picking is particularly difficult for aftershocks that occur closely in time and space, mostly because of the ambiguity of correlating the same phase at different stations. In this study, we introduce an improved Source-Scanning Algorithm (ISSA) for the purpose of delineating the complex distribution of aftershocks without time-consuming and labour-intensive phase-picking procedures. The improvements include the application of a ground motion analyser to separate P and S waves, the automatic adjustment of time windows for 'brightness' calculation based on the scanning resolution and a modified brightness function to combine constraints from multiple phases. Synthetic experiments simulating a challenging scenario are conducted to demonstrate the robustness of the ISSA. The method is applied to a field data set selected from the ocean-bottom-seismograph records of an offshore aftershock sequence southwest of Taiwan. Although visual inspection of the seismograms is ambiguous, our ISSA analysis clearly delineates two events that can best explain the observed waveform pattern.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Nurse scheduling is a long-standing problem aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. Current undesirable work schedules are partly responsible for that working condition. In essence, the head nurse's obligations and the nurses' needs lack a complementary fit. In particular, given highly diverse nurse preferences, the central challenge of nurse scheduling is the failure to encourage tolerance between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is therefore hard to achieve while satisfying nurses' diverse requests and upholding the imperative of ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements of the EA operators are suggested so that the EA acquires the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation as well as flexibility in the search, corresponding to the principles of exploration and exploitation.
NASA Astrophysics Data System (ADS)
Erlingis, J. M.; Gourley, J. J.; Kirstetter, P.; Anagnostou, E. N.; Kalogiros, J. A.; Anagnostou, M.
2015-12-01
An Intensive Observation Period (IOP) for the Integrated Precipitation and Hydrology Experiment (IPHEx), part of NASA's Ground Validation campaign for the Global Precipitation Measurement Mission satellite, took place from May to June 2014 in the Smoky Mountains of western North Carolina. The National Severe Storms Laboratory's mobile dual-pol X-band radar, NOXP, was deployed in the Pigeon River Basin during this time and employed various scanning strategies, including more than 1000 Range Height Indicator (RHI) scans in coordination with another radar and research aircraft. Rain gauges and disdrometers were also positioned within the basin to verify precipitation estimates and estimation of microphysical parameters. The performance of the SCOP-ME post-processing algorithm on NOXP data is compared with real-time and near real-time precipitation estimates with varying spatial resolutions and quality control measures (Stage IV gauge-corrected radar estimates, Multi-Radar/Multi-Sensor System Quantitative Precipitation Estimates, and CMORPH satellite estimates) to assess the utility of a gap-filling radar in complex terrain. Additionally, the RHI scans collected in this IOP provide a valuable opportunity to examine the evolution of microphysical characteristics of convective and stratiform precipitation as they impinge on terrain. To further the understanding of orographically enhanced precipitation, multiple storms for which RHI data are available are considered.
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Lee, Jongha; Lee, Kangui; Sung, Younghun; Lee, SeungDeok
2012-03-01
X-ray tomosynthesis, which measures several low dose projections over a limited angular range, has been investigated as an alternative to X-ray mammography for breast cancer screening. An extension of the scan coverage increases the vertical resolution by mitigating the interplane blurring. The implementation of a wide angle tomosynthesis equipment, however, may not be straightforward, mainly due to the image deterioration from the statistical noise in exterior projections. In this paper, we adopt the voltage modulation scheme to enlarge the coverage of the tomosynthesis scan. Higher tube voltages are used for outer angles, which offers sufficient penetrating power for outlying frames in which the pathway of X-ray photons is elongated. To reconstruct 3D information from voltage modulated projections, we propose a novel algorithm, named the information theoretic discrepancy based iterative reconstruction (IDIR) algorithm, which accounts for the polychromatic acquisition model. The generalized information theoretic discrepancy (GID) is newly employed as the objective function. Using particular features of the GID, the cost function is derived in terms of imaginary variables with energy dependency, which leads to a tractable optimization problem without using the monochromatic approximation. In preliminary experiments using simulated and experimental equipment, the proposed imaging architecture and IDIR algorithm showed superior performance over conventional approaches.
Sera White
2012-04-01
This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (≈3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
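The size of the road-slope effect reported above is easy to reproduce from first principles; the vehicle mass below is an assumed round number for illustration, not a figure from the thesis:

```python
# hypothetical parameters, chosen only to show the order of magnitude
MASS_KG = 1800.0        # assumed PHEV mass
G = 9.81                # gravitational acceleration, m/s^2
WH_PER_J = 1.0 / 3600.0 # joules to watt-hours

def slope_adjustment_wh_per_mile(net_elevation_gain_m, trip_miles):
    """Electrical energy per mile implied by net elevation change; ignoring
    road slope folds this term into the apparent 'driving intensity'."""
    joules = MASS_KG * G * net_elevation_gain_m
    return joules * WH_PER_J / trip_miles

# a 10-mile trip losing 300 m of elevation
print(slope_adjustment_wh_per_mile(-300.0, 10.0))
```

Under these assumptions the 10-mile, 300 m descent hides roughly 150 Wh/mile of potential energy, the same order as the 211 and 333 Wh/mile discrepancies the thesis reports.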
Bagos, Pantelis G; Liakopoulos, Theodore D; Hamodrakas, Stavros J
2006-01-01
Background Hidden Markov Models (HMMs) have been extensively used in computational molecular biology, for modelling protein and nucleic acid sequences. In many applications, such as transmembrane protein topology prediction, the incorporation of limited amount of information regarding the topology, arising from biochemical experiments, has been proved a very useful strategy that increased remarkably the performance of even the top-scoring methods. However, no clear and formal explanation of the algorithms that retains the probabilistic interpretation of the models has been presented so far in the literature. Results We present here, a simple method that allows incorporation of prior topological information concerning the sequences at hand, while at the same time the HMMs retain their full probabilistic interpretation in terms of conditional probabilities. We present modifications to the standard Forward and Backward algorithms of HMMs and we also show explicitly, how reliable predictions may arise by these modifications, using all the algorithms currently available for decoding HMMs. A similar procedure may be used in the training procedure, aiming at optimizing the labels of the HMM's classes, especially in cases such as transmembrane proteins where the labels of the membrane-spanning segments are inherently misplaced. We present an application of this approach developing a method to predict the transmembrane regions of alpha-helical membrane proteins, trained on crystallographically solved data. We show that this method compares well against already established algorithms presented in the literature, and it is extremely useful in practical applications. Conclusion The algorithms presented here, are easily implemented in any kind of a Hidden Markov Model, whereas the prediction method (HMM-TM) is freely available for academic users at , offering the most advanced decoding options currently available. PMID:16597327
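The core modification, constraining the Forward recursion so that labelled positions keep probability mass only in their allowed state, can be sketched as follows (a generic toy HMM, an assumption for illustration, not the HMM-TM model itself):

```python
import numpy as np

def forward_with_labels(pi, A, B, obs, labels):
    """Standard HMM Forward pass, modified so that positions carrying prior
    information (labels[t] = required state index, or None) retain
    probability mass only in the allowed state."""
    n_states = len(pi)
    alpha = pi * B[:, obs[0]]
    if labels[0] is not None:
        mask = np.zeros(n_states)
        mask[labels[0]] = 1.0
        alpha = alpha * mask
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]  # A[i, j] = P(state j | state i)
        if labels[t] is not None:
            mask = np.zeros(n_states)
            mask[labels[t]] = 1.0
            alpha = alpha * mask
    return alpha.sum()  # joint probability of observations and constraints

# toy 2-state model
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
obs = [0, 0, 1, 1]

free = forward_with_labels(pi, A, B, obs, [None] * 4)
pinned = forward_with_labels(pi, A, B, obs, [0, None, None, 1])
print(free, pinned)  # constraining can only remove probability mass
```

Because masking only zeroes disallowed paths, the constrained value stays a genuine joint probability, which is the probabilistic interpretation the paper is careful to preserve.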
Methods of Information Geometry to model complex shapes
NASA Astrophysics Data System (ADS)
De Sanctis, A.; Gattone, S. A.
2016-09-01
In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of the parameters of a family of probability distributions with a Riemannian structure. This approach thus makes it possible to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape prediction. As an application, we study the evolution of the rat skull shape. A future application in ophthalmology is introduced.
A complex network model for seismicity based on mutual information
NASA Astrophysics Data System (ADS)
Jiménez, Abigail
2013-05-01
Seismicity is the product of the interaction between the different parts of the lithosphere. Here, we model each part of the Earth as a cell that constantly communicates its state to its environment. Just as a neuron is stimulated and produces an output, the different parts of the lithosphere are constantly stimulated both by other cells and by the ductile part of the lithosphere, and produce an output in the form of a stress transfer or an earthquake. This output depends on the properties of each part of the Earth's crust and the magnitude of the inputs. In this study, we propose an approach to quantifying this communication with the aid of information theory, and model seismicity as a complex network. We have used data from California, and this new approach gives a better understanding of the processes involved in the formation of seismic patterns in that region.
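The communication between two cells can be quantified with a histogram-based mutual information estimate; this generic sketch (the binning and the toy activity series are assumptions, not the paper's estimator) shows why coupled cells earn a link while independent ones do not:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (in bits) between two discretized activity series, from the joint
    histogram: I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    def H(q):
        q = q[q > 0]  # 0 * log 0 = 0 by convention
        return -np.sum(q * np.log2(q))
    return H(px) + H(py) - H(p.ravel())

rng = np.random.default_rng(0)
a = rng.standard_normal(5000)
coupled = a + 0.1 * rng.standard_normal(5000)  # strongly coupled "cell"
independent = rng.standard_normal(5000)
print(mutual_information(a, coupled) > mutual_information(a, independent))  # → True
```

Thresholding such pairwise MI values is one common way to decide which edges enter the complex network.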
A tool for filtering information in complex systems
NASA Astrophysics Data System (ADS)
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-07-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. This paper was submitted directly (Track II) to the PNAS office. Abbreviations: MST, minimum spanning tree; PMFG, Planar Maximally Filtered Graph; r-clique, clique of r elements.
A tool for filtering information in complex systems.
Tumminello, M; Aste, T; Di Matteo, T; Mantegna, R N
2005-07-26
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.
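The minimum spanning tree whose hierarchy the filtered graphs preserve can be built directly from a correlation matrix; this generic Kruskal sketch uses the common correlation distance d_ij = sqrt(2(1 - c_ij)) (an assumption here; the PMFG construction itself is more involved):

```python
import numpy as np

def mst_edges(corr):
    """Kruskal's MST on the correlation-based distance sqrt(2(1 - c_ij))."""
    n = corr.shape[0]
    edges = sorted((np.sqrt(2.0 * (1.0 - corr[i, j])), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j, d))
    return tree

# correlations from a few synthetic "stock" return series
rng = np.random.default_rng(2)
returns = rng.standard_normal((6, 250))
corr = np.corrcoef(returns)
tree = mst_edges(corr)
print(len(tree))  # → 5 (n - 1 edges for n = 6 stocks)
```

Raising the allowed genus above that of a tree is what lets the filtered graph keep extra links, and hence the loops and cliques the abstract analyzes.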
NASA Astrophysics Data System (ADS)
Dittwald, Piotr; Valkenborg, Dirk
2014-04-01
Recently, an elegant iterative algorithm called BRAIN (Baffling Recursive Algorithm for Isotopic distributioN calculations) was presented. The algorithm is based on the classic polynomial method for calculating aggregated isotope distributions, and it introduces algebraic identities using Newton-Girard and Viète's formulae to solve the problem of polynomial expansion. Due to the iterative nature of the BRAIN method, it is a requirement that the calculations start from the lightest isotope variant. As such, the complexity of BRAIN scales quadratically with the mass of the putative molecule, since it depends on the number of aggregated peaks that need to be calculated. In this manuscript, we suggest two improvements of the algorithm to decrease both time and memory complexity in obtaining the aggregated isotope distribution. We also illustrate a concept to represent the element isotope distribution in a generic manner. This representation allows for omitting the root calculation of the element polynomial required in the original BRAIN method. A generic formulation for the roots is of special interest for higher order element polynomials such that root finding algorithms and its inaccuracies can be avoided.
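The classic polynomial method that BRAIN builds on amounts to multiplying element isotope polynomials, i.e., repeatedly convolving abundance vectors; this baseline sketch (with approximate natural abundances and an illustrative molecule fragment) is the brute-force approach the algebraic identities aim to beat:

```python
import numpy as np

def aggregated_distribution(element_dists, counts):
    """Classic polynomial method: the aggregated isotope distribution is the
    product of each element's isotope polynomial raised to its atom count,
    computed here by repeated convolution of probability vectors."""
    result = np.array([1.0])
    for dist, count in zip(element_dists, counts):
        dist = np.asarray(dist, float)
        for _ in range(count):
            result = np.convolve(result, dist)  # polynomial multiplication
    return result

# approximate natural isotope abundances (illustrative values)
C = [0.9893, 0.0107]    # 12C, 13C
H = [0.99988, 0.00012]  # 1H, 2H
dist = aggregated_distribution([C, H], [6, 12])  # e.g. a C6H12 fragment
print(dist.sum())  # probabilities over the aggregated peaks sum to 1
```

Each entry of `dist` is the probability of one aggregated peak (one extra neutron per index), which is exactly the quantity BRAIN computes recursively without expanding the product.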
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component by component and to compute tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output two-level minimization problem are discussed.
NASA Astrophysics Data System (ADS)
Jiang, Zhuo; Xie, Chengjun
2013-12-01
This paper improves an algorithm for reversible integer linear transforms on the finite interval [0,255], which realizes a reversible integer linear transform over the whole number axis while shielding the data's LSB (least significant bit). First, the method applies an integer wavelet transform based on the lifting scheme to the original image and selects the transformed high-frequency areas as the information-hiding region; the high-frequency coefficient blocks are then transformed in an integer linear way and the secret information is embedded in the LSB of each coefficient. To extract the data bits and recover the host image, a similar reverse procedure is conducted, and the original host image can be recovered losslessly. Simulation results after applying the CDF(m,n) and DD(m,n) series of wavelet transforms show that this method has good secrecy and concealment. The method can be applied in information security domains such as medicine, law, and the military.
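The LSB-embedding step that the scheme builds on can be sketched on its own (this omits the wavelet and integer linear transforms; the coefficient values and message bits below are arbitrary):

```python
def embed_lsb(coeffs, bits):
    """Replace the least significant bit of each leading coefficient with a message bit."""
    assert len(bits) <= len(coeffs)
    out = list(coeffs)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the message bit
    return out

def extract_lsb(coeffs, n_bits):
    """Read the embedded bits back out of the LSBs."""
    return [c & 1 for c in coeffs[:n_bits]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # illustrative coefficient block
stego = embed_lsb(cover, [1, 0, 1, 1])
```

Plain LSB replacement distorts each carrier value by at most 1; the paper's contribution is wrapping this step in reversible integer transforms so the host image is recovered exactly.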
Enabling complex queries to drug information sources through functional composition.
Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier
2013-01-01
Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (APIs) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules was developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs. PMID:23920645
Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander
2015-05-22
This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.
A Reduced-Complexity Fast Algorithm for Software Implementation of the IFFT/FFT in DMT Systems
NASA Astrophysics Data System (ADS)
Chan, Tsun-Shan; Kuo, Jen-Chih; Wu, An-Yeu (Andy)
2002-12-01
The discrete multitone (DMT) modulation/demodulation scheme is the standard transmission technique in the application of asymmetric digital subscriber lines (ADSL) and very-high-speed digital subscriber lines (VDSL). Although DMT can achieve a higher data rate than other modulation/demodulation schemes, its computational complexity is too high for cost-efficient implementations. For example, it requires a 512-point IFFT/FFT as the modulation/demodulation kernel in ADSL systems, and even larger sizes in VDSL systems. The large block size results in a heavy computational load when running on programmable digital signal processors (DSPs). In this paper, we derive a computationally efficient fast algorithm for the IFFT/FFT. The proposed algorithm avoids the complex-domain operations that are inevitable in conventional IFFT/FFT computation, so the resulting software function requires less computational complexity. We show that it requires only 17% of the multiplications of the Cooley-Tukey algorithm to compute the IFFT and FFT. Hence, the proposed fast algorithm is very suitable for firmware development, reducing the MIPS count on programmable DSPs.
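The abstract does not spell out the proposed algorithm, but the Cooley-Tukey baseline it is measured against can be sketched: a textbook radix-2 decimation-in-time FFT (this is the conventional complex-domain computation, not the paper's reduced-complexity variant):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # DFT of even-indexed samples
    odd = fft(x[1::2])           # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Butterfly: combine half-size DFTs with a twiddle factor
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

X = fft([0, 1, 0, 1])            # tiny 4-point example
```

Each level performs n/2 complex multiplications; avoiding complex-domain operations for the real-valued DMT signals is where the paper's 17% multiplication count comes from.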
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio; Sainz, Ignacio; Bulnes, Francisco G.
2011-01-01
Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training method is prohibitive, making it infeasible even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated, heterogeneous and unreliable, and the proposed methods have been designed to deal with all these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing which can be used for training or testing. These methods are capable of harnessing the available computational resources, giving more work to more powerful machines, while taking their unreliable nature into account. Both methods have been tested using real applications.
Eichler, Gabriel S; Reimers, Mark; Kane, David; Weinstein, John N
2007-01-01
Interpretation of microarray data remains a challenge, and most methods fail to consider the complex, nonlinear regulation of gene expression. To address that limitation, we introduce Learner of Functional Enrichment (LeFE), a statistical/machine learning algorithm based on Random Forest, and demonstrate it on several diverse datasets: smoker/never smoker, breast cancer classification, and cancer drug sensitivity. We also compare it with previously published algorithms, including Gene Set Enrichment Analysis. LeFE regularly identifies statistically significant functional themes consistent with known biology. PMID:17845722
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
NASA Astrophysics Data System (ADS)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms, and (c) the substantial
Power laws of complex systems from extreme physical information
NASA Astrophysics Data System (ADS)
Frieden, B. Roy; Gatenby, Robert A.
2005-09-01
Many complex systems obey allometric, or power, laws y = Yx^a. Here y⩾0 is the measured value of some system attribute a, Y⩾0 is a constant, and x is a stochastic variable. Remarkably, for many living systems the exponent a is limited to values n/4, n=0,±1,±2,…. Here x is the mass of a randomly selected creature in the population. These quarter-power laws hold for many attributes, such as pulse rate (n=-1). Allometry has, in the past, been theoretically justified on a case-by-case basis. An ultimate goal is to find a common cause for allometry of all types and for both living and nonliving systems. The principle I-J=extremum of extreme physical information is found to provide such a cause. It describes the flow of Fisher information J→I from an attribute value a on the cell level to its exterior observation y. Data y are formed via a system channel function y≡f(x,a), with f(x,a) to be found. Extremizing the difference I-J through variation of f(x,a) results in a general allometric law f(x,a)≡y = Yx^a. Darwinian evolution is presumed to cause a second extremization of I-J, now with respect to the choice of a. The solution is a=n/4, n=0,±1,±2…, defining the particular powers of biological allometry. Under special circumstances, the model predicts that such biological systems are controlled by only two distinct intracellular information sources. These sources are conjectured to be cellular DNA and cellular transmembrane ion gradients. PMID:16241509
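A power law y = Yx^a is linear in log-log coordinates, so an exponent such as a biological quarter power can be recovered from data by ordinary least squares on logarithms. A minimal sketch (the data are synthetic, chosen to follow an exact n=-1 quarter-power law; nothing here comes from the paper):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = Y * x**a in log-log space; returns (Y, a)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression line is the exponent a
    a = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    Y = math.exp(my - a * mx)    # intercept gives the prefactor
    return Y, a

# Synthetic quarter-power data: rate ~ mass**(-1/4), i.e. n = -1
masses = [1.0, 2.0, 4.0, 8.0, 16.0]
rates = [3.0 * m ** -0.25 for m in masses]
Y, a = fit_power_law(masses, rates)
```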
NASA Astrophysics Data System (ADS)
Ayazi, S. M.; Mashhorroudi, M. F.; Ghorbani, M.
2014-10-01
Among the main issues in the theory of geometric networks in spatial information systems is the problem of finding the shortest route between two points. This paper reviews the use of graph theory and the A* algorithm in transport management as an optimal method for finding the shortest path under a shortest-time condition. A graph is constructed from the network of pathways, modelling the physical and phased areas, and the shortest routes are selected using a modified A* algorithm. In the proposed method, node selection examines the angle between candidate nodes, the desired destination node, and the next node. The advantage of this method is that, by eliminating some routes, the route-calculation time is reduced.
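The standard A* search that the paper modifies with its angle-based node filter can be sketched as follows (the grid, blocked cell, and Manhattan heuristic are illustrative, not the paper's road network):

```python
import heapq

def a_star(neighbors, start, goal, heuristic):
    """A* shortest path; neighbors(node) yields (next_node, edge_cost) pairs."""
    best = {start: 0.0}
    heap = [(heuristic(start), 0.0, start, [start])]
    while heap:
        f, g, node, path = heapq.heappop(heap)
        if node == goal:
            return g, path
        if g > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                # Priority = cost so far + admissible estimate to goal
                heapq.heappush(heap, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None

# Illustrative 3x3 grid with the centre cell blocked
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] < 3 and 0 <= q[1] < 3 and q != (1, 1):
            yield q, 1.0

manhattan = lambda p: abs(p[0] - 2) + abs(p[1] - 2)
cost, path = a_star(grid_neighbors, (0, 0), (2, 2), manhattan)
```

The paper's modification amounts to pruning `neighbors` by the angle each candidate makes with the destination, so fewer entries ever reach the heap.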
Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information
Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li
2016-01-01
Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for developing logic designs together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, along with algorithms and heuristics for minimizing the computation of tests; and (2) a method of designing logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multi-output two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
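The O(log N) point-in-convex-polygon query mentioned above is typically obtained by a fan binary search from one vertex; the paper's O(1) method replaces this with a precomputed space subdivision. A sketch of the logarithmic baseline (counter-clockwise vertex order is assumed; boundary points count as inside):

```python
def cross(o, a, b):
    """2D cross product of vectors (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex(poly, p):
    """Point-in-convex-polygon test in O(log N); poly lists CCW vertices."""
    n = len(poly)
    # Reject points outside the wedge between the first and last fan edges
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False
    # Binary search for the fan triangle containing p
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    # Inside iff p lies left of the triangle's outer edge
    return cross(poly[lo], poly[lo + 1], p) >= 0
```

The preprocessing stage in the paper effectively tabulates the answer to this search so each query costs a constant number of operations.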
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to demonstrate the predictive capacity of the final MMIs and with real data from wetlands from Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
A new FOD recognition algorithm based on multi-source information fusion and experiment analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Xiao, Gang
2011-08-01
Foreign Object Debris (FOD) is any substance, debris or article alien to an aircraft or system which could potentially cause huge damage when it appears on the airport runway. Due to an airport's complex circumstances, quick and precise detection of FOD targets on the runway is one of the important protections for airplane safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on inherent features of FOD is proposed in this paper. Firstly, the FOD's location and coordinates can be accurately obtained by the millimeter-wave radar, and then according to the coordinates the IR camera takes target images and background images. Secondly, in the IR image the runway's edges, which are straight lines, are extracted using the Hough transform, and the potential target region, that is, the runway region, is segmented from the whole image. Thirdly, background subtraction is utilized to localize the FOD target in the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used in target classification. The experimental results show that this algorithm can effectively reduce the computational complexity, satisfy the real-time requirement, and possesses high detection and recognition probability.
Scale-free properties of information flux networks in genetic algorithms
NASA Astrophysics Data System (ADS)
Wu, Jieyu; Shao, Xinyu; Li, Jinhang; Huang, Gang
2012-02-01
In this study, we present an empirical analysis of the statistical properties of mating networks in genetic algorithms (GAs). Under the framework of GAs, we study a class of interaction network model, the information flux network (IFN), which describes the information flow among generations during the evolution process. The IFNs are found to be scale-free when the selection operator uses a preferential strategy rather than a random one. The topology of the IFN is remarkably affected by the operations used in genetic algorithms. The experimental results suggest that the scaling exponent of the power-law degree distribution decreases when the crossover rate increases, but increases when the mutation rate increases; the reason may be that a high crossover rate leads to more edges shared between nodes, while a high mutation rate leads to many individuals in a generation possessing low fitness. The magnitude of the out-degree exponent is always greater than the in-degree exponent for the systems tested. These results may provide a new viewpoint with which to view GAs and guide the dissemination process of genetic information throughout a population.
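The "preferential strategy" contrasted with random selection is typically fitness-proportionate (roulette-wheel) selection, which can be sketched as follows (the toy population and fitness values are illustrative, not from the study):

```python
import random

def roulette_select(population, fitness, rng):
    """Fitness-proportionate (preferential) parent selection."""
    r = rng.random() * sum(fitness)   # spin the wheel
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f                      # each slot's width is its fitness
        if acc >= r:
            return individual
    return population[-1]             # guard against floating-point round-off

rng = random.Random(42)
picks = [roulette_select(["fit", "unfit"], [9.0, 1.0], rng) for _ in range(1000)]
```

Because fitter individuals mate more often, their offspring attach preferentially to them in the IFN, which is the mechanism behind the scale-free degree distributions reported above.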
Devine, Sean D
2016-02-01
Replication can be envisaged as a computational process that is able to generate and maintain order far-from-equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far-from-equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside interventions. Both diversity in replicated structures, and the coupling of different replicated systems, increase the ability of the system (or systems) to self-regulate in a changing environment as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. PMID:26723233
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: a systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
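The abstract describes EMA only in outline. The sketch below illustrates the iterative expected-moments idea for censored flood data, substituting a plain normal model for the log-Pearson type III distribution used in the paper; the function names, the convergence tolerance, and the closed-form truncated-normal moments are illustrative assumptions, not the authors' implementation.

```python
import math

def phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ema_normal(systematic, n_below, threshold, iters=200, tol=1e-10):
    """EMA-style iteration for a normal model: `n_below` historical years
    are known only to have had peaks below `threshold`."""
    n = len(systematic)
    mu = sum(systematic) / n
    var = sum((x - mu) ** 2 for x in systematic) / n
    for _ in range(iters):
        sd = math.sqrt(var)
        a = (threshold - mu) / sd
        lam = phi(a) / Phi(a)
        m1 = mu - sd * lam                        # E[X | X < T]
        v1 = var * (1.0 - a * lam - lam * lam)    # Var[X | X < T]
        m2 = v1 + m1 * m1                         # E[X^2 | X < T]
        # Update moments using observed peaks plus expected moments of
        # the censored (below-threshold) historical years.
        new_mu = (sum(systematic) + n_below * m1) / (n + n_below)
        new_var = ((sum(x * x for x in systematic) + n_below * m2)
                   / (n + n_below)) - new_mu ** 2
        done = abs(new_mu - mu) < tol and abs(new_var - var) < tol
        mu, var = new_mu, new_var
        if done:
            break
    return mu, math.sqrt(var)
```

Including the censored historical years pulls the estimated mean below the mean of the systematic record alone, since each censored year contributes E[X | X < T] < mu.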
2011-01-01
Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549
DasGupta, Bhaskar; Enciso, German Andres; Sontag, Eduardo; Zhang, Yi
2007-01-01
A useful approach to the mathematical analysis of large-scale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated to finding decompositions which are optimal in an appropriate sense. In graph-theoretic language, the problems can be recast in terms of maximal sign-consistent subgraphs. The theoretical results include polynomial-time approximation algorithms as well as constant-ratio inapproximability results. One of the algorithms, which has a worst-case guarantee of 87.9% of optimality, is based on the semidefinite programming relaxation approach of Goemans-Williamson [Goemans, M., Williamson, D., 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42 (6), 1115-1145]. The algorithm was implemented and tested on a Drosophila segmentation network and an Epidermal Growth Factor Receptor pathway model, and it was found to perform close to optimally.
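As a small illustration of the graph-theoretic recasting mentioned above (not of the Goemans-Williamson semidefinite relaxation itself), the following sketch tests whether a signed undirected graph is already sign-consistent, i.e. whether spins s(v) in {+1, -1} exist with sign(u, v) = s(u)s(v) on every edge, which is equivalent to every cycle carrying an even number of negative edges.

```python
from collections import deque

def sign_consistent(n, signed_edges):
    """Return True iff spins s[v] in {+1,-1} exist with
    sign(u,v) = s[u]*s[v] for every signed edge (u, v, sign)."""
    adj = [[] for _ in range(n)]
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    spin = [0] * n                      # 0 = unassigned
    for start in range(n):
        if spin[start]:
            continue
        spin[start] = 1
        queue = deque([start])
        while queue:                    # BFS, propagating forced spins
            u = queue.popleft()
            for v, sign in adj[u]:
                want = spin[u] * sign
                if spin[v] == 0:
                    spin[v] = want
                    queue.append(v)
                elif spin[v] != want:   # contradictory cycle found
                    return False
    return True
```

A triangle with one negative edge is inconsistent; with zero or two negative edges it is consistent.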
NASA Technical Reports Server (NTRS)
Wang, Lui; Valenzuela-Rendon, Manuel
1993-01-01
The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
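The paper's simulator and schedule encoding are not reproduced in the abstract; the following is a generic sketch of the genetic-algorithm machinery it refers to (tournament selection, one-point crossover, bit mutation, elitism), with a hypothetical toy fitness standing in for the propellant-resupply simulator.

```python
import random

def genetic_search(fitness, length, pop_size=30, generations=60,
                   crossover_rate=0.8, mutation_rate=0.02, seed=1):
    """Minimal generational GA over fixed-length binary chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament():                      # binary tournament selection
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = [max(pop, key=fitness)]      # elitism: carry the best forward
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < crossover_rate:   # one-point crossover
                cut = rng.randrange(1, length)
                p1 = p1[:cut] + p2[cut:]
            child = [(1 - g) if rng.random() < mutation_rate else g
                     for g in p1]               # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical stand-in for the resupply simulator: reward scheduling a
# delivery (bit = 1) exactly on the even "flight" slots where demand occurs.
def toy_fitness(chrom):
    return sum(1 for i, g in enumerate(chrom) if g == (i % 2 == 0))
```

With elitism the best fitness is non-decreasing across generations, which is one simple way such a search tolerates small schedule perturbations.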
Applying complexity theory: a review to inform evaluation design.
Walton, Mat
2014-08-01
Complexity theory has increasingly been discussed and applied within evaluation literature over the past decade. This article reviews the discussion and use of complexity theory within academic journal literature. The aim is to identify the issues to be considered when applying complexity theory to evaluation. Reviewing 46 articles, two groups of themes are identified. The first group considers implications of applying complexity theory concepts for defining evaluation purpose, scope and units of analysis. The second group of themes consider methodology and method. Results provide a starting point for a configuration of an evaluation approach consistent with complexity theory, whilst also identifying a number of design considerations to be resolved within evaluation planning.
Network algorithmics and the emergence of information integration in cortical models
NASA Astrophysics Data System (ADS)
Nathan, Andre; Barbosa, Valmir C.
2011-07-01
An information-theoretic framework known as integrated information theory (IIT) has been introduced recently for the study of the emergence of consciousness in the brain [D. Balduzzi and G. Tononi, PLoS Comput. Biol. 4, e1000091 (2008)]. IIT purports that this phenomenon is to be equated with the generation of information by the brain surpassing the information that the brain’s constituents already generate independently of one another. IIT is not fully plausible in its modeling assumptions, nor is it testable, due to the severe combinatorial growth embedded in its key definitions. Here we introduce an alternative to IIT which, while inspired by similar information-theoretic principles, seeks to address some of IIT’s shortcomings. Our alternative framework uses the same network-algorithmic cortical model we introduced earlier [A. Nathan and V. C. Barbosa, Phys. Rev. E 81, 021916 (2010)] and, to allow for somewhat improved testability relative to IIT, adopts the well-known notions of information gain and total correlation applied to a set of variables representing the reachability of neurons by messages in the model’s dynamics. We argue that these two quantities relate to each other in such a way that can be used to quantify the system’s efficiency in generating information beyond that which does not depend on integration. We give computational results on our cortical model and on variants thereof that are either structurally random in the sense of an Erdős-Rényi random directed graph or structurally deterministic. We have found that our cortical model stands out with respect to the others in the sense that many of its instances are capable of integrating information more efficiently than most of those others’ instances.
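The notions invoked above are standard; a minimal sketch of estimating total correlation C(X1..Xk) = Σi H(Xi) − H(X1..Xk) from joint samples (here plain binary tuples, not the paper's reachability variables) might look like:

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy (bits) of a sequence of hashable outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(rows):
    """C(X1..Xk) = sum_i H(Xi) - H(X1..Xk), rows = joint samples (tuples)."""
    k = len(rows[0])
    marginal = sum(entropy([r[i] for r in rows]) for i in range(k))
    joint = entropy([tuple(r) for r in rows])
    return marginal - joint
```

Three perfectly correlated fair bits give C = 3·1 − 1 = 2 bits; three independent fair bits give C = 0.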
Clark, G A
2004-06-08
In general, the Phase Retrieval from Modulus problem is very difficult. In this report, we solve the difficult, but somewhat more tractable, case in which we constrain the solution to a minimum phase reconstruction. We exploit the real- and imaginary-part sufficiency properties of the Fourier and Hilbert Transforms of causal sequences to develop an algorithm for reconstructing spectral phase given only spectral modulus. The algorithm uses homomorphic signal processing methods with the complex cepstrum. The formal problem of interest is: Given measurements of only the modulus |H(k)| (no phase) of the Discrete Fourier Transform (DFT) of a real, finite-length, stable, causal time domain signal h(n), compute a minimum phase reconstruction ĥ(n) of the signal. Then compute the phase of ĥ(n) using a DFT, and exploit the result as an estimate of the phase of h(n). The development of the algorithm is quite involved, but the final algorithm and its implementation are very simple. This work was motivated by a Phase Retrieval from Modulus Problem that arose in LLNL Defense Sciences Engineering Division (DSED) projects in lightning protection for buildings. The measurements are limited to modulus-only spectra from a spectrum analyzer. However, it is desired to perform system identification on the building to compute impulse responses and transfer functions that describe the amount of lightning energy that will be transferred from the outside of the building to the inside. This calculation requires knowledge of the entire signals (both modulus and phase). The algorithm and software described in this report are proposed as an approach to phase retrieval that can be used for programmatic needs. This report presents a brief tutorial description of the mathematical problem and the derivation of the phase retrieval algorithm. The efficacy of the theory is demonstrated using simulated signals that meet the assumptions of the algorithm. We see that for
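The cepstral recipe the abstract outlines (real cepstrum from log |H(k)|, folding to the causal part, then exponentiation) can be illustrated as follows. This sketches the standard minimum-phase reconstruction, not the report's LLNL implementation; a naive O(N²) DFT keeps it dependency-free, and an even transform length is assumed.

```python
import cmath, math

def dft(x, inverse=False):
    """Naive DFT, O(N^2); adequate for a demonstration."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def minimum_phase(modulus):
    """Reconstruct a minimum-phase impulse response from |H(k)| alone."""
    n = len(modulus)                       # assumed even
    cep = dft([math.log(m) for m in modulus], inverse=True)  # real cepstrum
    folded = ([cep[0]]                     # fold onto the causal part
              + [2 * cep[k] for k in range(1, n // 2)]
              + [cep[n // 2]]
              + [0j] * (n // 2 - 1))
    # H_hat = exp(DFT(folded cepstrum)); h_hat = IDFT(H_hat)
    h_hat = dft([cmath.exp(v) for v in dft(folded)], inverse=True)
    return [v.real for v in h_hat]
```

For h = [1, 0.5] (zero at z = -0.5, already minimum phase), feeding its DFT modulus back through the routine recovers the sequence.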
NASA Astrophysics Data System (ADS)
Buscema, Massimo; Asadi-Zeydabadi, Masoud; Lodwick, Weldon; Breda, Marco
2016-04-01
Significant applications such as the analysis of Alzheimer's disease differentiated from dementia, or in data mining of social media, or in extracting information of drug cartel structural composition, are often modeled as graphs. The structural or topological complexity, or lack of it, in a graph is quite often useful in understanding and, more importantly, resolving the problem. We are proposing a new index we call the H0 function to measure the structural/topological complexity of a graph. To do this, we introduce the concept of graph pruning and its associated algorithm that is used in the development of our measure. We illustrate the behavior of our measure, the H0 function, through different examples found in the appendix. These examples indicate that the H0 function captures useful information about important characteristics of a graph. Here, we restrict ourselves to undirected graphs.
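The abstract does not give the precise definition of the H0 function, so the following is only a toy illustration of the pruning idea it rests on: repeatedly strip degree-1 vertices and count the rounds until an irreducible core remains. The function name and the round count as a complexity signal are assumptions for illustration, not the authors' measure.

```python
def pruning_rounds(adj):
    """Iteratively strip degree-1 vertices; return (rounds, core vertices).
    adj: dict mapping each vertex to a set of neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # defensive copy
    rounds = 0
    while True:
        leaves = [v for v, ns in adj.items() if len(ns) == 1]
        if not leaves:
            return rounds, set(adj)
        for v in leaves:
            for u in adj.pop(v):
                if u in adj:                      # neighbour may also be a leaf
                    adj[u].discard(v)
        rounds += 1
```

A path takes several rounds to erode to its midpoint, a star collapses in one round, and a cycle is already its own core.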
A geometry-based adaptive unstructured grid generation algorithm for complex geological media
NASA Astrophysics Data System (ADS)
Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh
2014-07-01
In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations in grid resolution. The proposed grid generation algorithm presents a strategy for definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. Adapted grid on the fracture geometry gave identical results with that of a fine grid. The adapted grid employed 88.2% less CPU time when compared to the solutions obtained by the fine grid.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1987-01-01
The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatially distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
NASA Astrophysics Data System (ADS)
Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng
2015-10-01
The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit a wide-band performance, giving rise to the difficulty in obtaining the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircrafts further enhances this performance, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impact on complex composite structures with an obviously improved accuracy.
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
Shahrudin, Shahriza
2015-01-01
This study concerns an attempt to establish a new method for predicting antimicrobial peptides (AMPs) which are important to the immune system. Recently, researchers have become interested in designing alternative drugs based on AMPs because they have found that a large number of bacterial strains have become resistant to available antibiotics. However, researchers have encountered obstacles in the AMPs designing process as experiments to extract AMPs from protein sequences are costly and require a long set-up time. Therefore, a computational tool for AMPs prediction is needed to resolve this problem. In this study, an integrated algorithm is newly introduced to predict AMPs by integrating sequence alignment and support vector machine- (SVM-) LZ complexity pairwise algorithm. It was observed that, when all sequences in the training set are used, the sensitivity of the proposed algorithm is 95.28% in jackknife test and 87.59% in independent test, while the sensitivity obtained for jackknife test and independent test is 88.74% and 78.70%, respectively, when only the sequences that have less than 70% similarity are used. Applying the proposed algorithm may allow researchers to effectively predict AMPs from unknown protein peptide sequences with higher sensitivity. PMID:25802839
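The integrated SVM-alignment predictor is not specified in the abstract, but the LZ-complexity ingredient is standard. A sketch of the classic Kaspar-Schuster counting of Lempel-Ziv (LZ76) production components, one plausible building block of such a pairwise complexity measure, follows.

```python
def lz76(s):
    """LZ76 complexity: number of components in the exhaustive
    production history of sequence s (Kaspar-Schuster counting)."""
    n = len(s)
    if n < 2:
        return n
    i, k, l = 0, 1, 1        # i: copy start, k: match length, l: phrase start
    k_max, c = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:    # phrase extends to end of sequence
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:       # no earlier copy reproduces more: new component
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

The classic worked example 0001101001000101 parses into 0·001·10·100·1000·101, six components; constant and periodic strings score low.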
A NEW FRAMEWORK FOR URBAN SUSTAINABILITY ASSESSMENTS: LINKING COMPLEXITY, INFORMATION AND POLICY
Urban systems emerge as distinct entities from the complex interactions among social, economic and cultural attributes, and information, energy and material stocks and flows that operate on different temporal and spatial scales. Such complexity poses a challenge to identify the...
Application of Fisher Information to Complex Dynamic Systems (Tucson)
Fisher information was developed by the statistician Ronald Fisher as a measure of the information obtainable from data being used to fit a related parameter. Starting from the work of Ronald Fisher and B. Roy Frieden, we have developed Fisher information as a measure of order ...
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas; Skou Cordua, Knud; Caroline Looms, Majken; Mosegaard, Klaus
2013-03-01
From a probabilistic point of view, the solution to an inverse problem can be seen as a combination of independent states of information quantified by probability density functions. Typically, these states of information are provided by a set of observed data and some a priori information on the solution. The combined state of information (i.e. the solution to the inverse problem) is a probability density function typically referred to as the a posteriori probability density function. We present a generic toolbox for Matlab and Gnu Octave called SIPPI that implements a number of methods for solving such probabilistically formulated inverse problems by sampling the a posteriori probability density function. In order to describe the a priori probability density function, we consider both simple Gaussian models and more complex (and realistic) a priori models based on higher order statistics. These a priori models can be used with both linear and non-linear inverse problems. For linear inverse Gaussian problems we make use of least-squares and kriging-based methods to describe the a posteriori probability density function directly. For general non-linear (i.e. non-Gaussian) inverse problems, we make use of the extended Metropolis algorithm to sample the a posteriori probability density function. Together with the extended Metropolis algorithm, we use sequential Gibbs sampling, which allows computationally efficient sampling of complex a priori models. The toolbox can be applied to any inverse problem as long as a way of solving the forward problem is provided. Here we demonstrate the methods and algorithms available in SIPPI. An application of SIPPI, to a tomographic cross-borehole inverse problem, is presented in a second part of this paper.
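As a dependency-free illustration of sampling an a posteriori density (SIPPI itself is a Matlab/Octave toolbox, and its extended Metropolis algorithm additionally draws prior realizations via sequential Gibbs sampling), here is a plain random-walk Metropolis sketch on a one-dimensional linear toy problem whose posterior is analytically N(0.6, 0.5).

```python
import math, random

def metropolis(log_post, m0, step, n_samples, seed=0):
    """Random-walk Metropolis sampling of a 1-D log-density."""
    rng = random.Random(seed)
    m, lp = m0, log_post(m0)
    chain = []
    for _ in range(n_samples):
        prop = m + rng.gauss(0.0, step)       # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            m, lp = prop, lp_prop
        chain.append(m)
    return chain

# Toy linear problem: prior m ~ N(0, 1), data y = m + N(0, 1) noise,
# observed y = 1.2.  The a posteriori pdf is then N(0.6, 0.5) analytically.
def log_posterior(m, y_obs=1.2):
    return -0.5 * m * m - 0.5 * (y_obs - m) ** 2
```

After burn-in, the chain's sample mean and variance should approach the analytic posterior moments 0.6 and 0.5.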
Breska, Assaf; Ben-Shakhar, Gershon; Gronau, Nurit
2012-09-01
We examined whether the Concealed Information Test (CIT) may be used when the critical details are unavailable to investigators (the Searching CIT [SCIT]). This use may have important applications in criminal investigations (e.g., finding the location of a murder weapon) and in security-related threats (e.g., detecting individuals and groups suspected in planning a terror attack). Two classes of algorithms designed to detect the critical items and classify individuals in the SCIT were examined. The 1st class was based on averaging responses across subjects to identify critical items and on averaging responses across the identified critical items to identify knowledgeable subjects. The 2nd class used clustering methods based on the correlations between the response profiles of all subject pairs. We applied a principal component analysis to decompose the correlation matrix into its principal components and defined the detection score as the coefficient of each subject on the component that explained the largest portion of the variance. Reanalysis of 3 data sets from previous CIT studies demonstrated that in most cases the efficiency of differentiation between knowledgeable and unknowledgeable subjects in the SCIT (indexed by the area under the receiver operating characteristic curve) approached that of the standard CIT for both algorithms. We also examined the robustness of our results to variations in the number of knowledgeable and unknowledgeable subjects in the sample. This analysis demonstrated that the performance of our algorithms is relatively robust to changes in the number of individuals examined in each group, provided that at least 2 (but desirably 5 or more) knowledgeable examinees are included.
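The second class of SCIT algorithms above decomposes the matrix of correlations between subjects' response profiles by principal components; a small sketch using power iteration follows, with illustrative profile data rather than the original CIT data sets.

```python
import math

def pearson(x, y):
    """Pearson correlation between two response profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def first_component(R, iters=500):
    """Dominant eigenvector/eigenvalue of a symmetric PSD matrix
    by power iteration; coefficients serve as detection scores."""
    n = len(R)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return v, lam

# Hypothetical profiles over 6 items: subjects 0 and 1 "know" item 3.
profiles = [
    [1, 1, 9, 1, 2, 1],
    [2, 1, 8, 1, 1, 1],
    [3, 5, 2, 6, 1, 4],
    [6, 1, 3, 2, 7, 2],
    [1, 6, 2, 3, 4, 7],
]
R = [[pearson(p, q) for q in profiles] for p in profiles]
```

The two knowledgeable subjects, whose profiles correlate strongly, load with the same sign on the first component.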
NASA Astrophysics Data System (ADS)
Xie, Li; Li, Guangyao; Xiao, Mang; Peng, Lei
2016-04-01
Various kinds of remote sensing image classification algorithms have been developed to adapt to the rapid growth of remote sensing data. Conventional methods typically have restrictions in either classification accuracy or computational efficiency. Aiming to overcome the difficulties, a new solution for remote sensing image classification is presented in this study. A discretization algorithm based on information entropy is applied to extract features from the data set and a vector space model (VSM) method is employed as the feature representation algorithm. Because of the simple structure of the feature space, the training rate is accelerated. The performance of the proposed method is compared with two other algorithms: back propagation neural networks (BPNN) method and ant colony optimization (ACO) method. Experimental results confirm that the proposed method is superior to the other algorithms in terms of classification accuracy and computational efficiency.
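The paper's discretization algorithm is not detailed in the abstract; a common entropy-based single binary split (in the spirit of Fayyad-Irani, without the MDL stopping rule) can be sketched as follows, with hypothetical band values and class labels.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def best_split(values, labels):
    """Cut point minimizing class-label entropy weighted over the
    two resulting bins (single binary split)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                      # no cut between equal values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        w_ent = (len(left) * entropy(left)
                 + len(right) * entropy(right)) / n
        cut = (pairs[i][0] + pairs[i - 1][0]) / 2.0
        if w_ent < best[0]:
            best = (w_ent, cut)
    return best                           # (weighted entropy, cut point)
```

When the classes separate cleanly along the feature, the chosen cut sits between the two clusters and the weighted entropy drops to zero.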
2012-01-01
The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms. PMID:22413926
Knowledge-based navigation of complex information spaces
Burke, R.D.; Hammond, K.J.; Young, B.C.
1996-12-31
While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large multi-dimensional information spaces. We have developed an approach to building data “tour guides,” called FINDME systems. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also insights into the structure of the information space itself. In these systems, we have combined ideas of instance-based browsing, structuring retrieval around the critiquing of previously retrieved examples, and retrieval strategies: knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.
Algorithms for biomagnetic source imaging with prior anatomical and physiological information
Hughett, P W
1995-12-01
This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
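For Gaussian priors, an estimator of the kind the abstract describes takes the standard minimum mean square error form x̂ = x0 + P Aᵀ (A P Aᵀ + R)⁻¹ (y − A x0), with P the prior source covariance and R the measurement noise covariance. The following is a small dense-matrix sketch of that generic form, not the dissertation's efficient OCLIM algorithms.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, b):
    """Solve A z = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mmse_estimate(A, y, P, R, x0):
    """x_hat = x0 + P A^T (A P A^T + R)^{-1} (y - A x0)."""
    At = transpose(A)
    S = [[s + R[i][j] for j, s in enumerate(row)]
         for i, row in enumerate(matmul(matmul(A, P), At))]
    resid = [yi - sum(a * x for a, x in zip(row, x0))
             for yi, row in zip(y, A)]
    w = solve(S, resid)            # (A P A^T + R)^{-1} (y - A x0)
    gain = matmul(P, At)
    return [x0i + sum(g * wi for g, wi in zip(row, w))
            for x0i, row in zip(x0, gain)]
```

With strong noise the estimate shrinks toward the prior mean; as the noise covariance shrinks to zero, it approaches the least-squares solution.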
A tomographic algorithm to determine tip-tilt information from laser guide stars
NASA Astrophysics Data System (ADS)
Reeves, A. P.; Morris, T. J.; Myers, R. M.; Bharmal, N. A.; Osborn, J.
2016-06-01
Laser Guide Stars (LGS) have greatly increased the sky-coverage of Adaptive Optics (AO) systems. Due to the up-link turbulence experienced by LGSs, a Natural Guide Star (NGS) is still required, preventing full sky-coverage. We present a method of obtaining partial tip-tilt information from LGSs alone in multi-LGS tomographic LGS AO systems. The method of LGS up-link tip-tilt determination is derived using a geometric approach, then an alteration to the Learn and Apply algorithm for tomographic AO is made to accommodate up-link tip-tilt. Simulation results are presented, verifying that the technique shows good performance in correcting high altitude tip-tilt, but not that from low altitudes. We suggest that the method is combined with multiple far off-axis tip-tilt NGSs to provide gains in performance and sky-coverage over current tomographic AO systems.
Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2008-01-01
We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field-of-view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel single-TES PoSTs or Hydras. We use numerically simulated data for a nine-absorber device, which includes realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV coupled with position sensitivity down to a few 100 eV should be achievable for a fully optimized device.
Algorithms for deriving crystallographic space-group information. II: Treatment of special positions
Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2001-10-05
Algorithms for the treatment of special positions in 3-dimensional crystallographic space groups are presented. These include an algorithm for the determination of the site-symmetry group given the coordinates of a point, an algorithm for the determination of the exact location of the nearest special position, an algorithm for the assignment of a Wyckoff letter given the site-symmetry group, and an alternative algorithm for the assignment of a Wyckoff letter given the coordinates of a point directly. All algorithms are implemented in ISO C++ and are integrated into the Computational Crystallography Toolbox. The source code is freely available.
Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A
2015-01-01
Decision making in the patient with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add any more damage to that of the disease itself. The adequacy of the clinical interventions consists of only offering those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient and to perform only those allowed by the patient or representative. In this article, the use of an algorithm is proposed that should serve to help health professionals in this decision making process.
Phenylketonuria and Complex Spatial Visualization: An Analysis of Information Processing.
ERIC Educational Resources Information Center
Brunner, Robert L.; And Others
1987-01-01
The study of the ability of 16 early treated phenylketonuric (PKU) patients (ages 6-23 years) to solve complex spatial problems suggested that choice of problem-solving strategy, attention span, and accuracy of mental representation may be affected in PKU patients, despite efforts to maintain well-controlled phenylalanine concentrations in the…
Scale effects on information content and complexity of streamflows
Technology Transfer Automated Retrieval System (TEKTRAN)
Understanding temporal and spatial variations of streamflows is important for flood forecasting, water resources management, and revealing interactions between hydrologic processes (e.g., precipitation, evapotranspiration, and soil water and groundwater flows.) The information theory has been used i...
Statistical complexity in the hydrological information from urbanizing basins
NASA Astrophysics Data System (ADS)
Jovanovic, T.; Mejia, A.; Siddique, R.; Gironas, J. A.
2014-12-01
Urbanizing basins (i.e. basins under urban growth) typify coupled human-natural (CHN) systems, which are said to be complex. Furthermore, the level of complexity of these basins can be assumed to depend on how much the natural environment has been disturbed. In this study we attempt to characterize these systems by quantifying their statistical complexity and its linkage with the degree of urbanization. To perform this quantification, we use both multifractal detrended fluctuation analysis (MDFA) and permutation entropy (PE). MDFA is used to determine long-term dependencies, the Hurst exponent, and the level of multifractality in the hydrological records. PE, on the other hand, is used to characterize short-term dependencies and determine the degree of statistical complexity in the records using a metric that depends non-trivially on entropy. The MDFA and PE analyses were applied to long-term hydrologic records (streamflow, baseflow, and rainfall) from 20 urbanizing basins located in the metropolitan areas of Baltimore, Philadelphia, and Washington DC, US. Results show that streamflow in urbanizing basins displays scaling over a wide range of temporal scales, as well as multifractal properties. More relevantly, we found that the scaling and the strength of the multifractality tend to weaken as the basins become more urbanized (i.e. streamflow records become more similar to the driving rainfall forcing with increasing urbanization). This interpretation is supported by the non-significant dependency of baseflow on the amount of urban development in the basin. The PE analysis shows that the statistical complexity of streamflow decreases for the most urbanized basins while the entropy increases, suggesting that streamflow becomes less structured and more random with increasing urbanization. Overall, this study illustrates the potential of the analyses performed and the associated metrics to characterize the hydrological impact of urbanization.
Kwarciak, Kamil; Radom, Marcin; Formanowicz, Piotr
2016-04-01
Classical sequencing by hybridization takes into account only binary information about sequence composition: a given element from an oligonucleotide library either is or is not a part of the target sequence. However, DNA chip technology has developed to the point where it can provide partial information about the multiplicity of each oligonucleotide the analyzed sequence consists of. It is currently not possible to measure such data exactly, but even partial information should be very useful. Two realistic multiplicity information models are taken into consideration in this paper. The first one, called "one and many", assumes that it is possible to learn whether a given oligonucleotide occurs in a reconstructed sequence once or more than once. According to the second model, called "one, two and many", one is able to learn from the biochemical experiment whether a given oligonucleotide is present in an analyzed sequence once, twice, or at least three times. An ant colony optimization algorithm has been implemented to verify the above models and to compare them with existing algorithms for sequencing by hybridization that utilize the additional information. The proposed algorithm solves the problem with any kind of hybridization errors. Computational experiment results confirm that using even partial information about multiplicity increases the quality of the reconstructed sequences. Moreover, they also show that the more precise model yields better solutions, and that the ant colony optimization algorithm outperforms the existing ones. Test data sets and the proposed ant colony optimization algorithm are available at: http://bioserver.cs.put.poznan.pl/download/ACO4mSBH.zip.
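The two multiplicity models described in the abstract can be pictured as quantized k-mer spectra. The sketch below is an illustration of that encoding only, not the paper's ant colony algorithm; the function names and the toy sequence are assumptions.

```python
# Minimal sketch: exact k-mer spectrum of a sequence, then collapsed to the
# partial multiplicity information of the "one and many" (levels=2) and
# "one, two and many" (levels=3) models from the paper.
from collections import Counter

def spectrum(seq, k):
    """Exact k-mer multiset of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def quantize(spec, levels):
    """Collapse exact counts to partial multiplicity information:
    levels=2 -> 'one and many' (1 vs >=2);
    levels=3 -> 'one, two and many' (1, 2, >=3)."""
    return {kmer: min(count, levels) for kmer, count in spec.items()}

spec = spectrum("ACGTACGA", 3)   # 'ACG' occurs twice in this toy sequence
print(quantize(spec, 2))
print(quantize(spec, 3))
```

A reconstruction algorithm can then reject any candidate sequence whose quantized spectrum disagrees with the measured one, which is how the extra multiplicity information prunes the search space.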
Algorithmic information content, Church-Turing thesis, physical entropy, and Maxwell's demon
Zurek, W.H.
1990-01-01
Measurements convert the alternative possibilities of a system's potential outcomes into the definiteness of the "record" -- data describing the actual outcome. The resulting decrease of statistical entropy has been, since the inception of Maxwell's demon, regarded as a threat to the second law of thermodynamics. For, when the statistical entropy is employed as the measure of the useful work which can be extracted from the system, its decrease by the information-gathering actions of the observer would lead one to believe that, at least from the observer's viewpoint, the second law can be violated. I show that the decrease of ignorance does not necessarily lead to the lowering of disorder of the measured physical system. Measurements can only convert uncertainty (quantified by the statistical entropy) into randomness of the outcome (given by the algorithmic information content of the data). The ability to extract useful work is measured by physical entropy, which is equal to the sum of these two measures of disorder. Physical entropy so defined is, on average, constant in the course of measurements carried out by the observer on an equilibrium system. 27 refs., 6 figs.
Beyer, Hans-Georg
2014-01-01
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy optimizing the expected value of the objective functions leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state of the art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals in the asymptotic limit-up to a scalar factor-the inverse of the Hessian of the objective function considered.
Complex network structure of musical compositions: Algorithmic generation of appealing music
NASA Astrophysics Data System (ADS)
Liu, Xiao Fan; Tse, Chi K.; Small, Michael
2010-01-01
In this paper we construct networks for music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurring connections. We analyze classical music from Bach, Mozart, Chopin, as well as other types of music such as Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. We conjecture that preserving the universal network properties is a necessary step in artificial composition of music. Power-law exponents of node degree, node strength and/or edge weight distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be composed artificially using a controlled random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. By generating a large number of compositions, we find that this algorithm generates music which has the necessary qualities to be subjectively judged as appealing.
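The composition procedure in the abstract, a controlled random walk on a weighted note network, can be sketched as follows. The toy graph, the weights, and the selection rule (weighted choice by edge weight) are illustrative assumptions, not the authors' exact rules.

```python
# Illustrative sketch of a controlled random walk over a note co-occurrence
# network: start at a note, then repeatedly pick the next note with
# probability proportional to the outgoing edge weight.
import random

# adjacency: note -> {neighbor: edge weight (co-occurrence count)}
graph = {
    "C": {"E": 3, "G": 2},
    "E": {"G": 4, "C": 1},
    "G": {"C": 5, "E": 2},
}

def compose(graph, start, length, seed=0):
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        nbrs = graph[note]
        # weighted choice: more common transitions are preferred
        note = rng.choices(list(nbrs), weights=list(nbrs.values()))[0]
        melody.append(note)
    return melody

melody = compose(graph, "C", 8)
print("".join(melody))
```

The paper's rules also compare node weights and degrees; those comparisons would slot into the same loop in place of the simple weighted choice.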
Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan
2015-10-01
The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed by using a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA) which integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem in terms of identifying protein complexes from PPI networks, and then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset. Finally, we describe our algorithm, which mainly consists of three steps: population initialization, subgraph mutation, and subgraph selection. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of f-score. Moreover, our approach can cover a certain number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions.
Genes, information and sense: complexity and knowledge retrieval.
Sadovsky, Michael G; Putintseva, Julia A; Shchepanovsky, Alexander S
2008-06-01
Information capacity of nucleotide sequences measures the unexpectedness of a continuation of a given string of nucleotides, and thus bears a sound relation to a variety of biological issues. A continuation is defined in the way that maximizes the entropy of the ensemble of such continuations. The capacity is defined as the mutual entropy of the real frequency dictionary of a sequence with respect to the one bearing the most expected continuations; it does not depend on the length of the strings contained in a dictionary. Various genomes exhibit a multi-minima pattern in the dependence of information capacity on string length, reflecting an order within a sequence. Strings whose expected frequency deviates significantly from the real one are the words of increased information value. Such words exhibit a non-random distribution along a sequence, making it possible to retrieve the correlation between a structure and a function encoded within a sequence.
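The comparison of a real k-mer dictionary against the maximum-entropy expectation can be sketched with relative entropy. This is a hedged illustration: the expected-frequency formula below is the standard overlap-based reconstruction from (k-1)-mer statistics and may differ in detail from the paper's exact definition.

```python
# Sketch: "information capacity" style measure as the relative entropy of
# observed k-mer frequencies versus frequencies expected from (k-1)-mer
# statistics (maximum-entropy continuation). Requires k >= 3.
import math
from collections import Counter

def freqs(seq, k):
    c = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = sum(c.values())
    return {w: v / n for w, v in c.items()}

def capacity(seq, k):
    f_k, f_k1, f_k2 = freqs(seq, k), freqs(seq, k - 1), freqs(seq, k - 2)
    s = 0.0
    for w, f in f_k.items():
        # expected frequency of w from its two overlapping (k-1)-mers
        expected = f_k1[w[:-1]] * f_k1[w[1:]] / f_k2[w[1:-1]]
        s += f * math.log(f / expected)
    return s

# a highly regular sequence: k-mers are almost fully predicted by shorter ones
print(capacity("ACACACACAC", 3))
```

Words whose observed frequency deviates strongly from `expected` are the "words of increased information value" discussed in the abstract.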
Thermodynamic aspects of information transfer in complex dynamical systems.
Cafaro, Carlo; Ali, Sean Alan; Giffin, Adom
2016-02-01
From the Horowitz-Esposito stochastic thermodynamical description of information flows in dynamical systems [J. M. Horowitz and M. Esposito, Phys. Rev. X 4, 031015 (2014)], it is known that while the second law of thermodynamics is satisfied by a joint system, the entropic balance for the subsystems is adjusted by a term related to the mutual information exchange rate between the two subsystems. In this article, we present a quantitative discussion of the conceptual link between the Horowitz-Esposito analysis and the Liang-Kleeman work on information transfer between dynamical system components [X. S. Liang and R. Kleeman, Phys. Rev. Lett. 95, 244101 (2005)]. In particular, the entropic balance arguments employed in the two approaches are compared. Notwithstanding all differences between the two formalisms, our work strengthens the Liang-Kleeman heuristic balance reasoning by showing its formal analogy with the recent Horowitz-Esposito thermodynamic balance arguments. PMID:26986295
Van Beurden, Eric K; Kia, Annie M; Zask, Avigdor; Dietrich, Uta; Rose, Lauren
2013-03-01
Health promotion addresses issues from the simple (with well-known cause/effect links) to the highly complex (webs and loops of cause/effect with unpredictable, emergent properties). Yet there is no conceptual framework within its theory base to help identify approaches appropriate to the level of complexity. The default approach favours reductionism--the assumption that reducing a system to its parts will inform whole system behaviour. Such an approach can yield useful knowledge, yet is inadequate where issues have multiple interacting causes, such as social determinants of health. To address complex issues, there is a need for a conceptual framework that helps choose action that is appropriate to context. This paper presents the Cynefin Framework, informed by complexity science--the study of Complex Adaptive Systems (CAS). It introduces key CAS concepts and reviews the emergence and implications of 'complex' approaches within health promotion. It explains the framework and its use with examples from contemporary practice, and sets it within the context of related bodies of health promotion theory. The Cynefin Framework, especially when used as a sense-making tool, can help practitioners understand the complexity of issues, identify appropriate strategies and avoid the pitfalls of applying reductionist approaches to complex situations. The urgency to address critical issues such as climate change and the social determinants of health calls for us to engage with complexity science. The Cynefin Framework helps practitioners make the shift, and enables those already engaged in complex approaches to communicate the value and meaning of their work in a system that privileges reductionist approaches. PMID:22128193
ERIC Educational Resources Information Center
Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P.
1997-01-01
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
NASA Astrophysics Data System (ADS)
Johar, F. M.; Azmin, F. A.; Shibghatullah, A. S.; Suaidi, M. K.; Ahmad, B. H.; Abd Aziz, M. Z. A.; Salleh, S. N.; Shukor, M. Md
2014-04-01
Attenuation of GSM, GPS and personal communication signals by regular-shaped energy-saving glass coatings leads to poor communication inside buildings, where transmission is very low. A brand new type of band-pass frequency selective surface (FSS) for energy-saving glass applications is presented in this paper for one unit cell. A numerical periodic Method of Moments approach, following a previous study, has been applied to determine the new optimum design of a one-unit-cell energy-saving glass coating structure. An optimization technique based on the Genetic Algorithm (GA) is used to obtain an improvement in return loss and transmission signal. The unit cell of the FSS is designed and simulated using the CST Microwave Studio software at the industrial, scientific and medical (ISM) bands. A unique and irregular shape of energy-saving glass coating structure is obtained, with lower return loss and an improved transmission coefficient.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings; or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object, which can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, it is proved here that the probability of requiring a large parameter value to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expectations. PMID:23349711
Defining and Detecting Complex Peak Relationships in Mass Spectral Data: The Mz.unity Algorithm.
Mahieu, Nathaniel G; Spalding, Jonathan L; Gelman, Susan J; Patti, Gary J
2016-09-20
Analysis of a single analyte by mass spectrometry can result in the detection of more than 100 degenerate peaks. These degenerate peaks complicate spectral interpretation and are challenging to annotate. In mass spectrometry-based metabolomics, this degeneracy leads to inflated false discovery rates, data sets containing an order of magnitude more features than analytes, and an inefficient use of resources during data analysis. Although software has been introduced to annotate spectral degeneracy, current approaches are unable to represent several important classes of peak relationships. These include heterodimers and higher complex adducts, distal fragments, relationships between peaks in different polarities, and complex adducts between features and background peaks. Here we outline sources of peak degeneracy in mass spectra that are not annotated by current approaches and introduce a software package called mz.unity to detect these relationships in accurate mass data. Using mz.unity, we find that data sets contain many more complex relationships than we anticipated. Examples include the adduct of glutamate and nicotinamide adenine dinucleotide (NAD), fragments of NAD detected in the same or opposite polarities, and the adduct of glutamate and a background peak. Further, the complex relationships we identify show that several assumptions commonly made when interpreting mass spectral degeneracy do not hold in general. These contributions provide new tools and insight to aid in the annotation of complex spectral relationships and provide a foundation for improved data set identification. Mz.unity is an R package and is freely available at https://github.com/nathaniel-mahieu/mz.unity as well as our laboratory Web site http://pattilab.wustl.edu/software/ .
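One of the relationship classes named above, the heterodimer adduct, reduces to simple mass arithmetic. The sketch below is a toy illustration under stated assumptions (singly protonated ions, an assumed mass tolerance), not the mz.unity implementation.

```python
# Toy check: can an observed m/z be explained as the proton-bound dimer
# [A + B + H]+ of two other observed features, each assumed to be [M+H]+?
PROTON = 1.007276  # mass of a proton, Da

def is_heterodimer(mz_a, mz_b, mz_obs, tol=0.005):
    """True if mz_obs matches the [A+B+H]+ adduct within tolerance."""
    neutral_a = mz_a - PROTON
    neutral_b = mz_b - PROTON
    predicted = neutral_a + neutral_b + PROTON  # [A+B+H]+
    return abs(mz_obs - predicted) <= tol

# glutamate: monoisotopic neutral mass 147.0532, so [M+H]+ = 148.0604
glu = 148.0604
print(is_heterodimer(glu, glu, 2 * 147.0532 + PROTON))  # homodimer case
```

Scanning all feature triples with such predicates is the kind of search that produces the glutamate-NAD adduct relationships reported in the abstract.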
How Information Visualization Systems Change Users' Understandings of Complex Data
ERIC Educational Resources Information Center
Allendoerfer, Kenneth Robert
2009-01-01
User-centered evaluations of information systems often focus on the usability of the system rather its usefulness. This study examined how a using an interactive knowledge-domain visualization (KDV) system affected users' understanding of a domain. Interactive KDVs allow users to create graphical representations of domains that depict important…
ERIC Educational Resources Information Center
Puerta Melguizo, Mari Carmen; Vidya, Uti; van Oostendorp, Herre
2012-01-01
We studied the effects of menu type, navigation path complexity and spatial ability on information retrieval performance and web disorientation or lostness. Two innovative aspects were included: (a) navigation path relevance and (b) information gathering tasks. As expected we found that, when measuring aspects directly related to navigation…
Dissemination of information in complex networks with congestion
NASA Astrophysics Data System (ADS)
Cholvi, Vicent
2006-07-01
We address the problem of message transfer in complex networks with congestion. We propose a new strategy aimed at improving routing efficiency. Such a strategy, contrary to the shortest available path length from a given source to its destination (perhaps the most widely analyzed routing strategy), takes into account the congestion of nodes and can be deployed, with a minimal overhead, on top of it. Our results show that, by distributing more homogeneously the congestion of nodes, it significantly reduces the average network load as well as the collapse point.
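The congestion-aware routing strategy can be sketched as a shortest-path search whose edge costs include the queue length at each node. The cost function (one hop plus a congestion penalty) is an illustrative assumption; the paper's exact strategy may differ.

```python
# Sketch: Dijkstra-style routing where the cost of entering node v is
# 1 + alpha * queue_len[v], so congested nodes are detoured around.
import heapq

def congestion_aware_path(adj, queue_len, src, dst, alpha=1.0):
    """adj: node -> iterable of neighbors; queue_len: node -> queued msgs."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + 1.0 + alpha * queue_len.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

adj = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
# 'a' is congested, so the router detours through 'b'
print(congestion_aware_path(adj, {"a": 10, "b": 0}, "s", "t"))
```

With all queues empty this reduces to plain shortest-path routing, which is the baseline the paper compares against.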
Zhang, Yan-jun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong
2015-07-01
The traditional BOTDR optical fiber sensing system uses a single-channel sensing fiber to measure information features. Uncontrolled factors such as cross-sensitivity can lower the fitting precision of the scattering spectrum and worsen the deviation of the information analysis. Therefore, a BOTDR system that detects multichannel sensor information simultaneously is proposed, together with a scattering-spectrum analysis method for the multichannel Brillouin optical time-domain reflectometry (BOTDR) sensing system, in order to extract high-precision spectral features. The method combines three-times data fusion (TTDF) with a cuckoo Newton search (CNS) algorithm. First, following the Dixon and Grubbs criteria, it uses the data-fusion ability of the TTDF algorithm to eliminate the influence of abnormal values and reduce error signals. Second, it uses the cuckoo Newton search algorithm to improve the spectrum fitting and enhance the accuracy of Brillouin scattering spectrum analysis: the global optimal solution obtained by the cuckoo search serves as the initial value of the Newton algorithm for local optimization, which ensures the spectrum-fitting precision. Information extraction at different linewidths is analyzed in the temperature-information scattering spectrum under the condition of a linear weight ratio of 1:9. The variance of the multichannel data fusion is about 0.0030, the center frequency of the scattering spectrum is 11.213 GHz, and the temperature error is less than 0.15 K. Theoretical analysis and simulation results show that the algorithm can be used in a multichannel distributed optical fiber sensing system based on Brillouin optical time-domain reflectometry, and that it effectively improves the accuracy of multichannel sensing signals and the precision of Brillouin scattering spectrum analysis. PMID:26717729
Chen, Yanming; Zhao, Qingjie
2015-01-01
This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms. PMID:25951338
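The information-filter machinery underlying the SCIF can be illustrated in its simplest scalar form. This is a minimal sketch under assumed toy conditions (scalar state, identity measurement model), not the paper's square-root cubature filter or its consensus weighting.

```python
# Minimal information-form fusion: each sensor contributes an information
# matrix Y = H^T R^-1 H and information vector y = H^T R^-1 z; summing the
# contributions and solving gives the fused estimate. Here H = 1 (scalar).
def info_update(z, r):
    """Scalar measurement z with noise variance r."""
    Y = 1.0 / r          # information matrix contribution
    y = z / r            # information vector contribution
    return Y, y

def fuse(measurements):
    Y_tot = sum(Y for Y, _ in measurements)
    y_tot = sum(y for _, y in measurements)
    return y_tot / Y_tot  # fused state estimate x = Y^-1 y

m = [info_update(2.0, 1.0), info_update(4.0, 1.0)]
print(fuse(m))  # inverse-variance weighted average of the two readings
```

Because contributions are additive, this form extends naturally to distributed consensus: cameras exchange and average their (Y, y) pairs, which is the structure the MTSCF builds on.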
Chun, Se Young
2016-03-01
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-01-01
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS.
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-01-01
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and the component of gravity in the horizontal plane is zero by definition, we exploit this property to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality-reduction Gauss-Hermite filter are employed to establish a fine horizontal reference frame. On this basis, the projection of gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Partial received codes and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
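The puncturing step can be sketched in a few lines. This is only the matrix-shrinking idea, not the paper's decoder: the parity-check matrix H and the set of punctured positions are invented, and no belief-propagation decoding is performed. The point is simply that removing columns reduces the number of nonzero entries the decoder would iterate over.

```python
# Hedged sketch of puncturing: dropping reliably received positions
# (and the matching parity-check columns) shrinks the matrix that the
# iterative decoder works on. H and the punctured set are invented.
def puncture(H, punctured_cols):
    keep = [j for j in range(len(H[0])) if j not in punctured_cols]
    return [[row[j] for j in keep] for row in H]

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
Hp = puncture(H, {1, 4})          # puncture columns 1 and 4

def nnz(M):                       # nonzero entries, a proxy for decoding work
    return sum(map(sum, M))
```

Here the punctured matrix has 6 nonzero entries instead of 9, a one-third reduction in the edges a message-passing decoder would traverse per iteration.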
Supramolecular chemistry: from molecular information towards self-organization and complex matter
NASA Astrophysics Data System (ADS)
Lehn, Jean-Marie
2004-03-01
Molecular chemistry has developed a wide range of very powerful procedures for constructing ever more sophisticated molecules from atoms linked by covalent bonds. Beyond molecular chemistry lies supramolecular chemistry, which aims at developing highly complex chemical systems from components interacting via non-covalent intermolecular forces. By the appropriate manipulation of these interactions, supramolecular chemistry became progressively the chemistry of molecular information, involving the storage of information at the molecular level, in the structural features, and its retrieval, transfer, and processing at the supramolecular level, through molecular recognition processes operating via specific interactional algorithms. This has paved the way towards apprehending chemistry also as an information science. Numerous receptors capable of recognizing, i.e. selectively binding, specific substrates have been developed, based on the molecular information stored in the interacting species. Suitably functionalized receptors may perform supramolecular catalysis and selective transport processes. In combination with polymolecular organization, recognition opens ways towards the design of molecular and supramolecular devices based on functional (photoactive, electroactive, ionoactive, etc) components. A step beyond preorganization consists in the design of systems undergoing self-organization, i.e. systems capable of spontaneously generating well-defined supramolecular architectures by self-assembly from their components. Self-organization processes, directed by the molecular information stored in the components and read out at the supramolecular level through specific interactions, represent the operation of programmed chemical systems. They have been implemented for the generation of a variety of discrete functional architectures of either organic or inorganic nature. Self-organization processes also give access to advanced supramolecular materials, such as
ERIC Educational Resources Information Center
Losee, Robert M.
1996-01-01
The grammars of natural languages may be learned by using genetic algorithm systems such as LUST (Linguistics Using Sexual Techniques) that reproduce and mutate grammatical rules and parts-of-speech tags. In document retrieval or filtering systems, applying tags to the list of terms representing a document provides additional information about…
Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier
2012-01-01
Landcover is subject to continuous change on a wide variety of temporal and spatial scales. These changes produce significant effects in human and natural activities. Maintaining a spatial database updated with the changes that have occurred allows better monitoring of the Earth’s resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a change detection multisource fusion process, which allows generating a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proven efficient for identifying the change detection index with the highest contribution. PMID:22737023
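A minimal sketch of the pipeline described above, with invented pixel values: compute change indices, threshold each at mean plus one standard deviation into change/no_change, and fuse the binary maps by majority vote. Real CD work uses image-specific thresholding (e.g. Otsu or Kittler-Illingworth) rather than this simple rule.

```python
# Toy change-detection pipeline: two change indices are thresholded
# into binary change maps, then fused by per-pixel majority vote.
def threshold(index, k=1.0):
    m = sum(index) / len(index)
    s = (sum((v - m) ** 2 for v in index) / len(index)) ** 0.5
    return [1 if v > m + k * s else 0 for v in index]

def majority_fuse(maps):
    return [1 if sum(col) * 2 > len(maps) else 0 for col in zip(*maps)]

before = [10, 11, 10, 12, 10, 11]
after  = [10, 11, 30, 12, 28, 11]     # pixels 2 and 4 actually changed
diff  = [abs(a - b) for a, b in zip(before, after)]       # difference index
ratio = [abs(a - b) / (b + 1) for a, b in zip(before, after)]  # ratio-like index
fused = majority_fuse([threshold(diff), threshold(ratio)])
```

Both indices flag the same two pixels here, so the fused map recovers exactly the changed positions.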
Enhancing a diffusion algorithm for 4D image segmentation using local information
NASA Astrophysics Data System (ADS)
Lösel, Philipp; Heuveline, Vincent
2016-03-01
Inspired by the diffusion of a particle, we present a novel approach for performing semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements in a synchrotron X-ray microtomograph. Given a small number of 2D slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be at a specific position in the dataset at a certain time, or approximate that probability by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. Starting a great number of random walks from each labeled pixel, a voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label from which the random walks most likely started. Due to the high scalability of random walks, this approach is suitable for high-throughput measurements. Additionally, we describe an interactively adjusted slice-by-slice active contours method considering local information, where we start with one manually labeled slice and move forward in any direction. This approach is superior to the diffusion algorithm with respect to accuracy, but inferior in the number of tedious manual processing steps. The methods were applied to 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
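The hit-counting idea can be sketched in one dimension with invented geometry: random walks start from each labeled seed cell, every cell counts visits per label, and each cell takes the label of its most frequent visitor. The real method works on 3D/4D voxel grids with intelligently weighted steps; here the steps are unweighted and the grid is a line.

```python
import random
random.seed(0)

def segment(n_cells, seeds, n_walks=400, n_steps=30):
    # hits[label][cell] counts visits by walks started from each seed
    hits = [[0] * n_cells for _ in seeds]
    for label, start in enumerate(seeds):
        for _ in range(n_walks):
            pos = start
            for _ in range(n_steps):
                hits[label][pos] += 1
                pos = min(n_cells - 1, max(0, pos + random.choice((-1, 1))))
    # each cell takes the label whose walks visited it most often
    return [max(range(len(seeds)), key=lambda lab: hits[lab][c])
            for c in range(n_cells)]

labels = segment(11, seeds=[1, 9])   # two seeds on an 11-cell line
```

Cells near seed 1 end up with label 0 and cells near seed 9 with label 1; because each walk is independent, the work parallelizes trivially, which is the scalability argument made above.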
Wu, Guohua; Pedrycz, Witold; Li, Haifeng; Qiu, Dishan; Ma, Manhao; Liu, Jin
2013-01-01
Discovering and utilizing problem domain knowledge is a promising direction towards improving the efficiency of evolutionary algorithms (EAs) when solving optimization problems. We propose a knowledge-based variable reduction strategy (VRS) that can be integrated into EAs to solve unconstrained and first-order derivative optimization functions more efficiently. VRS originates from the knowledge that, in an unconstrained and first-order derivative optimization function, the optimal solution is located at a local extreme point at which the partial derivative over each variable equals zero. From this collection of partial derivative equations, some quantitative relations among different variables can be obtained. These variable relations have to be satisfied in the optimal solution. With the use of such relations, VRS can reduce the number of variables and shrink the solution space when using EAs to deal with the optimization function, thus improving optimization speed and quality. When we apply VRS to optimization problems, we only need to modify the way the objective function is calculated. Therefore, practically, it can be integrated with any EA. In this study, VRS is combined with particle swarm optimization variants and tested on several benchmark optimization functions and a real-world optimization problem. Computational results and a comparative study demonstrate the effectiveness of VRS. PMID:24250256
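A worked toy instance of the variable reduction idea, with an invented objective: for f(x, y) = (x - 1)^2 + (y - 2x)^2, the stationarity condition df/dy = 2(y - 2x) = 0 yields the relation y = 2x, so the search can run over x alone and y is recovered afterwards. A plain random search stands in for the EA.

```python
import random
random.seed(1)

def f(x, y):
    return (x - 1) ** 2 + (y - 2 * x) ** 2

def reduced(x):
    # substitute the derivative relation y = 2x: search space is now 1-D
    return f(x, 2 * x)

# A deliberately simple random-search stand-in for an EA, over x only.
best_x = min((random.uniform(-5, 5) for _ in range(20000)), key=reduced)
best_y = 2 * best_x   # recover the reduced variable from the relation
```

The two-variable problem collapses to minimizing (x - 1)^2, so the search converges to (1, 2) far faster than a search over the full (x, y) plane of the same budget would.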
Comparison of CPU and GPU based coding on low-complexity algorithms for display signals
NASA Astrophysics Data System (ADS)
Richter, Thomas; Simon, Sven
2013-09-01
Graphics Processing Units (GPUs) are freely programmable, massively parallel general-purpose processing units and thus offer the opportunity to off-load heavy computations from the CPU to the GPU. One application for GPU programming is image compression, where the massively parallel nature of GPUs promises high speed benefits. This article analyzes the predicaments of data-parallel image coding using the example of two high-throughput coding algorithms. The codecs discussed here were designed to answer a call from the Video Electronics Standards Association (VESA), and require only minimal buffering at the encoder and decoder side while avoiding any pixel-based feedback loops limiting the operating frequency of hardware implementations. A comparison of CPU and GPU implementations of the codecs shows that GPU-based codecs are usually not considerably faster, or perform with less than ideal rate-distortion performance. Analyzing the details of this result provides theoretical evidence that, for any coding engine, either parts of the entropy coding and bit-stream build-up must remain serial, or rate-distortion penalties must be paid when offloading all computations onto the GPU.
Huang, Xin; Huang, Lin; Peng, Hong; Guru, Ashu; Xue, Weihua; Hong, Sang Yong; Liu, Miao; Sharma, Seema; Fu, Kai; Caprez, Adam P; Swanson, David R; Zhang, Zhixin; Ding, Shi-Jian
2013-09-01
Identifying protein post-translational modifications (PTMs) from tandem mass spectrometry data of complex proteome mixtures is a highly challenging task. Here we present a new strategy, named iterative search for identifying PTMs (ISPTM), for tackling this challenge. The ISPTM approach consists of a basic search with no variable modification, followed by iterative searches of many PTMs using a small number of them (usually two) in each search. The performance of the ISPTM approach was evaluated on mixtures of 70 synthetic peptides with known modifications, on an 18-protein standard mixture with unknown modifications and on real, complex biological samples of mouse nuclear matrix proteins with unknown modifications. ISPTM revealed that many chemical PTMs were introduced by urea and iodoacetamide during sample preparation and many biological PTMs, including dimethylation of arginine and lysine, were significantly activated by Adriamycin treatment in nuclear matrix associated proteins. ISPTM increased the MS/MS spectral identification rate substantially, displayed significantly better sensitivity for systematic PTM identification compared with that of the conventional all-in-one search approach, and offered PTM identification results that were complementary to InsPecT and MODa, both of which are established PTM identification algorithms. In summary, ISPTM is a new and powerful tool for unbiased identification of many different PTMs with high confidence from complex proteome mixtures.
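A back-of-the-envelope sketch of why iterating over small PTM subsets helps: with M candidate modifications, an all-in-one search that allows any subset faces 2^M variable-modification combinations, while an ISPTM-style scheme runs C(M, 2) two-PTM searches of 2^2 combinations each. M = 20 is invented here, and the counting ignores per-peptide site multiplicity, so this is only an order-of-magnitude argument.

```python
import math

M = 20                                 # invented number of candidate PTMs
all_in_one = 2 ** M                    # modification subsets in one big search
iterative_searches = math.comb(M, 2)   # number of two-PTM searches
per_search = 2 ** 2                    # subsets considered within each search
iterative_total = iterative_searches * per_search
```

For M = 20 this is 760 candidate combinations spread across 190 small searches versus over a million in a single search, which is why the iterative scheme keeps the per-search space (and false-positive inflation) manageable.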
Low-complexity, high-speed, and high-dynamic range time-to-impact algorithm
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-10-01
We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation, which is the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to implementation in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was described for the first time in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which give the time-to-impact as well as the possible transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be obtained at a rate of 10 kHz with today's technology.
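The underlying measurement rests on the classical relation tau = x / (dx/dt) for a feature tracked along a 1-D image line: time-to-impact equals the image position divided by its rate of expansion, with no need for absolute distances. A sketch with an invented pinhole geometry and constant approach speed:

```python
# Invented geometry: focal length f, lateral feature offset X,
# initial distance Z0, approach speed v, frame interval dt.
f, X, Z0, v, dt = 1.0, 0.5, 10.0, 1.0, 0.1

def image_pos(t):
    # pinhole projection of a feature approaching at constant speed
    return f * X / (Z0 - v * t)

t = 2.0
x_now, x_prev = image_pos(t), image_pos(t - dt)
tti_est = x_now / ((x_now - x_prev) / dt)   # tau = x / (dx/dt)
tti_true = (Z0 - v * t) / v                 # ground truth: 8.0 s remain
```

The backward difference biases the estimate by about one frame interval (8.1 s versus the true 8.0 s here), which is one reason a high frame rate matters for this class of sensor.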
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve with the results of a numerical model using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To ensure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or infeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency on the frequency range used, respectively.
NASA Technical Reports Server (NTRS)
Freedman, Ellis; Ryan, Robert; Pagnutti, Mary; Holekamp, Kara; Gasser, Gerald; Carver, David; Greer, Randy
2007-01-01
Spectral Dark Subtraction (SDS) provides good ground reflectance estimates across a variety of atmospheric conditions with no knowledge of those conditions. The algorithm may be sensitive to errors from stray light, calibration, and excessive haze/water vapor. Much of the time, SDS appears to provide better estimates than traditional algorithms that use on-site atmospheric measurements.
Maximum likelihood: Extracting unbiased information from complex networks
NASA Astrophysics Data System (ADS)
Garlaschelli, Diego; Loffredo, Maria I.
2008-07-01
The choice of free parameters in network models is subjective, since it depends on which topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility of extracting, from topological data alone, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
The Influence of Information Acquisition on the Complex Dynamics of Market Competition
NASA Astrophysics Data System (ADS)
Guo, Zhanbing; Ma, Junhai
In this paper, we build a dynamical game model with three bounded rational players (firms) to study the influence of information on the complex dynamics of market competition, where the useful information concerns the rival’s real decision. In this dynamical game model, one information-sharing team is composed of two firms; they acquire and share information about their common competitor, but make their own decisions separately, and the amount of information acquired by this information-sharing team determines the estimation accuracy of the rival’s real decision. Based on this dynamical game model and some creative 3D diagrams, the influence of the amount of information on the complex dynamics of market competition, such as local dynamics, global dynamics and profits, is studied. These results have significant theoretical and practical value for understanding the influence of information.
Piro, M. H. A.; Simunovic, S.
2016-03-17
Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum and that this is achieved with satisfactory computational performance becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N³) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.
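The structure of the problem can be seen in a deliberately tiny case: an ideal isomerization A ⇌ B in a closed system of 1 mol, where the integral Gibbs energy depends on a single composition variable and the analytic equilibrium ratio n_B/n_A = exp((g_A - g_B)/RT) gives an independent check that the numerical minimum is the global one. The standard energies here are invented; real multicomponent, multi-phase systems are far harder precisely because no such one-dimensional search exists.

```python
import math

R, T = 8.314, 1000.0
gA, gB = 0.0, -5000.0            # invented standard molar Gibbs energies (J/mol)

def gibbs(nA):
    # integral Gibbs energy of an ideal A/B mixture, 1 mol total
    nB = 1.0 - nA
    return (nA * (gA + R * T * math.log(nA)) +
            nB * (gB + R * T * math.log(nB)))

# Golden-section search on the single composition variable n_A.
lo, hi = 1e-9, 1.0 - 1e-9
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if gibbs(a) < gibbs(b):
        hi = b
    else:
        lo = a
nA = 0.5 * (lo + hi)
```

The minimizer satisfies n_A/n_B = exp((g_B - g_A)/RT), i.e. the lower-energy isomer B dominates, matching the analytic equilibrium condition.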
Ginsburg, Avi; Ben-Nun, Tal; Asor, Roi; Shemesh, Asaf; Ringel, Israel; Raviv, Uri
2016-08-22
In many biochemical processes large biomolecular assemblies play important roles. X-ray scattering is a label-free bulk method that can probe the structure of large self-assembled complexes in solution. As we demonstrate in this paper, solution X-ray scattering can measure complex supramolecular assemblies at high sensitivity and resolution. At high resolution, however, data analysis of larger complexes is computationally demanding. We present an efficient method to compute the scattering curves from complex structures over a wide range of scattering angles. In our computational method, structures are defined as hierarchical trees in which repeating subunits are docked into their assembly symmetries, describing the manner in which subunits repeat in the structure (in other words, the locations and orientations of the repeating subunits). The amplitude of the assembly is calculated by computing the amplitudes of the basic subunits on 3D reciprocal-space grids, moving up in the hierarchy, calculating the grids of larger structures, and repeating this process for all the leaves and nodes of the tree. For very large structures, we developed a hybrid method that sums grids of smaller subunits in order to avoid numerical artifacts. We developed protocols for obtaining high-resolution solution X-ray scattering data from taxol-free microtubules at a wide range of scattering angles. We then validated our method by adequately modeling these high-resolution data. The higher speed and accuracy of our method, over existing methods, is demonstrated for smaller structures: a short microtubule and tobacco mosaic virus. Our algorithm may be integrated into various structure prediction computational tools, simulations, and theoretical models, and provide a means of testing their predicted structural models, by calculating the expected X-ray scattering curve and comparing with experimental data. PMID:27410762
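The hierarchical grid computation itself is beyond a short sketch, but any such method must reproduce the orientationally averaged Debye formula, I(q) = sum over i,j of f_i f_j sin(q r_ij)/(q r_ij), which with unit form factors approaches N^2 as q tends to 0. The four-point geometry below is invented and serves only as a reference check of that limit.

```python
import math

def debye(points, q):
    # Orientationally averaged scattered intensity with unit form factors:
    # I(q) = sum_ij sin(q r_ij) / (q r_ij), with the i == j terms equal to 1.
    total = 0.0
    for p in points:
        for s in points:
            r = math.dist(p, s)
            total += 1.0 if r == 0 else math.sin(q * r) / (q * r)
    return total

points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # invented coordinates
```

For N = 4 points the forward-scattering limit is N^2 = 16, and the intensity falls below that value at finite q; this is exactly the curve a hierarchical amplitude method must reproduce, only far faster for large assemblies.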
NASA Astrophysics Data System (ADS)
D'Ambrosio, D.; Iovine, G.
2003-04-01
Cellular Automata (CA) offer a valid alternative to the classic approach based on partial differential equations for simulating complex phenomena, when the latter can be described in terms of local interactions among their constituent parts. SCIDDICA S3hex is a two-dimensional hexagonal CA model developed for simulating debris flows: it has recently been applied to several real cases of landslides that occurred in Campania (Southern Italy). The release S3hex was derived by progressively improving an initial simplified CA model, originally developed for simulating simple cases of flow-type landslides. The model requires information related to topography, the thickness of erodible regolith overlying the bedrock, and the location and extension of landslide sources. Performance depends on a set of global parameters which are utilised in the transition function of the model: their values affect the elementary processes of the transition function and thus the overall results. A fine calibration is therefore an essential phase in evaluating the reliability of the model for subsequent applications to debris-flow susceptibility zonation. The complexity of both the model and the phenomena to be simulated suggested employing an automated evaluation technique to determine the best set of global parameters. Genetic Algorithms (GA) are a powerful optimization tool inspired by natural selection. In recent decades, in spite of their intrinsic simplicity, they have been successfully applied to a wide range of highly complex problems. The calibration of the model could therefore be performed through such an optimisation technique, by considering several real cases of study. Owing to the large number of simulations generally needed for performing GA experiments on complex phenomena, which imply long-lasting tests on sequential computational architectures, the adoption of a parallel computational environment seemed appropriate: the original source code
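A generic sketch of GA-based calibration (not the SCIDDICA transition function): a toy model with two global parameters is run against synthetic observations, and an elitist GA with Gaussian mutation searches for the parameter set minimizing the misfit. The linear model, the parameter values, and the GA settings are all invented.

```python
import random
random.seed(42)

# Synthetic "observations" produced by a toy model with two global
# parameters (true values invented for the sketch).
xs = [i / 4 for i in range(5)]
true_a, true_b = 0.6, 0.3
observed = [true_a * x + true_b for x in xs]

def misfit(params):
    # squared misfit between simulated and observed outputs
    a, b = params
    return sum((a * x + b - o) ** 2 for x, o in zip(xs, observed))

# Elitist GA: keep the 10 best parameter sets, refill the population
# with Gaussian mutations of the elite, clipped to the search box [0, 1].
pop = [(random.random(), random.random()) for _ in range(30)]
for _ in range(80):
    pop.sort(key=misfit)
    elite = pop[:10]
    pop = elite + [(max(0.0, min(1.0, a + random.gauss(0, 0.05))),
                    max(0.0, min(1.0, b + random.gauss(0, 0.05))))
                   for a, b in random.choices(elite, k=20)]
best = min(pop, key=misfit)
```

In the real setting each fitness evaluation is a full CA simulation compared against a mapped landslide, which is why the abstract argues for running the GA's independent evaluations on a parallel architecture.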
Spencer, W.A.; Goode, S.R.
1997-10-01
ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors` studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.
NASA Astrophysics Data System (ADS)
Ziebart, M.; Adhya, S.; Sibthorpe, A.; Edwards, S.; Cross, P.
In an era of high resolution gravity field modelling the dominant error sources in spacecraft orbit determination are non-conservative spacecraft surface forces. These forces can be difficult to characterise a priori because they require detailed modelling of: spacecraft geometry and surface properties; attitude behaviour; the spatial and temporal variations of the incident radiation and particle fluxes and the interaction of these fluxes with the surfaces. The conventional approach to these problems is to build simplified box-and-wing models of the satellites and to estimate empirically factors that account for the inevitable mis-modelling. Over the last few years the authors have developed a suite of software utilities that model analytically three of the main effects: solar radiation pressure, thermal forces and the albedo/earthshine effects. The techniques are designed specifically to deal with complex spacecraft structures, no structural simplifications are made and the method can be applied to any spacecraft. Substantial quality control measures are used during computation to both avoid and trap errors. The paper presents the broad basis of the modelling techniques for each of the effects, and gives the results of recent tests applied to GPS Block IIR satellites and the low Earth orbit satellite altimeter JASON-1.
NEW FRAMEWORKS FOR URBAN SUSTAINABILITY ASSESSMENTS: LINKING COMPLEXITY, INFORMATION AND POLICY
Urban systems emerge as distinct entities from the complex interactions among social, economic and cultural attributes, and information, energy and material stocks and flows that operate on different temporal and spatial scales. Such complexity poses a challenge to identify the c...
ERIC Educational Resources Information Center
Williamson, David J.
2011-01-01
The specific problem addressed in this study was the low success rate of information technology (IT) projects in the U.S. Due to the abstract nature and inherent complexity of software development, IT projects are among the most complex projects encountered. Most existing schools of project management theory are based on the rational systems…
Shan, Hong; Wang, Zihao; Zhang, Fa; Xiong, Yong; Yin, Chang-Cheng; Sun, Fei
2016-01-01
Single particle analysis, which can be regarded as an average of signals from thousands or even millions of particle projections, is an efficient method to study the three-dimensional structures of biological macromolecules. An intrinsic assumption in single particle analysis is that all the analyzed particles must have identical composition and conformation. Thus specimen heterogeneity in either composition or conformation has raised great challenges for high-resolution analysis. For particles with multiple conformations, inaccurate alignments and orientation parameters will yield an averaged map with diminished resolution and smeared density. Going beyond extensive classification approaches, and based on the assumption that the macromolecular complex is made up of multiple rigid modules whose relative orientations and positions fluctuate slightly around equilibrium, we propose a new method called local optimization refinement to address this conformational heterogeneity for an improved resolution. The key idea is to optimize the orientation and shift parameters of each rigid module and then reconstruct their three-dimensional structures individually. Using simulated data of 80S/70S ribosomes with relative fluctuations between the large (60S/50S) and the small (40S/30S) subunits, we tested this algorithm and found that the resolutions of both subunits are significantly improved. Our method provides a proof-of-principle solution for high-resolution single particle analysis of macromolecular complexes with dynamic conformations.
NASA Technical Reports Server (NTRS)
Hoang, TY
1994-01-01
A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal area approach of an instrumented helicopter. Navigation data collected include helicopter position and velocity from a global position system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were processed post-flight, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded small dynamic maneuvers in the lateral plane, while motion in the vertical plane was recorded by the second segment. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
NASA Astrophysics Data System (ADS)
Ziebart, M.; Adhya, S.; Sibthorpe, A.; Edwards, S.; Cross, P.
In an era of high-resolution gravity field modelling, the dominant error sources in spacecraft orbit determination are non-conservative spacecraft surface forces. These forces include: solar radiation pressure, thermal re-radiation forces, the forces due to radiation both reflected and emitted by the Earth, and atmospheric drag effects. All of these forces can be difficult to characterise a priori because they require detailed modelling of the spacecraft geometry and surface properties, its attitude behaviour, the spatial and temporal variations of the incident fluxes, and the interaction of these fluxes with the surfaces. The conventional approach to overcoming these problems is to build simplified box-and-wing models of the satellites and to estimate empirically factors that account for the inevitable mis-modelling. Over the last five years the authors have developed a suite of software utilities that model analytically the first three effects in the list above: solar radiation pressure, thermal forces and the albedo/earthshine force. The techniques are designed specifically to deal with complex spacecraft structures: no structural simplifications are made, and the method can be applied to any spacecraft. Substantial quality control measures are used during computation to both avoid and trap errors. The paper presents the broad basis of the modelling techniques for each of the effects. Two operational tests of the output models, using the medium Earth orbit satellite GPS Block IIR and the low Earth orbit Jason-1, are presented. Model tests for GPS IIR are based on predicting the satellite orbit using the dynamic models alone (with no empirical scaling or augmentation) and comparing the integrated trajectory with precise, post-processed orbits. Using one month's worth of precise orbits, and all available Block IIR satellites, the RMS differences between the predicted orbits and the precise orbits over 12 hours are: 0.14 m (height), 0.07 m (across track) and 0.51 m (along track). The
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-01
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan
2016-01-01
Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site designs and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean squared deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library of 1,491 proteins, and four of these scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized, catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic
Using measures of information content and complexity of time series as hydrologic metrics
Technology Transfer Automated Retrieval System (TEKTRAN)
The information theory has been previously used to develop metrics that allowed to characterize temporal patterns in soil moisture dynamics, and to evaluate and to compare performance of soil water flow models. The objective of this study was to apply information and complexity measures to characte...
NASA Astrophysics Data System (ADS)
Gao, Min; Huang, Shutao; Zhong, Xia
2010-11-01
A multi-source database was established to advance the informatics of the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on both computer software and hardware. Based on an analysis of the data resources of the Beishan area, Gansu Province, and drawing on GIS technologies and methods, this paper discusses how to manage, fully share, and rapidly retrieve the information resources of this area using the open-source GDAL library and a quadtree algorithm, covering the characteristics of the existing data resources, the theory of spatial data retrieval algorithms, and the design and implementation of the software.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complex spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and the porous medium make solute transport in the medium more complicated still. An appropriate method for describing this complexity is essential when studying solute transport and transformation in porous media. Because information entropy measures uncertainty and disorder, we used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that entropy increased as the complexity of the solute transport process increased. For the point sources, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X) and approximately coincided with the plume centroid. With increasing time, the spatial variability and complexity of the solute concentration increased, raising both the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the entropy obtained for continuous input was higher than for instantaneous input. As the average lithofacies length increased, medium continuity increased, flow and
Rossi, E L
1996-01-01
The current information revolution in molecular biology has important implications for a new understanding of the phenomenology of mind, memory and behavior as a complex, self-organizing field of information transduction. This paper traces the pathways of information transduction in life processes from the molecular-genetic level to the dynamics of mind and behavior, together with suggestions for future research exploring the psychobiology of mind-body communication and its implications for the psychotherapeutic arts of the future.
NASA Astrophysics Data System (ADS)
Naser, Mohamed A.; Pekar, Julius; Patterson, Michael S.
2011-02-01
An algorithm to solve the diffuse optical tomography (DOT) problem is described which uses the anatomical information from x-ray CT images. These provide a priori information about the distribution of the optical properties hence reducing the number of variables and permitting a unique solution to the ill-posed problem. The light fluence rate at the boundary is written as a Taylor series expansion around an initial guess corresponding to an optically homogenous object. The second order approximation is considered and the derivatives are calculated by direct methods. These are used in an iterative algorithm to reconstruct the tissue optical properties. The reconstructed optical properties are then used for bioluminescence tomography where a minimization problem is formed based on the L1 norm objective function which uses normalized values for the light fluence rates and the corresponding Green's functions. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT reconstruction algorithms without the need for a priori information about the bioluminescence sources.
Balance between Noise and Information Flow Maximizes Set Complexity of Network Dynamics
Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti
2013-01-01
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding using another measure of complexity, namely the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical, the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
Non-Algorithmic Access to Calendar Information in a Calendar Calculator with Autism
ERIC Educational Resources Information Center
Mottron, L.; Lemmens, K.; Gagnon, L.; Seron, X.
2006-01-01
The possible use of a calendar algorithm was assessed in DBC, an autistic "savant" of normal measured intelligence. Testing of all the dates in a year revealed a random distribution of errors. Re-testing DBC on the same dates one year later showed that his errors were not stable across time. Finally, DBC was able to answer "reversed" questions that…
Norris, Rebecca L; Bailey, Rachel L; Bolls, Paul D; Wise, Kevin R
2012-01-01
This experiment explored how the emotional tone and visual complexity of direct-to-consumer (DTC) drug advertisements affect the encoding and storage of specific risk and benefit statements about each of the drugs in question. Results are interpreted under the limited capacity model of motivated mediated message processing framework. Findings suggest that DTC drug ads should be pleasantly toned and high in visual complexity in order to maximize encoding and storage of risk and benefit information. PMID:21707406
ERIC Educational Resources Information Center
Booker, Queen Esther
2009-01-01
An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…
Quantifying information transfer and mediation along causal pathways in complex systems
NASA Astrophysics Data System (ADS)
Runge, Jakob
2015-12-01
Measures of information transfer have become a popular approach to analyze interactions in complex systems such as the Earth or the human brain from measured time series. Recent work has focused on causal definitions of information transfer aimed at decompositions of predictive information about a target variable, while excluding effects of common drivers and indirect influences. While common drivers clearly constitute a spurious causality, the aim of the present article is to develop measures quantifying different notions of the strength of information transfer along indirect causal paths, based on first reconstructing the multivariate causal network. Another class of novel measures quantifies to what extent different intermediate processes on causal paths contribute to an interaction mechanism to determine pathways of causal information transfer. The proposed framework complements predictive decomposition schemes by focusing more on the interaction mechanism between multiple processes. A rigorous mathematical framework allows for a clear information-theoretic interpretation that can also be related to the underlying dynamics as proven for certain classes of processes. Generally, however, estimates of information transfer remain hard to interpret for nonlinearly intertwined complex systems. But if experiments or mathematical models are not available, then measuring pathways of information transfer within the causal dependency structure allows at least for an abstraction of the dynamics. The measures are illustrated on a climatological example to disentangle pathways of atmospheric flow over Europe.
ERIC Educational Resources Information Center
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S.
1997-01-01
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
ERIC Educational Resources Information Center
Blanchard, William; And Others
Seattle University recently decided to replace three separate, computerized student-information systems with a single, integrated system. The complexity of this decision was managed with a multicriteria method that was used to evaluate alternative systems. The method took into account the many and sometimes conflicting concerns of the people who…
Linguistic Complexity and Information Structure in Korean: Evidence from Eye-Tracking during Reading
ERIC Educational Resources Information Center
Lee, Yoonhyoung; Lee, Hanjung; Gordon, Peter C.
2007-01-01
The nature of the memory processes that support language comprehension and the manner in which information packaging influences online sentence processing were investigated in three experiments that used eye-tracking during reading to measure the ease of understanding complex sentences in Korean. All three experiments examined reading of embedded…
The Readability and Complexity of District-Provided School-Choice Information
ERIC Educational Resources Information Center
Stein, Marc L.; Nagro, Sarah
2015-01-01
Public school choice has become a common feature in American school districts. Any potential benefits that could be derived from these policies depend heavily on the ability of parents and students to make informed and educated decisions about their school options. We examined the readability and complexity of school-choice guides across a sample…
ERIC Educational Resources Information Center
Tomasino, Arthur P.
2013-01-01
In spite of the best efforts of researchers and practitioners, Information Systems (IS) developers are having problems "getting it right". IS developments are challenged by the emergence of unanticipated IS characteristics that undermine managers' ability to predict and manage IS change. Because IS are complex, development formulas, best…
ERIC Educational Resources Information Center
Williams, Diane L.; Minshew, Nancy J.; Goldstein, Gerald
2015-01-01
More than 20 years ago, Minshew and colleagues proposed the Complex Information Processing model of autism in which the impairment is characterized as a generalized deficit involving multiple modalities and cognitive domains that depend on distributed cortical systems responsible for higher order abilities. Subsequent behavioral work revealed a…
Tang, Mengxing; Wang, Wei; Wheeler, James; McCormick, Malcolm; Dong, Xiuzhen
2002-06-01
In electrical impedance tomography, currents are applied to the body through electrodes that are attached to the surface, and the corresponding surface voltages are measured. Based on these boundary measurements, the internal admittivity distribution of the body can be reconstructed. In order to improve the image quality it is necessary and useful to incorporate physiologically meaningful prior information into the image reconstruction. Such prior information can usually be obtained from other sources. For example, information on the object's boundary shape and internal structure can be obtained from computed tomography and magnetic resonance imaging scans. However, this type of prior information may change from time to time and from person to person. As these changes are limited anatomically and physiologically, the prior information, including the possible changes, can be represented in a number of variational forms. The aim of this paper is to find which form of prior information is most compatible with a specific imaged object at the time of imaging. This paper proposes a new method for selecting the most appropriate form of prior information, through the procedure of iterative image reconstruction, by using the information obtained from boundary measurements. The method is based on the principle that incompatible prior information causes errors that affect the convergence behavior of the image reconstruction. In this method, according to the various forms of prior information available, several image reconstruction configurations are designed. Then, by monitoring the convergence behavior in an iterative image reconstruction, the configuration with compatible prior information can be identified among the different configurations. As an example, prior information regarding the imaged object's boundary shape and internal structure was studied by computer simulation. Results were shown and discussed.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
Local structure information by EXAFS analysis using two algorithms for Fourier transform calculation
NASA Astrophysics Data System (ADS)
Aldea, N.; Pintea, S.; Rednic, V.; Matei, F.; Tiandou, Hu; Yaning, Xie
2009-08-01
The present work compares different Fourier-transform algorithms for obtaining very accurate local structure results using the Extended X-ray Absorption Fine Structure (EXAFS) technique. In this paper we focus on the local structural characteristics of supported nickel catalysts and Fe3O4 core-shell nanocomposites. The radial distribution function could be calculated efficiently by the fast Fourier transform when the coordination shells are well separated, while the Filon quadrature gave remarkable results for closely spaced coordination shells.
2013-01-01
Background: Adequate health literacy is important for people to maintain good health and manage diseases and injuries. Educational text, either retrieved from the Internet or provided by a doctor’s office, is a popular method to communicate health-related information. Unfortunately, it is difficult to write text that is easy to understand, and existing approaches, mostly the application of readability formulas, have not convincingly been shown to reduce the difficulty of text. Objective: To develop an evidence-based writer support tool to reduce perceived and actual text difficulty. To this end, we are developing and testing algorithms that automatically identify difficult sections in text and provide appropriate, easier alternatives; algorithms that effectively reduce text difficulty will be included in the support tool. This work describes a user evaluation, with an independent writer, of an automated simplification algorithm based on term familiarity. Methods: Term familiarity indicates how easy words are for readers and is estimated using term frequencies in the Google Web Corpus. Unfamiliar words are algorithmically identified and tagged for potential replacement. Easier alternatives consisting of synonyms, hypernyms, definitions, and semantic types are extracted from WordNet, the Unified Medical Language System (UMLS), and Wiktionary and ranked for a writer to choose from to simplify the text. We conducted a controlled user study in which a representative writer used our simplification algorithm to simplify texts, and we tested the impact with representative consumers. The key independent variable of our study is lexical simplification, and we measured its effect on both perceived and actual text difficulty. Participants were recruited from Amazon’s Mechanical Turk website. Perceived difficulty was measured with 1 metric, a 5-point Likert scale. Actual difficulty was measured with 3 metrics: 5 multiple-choice questions alongside each text to measure understanding
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well-separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. Decomposition is also a natural procedure for constructing adequate, and at the same time simplest, models both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well-separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for linearly expanding vector (space-distributed) time series, and takes account of the delayed correlations of processes recorded at spatially separated points. The second [5-7] allows nonlinear dynamic modes to be constructed, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls much more sharply [5-7]. However, neglecting time-lagged correlations introduces a mode-selection error that is uncontrolled and grows with the mode's time scale. In this report we combine the two methods so that the resulting algorithm constructs nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) multi-hundred-year, globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to provide adequate, and at the same time simplest ("optimal"), models of climate systems
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method that also uses a restart strategy. Moreover, the method we developed not only attains n-step quadratic convergence but also exploits both function value and gradient value information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
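A generic PRP conjugate gradient with an Armijo backtracking line search and a restart safeguard can be sketched as follows; this is the textbook scheme the paper builds on, not the authors' modified method:

```python
import math

def prp_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Polak-Ribiere-Polyak CG with Armijo backtracking; restarts to
    steepest descent whenever the PRP direction is not a descent
    direction. A generic sketch, not the paper's modified PRP method."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        gnorm2 = sum(gi * gi for gi in g)
        if math.sqrt(gnorm2) < tol:
            break
        # Armijo condition: f(x + a d) <= f(x) + c1 * a * (g . d)
        gd = sum(gi * di for gi, di in zip(g, d))
        a, c1, fx = 1.0, 1e-4, f(x)
        while f([xi + a * di for xi, di in zip(x, d)]) > fx + c1 * a * gd:
            a *= 0.5
        x = [xi + a * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # PRP coefficient: beta = g_new . (g_new - g) / ||g||^2
        beta = sum(gn * (gn - gi) for gn, gi in zip(g_new, g)) / gnorm2
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        if sum(gn * di for gn, di in zip(g_new, d)) >= 0:
            d = [-gn for gn in g_new]   # restart: not a descent direction
        g = g_new
    return x

# Usage: minimize f(x, y) = (x - 1)^2 + 10 (y + 2)^2
f = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2
grad = lambda v: [2 * (v[0] - 1), 20 * (v[1] + 2)]
x = prp_cg(f, grad, [0.0, 0.0])
print([round(c, 4) for c in x])   # close to [1, -2]
```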
NASA Astrophysics Data System (ADS)
Jones, A. S.; Fletcher, S. J.; Kidder, S. Q.; Forsythe, J. M.
2012-12-01
The CSU/NOAA Data Processing and Error Analysis System (DPEAS) was created to merge, or blend, multiple satellite and model data sets within a single consistent framework. DPEAS is designed to be used at both research and operational facilities to facilitate Research-to-Operations technology transfers. The system supports massive parallelization via grid computing technologies, and hosts data fusion techniques for transference to 24/7 operations in a low-cost computational environment. In this work, we highlight the data assimilation and data fusion methodologies of the DPEAS framework that facilitate new and complex multi-satellite non-Gaussian data assimilation algorithm developments. DPEAS is in current operational use at the NOAA/NESDIS Office of Satellite and Product Operations (OSPO) and performs multi-product data fusion of global "blended" Total Precipitable Water (bTPW) and blended Rainfall Rate (bRR). In this work we highlight: 1) the current dynamic inter-satellite calibration processing performed within the DPEAS data fusion and error analysis, 2) our DPEAS development plans for future blended products (AMSR-2 and Megha-Tropiques), and 3) layered TPW products using the NASA AIRS data for National Weather Service forecaster use via the NASA SPoRT facility at Huntsville, AL. We also discuss new system additions for cloud verification and prediction activities in collaboration with the National Center for Atmospheric Research (NCAR), and planned use with the USAF Air Force Weather Agency's (AFWA) global Cloud Depiction and Forecast System (CDFS) facilities. Scientifically, we focus on the data fusion of atmospheric and land surface product information, including global cloud and water vapor data sets, soil moisture data, and specialized land surface products. The data fusion methods include the use of 1DVAR data assimilation for satellite sounding data sets, and numerous real-time statistical analysis methods. Our new development activities to
Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn
2005-10-01
Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism. PMID:15958508
A novel seizure detection algorithm informed by hidden Markov model event states
NASA Astrophysics Data System (ADS)
Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian
2016-06-01
Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset (UEO)). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce false positive rate relative to current industry standards.
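The predictive state-assignment step, in which each learned event state is modeled as a multidimensional Gaussian, can be sketched as follows; the state names, means, and (diagonal) variances are illustrative placeholders, not values fitted to iEEG data:

```python
import math

# Each event state is summarised by a diagonal Gaussian over iEEG
# features; an incoming feature vector is assigned to the state with
# the highest log-likelihood. All parameter values are hypothetical.

STATES = {
    "background":       ([0.1, 0.2], [0.05, 0.05]),
    "interictal_burst": ([0.6, 0.3], [0.10, 0.05]),
    "seizure_onset":    ([0.9, 0.9], [0.05, 0.10]),
}

def log_likelihood(x, mean, var):
    """Log-density of x under an axis-aligned (diagonal) Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def assign_state(features):
    return max(STATES, key=lambda s: log_likelihood(features, *STATES[s]))

print(assign_state([0.85, 0.95]))  # -> "seizure_onset"
```

In the paper the states themselves are discovered by a Bayesian nonparametric Markov switching process; only the assignment step is sketched here.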
Molecular dynamics of protein kinase-inhibitor complexes: a valid structural information.
Caballero, Julio; Alzate-Morales, Jans H
2012-01-01
Protein kinases (PKs) are key components of protein phosphorylation based signaling networks in eukaryotic cells. They have been identified as being implicated in many diseases. High-resolution X-ray crystallographic data exist for many PKs and, in many cases, these structures are co-complexed with inhibitors. Although this valuable information confirms the precise structure of PKs and their complexes, it ignores the dynamic movements of the structures, which are relevant for explaining the affinities and selectivity of the ligands, characterizing the thermodynamics of the solvated complexes, and deriving predictive models. Atomistic molecular dynamics (MD) simulations present a convenient way to study PK-inhibitor complexes and have been used increasingly in recent years in structure-based drug design. MD is a very useful computational method and a great counterpart to experiment, helping researchers derive important additional molecular information: it enables them to follow and understand the structure and dynamics of protein-ligand systems in extreme molecular detail, on scales where the motion of individual atoms can be tracked. MD can be used to sample dynamic molecular processes, and can be complemented with more advanced computational methods (e.g., free energy calculations, structure-activity relationship analysis). This review focuses on the most common applications of MD simulations to the study of PK-inhibitor complexes. Our aim is to make researchers working on the design of PK inhibitors aware of the benefits of this powerful tool in the design of potent and selective PK inhibitors. PMID:22571663
Social stereotypes and information-processing strategies: the impact of task complexity.
Bodenhausen, G V; Lichtenstein, M
1987-05-01
Subjects read information about a defendant in a criminal trial with initial instructions to judge either his guilt (guilt judgment objective) or his aggressiveness (trait judgment objective). The defendant was either Hispanic or ethnically nondescript. After considering the evidence, subjects made both guilt and aggressiveness judgments (regardless of which type of judgment they were instructed to make at the time they read the information) and then recalled as much of the information they read as they could. Results favored the hypothesis that when subjects face a complex judgmental situation, they use stereotypes (when available and relevant) as a way of simplifying the judgment. Specifically, they use the stereotype as a central theme around which they organize presented evidence that is consistent with it, and they neglect inconsistent information. Subjects with a (complex) guilt judgment objective judged the defendant to be relatively more guilty and aggressive and recalled more negative information about him if he was Hispanic than if he was ethnically nondescript. In contrast, subjects with a (simple) trait judgment objective did not perceive either the guilt or aggressiveness of the two defendants to be appreciably different, and did not display any significant bias in their recall of the evidence. These and other results are discussed in terms of the information-processing strategies subjects are likely to use when they expect to make different types of judgments.
NASA Astrophysics Data System (ADS)
Luo, Xi-Liu; Wang, Jiang; Han, Chun-Xiao; Deng, Bin; Wei, Xi-Le; Bian, Hong-Rui
2012-02-01
As a convenient approach to the characterization of cerebral cortex electrical information, electroencephalography (EEG) has potential clinical application in monitoring acupuncture effects. In this paper, a method combining the mutual information method and the Lempel-Ziv complexity method (MILZC) is proposed to investigate the effects of acupuncture on the complexity of information exchanges between different brain regions based on EEGs. In the experiments, eight subjects are manually acupunctured at the ‘Zusanli’ acupuncture point (ST-36) with different frequencies (i.e., 50, 100, 150, and 200 times/min) and the EEGs are recorded simultaneously. First, MILZC values are compared in general. Then average brain connections are used to quantify the effectiveness of acupuncture at the above four frequencies. Finally, significance index P values are used to study the spatiality of the acupuncture effect on the brain. Three main findings are obtained: (i) MILZC values increase during acupuncture; (ii) manual acupunctures (MAs) at 100 times/min and 150 times/min are more effective than at 50 times/min and 200 times/min; (iii) contralateral hemisphere activation is more prominent than the ipsilateral hemisphere's. All these findings suggest that acupuncture contributes to the increase of brain information-exchange complexity and that the MILZC method can successfully describe these changes.
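The Lempel-Ziv half of the MILZC measure counts the distinct phrases in a left-to-right parsing of a symbol sequence. A standard LZ76-style phrase counter for a binarised signal (a generic sketch, not the paper's exact pipeline) looks like this:

```python
def lempel_ziv_complexity(s: str) -> int:
    """Number of phrases in a Lempel-Ziv (1976) style parsing of s.
    Each phrase is grown while it still occurs as a substring of the
    sequence seen so far (excluding the phrase's final symbol)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1      # one new phrase completed
        i += l
    return c

print(lempel_ziv_complexity("0101010101"))  # -> 3 (periodic, low complexity)
print(lempel_ziv_complexity("0110100110"))  # -> 5 (irregular, higher)
```

In EEG applications the signal is first binarised (e.g. by thresholding at the median) and the phrase count is normalised by sequence length; both steps are omitted here for brevity.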
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting the traditional assembly line to a seru system, especially in a business environment with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule in seru load. We select ten scheduling rules usually used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for the ten different scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting for the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms. PMID:27390649
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
The $13 million expansion to KSC's Visitor Complex includes a new International Space Station-themed ticket plaza, featuring a structure of overhanging solar panels and astronauts performing assembly tasks. Other additions are the new information center, a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
NASA Astrophysics Data System (ADS)
Javaheri Javid, Mohammad Ali; Blackwell, Tim; Zimmer, Robert; Majid al-Rifaie, Mohammad
2016-04-01
Shannon entropy fails to discriminate structurally different patterns in two-dimensional images. We have adapted an information gain measure and Kolmogorov complexity to overcome the shortcomings of entropy as a measure of image structure. The measures are customised to robustly quantify the complexity of images resulting from multi-state cellular automata (CA). Experiments with a two-dimensional multi-state cellular automaton demonstrate that these measures are able to predict some of the structural characteristics, symmetry and orientation of CA-generated patterns.
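The failure of plain Shannon entropy, and the repair via a neighbour-conditional measure, can be demonstrated on toy data. The two 8x8 binary "images" below have identical pixel histograms, hence identical Shannon entropy, yet very different spatial structure; a left-neighbour conditional entropy, in the spirit of the information gain measure described above (applied here to toy patterns, not the paper's CA outputs), tells them apart:

```python
import math
from collections import Counter

STRIPES = ["01010101"] * 8                       # perfectly ordered
SCRAMBLED = ["00110101", "11001010", "01011010", "10100101",
             "00101101", "11010010", "01101001", "10010110"]  # same 0/1 counts

def entropy_of_counts(counts):
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def pixel_entropy(img):
    """Shannon entropy of the pixel histogram, blind to structure."""
    return entropy_of_counts(Counter("".join(img)))

def info_gain(img):
    """Entropy of a pixel conditioned on its left neighbour:
    H(pair) - H(left). Zero when each pixel is determined by its
    neighbour; large when the arrangement is irregular."""
    pairs = Counter((row[i], row[i + 1]) for row in img
                    for i in range(len(row) - 1))
    lefts = Counter()
    for (left, _), c in pairs.items():
        lefts[left] += c
    return entropy_of_counts(pairs) - entropy_of_counts(lefts)

print(pixel_entropy(STRIPES), pixel_entropy(SCRAMBLED))  # both 1.0
print(info_gain(STRIPES))     # 0.0: structure fully predictable
print(info_gain(SCRAMBLED))   # > 0: structure is irregular
```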
Computer/information security design approaches for Complex 21/Reconfiguration facilities
Hunteman, W.J.; Zack, N.R.; Jaeger, C.D.
1993-08-01
Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security addressed in the integrated design effort for the uranium/lithium, plutonium, and plutonium high explosive/assembly facilities.
NASA Astrophysics Data System (ADS)
Crochet, Philippe
2009-10-01
The objective of this paper is to present a radar-based quantitative precipitation estimation algorithm and assess its quality over the complex terrain of western Iceland. The proposed scheme deals with the treatment of beam blockage, anomalous propagation and the vertical profile of reflectivity, and includes a radar adjustment technique compensating for range, orographic effects and variations in the Z-R relationship. The quality of the estimated precipitation is remarkably enhanced after post-processing and in reasonably good agreement with what is known about the spatial distribution of precipitation in the studied area from both rain gauge observations and a gridded dataset derived from an orographic precipitation model. The results suggest that this methodology offers a credible solution to obtain an estimate of the distribution of precipitation in mountainous terrain and appears to be of practical value to meteorologists and hydrologists.
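The Z-R step at the core of any radar QPE scheme converts reflectivity to rain rate through a power law Z = a R^b. The Marshall-Palmer constants below (a = 200, b = 1.6) are the textbook defaults; the paper adjusts the Z-R relationship (and compensates for range and orography), so treat this as a generic sketch rather than the Iceland-specific algorithm:

```python
def rain_rate(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Rain rate R in mm/h from reflectivity in dBZ via Z = a * R^b."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)

for dbz in (20, 30, 40):
    print(dbz, round(rain_rate(dbz), 2))   # ~0.65, ~2.73, ~11.53 mm/h
```

Adjustment techniques like the one in the paper effectively retune a and b (or apply a multiplicative correction) against rain gauges.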
Pareschi, Fabio; Albertini, Pierluigi; Frattini, Giovanni; Mangia, Mauro; Rovatti, Riccardo; Setti, Gianluca
2016-02-01
We report the design and implementation of an Analog-to-Information Converter (AIC) based on Compressed Sensing (CS). The system is realized in a CMOS 180 nm technology and targets the acquisition of bio-signals with Nyquist frequency up to 100 kHz. To maximize performance and reduce hardware complexity, we co-design the hardware together with the acquisition and reconstruction algorithms. The resulting AIC outperforms previously proposed solutions mainly thanks to two key features. First, we adopt a novel method to deal with saturations in the computation of CS measurements. This allows no loss in performance even when 60% of measurements saturate. Second, the system is able to adapt itself to the energy distribution of the input by exploiting the so-called rakeness to maximize the amount of information contained in the measurements. With this approach, the 16 measurement channels integrated into a single device are expected to allow the acquisition and correct reconstruction of most biomedical signals. As a case study, measurements on real electrocardiograms (ECGs) and electromyograms (EMGs) show that these signals can be reconstructed without any noticeable degradation at compression rates of 8 and 10, respectively.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Sedig, Kamran; Parsons, Paul; Dittmer, Mark; Ola, Oluwakemi
2012-01-01
Public health professionals work with a variety of information sources to carry out their everyday activities. In recent years, interactive computational tools have become deeply embedded in such activities. Unlike the early days of computational tool use, the potential of tools nowadays is not limited to simply providing access to information; rather, they can act as powerful mediators of human-information discourse, enabling rich interaction with public health information. If public health informatics tools are designed and used properly, they can facilitate, enhance, and support the performance of complex cognitive activities that are essential to public health informatics, such as problem solving, forecasting, sense-making, and planning. However, the effective design and evaluation of public health informatics tools requires an understanding of the cognitive and perceptual issues pertaining to how humans work and think with information to perform such activities. This paper draws on research that has examined some of the relevant issues, including interaction design, complex cognition, and visual representations, to offer some human-centered design and evaluation considerations for public health informatics tools. PMID:23569645
Spatial and Social Diffusion of Information and Influence: Models and Algorithms
ERIC Educational Resources Information Center
Doo, Myungcheol
2012-01-01
In this dissertation research, we argue that spatial alarms and activity-based social networks are two fundamentally new types of information and influence diffusion channels. Such new channels have the potential of enriching our professional experiences and our personal life quality in many unprecedented ways. First, we develop an activity driven…
On Using Genetic Algorithms for Multimodal Relevance Optimization in Information Retrieval.
ERIC Educational Resources Information Center
Boughanem, M.; Christment, C.; Tamine, L.
2002-01-01
Presents a genetic relevance optimization process performed in an information retrieval system that uses genetic techniques for solving multimodal problems (niching) and query reformulation techniques. Explains that the niching technique allows the process to reach different relevance regions of the document space, and that query reformulations…
NASA Technical Reports Server (NTRS)
Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John
1994-01-01
This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the $13 million expansion to KSC's Visitor Complex, the new information center welcomes visitors to the Gateway to the Universe. The five large video walls provide an orientation video, with an introduction to the range of activities and exhibits, and honor the center's namesake, President John F. Kennedy. Other new attractions are a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
A novel Dual Probe Complex Trial Protocol for detection of concealed information.
Labkovsky, Elena; Rosenfeld, J Peter
2014-11-01
In simply guilty (SG), countermeasure-using guilty (CM), and innocent (IN) subjects, a new concealed information test, the P300-based Dual Probe Complex Trial Protocol, was tested in a mock crime scenario. It combines an oddball protocol with two stimuli (probe, irrelevant) and another with three stimuli (probe, irrelevant, target) into one trial, doubling the detected mock crime information per unit time compared to previous protocols. Probe-irrelevant amplitude differences were significant in SG and CM, but not IN, subjects. On a measure drawn from both the two-stimulus and three-stimulus parts of the Dual Probe Complex Trial Protocol trial, accuracy was 94.7% (based on a .9 bootstrap criterion). The criterion-independent area under the receiver operating characteristic (AUC, from signal detection theory), measuring SG and CM versus IN discriminability, averaged .92 (in a range of 0.5-1.0). Countermeasures enhanced irrelevant (not probe) P300s in CM groups. PMID:24981064
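The .9 bootstrap criterion mentioned above can be sketched as follows: resample the single-trial probe and irrelevant P300 amplitudes with replacement, and call the subject "information present" if the probe mean exceeds the irrelevant mean in at least 90% of resamples. The amplitude values below are synthetic, not recorded ERP data:

```python
import random

def bootstrap_hit(probe, irrelevant, n_boot=2000, criterion=0.9, seed=1):
    """True if the resampled probe mean beats the resampled irrelevant
    mean in at least `criterion` of the bootstrap draws."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.choices(probe, k=len(probe))) / len(probe)
        > sum(rng.choices(irrelevant, k=len(irrelevant))) / len(irrelevant)
        for _ in range(n_boot)
    )
    return hits / n_boot >= criterion

probe = [8.0, 9.5, 7.2, 10.1, 8.8, 9.0]      # microvolts (synthetic)
irrelevant = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4]
print(bootstrap_hit(probe, irrelevant))       # -> True
```

Real P300 protocols bootstrap base-to-peak or peak-to-peak amplitude differences from filtered single sweeps; only the resampling logic is shown here.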
Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems
Omitaomu, Olufemi A; Ganguly, Auroop R; Jiao, Yu
2009-01-01
In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, with fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the simulations focused on a test of three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
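The two measures used to compare the AB models reduce to a few lines: Shannon entropy of one class's behavioural sequence, and mutual information between two classes' sequences via H(X) + H(Y) - H(X,Y). The sequences below are toy data, not output of ABM-1 or ABM-2:

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy (bits) of the empirical distribution of seq."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from paired observations."""
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

class_a = ["pro-gov", "neutral", "pro-ins", "neutral", "pro-gov", "neutral"]
class_b = ["pro-gov", "neutral", "pro-ins", "neutral", "pro-gov", "pro-ins"]

print(round(entropy(class_a), 3))                     # -> 1.459
print(round(mutual_information(class_a, class_b), 3)) # -> 1.126
```

High entropy indicates irregular class membership over time; high mutual information indicates that two classes' behaviours move together.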
Musical beauty and information compression: Complex to the ear but simple to the mind?
2011-01-01
Background The biological origin of music, its universal appeal across human cultures and the cause of its beauty remain mysteries. For example, why is Ludwig van Beethoven considered a musical genius but Kylie Minogue is not? Possible answers to these questions will be framed in the context of Information Theory. Presentation of the Hypothesis The entire life-long sensory data stream of a human is enormous. The adaptive solution to this problem of scale is information compression, thought to have evolved to better handle, interpret and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some point in evolution, the practice of successful information compression became linked to the physiological reward system. I hypothesise that the establishment of this "compression and pleasure" connection paved the way for musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio compression had become intrinsically pleasurable in its own right. Testing the Hypothesis For a range of compositions, empirically determine the relationship between the listener's pleasure and "lossless" audio compression. I hypothesise that enduring musical masterpieces will possess an interesting objective property: despite apparent complexity, they will also exhibit high compressibility. Implications of the Hypothesis Artistic masterpieces and deep Scientific insights share the common process of data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. Musical geniuses are skilled in composing music that appears highly complex to the ear but is simple to the mind.
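The proposed test relies on a measurable quantity: the lossless compression ratio of a signal. A toy illustration, with zlib standing in for an audio-grade lossless codec and an invented byte "motif" in place of real audio:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size under a lossless codec (zlib here,
    as a stand-in for an audio-grade compressor); lower = more regularity."""
    return len(zlib.compress(data, 9)) / len(data)

# A repetitive "motif" is far more compressible than random noise of the
# same length, even though both look like equally long byte strings.
motif = b"C-E-G-E-" * 150
noise = os.urandom(len(motif))
```

Under the hypothesis, enduring masterpieces would sit closer to the motif end of this scale than their surface complexity suggests.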
NASA Astrophysics Data System (ADS)
Wang, Guangwei; Araki, Kenji
In this paper, we propose an improved SO-PMI (Semantic Orientation Using Pointwise Mutual Information) algorithm for use in Japanese Weblog Opinion Mining. SO-PMI is an unsupervised approach proposed by Turney that has been shown to work well for English. When this algorithm was translated into Japanese naively, most phrases, whether positive or negative in meaning, received a negative SO. To deal with this negative slant, we propose three improvements: to expand the reference words to sets of words, to introduce a balancing factor, and to detect neutral expressions. In our experiments, the proposed improvements obtained a well-balanced result: both positive and negative accuracy exceeded 62%, when evaluated on 1,200 opinion sentences sampled from three different domains (reviews of Electronic Products, Cars and Travels from Kakaku.com). In a comparative experiment on the same corpus, a supervised approach (SA-Demo) achieved a very similar accuracy to our method. This shows that our proposed approach effectively adapted SO-PMI for Japanese, and it also shows the generality of SO-PMI.
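Turney's SO-PMI score with the first two proposed fixes (reference-word *sets* and an additive balancing factor) can be sketched as follows. The toy corpus, the hit-count functions, and the smoothing constant are invented for illustration; real SO-PMI estimates these counts from web or Weblog search hits:

```python
import math

def so_pmi(near_hits, hits, phrase, pos_set, neg_set, balance=0.0, smooth=0.01):
    """Semantic orientation of `phrase` from co-occurrence counts.
    pos_set/neg_set are sets of reference words (improvement 1); `balance`
    is an additive correction against the negative slant (improvement 2)."""
    pos_near = sum(near_hits(phrase, w) for w in pos_set) + smooth
    neg_near = sum(near_hits(phrase, w) for w in neg_set) + smooth
    pos_total = sum(hits(w) for w in pos_set) + smooth
    neg_total = sum(hits(w) for w in neg_set) + smooth
    return math.log2((pos_near * neg_total) / (neg_near * pos_total)) + balance

# Toy corpus standing in for web-scale hit counts.
corpus = [
    "the screen is excellent and bright",
    "battery life is terrible and short",
    "the camera is excellent",
]
hits = lambda w: sum(s.split().count(w) for s in corpus)
near_hits = lambda p, w: sum(1 for s in corpus if p in s and w in s)
score = so_pmi(near_hits, hits, "camera", {"excellent", "good"}, {"terrible", "poor"})
```

A phrase co-occurring mostly with the positive set scores above zero, one co-occurring with the negative set scores below; the balancing factor shifts the decision boundary when the raw scores skew negative, as they did for naive Japanese SO-PMI.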
NASA Astrophysics Data System (ADS)
Mallas, Georgios; Brooks, Dana H.; Rosenthal, Amir; Vinegoni, Claudio; Calfon, Marcella A.; Razansky, R. Nika; Jaffer, Farouc A.; Ntziachristos, Vasilis
2011-03-01
Intravascular Near-Infrared Fluorescence (NIRF) imaging is a promising imaging modality to image vessel biology and high-risk plaques in vivo. We have developed a NIRF fiber optic catheter and have demonstrated the ability to image atherosclerotic plaques in vivo, using appropriate NIR fluorescent probes. Our catheter consists of an optical fiber with a 100/140 μm core/clad diameter, housed in polyethylene tubing, emitting NIR laser light at a 90 degree angle relative to the fiber's axis. The system utilizes a rotational and a translational motor for true 2D imaging and operates in conjunction with a coaxial intravascular ultrasound (IVUS) device. IVUS datasets provide 3D images of the internal structure of arteries and are used in our system for anatomical mapping. Using the IVUS images, we are building an accurate hybrid fluorescence-IVUS data inversion scheme that takes into account photon propagation through the blood-filled lumen. This hybrid imaging approach can then correct for the non-linear dependence of light intensity on the distance of the fluorescence region from the fiber tip, leading to quantitative imaging. The experimental and algorithmic developments will be presented and the effectiveness of the algorithm showcased with experimental results in both saline and blood-like preparations. The combined structural and molecular information obtained from these two imaging modalities is positioned to enable the accurate diagnosis of biologically high-risk atherosclerotic plaques in the coronary arteries that are responsible for heart attacks.
ERIC Educational Resources Information Center
Grotzer, Tina A.; Tutwiler, M. Shane
2014-01-01
This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…
Task complexity and sources of task-related information during the observational learning process.
Laguna, Patricia L
2008-08-01
Although research has examined the influence of various sources of task information for skill acquisition during observational learning, the results have been ambiguous. The purpose of this study was to examine sources of information in relation to the type of task. One hundred and twenty participants were randomly assigned to one of two sets of six treatment strategies: (1) all model demonstrations; (2) model demonstrations with physical practice with knowledge of performance; (3) model demonstrations with physical practice without knowledge of performance; (4) physical practice without knowledge of performance; (5) physical practice with knowledge of performance; or (6) verbal instructions only. One set learned a simple version of the task while the other set learned a more complex version. Cognitive representation and performance accuracy (spatial and temporal) were assessed. Results indicate that task type does influence the source of information to facilitate skill acquisition. The simple task benefited from model demonstrations, physical practice with knowledge of performance, or a combination of model demonstrations and practice both with and without knowledge of performance, while the complex version benefited more from a combination of model demonstrations and knowledge of performance practice. The results of this study provide an insight into the ambiguity that exists within the observational learning and motor learning literature regarding the effectiveness of information sources for motor skill acquisition.
NASA Astrophysics Data System (ADS)
Horn, Florian; Bayer, Florian; Pelzer, Georg; Rieger, Jens; Ritter, André; Weber, Thomas; Zang, Andrea; Michel, Thilo; Anton, Gisela
2014-03-01
Grating-based X-ray phase-contrast imaging is a promising imaging modality to increase soft tissue contrast in comparison to conventional attenuation-based radiography. Complementary and otherwise inaccessible information is provided by the dark-field image, which shows the sub-pixel size granularity of the measured object. This could especially turn out to be useful in mammography, where tumourous tissue is associated with the presence of tiny microcalcifications. In addition to the well-established image reconstruction process, an analysis method was introduced by Modregger [1], based on deconvolving the underlying scattering distribution within a single pixel to reveal information about the sample. Subsequently, the different contrast modalities can be calculated from the scattering distribution. The method has already been shown to deliver additional information in the higher moments of the scattering distribution, and may achieve better image quality in terms of an increased contrast-to-noise ratio. Several measurements were carried out using melamine foams as phantoms. We analysed the dependency of the deconvolution-based dark-field image on different parameters, such as dose, the number of iterations of the iterative deconvolution algorithm, and the dark-field signal. A disagreement was found in the reconstructed dark-field values between the FFT method and the iterative method. Usage of the resulting characteristics might be helpful in future applications.
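The abstract does not spell out the deconvolution scheme itself. As an illustration of the general class of iterative deconvolution it refers to, here is a minimal 1-D Richardson-Lucy sketch; the PSF and the point-scatterer signal are invented, and this is a stand-in for, not a reproduction of, the per-pixel scattering-distribution method:

```python
def correlate(signal, kernel):
    """'Same'-size sliding product; equals convolution for a symmetric kernel."""
    n, k, half = len(signal), len(kernel), len(kernel) // 2
    out = [0.0] * n
    for i in range(n):
        for j in range(k):
            t = i + j - half
            if 0 <= t < n:
                out[i] += signal[t] * kernel[j]
    return out

def richardson_lucy(observed, psf, iterations=100):
    """Standard multiplicative iterative deconvolution (Richardson-Lucy)."""
    est = [1.0] * len(observed)          # flat non-negative initial guess
    for _ in range(iterations):
        blurred = correlate(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = correlate(ratio, psf[::-1])
        est = [e * c for e, c in zip(est, correction)]
    return est

# A point scatterer blurred by a symmetric PSF should re-sharpen.
psf = [0.25, 0.5, 0.25]
observed = correlate([0, 0, 10, 0, 0], psf)
estimate = richardson_lucy(observed, psf)
```

The number of iterations trades sharpness against noise amplification, which is exactly the kind of parameter dependence the measurements above characterize.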
Abstracting meaning from complex information (gist reasoning) in adult traumatic brain injury.
Vas, Asha Kuppachi; Spence, Jeffrey; Chapman, Sandra Bond
2015-01-01
Gist reasoning (abstracting meaning from complex information) was compared between adults with moderate-to-severe traumatic brain injury (TBI, n = 30) at least one year post injury and healthy adults (n = 40). The study also examined the contribution of executive functions (working memory, inhibition, and switching) and memory (immediate recall and memory for facts) to gist reasoning. The correspondence between gist reasoning and daily function was also examined in the TBI group. Results indicated that the TBI group performed significantly lower than the control group on gist reasoning, even after adjusting for executive functions and memory. Executive function composite was positively associated with gist reasoning (p < .001). Additionally, performance on gist reasoning significantly predicted daily function in the TBI group beyond the predictive ability of executive function alone (p = .011). Synthesizing and abstracting meaning(s) from information (i.e., gist reasoning) could provide an informative index into higher order cognition and daily functionality. PMID:25633568
ERIC Educational Resources Information Center
Kuhlthau, Carol Collier
1999-01-01
Investigates changes in perceptions of the information search process of an early career information professional as he becomes more experienced and proficient at his work. Building on earlier research, comparisons of user's perceptions of uncertainty, complexity, construction, and sources in information tasks were made over a five-year period.…
Kuropatkin, A I
2010-01-01
The key significance of information processes for ensuring optimal sanogenesis was shown by wavelet analysis of skin microvascular blood-flow oscillations in 64 patients with complex regional pain syndrome after sympathectomy. Early reorganization of information in the trophotropic direction at the level of microvascular tissue systems, and its predominance and conservation throughout the microvascular networks, facilitate optimal realization of adaptive reactions and are therefore conducive to maximum treatment efficiency. In these cases complete elimination of the disease and excellent treatment results were possible. Maximum treatment efficiency could not be reached without the above-mentioned change in information. On the contrary, predominance and conservation of ergotropic information in the early period after surgery was unfavourable for the predicted clinical results of sympathectomy. Tissue desympathisation is not required for the formation of trophotropic informational purposefulness in microvascular networks; it is enough to achieve a certain threshold of decreased sympathetic activity. The results of this work may be useful for investigating the physiological mechanisms of information treatment technologies (homeopathy, etc.).
Sakhanenko, Nikita A; Galas, David J
2015-11-01
Information theory is valuable in multiple-variable analysis for being model-free and nonparametric, and for its modest sensitivity to undersampling. We previously introduced a general approach to finding multiple dependencies that provides accurate measures of levels of dependency for subsets of variables in a data set, a measure that is significantly nonzero only if the subset of variables is collectively dependent. This is useful, however, only if we can avoid a combinatorial explosion of calculations for increasing numbers of variables. The proposed dependence measure for a subset of variables, τ, differential interaction information, Δ(τ), has the property that for subsets of τ some of the factors of Δ(τ) are significantly nonzero, when the full dependence includes more variables. We use this property to suppress the combinatorial explosion by following the "shadows" of multivariable dependency on smaller subsets. Rather than calculating the marginal entropies of all subsets at each degree level, we need to consider only calculations for subsets of variables with appropriate "shadows." The number of calculations for n variables at a degree level of d grows, therefore, at a much smaller rate than the binomial coefficient (n, d), but depends on the parameters of the "shadows" calculation. This approach, avoiding a combinatorial explosion, enables the use of our multivariable measures on very large data sets. We demonstrate this method on simulated data sets, and characterize the effects of noise and sample numbers. In addition, we analyze a data set of a few thousand mutant yeast strains interacting with a few thousand chemical compounds. PMID:26335709
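The dependency measures above build on interaction information over variable subsets. A minimal sketch of the triplet case as an alternating entropy sum (the Δ(τ) "shadows" bookkeeping is not reproduced here); XOR is the standard example of collective dependence without any pairwise dependence:

```python
import math
from collections import Counter

def H(*cols):
    """Joint Shannon entropy (bits) of one or more aligned data columns."""
    rows = list(zip(*cols))
    n = len(rows)
    return -sum((c / n) * math.log2(c / n) for c in Counter(rows).values())

def interaction_information(x, y, z):
    """Alternating entropy sum for a triplet; significantly nonzero in
    magnitude (sign conventions differ) only under collective dependence."""
    return (H(x) + H(y) + H(z)
            - H(x, y) - H(x, z) - H(y, z)
            + H(x, y, z))

# XOR: every pair of variables is independent, yet the triplet is dependent.
x = [0, 0, 1, 1]
y = [0, 1, 0, 1]
z = [a ^ b for a, b in zip(x, y)]
```

All three pairwise mutual informations vanish here while the triplet measure is one full bit, which is the signature the "shadows" search exploits on larger subsets.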
Aragón, Alfredo S.; Kalberg, Wendy O.; Buckley, David; Barela-Scott, Lindsey M.; Tabachnick, Barbara G.; May, Philip A.
2010-01-01
Background While a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similar to controls on relatively simple tests. Methods Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine motor skills. Results Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined “a priori” based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared to controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. On the more complex
NASA Astrophysics Data System (ADS)
Yin, Zhendong; Zong, Zhiyuan; Sun, Hongjian; Wu, Zhilu; Yang, Zhutian
2012-12-01
In this article, an efficient multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) is proposed and investigated for direct-sequence ultrawideband systems under different channels: the additive white Gaussian noise channel and the IEEE 802.15.3a multipath channel. Two issues identified in the literature remain to be solved: the computational complexity of classical optimum multiuser detection (OMD) rises exponentially with the number of users, and the bit error rate (BER) performance of other sub-optimal multiuser detectors is not satisfactory. The proposed method can make a good tradeoff between complexity and performance through the various behaviors of artificial fishes in a simplified Euclidean solution space, which is constructed from the solutions of several sub-optimal multiuser detectors: the minimum mean square error detector, the decorrelating detector, and the successive interference cancellation detector. As a result of this novel scheme, the convergence speed of AFSA-MUD is greatly accelerated and the number of iterations is also significantly reduced. The experimental results demonstrate that the BER performance and the near-far effect resistance of this proposed algorithm are quite close to those of OMD, while its computational complexity is much lower than that of the traditional OMD. Moreover, as the number of active users increases, the BER performance of AFSA-MUD remains almost the same as that of OMD.
Online Community Detection for Large Complex Networks
Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian
2014-01-01
Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are evaluated by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm, while it gives competitive performance. PMID:25061683
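Modularity, the quantity optimized (in expectation) above, has a closed form for any partition. A small pure-Python sketch; the graph and the partitions are invented for illustration:

```python
def modularity(edges, community):
    """Newman modularity Q of a node partition of an undirected graph:
    fraction of intra-community edges minus its expected value under a
    degree-preserving random rewiring of the same graph."""
    m = len(edges)
    deg, internal = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            internal[community[u]] = internal.get(community[u], 0) + 1
    totals = {}                         # sum of degrees per community
    for node, d in deg.items():
        totals[community[node]] = totals.get(community[node], 0) + d
    return sum(internal.get(c, 0) / m - (t / (2 * m)) ** 2
               for c, t in totals.items())

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
two_communities = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
one_community = {n: 0 for n in range(6)}
```

Splitting the graph at the bridge scores Q = 5/14, while lumping everything into one community scores zero, so a modularity-maximizing (or expected-modularity-maximizing) algorithm prefers the natural two-community split.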
Borthwick, Kenneth M; Smelser, Diane T; Bock, Jonathan A; Elmore, James R; Ryer, Evan J; Ye, Zi; Pacheco, Jennifer A.; Carrell, David S.; Michalkiewicz, Michael; Thompson, William K; Pathak, Jyotishman; Bielinski, Suzette J; Denny, Joshua C; Linneman, James G; Peissig, Peggy L; Kho, Abel N; Gottesman, Omri; Parmar, Harpreet; Kullo, Iftikhar J; McCarty, Catherine A; Böttinger, Erwin P; Larson, Eric B; Jarvik, Gail P; Harley, John B; Bajwa, Tanvir; Franklin, David P; Carey, David J; Kuivaniemi, Helena; Tromp, Gerard
2015-01-01
Background and objective We designed an algorithm to identify abdominal aortic aneurysm cases and controls from electronic health records to be shared and executed within the “electronic Medical Records and Genomics” (eMERGE) Network. Materials and methods Structured Query Language (SQL) was used to script the algorithm, utilizing “Current Procedural Terminology” and “International Classification of Diseases” codes, with demographic and encounter data to classify individuals as case, control, or excluded. The algorithm was validated using blinded manual chart review at three eMERGE Network sites and one non-eMERGE Network site. Validation comprised evaluation of an equal number of predicted cases and controls selected at random from the algorithm predictions. After validation at the three eMERGE Network sites, the remaining eMERGE Network sites performed verification only. Finally, the algorithm was implemented as a workflow in the Konstanz Information Miner, which represented the logic graphically while retaining intermediate data for inspection at each node. The algorithm was configured to be independent of specific access to data and was exportable (without data) to other sites. Results The algorithm demonstrated positive predictive values (PPV) of 92.8% (CI: 86.8-96.7) and 100% (CI: 97.0-100) for cases and controls, respectively. It also performed well outside the eMERGE Network. Implementation of the transportable executable algorithm as a Konstanz Information Miner workflow required much less effort than implementation from pseudo code, and ensured that the logic was as intended. Discussion and conclusion This ePhenotyping algorithm identifies abdominal aortic aneurysm cases and controls from the electronic health record with high case and control PPV necessary for research purposes, can be disseminated easily, and applied to high-throughput genetic and other studies. PMID:27054044
ERIC Educational Resources Information Center
Holland, V. Melissa; Rose, Andrew
Complex conditional instructions ("if X, then do Y") are prevalent in public documents, where they typically appear in prose form. Results of two previous studies have shown that conditional instructions become very difficult to process as the structure becomes more complex. A study was designed to investigate whether this difficulty can be…
'Selfish herds' of guppies follow complex movement rules, but not when information is limited.
Kimbell, Helen S; Morrell, Lesley J
2015-10-01
Under the threat of predation, animals can decrease their level of risk by moving towards other individuals to form compact groups. A significant body of theoretical work has proposed multiple movement rules, varying in complexity, which might underlie this process of aggregation. However, if and how animals use these rules to form compact groups is still not well understood, and how environmental factors affect the use of these rules even less so. Here, we evaluate the success of different movement rules, by comparing their predictions with the movement seen when shoals of guppies (Poecilia reticulata) form under the threat of predation. We repeated the experiment in a turbid environment to assess how the use of the movement rules changed when visual information is reduced. During a simulated predator attack, guppies in clear water used complex rules that took multiple neighbours into account, forming compact groups. In turbid water, the difference between all rule predictions and fish movement paths increased, particularly for complex rules, and the resulting shoals were more fragmented than in clear water. We conclude that guppies are able to use complex rules to form dense aggregations, but that environmental factors can limit their ability to do so. PMID:26400742
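Two of the candidate rule families, from simplest to more complex, reduce to direction computations from neighbour positions. A sketch under stated assumptions: the positions, the exponential distance weighting, and the decay constant are illustrative choices, not the paper's exact rules:

```python
import math

def unit(dx, dy):
    """Normalize a displacement to a unit direction vector."""
    d = math.hypot(dx, dy)
    return (dx / d, dy / d) if d else (0.0, 0.0)

def nearest_neighbour_rule(me, others):
    """Simplest rule: head straight for the single closest neighbour."""
    tx, ty = min(others, key=lambda p: math.dist(me, p))
    return unit(tx - me[0], ty - me[1])

def local_crowded_horizon(me, others, decay=1.0):
    """More complex rule: every neighbour attracts at once, each weighted
    by a decaying function of distance, so the fish integrates the whole
    visible group rather than reacting to one neighbour."""
    vx = vy = 0.0
    for x, y in others:
        w = math.exp(-decay * math.dist(me, (x, y)))
        ux, uy = unit(x - me[0], y - me[1])
        vx, vy = vx + w * ux, vy + w * uy
    return unit(vx, vy)

me, others = (0.0, 0.0), [(1.0, 0.0), (0.0, 3.0)]
```

Turbidity can be modelled by truncating `others` to neighbours within a visibility radius, which collapses the complex rule towards the simple one, mirroring the fragmented shoals seen in turbid water.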
Robust synchronization of complex networks with uncertain couplings and incomplete information
NASA Astrophysics Data System (ADS)
Wang, Fan; Liang, Jinling; Wang, Zidong; Alsaadi, Fuad E.
2016-07-01
The mean square exponential (MSE) synchronization problem is investigated in this paper for complex networks with the simultaneous presence of uncertain couplings and incomplete information, the latter comprising both randomly occurring delays and randomly occurring non-linearities. The network considered is uncertain with time-varying stochastic couplings. The randomly occurring delays and non-linearities are modelled by two Bernoulli-distributed white sequences with known probabilities, to better describe realistic complex networks. By utilizing a coordinate transformation, the addressed complex network can be exponentially synchronized in the mean square if the MSE stability of a transformed subsystem can be assured. The stability problem is studied first for the transformed subsystem based on the Lyapunov functional method. Then, an easy-to-verify sufficient criterion is established by further decomposing the transformed system, which embodies the joint impacts of the single-node dynamics, the network topology and the statistical quantities of the uncertainties on the synchronization of the complex network. Numerical examples are exploited to illustrate the effectiveness of the proposed methods.
The utility of accurate mass and LC elution time information in the analysis of complex proteomes
Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.
2005-08-01
Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity due to the lack of specific modifications, and overestimates the expected complexity since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time is representative of a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to that for the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
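The uniqueness test described above (a peptide is identifiable when no other candidate falls inside both its mass window and its NET window) reduces to a pairwise comparison. An O(n^2) sketch with invented peptide values; a real digest would use a sorted index rather than brute force:

```python
def unique_fingerprints(peptides, ppm_tol=1.0, net_tol=0.01):
    """Count peptides uniquely identifiable from accurate mass (+/- ppm_tol,
    in parts per million) combined with normalized elution time (+/- net_tol
    on the 0-1 NET scale). A peptide is unique if no other peptide falls
    inside both tolerance windows simultaneously."""
    unique = 0
    for i, (m1, t1) in enumerate(peptides):
        clash = any(abs(m2 - m1) <= m1 * ppm_tol * 1e-6
                    and abs(t2 - t1) <= net_tol
                    for j, (m2, t2) in enumerate(peptides) if j != i)
        unique += not clash
    return unique

# (mass in Da, NET): three near-isobaric peptides, two separated only by NET.
peps = [(800.4000, 0.30), (800.4002, 0.80), (800.4003, 0.301), (1200.6000, 0.50)]
```

Tightening the mass tolerance alone resolves near-isobaric peptides only when their masses actually differ beyond the window; the NET dimension resolves the rest, which is the paper's core argument for the combined fingerprint.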
Liu, Jing; Morikawa, Masa-aki; Kimizuka, Nobuo
2011-11-01
A novel amphiphilic Tb(3+) complex (TbL(+)) having anionic bis(pyridine) arms and a hydrophobic alkyl chain is developed. It spontaneously self-assembles in water and gives stable vesicles that show sensitized luminescence of Tb(3+) ions at neutral pH. The TbL(+) complex is designed to be coordinatively unsaturated, i.e., water molecules occupy some of the first coordination spheres and are replaceable upon binding of phosphate ions. These features render TbL(+) a self-assembling receptor molecule that shows an increase in luminescence intensity upon binding of nucleotides. Upon addition of adenosine triphosphate (ATP), significant amplification of luminescence intensity was observed. On the other hand, ADP showed moderately increased luminescence, and almost no enhancement was observed for AMP. Very interestingly, the increase in luminescence intensity observed for ATP and ADP showed a sigmoidal dependence on the concentration of added nucleotides, indicating positive cooperative binding of these nucleotides to TbL(+) complexes preorganized on the vesicle surface. Self-assembly of amphiphilic Tb(3+) receptor complexes provides nanointerfaces which selectively convert and amplify the molecular information of high-energy phosphates linked by phosphoanhydride bonds into luminescence intensity changes.
Cameron, Delroy; Sheth, Amit P.; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A.
2015-01-01
While contemporary semantic search systems offer improvements over classical keyword-based search, they are not always adequate for complex, domain-specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and “intelligible constructs” not typically modeled in ontologies. These intelligible constructs convey essential information, including notions of intensity, frequency, interval, dosage and sentiment, which can be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain-specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain-specific rules to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates, and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain-specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain-specific information needs in prescription drug abuse epidemiology. When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving
An Information Theoretic Algorithm for Finding Periodicities in Stellar Light Curves
NASA Astrophysics Data System (ADS)
Huijse, Pablo; Estevez, Pablo A.; Protopapas, Pavlos; Zegers, Pablo; Principe, José C.
2012-10-01
We propose a new information theoretic metric for finding periodicities in stellar light curves. Light curves are astronomical time series of brightness over time, characterized as noisy and unevenly sampled. The proposed metric combines correntropy (generalized correlation) with a periodic kernel to measure similarity among samples separated by a given period. The new metric provides a periodogram, called the Correntropy Kernelized Periodogram (CKP), whose peaks are associated with the fundamental frequencies present in the data. The CKP does not require any resampling, slotting or folding scheme, as it is computed directly from the available samples. The CKP is the main part of a fully automated pipeline for periodic light curve discrimination to be used in astronomical survey databases. We show that the CKP method outperformed slotted correntropy and conventional methods used in astronomy for periodicity discrimination and period estimation tasks, using a set of light curves drawn from the MACHO survey. The proposed metric achieved 97.2% true positives with 0% false positives at the 99% confidence level for the periodicity discrimination task, and 88% hits with 11.6% multiples and 0.4% misses in the period estimation task.
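A minimal sketch of a correntropy-style periodogram: a Gaussian kernel compares sample magnitudes while a periodic kernel selects pairs whose time lag is near a multiple of the trial period. This is a simplification in the spirit of the CKP, not the authors' exact estimator, and the kernel widths are illustrative:

```python
import numpy as np

def correntropy_periodogram(t, x, periods, sigma_x=0.5, sigma_t=0.3):
    # Pairs whose time lag is close to a multiple of the trial period
    # (periodic kernel) and whose magnitudes are similar (Gaussian
    # kernel) raise the score for that period.
    dt = t[:, None] - t[None, :]
    gx = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma_x**2))
    return np.array([(gx * np.exp(-2 * np.sin(np.pi * dt / P)**2
                                  / sigma_t**2)).mean()
                     for P in periods])

# unevenly sampled, noiseless sinusoid with true period 2.0
t = np.sort((np.arange(200) * 0.61803) % 10.0)
x = np.sin(np.pi * t)
scores = correntropy_periodogram(t, x, [1.3, 2.0, 2.7])
# the score should peak at the true period, 2.0
```

Note that no resampling or folding is needed: the metric is computed directly over pairs of the available, unevenly spaced samples, which is the property the abstract highlights.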
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher
2005-01-01
This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The report identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the $13 million expansion to KSC's Visitor Complex, the new information center welcomes visitors to the Gateway to the Universe. The five large video walls provide an orientation video, with an introduction to the range of activities and exhibits, and honor the center's namesake, President John F. Kennedy. Other additions include a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater, plus an International Space Station-themed ticket plaza, featuring a structure of overhanging solar panels and astronauts performing assembly tasks. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the $13 million expansion to KSC's Visitor Complex, the new information center welcomes visitors to the Gateway to the Universe. The five large video walls provide an orientation video, shown here with photos of John Glenn on his historic Shuttle mission in October 1998, with an introduction to the range of activities and exhibits, and honor the center's namesake, President John F. Kennedy. Other new additions include a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater, plus an International Space Station-themed ticket plaza, featuring a structure of overhanging solar panels and astronauts performing assembly tasks. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the Robot Scouts exhibit in the $13 million expansion to KSC's Visitor Complex, this display offers a view of how data from robotic probes might be used to build a human habitat for Mars. Visitors witness a simulated Martian sunset. Other new additions include an information center, a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater, plus an International Space Station-themed ticket plaza, featuring a structure of overhanging solar panels and astronauts performing assembly tasks. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the $13 million expansion to KSC's Visitor Complex, the new information center welcomes visitors to the Gateway to the Universe. The five large video walls provide an orientation video, with an introduction to the range of activities and exhibits, and honor the center's namesake, President John F. Kennedy. Other new additions include a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater, and an International Space Station-themed ticket plaza, featuring a structure of overhanging solar panels and astronauts performing assembly tasks. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
Markov and non-Markov processes in complex systems by the dynamical information entropy
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Gafarov, F. M.
1999-12-01
We consider Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of two mutually dependent channels of entropy, alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium, psychology (short-term numeral and pattern human memory and the effect of stress on the dynamical tapping test), random dynamics of RR intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system), and the chaotic dynamics of the parameters of financial markets and ecological systems.
Huesgen, Pitter F; Alami, Meriem; Lange, Philipp F; Foster, Leonard J; Schröder, Wolfgang P; Overall, Christopher M; Green, Beverley R
2013-01-01
In organisms with complex plastids acquired by secondary endosymbiosis from a photosynthetic eukaryote, the majority of plastid proteins are nuclear-encoded, translated on cytoplasmic ribosomes, and guided across four membranes by a bipartite targeting sequence. In-depth understanding of this vital import process has been impeded by a lack of information about the transit peptide part of this sequence, which mediates transport across the inner three membranes. We determined the mature N-termini of hundreds of proteins from the model diatom Thalassiosira pseudonana, revealing extensive N-terminal modification by acetylation and proteolytic processing in both cytosol and plastid. We identified 63 mature N-termini of nucleus-encoded plastid proteins, deduced their complete transit peptide sequences, determined a consensus motif for their cleavage by the stromal processing peptidase, and found evidence for subsequent processing by a plastid methionine aminopeptidase. The cleavage motif differs from that of higher plants, but is shared with other eukaryotes with complex plastids.
Crolley, R.; Thompson, M.
2011-01-31
There has been a need for years for a faster and cheaper deployment model for information technology (IT) solutions to address waste management needs at US Department of Energy (DOE) complex sites. Budget constraints, challenges in deploying new technologies, frequent travel, and increased job demands for existing employees have prevented IT organizations from staying abreast of new technologies or deploying them quickly. Despite such challenges, IT organizations have added significant value to waste management handling through better worker safety, tracking, characterization, and disposition at DOE complex sites. Systems developed for site-specific missions have broad applicability to waste management challenges and in many cases have been expanded to meet other waste missions. Radio frequency identification (RFID) and global positioning system (GPS)-enabled solutions have reduced radiation exposure and safety risks. New web-based and mobile applications have enabled precision characterization and control of nuclear materials. These solutions have also improved operational efficiency, shortened schedules, reduced cost, and improved regulatory compliance. Collaboration between DOE complex sites is improving time to delivery and cost efficiency for waste management missions with new IT such as wireless computing, GPS, and RFID. Integrated solutions developed at separate DOE complex sites by new technology Centers of Excellence (CoE) have increased material control and accountability, worker safety, and environmental sustainability. CoEs offer DOE sister sites significant cost and time savings by leveraging their technology expertise in project scoping, implementation, and ongoing operations.
Bharkhada, Deepak; Yu, Hengyong; Ge, Shuping; Carr, J Jeffrey; Wang, Ge
2009-01-01
High x-ray radiation dose is a major public concern with the increasing use of multidetector computed tomography (CT) for diagnosis of cardiovascular diseases. This issue must be effectively addressed by dose-reduction techniques. Recently, our group proved that an internal region of interest (ROI) can be exactly reconstructed solely from localized projections if a small subregion within the ROI is known. In this article, we propose to use attenuation values of the blood in aorta and vertebral bone to serve as the known information for localized cardiac CT. First, we describe a novel interior tomography approach that backprojects differential fan-beam or parallel-beam projections to obtain the Hilbert transform and then reconstructs the original image in an ROI using the iterative projection onto convex sets algorithm. Then, we develop a numerical phantom based on clinical cardiac CT images for simulations. Our results demonstrate that it is feasible to use practical prior information and exactly reconstruct cardiovascular structures only from projection data along x-ray paths through the ROI.
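The iterative projection-onto-convex-sets step can be illustrated with a toy linear analogue: recover an image vector from incomplete linear measurements plus a small known subregion, alternating between the two constraint sets. The random measurement matrix below is an illustrative stand-in for the backprojected differential projection data, not the paper's actual CT geometry:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x_true = rng.standard_normal(n)
A = rng.standard_normal((8, n))      # incomplete linear measurements
b = A @ x_true                       # stand-in for localized projection data
known_idx = np.arange(5)             # the known subregion inside the ROI
known_val = x_true[known_idx]        # e.g. blood/bone attenuation values

x = np.zeros(n)
AAt_inv = np.linalg.inv(A @ A.T)
for _ in range(5000):
    # project onto the affine set {x : A x = b} (data consistency)
    x = x + A.T @ (AAt_inv @ (b - A @ x))
    # project onto the set of images matching the known subregion
    x[known_idx] = known_val
# x converges to the unique point in the intersection of the two sets
```

Each step is a projection onto a convex (here affine) set, so the iteration converges to a point satisfying both constraints; when the constraints jointly determine the image, that point is the true one, mirroring the paper's claim that a known subregion makes exact interior reconstruction possible.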
Dowding, Dawn W; Cheyne, Helen L; Hundley, Vanora
2011-10-01
Randomised controlled trials are the 'gold standard' for evaluating the effectiveness of interventions in health-care settings. However, in midwifery care, many interventions are 'complex', comprising a number of different elements which may have an effect on the impact of the intervention in health-care settings. In this paper we reflect on our experience of designing and evaluating a complex intervention (a decision tool to assist with the diagnosis of labour in midwifery care), examining some of the issues that our study raises for future research in complex interventions.
1990-12-31
The Complex Terrain Dispersion Model Plus (CTDMPLUS) is a refined air quality model for use in all stability conditions for complex terrain applications. It contains the technology of the original Complex Terrain Dispersion Model (CTDM) for stable and neutral conditions, but also models daytime, unstable conditions. The model makes use of considerable detail in the terrain and meteorological data (as compared to current EPA regulatory models) and requires the parameterization of individual terrain features, thus considering the three-dimensional nature of the interaction of the plume and terrain.
NASA Astrophysics Data System (ADS)
Steinhaeuser, K.; Chawla, N. V.; Ganguly, A. R.
2010-12-01
Recent articles have posited that the skills of climate model projections, particularly for variables and scales of interest to decision makers, may need to be significantly improved. Here we hypothesize that there is information content in variables that are projected more reliably, for example, sea surface temperatures, which is relevant for improving predictions of other variables at scales which may be more crucial, for example, regional land temperature and precipitation anomalies. While this hypothesis may be partially supported based on conceptual understanding, a key question to explore is whether the relevant information content can be meaningfully extracted from observations and model simulations. Here we use climate reconstructions from reanalysis datasets to examine the question in detail. Our tool of choice is complex networks, which have provided useful insights in the context of descriptive analysis and change detection for climate in the recent literature. We describe a new adaptation of complex networks based on computational approaches which provide additional descriptive insights at both global and regional scales, specifically sea surface variables, and provide a unified framework for data-guided predictive modeling, specifically for regional temperature and precipitation over land. Complex networks were constructed from historical data to study the properties of the global climate system and characterize behavior at the global scale. Clusters based on community detection, which leverage the network distance, were used to identify regional structures. Persistence and stability of these features over time were evaluated. Predictive information content of ocean indicators with respect to land climate was extracted using a suite of regression models and validated on held-out data. Our results suggest that the new adaptation of complex networks may be well-suited to provide a unified framework for exploring climate teleconnections or long
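A minimal stand-in for the network construction step: link grid points whose time series correlate above a threshold, then find connected groups. Real climate-network studies use richer community detection; the two-region data below are synthetic:

```python
import numpy as np
from collections import deque

def correlation_network(series, threshold=0.5):
    """Build an unweighted network linking series whose correlation
    exceeds `threshold`, and return its connected components."""
    c = np.corrcoef(series)
    n = len(series)
    adj = [[j for j in range(n) if j != i and c[i, j] > threshold]
           for i in range(n)]
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, q = [], deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        comps.append(sorted(comp))
    return comps

# two synthetic "regions": series 0-2 follow one signal, 3-5 another
rng = np.random.default_rng(1)
base1, base2 = rng.standard_normal((2, 100))
series = np.array([base1 + 0.1 * rng.standard_normal(100) for _ in range(3)] +
                  [base2 + 0.1 * rng.standard_normal(100) for _ in range(3)])
comps = correlation_network(series)
```

The recovered components correspond to the two coherent regions, the kind of regional structure the paper extracts with community detection before fitting predictive models.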
Enhanced Community Structure Detection in Complex Networks with Partial Background Information
Zhang, Zhong-Yuan; Sun, Kai-Di; Wang, Si-Qi
2013-01-01
Community structure detection in complex networks is important since it can help better understand the network topology and how the network works. However, there is still not a clear and widely-accepted definition of community structure, and in practice, different models may give very different results of communities, making it hard to explain the results. In this paper, different from the traditional methodologies, we design an enhanced semi-supervised learning framework for community detection, which can effectively incorporate the available prior information to guide the detection process and can make the results more explainable. By logical inference, the prior information is more fully utilized. The experiments on both the synthetic and the real-world networks confirm the effectiveness of the framework. PMID:24247657
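The "logical inference" on prior information can be illustrated for must-link constraints, which are transitive: knowing nodes (1,2) and (2,3) belong together implies (1,3) do too. A union-find sketch (an assumed simplification of the paper's semi-supervised framework):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def expand_must_links(n, must_links):
    """Close must-link constraints under transitivity and return the
    implied node groups, which can then seed community detection."""
    uf = UnionFind(n)
    for a, b in must_links:
        uf.union(a, b)
    groups = {}
    for v in range(n):
        groups.setdefault(uf.find(v), []).append(v)
    return sorted(groups.values())

# (1,2) and (2,3) together imply the larger group {1,2,3}
groups = expand_must_links(6, [(1, 2), (2, 3), (4, 5)])
```

Expanding the prior pairs before detection is one way the available supervision can be "more fully utilized", as the abstract puts it.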
NASA Astrophysics Data System (ADS)
Greisch, Jean Francois; Harding, Michael E.; Chmela, Jiri; Klopper, Willem M.; Schooss, Detlef; Kappes, Manfred M.
2016-06-01
The application of lanthanoid complexes ranges from photovoltaics and light-emitting diodes to quantum memories and biological assays. Rationalization of their design requires a thorough understanding of intramolecular processes such as energy transfer, charge transfer, and non-radiative decay involving their subunits. Characterization of the excited states of such complexes considerably benefits from mass spectrometric methods since the associated optical transitions and processes are strongly affected by stoichiometry, symmetry, and overall charge state. We report herein spectroscopic measurements on ensembles of ions trapped in the gas phase and soft-landed in neon matrices. Their interpretation is considerably facilitated by direct comparison with computations. The combination of energy- and time-resolved measurements on isolated species with density functional as well as ligand-field and Franck-Condon computations enables us to infer structural as well as dynamical information about the species studied. The approach is first illustrated for sets of model lanthanoid complexes whose structure and electronic properties are systematically varied via the substitution of one component (lanthanoid or alkali,alkali-earth ion): (i) systematic dependence of ligand-centered phosphorescence on the lanthanoid(III) promotion energy and its impact on sensitization, and (ii) structural changes induced by the substitution of alkali or alkali-earth ions in relation with structures inferred using ion mobility spectroscopy. The temperature dependence of sensitization is briefly discussed. The focus is then shifted to measurements involving europium complexes with doxycycline an antibiotic of the tetracycline family. Besides discussing the complexes' structural and electronic features, we report on their use to monitor enzymatic processes involving hydrogen peroxide or biologically relevant molecules such as adenosine triphosphate (ATP).
NASA Astrophysics Data System (ADS)
Kasthurirathna, Dharshana; Piraveenan, Mahendra; Harré, Michael
2014-01-01
In this paper, we study the influence of the topological structure of social systems on the evolution of coordination in them. We simulate a coordination game ("Stag-hunt") on four well-known classes of complex networks commonly used to model social systems, namely scale-free, small-world, random and hierarchical-modular, as well as on the well-mixed model. Our particular focus is on understanding the impact of information diffusion on coordination, and how this impact varies according to the topology of the social system. We demonstrate that while time-lags and noise in the information about relative payoffs affect the emergence of coordination in all social systems, some topologies are markedly more resilient than others to these effects. We also show that, while non-coordination may be a better strategy in a society where people do not have information about the payoffs of others, coordination will quickly emerge as the better strategy when people get this information about others, even with noise and time lags. Societies with the so-called small-world structure are most conducive to the emergence of coordination, despite limitations in information propagation, while societies with scale-free topologies are most sensitive to noise and time-lags in information diffusion. Surprisingly, in all topologies, it is not the highest connected people (hubs), but the slightly less connected people (provincial hubs) who first adopt coordination. Our findings confirm that the evolution of coordination in social systems depends heavily on the underlying social network structure.
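A minimal best-response sketch of the stag-hunt on a ring lattice (the unrewired skeleton of a Watts-Strogatz small-world network; the payoff values are illustrative, not the paper's):

```python
# Stag-hunt payoffs: mutual stag pays most; hare is the safe choice.
PAYOFF = {('S', 'S'): 4, ('S', 'H'): 0, ('H', 'S'): 3, ('H', 'H'): 3}

def ring_lattice(n, k):
    # each node links to its k nearest neighbours on each side -- the
    # pre-rewiring substrate of a small-world network
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
            for i in range(n)}

def best_response_step(strategies, neighbors):
    # synchronous best response: each node picks the strategy with the
    # highest total payoff against its neighbours' current strategies,
    # breaking ties toward coordination (stag)
    new = {}
    for i, nbrs in neighbors.items():
        pay = {s: sum(PAYOFF[(s, strategies[j])] for j in nbrs) for s in 'SH'}
        new[i] = 'S' if pay['S'] >= pay['H'] else 'H'
    return new

nbrs = ring_lattice(20, 2)
all_stag = {i: 'S' for i in range(20)}
all_hare = {i: 'H' for i in range(20)}
after_stag = best_response_step(all_stag, nbrs)  # stays all stag
after_hare = best_response_step(all_hare, nbrs)  # stays all hare
```

Both uniform profiles are fixed points, which is the defining tension of the stag-hunt: coordination (stag) is payoff-dominant, but non-coordination (hare) is risk-dominant, and which one a networked population reaches depends on topology and information, as the paper investigates.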
Konovalova, Anna; Mitchell, Angela M; Silhavy, Thomas J
2016-01-01
Lipoprotein RcsF is the OM component of the Rcs envelope stress response. RcsF exists in complexes with β-barrel proteins (OMPs) allowing it to adopt a transmembrane orientation with a lipidated N-terminal domain on the cell surface and a periplasmic C-terminal domain. Here we report that mutations that remove BamE or alter a residue in the RcsF trans-lumen domain specifically prevent assembly of the interlocked complexes without inactivating either RcsF or the OMP. Using these mutations we demonstrate that these RcsF/OMP complexes are required for sensing OM outer leaflet stress. Using mutations that alter the positively charged surface-exposed domain, we show that RcsF monitors lateral interactions between lipopolysaccharide (LPS) molecules. When these interactions are disrupted by cationic antimicrobial peptides, or by the loss of negatively charged phosphate groups on the LPS molecule, this information is transduced to the RcsF C-terminal signaling domain located in the periplasm to activate the stress response. DOI: http://dx.doi.org/10.7554/eLife.15276.001 PMID:27282389
NASA Astrophysics Data System (ADS)
Yang, Jie; Yang, Ran
2007-11-01
The authors introduce an unsupervised Wishart classification technique for fully polarimetric SAR data using the H/α decomposition of POLSAR images, applied here to AIRSAR data of Flevoland, the Netherlands. The main contribution of this paper is its evaluation, which examines the algorithm and its results from three aspects. (i) The separability of the classifier is measured by calculating the Jeffries-Matusita distance (J-M distance) J_mn between pairs of classes. The J-M distance measures the average difference between the probability density functions of two classes; it ranges from 0 to 2, and a larger value indicates better separation between the classes. In this paper most J-M distances fall between 1.8 and 2.0, indicating good separation. (ii) The classification results are analyzed according to the average entropy and alpha of each final class. (iii) The classification algorithm is evaluated by comparing its results with the ground truth, with which the results agree well. The experiments, analysis and evaluation demonstrate that the Flevoland region is well classified and that the method has the advantage of edge preservation, which is helpful in the case of non-smooth borders. The paper also reports a better repeat time.
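For Gaussian class models, the Jeffries-Matusita distance used as the separability measure can be computed from the Bhattacharyya distance B as JM = 2(1 - exp(-B)). A sketch with illustrative inputs:

```python
import numpy as np

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """J-M distance between two Gaussian class models.  Ranges 0..2;
    values near 2 indicate well-separated classes."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2
    d = mu2 - mu1
    # Bhattacharyya distance for two Gaussians
    b = (d @ np.linalg.solve(cov, d) / 8
         + 0.5 * np.log(np.linalg.det(cov)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return 2 * (1 - np.exp(-b))

same = jeffries_matusita([0, 0], np.eye(2), [0, 0], np.eye(2))   # 0: identical
far = jeffries_matusita([0, 0], np.eye(2), [10, 0], np.eye(2))   # near 2
```

Values in the 1.8-2.0 band reported for the Flevoland classes thus sit close to the saturation value of 2, i.e. near-complete separability.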
Goldfarb, Dennis; Hast, Bridgid E; Wang, Wei; Major, Michael B
2014-12-01
Protein-protein interactions defined by affinity purification and mass spectrometry (APMS) suffer from high false discovery rates. Consequently, lists of potential interactions must be pruned of contaminants before network construction and interpretation, historically an expensive, time-intensive, and error-prone task. In recent years, numerous computational methods were developed to identify genuine interactions from the hundreds of candidates. Here, comparative analysis of three popular algorithms, HGSCore, CompPASS, and SAINT, revealed complementarity in their classification accuracies, which is supported by their divergent scoring strategies. We improved the area under the receiver operating characteristic curve of each algorithm by an average of 16% by integrating a variety of indirect data known to correlate with established protein-protein interactions, including mRNA coexpression, gene ontologies, domain-domain binding affinities, and homologous protein interactions. Each APMS scoring approach was incorporated into a separate logistic regression model along with the indirect features; the resulting three classifiers demonstrate improved performance on five diverse APMS data sets. To facilitate APMS data scoring within the scientific community, we created Spotlite, a user-friendly and fast web application. Within Spotlite, data can be scored with the augmented classifiers, annotated, and visualized (http://cancer.unc.edu/majorlab/software.php). The utility of the Spotlite platform to reveal physical, functional, and disease-relevant characteristics within APMS data is established through a focused analysis of the KEAP1 E3 ubiquitin ligase.
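The strategy of feeding an APMS score plus indirect features into a logistic regression can be sketched as follows. The classifier below is a generic gradient-descent implementation on synthetic data, not the authors' trained models; in the paper's setting, one column would hold an HGSCore/CompPASS/SAINT score and the others indirect evidence such as mRNA coexpression:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression trained by gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))           # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of log loss
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1 / (1 + np.exp(-Xb @ w)) > 0.5).astype(int)

rng = np.random.default_rng(0)
# synthetic data: genuine interactions (label 1) score higher on both
# the APMS-like feature and the indirect-evidence feature
X = np.vstack([rng.normal(1.0, 0.5, (50, 2)), rng.normal(-1.0, 0.5, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])
w = train_logistic(X, y)
acc = (predict(w, X) == y).mean()
```

Because logistic regression combines the features through learned weights, a weak APMS score can be rescued (or a contaminant penalized) by the indirect evidence, which is the intuition behind the reported AUC gains.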
Power-law ansatz in complex systems: Excessive loss of information.
Tsai, Sun-Ting; Chang, Chin-De; Chang, Ching-Hao; Tsai, Meng-Xue; Hsu, Nan-Jung; Hong, Tzay-Ming
2015-12-01
The ubiquity of power-law relations in empirical data displays physicists' love of simple laws and uncovering common causes among seemingly unrelated phenomena. However, many reported power laws lack statistical support and mechanistic backings, not to mention discrepancies with real data are often explained away as corrections due to finite size or other variables. We propose a simple experiment and rigorous statistical procedures to look into these issues. Making use of the fact that the occurrence rate and pulse intensity of crumple sound obey a power law with an exponent that varies with material, we simulate a complex system with two driving mechanisms by crumpling two different sheets together. The probability function of the crumple sound is found to transit from two power-law terms to a bona fide power law as compaction increases. In addition to showing the vicinity of these two distributions in the phase space, this observation nicely demonstrates the effect of interactions to bring about a subtle change in macroscopic behavior and more information may be retrieved if the data are subject to sorting. Our analyses are based on the Akaike information criterion that is a direct measurement of information loss and emphasizes the need to strike a balance between model simplicity and goodness of fit. As a show of force, the Akaike information criterion also found the Gutenberg-Richter law for earthquakes and the scale-free model for a brain functional network, a two-dimensional sandpile, and solar flare intensity to suffer an excessive loss of information. They resemble more the crumpled-together ball at low compactions in that there appear to be two driving mechanisms that take turns occurring. PMID:26764792
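The Akaike information criterion comparison underlying this analysis can be sketched for a power law against a competing simple model, each fit by maximum likelihood: AIC = 2k - 2 ln L penalizes parameters while rewarding fit. The data here are synthetic Pareto samples, not the paper's crumple-sound recordings:

```python
import numpy as np

def aic_power_law(x, xmin=1.0):
    """AIC for a continuous power law p(x) ~ x^-alpha, x >= xmin,
    with alpha fit by maximum likelihood (one free parameter)."""
    n = len(x)
    alpha = 1 + n / np.sum(np.log(x / xmin))
    loglik = n * np.log((alpha - 1) / xmin) - alpha * np.sum(np.log(x / xmin))
    return 2 * 1 - 2 * loglik

def aic_exponential(x, xmin=1.0):
    """AIC for a shifted exponential fit, a competing one-parameter model."""
    n = len(x)
    lam = 1 / np.mean(x - xmin)
    loglik = n * np.log(lam) - lam * np.sum(x - xmin)
    return 2 * 1 - 2 * loglik

rng = np.random.default_rng(0)
x = rng.pareto(1.5, 5000) + 1.0          # genuine power-law data, xmin = 1
better_fit_is_power_law = aic_power_law(x) < aic_exponential(x)
```

On genuinely heavy-tailed data the power law wins decisively; the paper's point is the converse test, where AIC exposes data sets for which the single power-law ansatz loses information relative to a two-term description.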
Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X.
2015-01-01
Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an energy neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while the network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput are achieved compared to the well-known traditional clustering protocol LEACH and recent energy-harvesting-aware clustering protocols. PMID:26712764
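The energy-neutrality constraint at the heart of the abstract above can be illustrated with a minimal sketch. The per-bit receive/aggregate/transmit cost model below is a generic first-order radio model with invented coefficients, not MENC's actual formulation.

```python
def is_energy_neutral(harvested_per_cycle, consumed_per_cycle):
    # A node can operate perpetually if its long-run consumption per data
    # cycle does not exceed what it harvests in that cycle.
    return consumed_per_cycle <= harvested_per_cycle

def ch_consumption(n_members, bits, fwd_bits, e_rx=50e-9, e_tx=50e-9, e_da=5e-9):
    # First-order cost (J) for a cluster head: receive the members' packets,
    # aggregate all packets (its own included), then transmit its own
    # aggregated packet plus forwarded inter-cluster traffic.
    return (n_members * bits * e_rx
            + (n_members + 1) * bits * e_da
            + (bits + fwd_bits) * e_tx)

cost = ch_consumption(n_members=10, bits=4000, fwd_bits=8000)
print(is_energy_neutral(harvested_per_cycle=5e-3, consumed_per_cycle=cost))  # True: harvest covers the cycle cost
```

Lengthening the data cycle lowers the per-cycle duty and raises the harvested budget, which is why the paper can derive a minimum feasible cycle by optimization under these constraints.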
NASA Technical Reports Server (NTRS)
Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.
2011-01-01
Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Application outputs included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provides essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.
NASA Astrophysics Data System (ADS)
Hou, W.; Wang, J.; Xu, X.; Leitch, J. W.; Delker, T.; Chen, G.
2015-12-01
This paper includes a series of studies that aim to develop a hyperspectral remote sensing technique for retrieving aerosol properties from GEO-TASO (Geostationary Trace gas and Aerosol Sensor Optimization), a newly developed instrument that measures radiation at 0.4-0.7 μm wavelengths at a spectral resolution of 0.02 nm. GEO-TASO is a prototype of TEMPO (Tropospheric Emissions: Monitoring of Pollution), which will be launched in 2022 to measure aerosols, O3, and other trace gases from a geostationary orbit over North America. The theoretical framework of the optimized inversion algorithm and the information content analysis, such as the degrees of freedom for signal (DFS), will be discussed for hyperspectral remote sensing in visible bands, as well as the application to GEO-TASO, which has been mounted on the NASA HU-25C aircraft and has gathered several days of airborne hyperspectral data for our studies. Based on optimization theory, and in contrast to the traditional lookup table (LUT) retrieval technique, our inversion method retrieves the aerosol parameters and surface reflectance simultaneously; UNL-VRTM (UNified Linearized Radiative Transfer Model) is employed for the forward model and Jacobian calculations, while principal component analysis (PCA) is used to constrain the hyperspectral surface reflectance. The information content analysis provides theoretical guidance for the practical inversion study on which aerosol parameters can be retrieved from GEO-TASO hyperspectral remote sensing. The inversion is conducted iteratively until the modeled spectral radiance fits the GEO-TASO measurements, using the quasi-Newton method L-BFGS-B (large-scale bound-constrained BFGS). Finally, the retrieved aerosol optical depth and other aerosol parameters are compared against those retrieved by AERONET and/or in situ measurements such as DISCOVER-AQ during the aircraft campaign.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, Efi; Ebtehaj, Mohammad
2016-04-01
The increasing availability of precipitation observations from the Global Precipitation Measurement (GPM) mission has fueled renewed interest in developing frameworks for accurate estimation of precipitation extremes, especially over ungauged mountainous terrains and coastal regions, to improve hydro-geological hazard prediction and control. Our recent research has shown that treating precipitation retrieval and data fusion/assimilation as inverse problems, and using a regularized variational approach with the regularization term(s) selected to impose desired constraints on the solution, leads to improved representation of extremes. Here we present new theoretical and computational developments which extend these ideas to a framework of retrieval via a regularized search within properly constructed databases. We test the framework on several tropical storms over the Ganges-Brahmaputra delta region and over the Himalayas and compare the results with the standard retrieval algorithms currently used for operational purposes.
Effects of visualization on algorithm comprehension
NASA Astrophysics Data System (ADS)
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
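Since the tool above teaches Dijkstra's algorithm, a compact reference implementation may be a useful companion to the abstract; the example graph is invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    # Returns shortest distances from source to every reachable node.
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

The "stale entry" check is exactly the kind of rule a learner can discover by stepping through a visualization: the priority queue may hold outdated distances, and they must be skipped rather than re-relaxed.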
Examining age differences in performance of a complex information search and retrieval task.
Czaja, S J; Sharit, J; Ownby, R; Roth, D L; Nair, S
2001-12-01
This study examined age differences in performance of a complex information search and retrieval task by using a simulated real-world task typical of those performed by customer service representatives. The study also investigated the influence of task experience and the relationships between cognitive abilities and task performance. One hundred seventeen participants from 3 age groups, younger (20-39 years), middle-aged (40-59 years), and older (60-75 years), performed the task for 3 days. Significant age differences were found for all measures of task performance with the exception of navigational efficiency and number of problems correctly navigated per attempt. There were also effects of task experience. The findings also indicated significant direct and indirect relations between component cognitive abilities and task performance. PMID:11766912
Using complex networks towards information retrieval and diagnostics in multidimensional imaging
Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen
2015-01-01
We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high-content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations act as effective discriminators and diagnostic markers. PMID:26626047
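The abstract above does not specify how the imaging time series are turned into networks. One standard mapping, offered here purely as an assumed stand-in, is the natural visibility graph (Lacasa et al.), in which two samples are linked if they can "see" each other over the intervening samples.

```python
def visibility_graph(series):
    # Natural visibility: nodes i and j are linked if every intermediate sample
    # lies strictly below the straight line of sight between (i, y_i) and (j, y_j).
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

ts = [3.0, 1.0, 2.0, 0.5, 4.0]
print(sorted(visibility_graph(ts)))  # [(0, 1), (0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]
```

Once a fluctuation record is a graph, standard network parameters (degree distribution, clustering, path lengths) become candidate discriminators of the kind the study uses.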
Zavala-Yoé, Ricardo; Ramírez-Mendoza, Ricardo; Cordero, Luz M
2015-01-01
Epilepsy imposes a major burden at the global level. Worldwide, about 1% of people suffer from epilepsy, and 30% of them (0.3%) are resistant to anticonvulsants. Among these, some childhood epilepsies, such as Doose syndrome (DS), are particularly difficult to manage. Doose syndrome is a very complicated type of childhood cryptogenic refractory epilepsy (CCRE) which is traditionally studied through the analysis of complex electroencephalograms (EEG) by neurologists. CCRE are affections that evolve over many years, and questions such as in which year the child was healthiest (fewest seizures), and in which region of the brain (channel) the affection has progressed most negatively, are very difficult or even impossible to answer given the quantity of EEG recorded over the patient's life. These questions can now be answered by applying entropies to the massive information contained in many EEG. CCRE cannot always be cured and, to the best of our knowledge, have not been investigated from a mathematical viewpoint. In this work, a set of 80 time series (distributed equally across four yearly recorded EEG) is studied in order to help pediatric neurologists better understand the evolution of this syndrome in the long term. Our contribution is to support multichannel long-term analysis of CCRE by observing simple entropy plots instead of studying long rolls of traditional EEG graphs. A comparative analysis among approximate entropy, sample entropy, our versions of multiscale entropy (MSE) and composite multiscale entropy revealed that our refined MSE was the most convenient complexity measure for describing DS. Additionally, a new entropy parameter is proposed, referred to as bivariate MSE (BMSE). Such BMSE will provide graphical information over a much longer term than MSE.
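Of the measures compared above, sample entropy is the simplest to sketch. The minimal implementation below uses the common conventions (Chebyshev distance, tolerance r = 0.2 times the standard deviation, self-matches excluded); the authors' refined variants are not reproduced here, and the test signals are invented.

```python
import math, random

def sample_entropy(x, m=2, r=None):
    # SampEn(m, r) = -ln(A / B), where B counts template pairs of length m
    # within tolerance r (Chebyshev distance) and A counts pairs of length m + 1.
    if r is None:
        mean = sum(x) / len(x)
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    def count(mm):
        n = len(x) - mm + 1
        templates = [x[i:i + mm] for i in range(n)]
        c = 0
        for i in range(n):
            for j in range(i + 1, n):  # j > i: self-matches excluded
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    a, b = count(m + 1), count(m)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

regular = [float(i % 2) for i in range(200)]  # perfectly periodic signal
random.seed(1)
noisy = [random.random() for _ in range(300)]  # uncorrelated noise
print(sample_entropy(regular) < sample_entropy(noisy))  # periodic signal is less complex
```

Low values indicate a predictable signal; plotting such values per channel per year is the kind of compact summary the abstract proposes in place of raw EEG rolls.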
Synchronization, TIGoRS, and Information Flow in Complex Systems: Dispositional Cellular Automata.
Sulis, William H
2016-04-01
Synchronization has a long history in physics where it refers to the phase matching of two identical oscillators. This notion has been extensively studied in physics as well as in biology, where it has been applied to such widely varying phenomena as the flashing of fireflies and firing of neurons in the brain. Human behavior, however, may be recurrent but it is not oscillatory even though many physiological systems do exhibit oscillatory tendencies. Moreover, much of human behaviour is collaborative and cooperative, where the individual behaviours may be distinct yet contemporaneous (if not simultaneous) and taken collectively express some functionality. In the context of behaviour, the important aspect is the repeated co-occurrence in time of behaviours that facilitate the propagation of information or of functionality, regardless of whether or not these behaviours are similar or identical. An example of this weaker notion of synchronization is transient induced global response synchronization (TIGoRS). Previous work has shown that TIGoRS is a ubiquitous phenomenon among complex systems, enabling them to stably parse environmental transients into salient units to which they stably respond. This leads to the notion of Sulis machines, which emergently generate a primitive linguistic structure through their dynamics. This article reviews the notion of TIGoRS and its expression in several complex systems models including tempered neural networks, driven cellular automata and cocktail party automata. The emergent linguistics of Sulis machines are discussed. A new class of complex systems model, the dispositional cellular automaton is introduced. A new metric for TIGoRS, the excess synchronization, is introduced and applied to the study of TIGoRS in dispositional cellular automata. It is shown that these automata exhibit a nonlinear synchronization response to certain perturbing transients. PMID:27033136
NASA Astrophysics Data System (ADS)
Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong
2015-07-01
The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation with impact force, run-up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution, and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. It uses an iteration algorithm based on the Riemann integral method to search for an approximate solution to the unknown flow surface. The established laws for the vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels, typically with irregular beds and superelevations, can be taken into account, and the resulting approximation closely replicates the direct integral solution. The approach is programmed in the MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the solutions for the flow surface and the mean velocity reproduce the investigated results well. Discussion regarding the model sensitivity and the sources of error concludes the paper.
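The Riemann-sum integration over an irregular cross-section can be sketched as follows. The Manning-type strip velocity u = k * h^(2/3), the bed profile, and the flow-surface elevation are all invented for illustration; the paper compares several established vertical velocity profiles instead.

```python
def mean_velocity(bed, surface, dy, k=1.0):
    # Riemann-sum integration of velocity over an irregular channel cross-section.
    # bed: bed elevations sampled every dy across the channel width.
    # Each strip gets a depth-averaged velocity from an illustrative
    # Manning-type law u = k * h**(2/3); mean velocity = discharge / area.
    area = discharge = 0.0
    for zb in bed:
        h = max(surface - zb, 0.0)  # dry strips above the flow surface contribute nothing
        u = k * h ** (2.0 / 3.0)
        area += h * dy
        discharge += u * h * dy
    return discharge / area if area > 0 else 0.0

bed = [2.0, 1.2, 0.6, 0.3, 0.5, 1.0, 1.8]  # irregular bed, deepest mid-channel
print(mean_velocity(bed, surface=1.5, dy=0.5))
```

Because each strip carries its own depth-dependent velocity, the asymmetry of the bed translates directly into an asymmetric velocity distribution, which is the effect the paper argues earlier methods neglect.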
Kumar, Anup; Pathak, Akhilendra K; Guria, Chandan
2015-10-01
A culture medium based on NPK-10:26:26 fertilizer was formulated for enhanced biomass and lipid production of Dunaliella tertiolecta by selecting appropriate nutrients and environmental parameters. A five-level-five-factor central composite design assisted response surface methodology was adopted for optimal cultivation of D. tertiolecta, and the results were compared with a simple genetic algorithm (GA). Significant improvement in biomass and lipid production was obtained using the newly formulated fertilizer medium over f/2 medium. The following optimal parameters [i.e., NaHCO3 (mM), NPK-10:26:26 (g L(-1)), NaCl (M), light intensity (μmol m(-2) s(-1)) and temperature (°C)] were obtained for maximum biomass (1.98 g L(-1)) and lipid production (0.76 g L(-1)): (42.50, 0.33, 1.09, 125, 25.13) and (38.44, 0.40, 1.25, 125, 24.5), respectively, using GA. A multi-objective optimization problem was solved using a non-dominated sorting GA to find the best operating variables to maximize biomass and lipid production simultaneously. The effects of the operating parameters and their interactions on algae and lipid productivity were successfully revealed. PMID:26188554
McDonough, Ian M.; Nashiro, Kaoru
2014-01-01
An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent to which neural complexity in the BOLD signal, as measured by multiscale entropy, (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity, a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited patterns distinct from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, and left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks, and it might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
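The multiscale part of multiscale entropy is the coarse-graining operation, sketched below with invented data. The white-noise demonstration shows why uncorrelated signals look simple at coarse scales: averaging over a window of length `scale` shrinks the variance roughly in proportion to the scale, so distinguishing BOLD complexity from noise requires comparing across scales, as the study does.

```python
import random

def coarse_grain(x, scale):
    # MSE coarse-graining: averages over non-overlapping windows of length `scale`
    # (any leftover samples that do not fill a window are dropped).
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

print(coarse_grain([1, 2, 3, 4, 5, 6], 2))  # [1.5, 3.5, 5.5]

def var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

random.seed(0)
white = [random.gauss(0, 1) for _ in range(20000)]
# For white noise the variance ratio is close to the scale factor (~5 here),
# which is why its entropy collapses at coarse time scales.
print(var(white) / var(coarse_grain(white, 5)))
```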
Zimmerman, K; Levitis, D; Addicott, E; Pringle, A
2016-02-01
We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets.
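The diversity criterion described above (mean nearest-neighbor distance in trait space) can be sketched directly. The exhaustive search and the toy points in a (genetic distance, geographic distance) plane are assumptions for illustration; the paper's algorithm searches larger pools more cleverly.

```python
import itertools, math

def mean_nn_distance(points):
    # Diversity score: average Euclidean distance from each point in the
    # candidate crossing-set to its nearest neighbor within the set.
    total = 0.0
    for p in points:
        total += min(math.dist(p, q) for q in points if q is not p)
    return total / len(points)

def best_crossing_set(pool, k):
    # Exhaustive search over all k-subsets (fine for small pools).
    return max(itertools.combinations(pool, k), key=mean_nn_distance)

# Hypothetical crosses plotted in a 2-D (genetic, geographic) trait space.
pool = [(0.0, 0.0), (0.1, 0.1), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (0.5, 0.5)]
print(best_crossing_set(pool, 4))  # the four well-spread corner points win
```

The criterion penalizes near-duplicate crosses (such as the (0.0, 0.0)/(0.1, 0.1) pair), which is exactly the behavior wanted when maximizing the range of distances represented in an experiment.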
Joye, Yannick; Steg, Linda; Ünal, Ayça Berfu; Pals, Roos
2016-01-01
Across 3 studies, we investigated whether visual complexity deriving from internally repeating visual information over many scale levels is a source of perceptual fluency. Such continuous repetition of visual information is formalized in fractal geometry and is a key property of natural structures. In the first 2 studies, we exposed participants to 3-dimensional high-fractal versus low-fractal stimuli, respectively characterized by a relatively high versus low degree of internal repetition of visual information. Participants evaluated high-fractal stimuli as more complex and fascinating than their low-fractal counterparts. We assessed ease of processing by asking participants to solve effortful puzzles during and after exposure to high-fractal versus low-fractal stimuli. Across both studies, we found that puzzles presented during and after seeing high-fractal stimuli were perceived as the easiest ones to solve and were solved more accurately and faster than puzzles associated with the low-fractal stimuli. In Study 3, we ran the Dot Probe Procedure to rule out that the findings from Study 1 and Study 2 reflected differences in attentional bias between the high-fractal and low-fractal stimuli, rather than perceptual fluency. Overall, our findings confirm that complexity deriving from internal repetition of visual information can be easy on the mind. (PsycINFO Database Record) PMID:26322692
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
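The fusion idea above can be sketched with a plain (non-adaptive) scalar Kalman filter that combines two noisy F0 tracks; the random-walk state model, the noise variances, and the synthetic data are invented for illustration and do not reproduce the paper's adaptive quality-weighting.

```python
import random

def kalman_fuse(tracks, meas_vars, q=0.1, x0=100.0, p0=1e4):
    # Scalar random-walk Kalman filter fusing several F0 estimator tracks.
    # Sequential updates: estimators with larger noise variance get smaller gains.
    x, p = x0, p0
    fused = []
    for frame in zip(*tracks):
        p += q                      # predict step (random-walk state model)
        for z, r in zip(frame, meas_vars):
            k = p / (p + r)         # Kalman gain for this estimator
            x += k * (z - x)
            p *= (1 - k)
        fused.append(x)
    return fused

random.seed(2)
true_f0 = 120.0  # Hz, held constant for the toy example
tracks = [[true_f0 + random.gauss(0, s) for _ in range(50)] for s in (1.0, 5.0)]
fused = kalman_fuse(tracks, meas_vars=[1.0, 25.0])
print(abs(fused[-1] - true_f0))  # small residual error after 50 frames
```

The gain structure is the point: the filter automatically down-weights the noisier estimator, and the paper's contribution is to adapt those effective variances per frame from quality measures.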
Updated treatment algorithm of pulmonary arterial hypertension.
Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne
2013-12-24
The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users), including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry, are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643
Transformation of polarized light information in the central complex of the locust.
Heinze, Stanley; Gotthardt, Sascha; Homberg, Uwe
2009-09-23
Many insects perceive the E-vector orientation of polarized skylight and use it for compass navigation. In locusts, polarized light is detected by photoreceptors of the dorsal rim area of the eye. Polarized light signals from both eyes are integrated in the central complex (CC), a group of neuropils in the center of the brain. Thirteen types of CC neuron are sensitive to dorsally presented polarized light (POL-neurons). These neurons interconnect the subdivisions of the CC, particularly the protocerebral bridge (PB), the upper and lower divisions of the central body (CBU, CBL), and the adjacent lateral accessory lobes (LALs). All POL-neurons show polarization opponency, i.e., they receive excitatory and inhibitory input at orthogonal E-vector orientations. To provide physiological evidence for the direction of information flow through the polarization vision network in the CC, we analyzed the functional properties of the different cell types through intracellular recordings. Tangential neurons of the CBL showed the highest signal-to-noise ratios and either received only ipsilateral polarized-light input or, together with CL1 columnar neurons, had eccentric receptive fields. Bilateral polarized-light inputs with zenith-centered receptive fields were found in tangential neurons of the PB and in columnar neurons projecting to the LALs. Together with other physiological parameters, these data suggest a flow of information from the CBL (input) to the PB and from there to the LALs (output). This scheme is supported by anatomical data and suggests a transformation from purely sensory E-vector coding at the CC input stage to position-invariant coding of 360-degree compass directions at the output stage.
Kubař, Tomáš; Elstner, Marcus
2013-04-28
In this work, a fragment-orbital density functional theory-based method is combined with two different non-adiabatic schemes for the propagation of the electronic degrees of freedom. This allows us to perform unbiased simulations of electron transfer processes in complex media, and the computational scheme is applied to the transfer of a hole in solvated DNA. It turns out that the mean-field approach, where the wave function of the hole is driven into a superposition of adiabatic states, leads to over-delocalization of the hole charge. This problem is avoided using a surface hopping scheme, resulting in a smaller rate of hole transfer. The method is highly efficient due to the on-the-fly computation of the coarse-grained DFT Hamiltonian for the nucleobases, which is coupled to the environment using a QM/MM approach. The computational efficiency and partial parallel character of the methodology make it possible to simulate electron transfer in systems of relevant biochemical size on a nanosecond time scale. Since standard non-polarizable force fields are applied in the molecular-mechanics part of the calculation, a simple scaling scheme was introduced into the electrostatic potential in order to simulate the effect of electronic polarization. It is shown that electronic polarization has an important effect on the features of charge transfer. The methodology is applied to two kinds of DNA sequences, illustrating the features of transfer along a flat energy landscape as well as over an energy barrier. The performance and relative merit of the mean-field scheme and the surface hopping for this application are discussed. PMID:23493847
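The contrast drawn above between the mean-field (Ehrenfest-like) and surface hopping pictures can be illustrated with a deliberately simplified toy: in the mean-field picture the hole's site charges are a population-weighted average over adiabatic states (which can smear the charge across many nucleobases), while in surface hopping the charge distribution is that of the single active state. The function names and the list-of-lists representation of per-state site charges are illustrative assumptions, not the paper's actual machinery.

```python
def mean_field_charge(populations, state_charges):
    """Mean-field picture: site charges are the population-weighted
    average over adiabatic states, so partial populations on several
    states can over-delocalize the hole across sites."""
    n_sites = len(state_charges[0])
    return [sum(p * q[i] for p, q in zip(populations, state_charges))
            for i in range(n_sites)]

def surface_hopping_charge(active_state, state_charges):
    """Surface hopping picture: the system occupies one active
    adiabatic state at a time, so the hole stays localized on that
    state's charge distribution."""
    return state_charges[active_state]
```

For two states localized on different bases with equal populations, the mean-field charge is split half-and-half over both sites, whereas surface hopping keeps the full charge on the active state's site.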
ERIC Educational Resources Information Center
Law, David J.; And Others
Information coordination tasks are tasks that require concurrent performance of two or more component tasks and the subsequent coordination of component information. In this experiment, different procedures, componential and contextual, were used to train separate groups of college campus community members (n=35 and n=33) in a coordination task…
ERIC Educational Resources Information Center
Hiebert, Elfrieda H.
2011-01-01
The "Common Core State Standards/English Language Arts" include the ability to become increasingly more capable with complex text over the school career, redirecting attention to the measurement of text complexity. While suggesting multiple criteria, the Standards offer a single measure of text complexity--Lexiles. Additional…
Effective normalization of complexity measurements for epoch length and sampling frequency.
Rapp, P E; Cellucci, C J; Korslund, K E; Watanabe, T A; Jiménez-Montaño, M A
2001-07-01
The algorithmic complexity of a symbol sequence is sensitive to the length of the message. Additionally, in those cases where the sequence is constructed by the symbolic reduction of an experimentally observed wave form, the corresponding value of algorithmic complexity is also sensitive to the sampling frequency. In this contribution, we present definitions of algorithmic redundancy that are sequence-sensitive generalizations of Shannon's original definition of information redundancy. In contrast with algorithmic complexity, we demonstrate that algorithmic redundancy is not sensitive to message length or to observation scale (sampling frequency) when stationary systems are examined.
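Shannon's original notion of information redundancy, which the abstract's algorithmic redundancy generalizes, is easy to state concretely: R = 1 - H/H_max, where H is the observed entropy of the symbol distribution and H_max is the entropy of a uniform source over the same alphabet. A minimal sketch (the function name is illustrative; this computes the classical Shannon redundancy, not the authors' sequence-sensitive generalization):

```python
import math
from collections import Counter

def shannon_redundancy(symbols):
    """Shannon's information redundancy R = 1 - H / H_max.

    H is the empirical entropy of the symbol distribution (bits);
    H_max = log2(alphabet size) is the maximum attainable entropy.
    R = 0 for a uniform source and approaches 1 as the sequence
    becomes fully predictable.
    """
    counts = Counter(symbols)
    n = len(symbols)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts))
    return 1.0 - h / h_max if h_max > 0 else 1.0
```

A perfectly alternating string like "abababab" uses both symbols equally often, so its first-order redundancy is 0; a heavily skewed string such as "aaab" has positive redundancy.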
NASA Astrophysics Data System (ADS)
Poltera, Yann; Martucci, Giovanni; Hervo, Maxime; Haefele, Alexander; Emmenegger, Lukas; Brunner, Dominik; Henne, Stephan
2016-04-01
We have developed, applied and validated a novel algorithm called PathfinderTURB for the automatic, real-time detection of the vertical structure of the planetary boundary layer. The algorithm has been applied to a year of data measured by the automatic lidar CHM15K at two sites in Switzerland: the rural site of Payerne (MeteoSwiss station, 491 m asl) and the alpine site of Kleine Scheidegg (KSE, 2061 m asl). PathfinderTURB is a gradient-based layer detection algorithm, which in addition makes use of atmospheric variability to detect the turbulent transition zone that separates two low-turbulence regions: one below characterized by homogeneous mixing (convective layer) and one above characterized by free-tropospheric conditions. The PathfinderTURB retrieval of the vertical structure of the local (5-10 km horizontal scale) convective boundary layer (LCBL) has been validated at Payerne against two established reference methods. The first reference consists of four independent human-expert manual detections of the LCBL height over the year 2014. The second reference consists of LCBL heights calculated with the bulk Richardson number method from co-located radiosounding data for the same year. Based on the excellent agreement with the two reference methods at Payerne, we applied PathfinderTURB to the complex-terrain conditions at KSE during 2014. The LCBL height retrievals are obtained by tilting the CHM15K at an angle of 19 degrees with respect to the horizontal, aiming directly at the Sphinx Observatory (3580 m asl) on the Jungfraujoch. This setup of the CHM15K, combined with the PathfinderTURB processing of the data, makes it possible to disentangle the long-range-transport origin from the local origin of gases and particles measured by the in-situ instrumentation at the Sphinx Observatory. The KSE measurements showed that the relation amongst the LCBL height, the aerosol layers above the LCBL top and the gas + particle concentration is all but
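The gradient criterion that layer-detection algorithms of this family build on can be sketched very simply: the candidate layer top is the height of the strongest negative vertical gradient in the backscatter profile, since aerosol load (and hence backscatter) drops sharply at the top of the mixed layer. This is a bare-bones illustration of the gradient idea only, not PathfinderTURB itself, which additionally exploits atmospheric variability; the function name is an assumption.

```python
def gradient_layer_height(backscatter, heights):
    """Return the height (midpoint of the bracketing levels) of the
    strongest negative vertical gradient in a lidar backscatter
    profile -- the classic gradient criterion for the layer top."""
    best_i, best_g = 0, 0.0
    for i in range(len(backscatter) - 1):
        g = (backscatter[i + 1] - backscatter[i]) / (heights[i + 1] - heights[i])
        if g < best_g:          # keep the most negative gradient
            best_i, best_g = i, g
    return 0.5 * (heights[best_i] + heights[best_i + 1])
```

For a profile that is flat near the ground and drops sharply between 200 m and 300 m, the retrieved layer top is the 250 m midpoint of that drop.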
Global Complexity: Information, Chaos, and Control at ASIS 1996 Annual Meeting.
ERIC Educational Resources Information Center
Jacob, M. E. L.
1996-01-01
Discusses proceedings of the 1996 ASIS (American Society for Information Science) annual meeting in Baltimore (Maryland), including chaos theory; electronic universities; distance education; intellectual property, including information privacy on the Internet; the need for leadership in libraries and information centers; information warfare and…
ERIC Educational Resources Information Center
Doskey, Steven Craig
2014-01-01
This research presents an innovative means of gauging Systems Engineering effectiveness through a Systems Engineering Relative Effectiveness Index (SE REI) model. The SE REI model uses a Bayesian Belief Network to map causal relationships in government acquisitions of Complex Information Systems (CIS), enabling practitioners to identify and…
ERIC Educational Resources Information Center
Flood, Bernadette; Henman, Martin C.
2015-01-01
People with intellectual disabilities may be "invisible" to pharmacists. They are a complex group of patients many of whom have diabetes. Pharmacists may have little experience of the challenges faced by this high risk group of patients who may be prescribed high risk medications. This case report details information supplied by Pat, a…
Phillips, Andrew B.; Merrill, Jacqueline
2012-01-01
Many complex markets such as banking and manufacturing have benefited significantly from technology adoption. Each of these complex markets experienced increased efficiency, quality, security, and customer involvement as a result of technology transformation in their industry. Healthcare has not benefited to the same extent. We provide initial findings from a policy analysis of complex markets and the features of these transformations that can influence health technology adoption and acceptance. PMID:24199112
NASA Astrophysics Data System (ADS)
Soriano, Miguel C.; Zunino, Luciano; Rosso, Osvaldo A.; Mirasso, Claudio R.
2010-04-01
The time evolution of the output of a semiconductor laser subject to optical feedback can exhibit high-dimensional chaotic fluctuations. In this contribution, our aim is to quantify the complexity of the chaotic time-trace generated by a semiconductor laser subject to delayed optical feedback. To that end, we discuss the properties of two recently introduced complexity measures based on information theory, namely the permutation entropy (PE) and the statistical complexity measure (SCM). The PE and SCM are defined as a functional of a symbolic probability distribution, evaluated using the Bandt-Pompe recipe to assign a probability distribution function to the time series generated by the chaotic system. In order to evaluate the performance of these novel complexity quantifiers, we compare them to a more standard chaos quantifier, namely the Kolmogorov-Sinai entropy. Here, we present numerical results showing that the statistical complexity and the permutation entropy, evaluated at the different time-scales involved in the chaotic regime of the laser subject to optical feedback, give valuable information about the complexity of the laser dynamics.
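The Bandt-Pompe recipe mentioned above is straightforward to implement: each length-D window of the time series is mapped to the ordinal pattern given by the ranks of its values, and the permutation entropy is the Shannon entropy of the resulting pattern distribution, normalized by log(D!). A minimal sketch (function and parameter names are illustrative):

```python
import math

def permutation_entropy(series, order=3, delay=1):
    """Normalized permutation entropy via Bandt-Pompe symbolization.

    Each window of `order` samples (spaced by `delay`) is mapped to
    its ordinal pattern; the entropy of the pattern distribution is
    normalized by log(order!) so the result lies in [0, 1].
    """
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = [series[i + j * delay] for j in range(order)]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))
```

A strictly monotonic series produces a single ordinal pattern and hence zero permutation entropy, while an irregular series spreads probability over many patterns and approaches 1; the `delay` parameter lets the quantifier probe the different time scales of the laser dynamics.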
ERIC Educational Resources Information Center
Hiebert, Elfrieda H.
2011-01-01
A focus of the Common Core State Standards/English Language Arts (CCSS/ELA) is that students become increasingly more capable with complex text over their school careers. This focus has redirected attention to the measurement of text complexity. Although CCSS/ELA suggests multiple criteria for this task, the standards offer a single measure of…
ERIC Educational Resources Information Center
Gourd, William
Confined to the interaction of complexity/simplicity of the stimulus play, this paper both focuses on the differing patterns of response between cognitively complex and cognitively simple persons to the characters in "The Homecoming" and "Private Lives" and attempts to determine the responses to specific characters or groups of characters. The…
ERIC Educational Resources Information Center
Kuhn, John R., Jr.
2009-01-01
Drawing upon the theories of complexity and complex adaptive systems and the Singerian Inquiring System from C. West Churchman's seminal work "The Design of Inquiring Systems" the dissertation herein develops a systems design theory for continuous auditing systems. The dissertation consists of discussion of the two foundational theories,…
ERIC Educational Resources Information Center
Hughes, Hilary
2013-01-01
This paper reports the findings of a qualitative study that investigated 25 international students' use of online information resources for study purposes at two Australian universities. Using an expanded critical incident approach, the study viewed international students through an information literacy lens, as information-using learners.…