Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
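Algorithmic complexity itself is uncomputable, but compressed length gives a practical upper bound, and a distance between binary strings in this spirit can be approximated by the normalized compression distance. A minimal sketch, assuming zlib as the compressor (the compressor choice and the NCD formula are illustrative stand-ins, not part of the original work):

```python
import zlib

def K(s: bytes) -> int:
    """Computable proxy for algorithmic complexity: compressed length."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a computable stand-in for an
    information metric between binary strings."""
    kx, ky, kxy = K(x), K(y), K(x + y)
    return (kxy - min(kx, ky)) / max(kx, ky)

patterned = b"01" * 128           # highly regular, low algorithmic complexity
unrelated = bytes(range(256))      # shares little structure with `patterned`
print(ncd(patterned, patterned))   # near 0: identical strings share everything
print(ncd(patterned, unrelated))   # closer to 1: little mutual information
```

Strings with shared structure compress well together, driving the distance toward zero; unrelated strings do not.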
Algorithmic complexity of a protein
NASA Astrophysics Data System (ADS)
Dewey, T. Gregory
1996-07-01
The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
Cluster algorithms and computational complexity
NASA Astrophysics Data System (ADS)
Li, Xuenan
Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero, making it much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted with standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex, and an algorithm with polylog parallel running time is found. The growth of the network with super-linear kernels (gamma >= 2) can be realized by a randomized parallel algorithm with polylog expected running time.
Communication complexity and information complexity
NASA Astrophysics Data System (ADS)
Pankratov, Denis
Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it more exact, but also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information
Algorithmic complexity of thermal ratchet motion
NASA Astrophysics Data System (ADS)
Sanchez, J. R.; Family, F.; Arizmendi, C. M.
1998-12-01
Thermal ratchets are Brownian models where time-correlated fluctuations coming from a nonequilibrium bath interacting with a spatial asymmetry are sufficient conditions to give rise to transport. The nonequilibrium bath acts as a source of negentropy (physical information). In order to quantitate the transfer of information that occurs in thermal ratchet motion, the Kolmogorov information entropy or algorithmic complexity is investigated. The complexity is measured in terms of the average number of bits per time unit necessary to specify the sequence generated by the system.
Luo, Jiawei; Wu, Juan
2015-01-01
Essential proteins provide valuable information for the development of biology and medical research at the system level. The accuracy of methods based only on topological centrality is deeply affected by noise in the network; therefore, exploring efficient methods for identifying essential proteins would be of great value. Using biological features to identify essential proteins is effective in reducing the noise in the PPI network. In this paper, based on the observation that essential proteins evolve slowly and play a central role within a network, a new algorithm, named CED, is proposed. CED mainly employs gene expression level, protein complex information, and the edge clustering coefficient to predict essential proteins. The performance of CED is validated on the yeast Protein-Protein Interaction (PPI) networks obtained from the DIP and BioGRID databases. The prediction accuracy of CED outperforms seven other algorithms when applied to the two databases. PMID:26510286
A novel complex valued cuckoo search algorithm.
Zhou, Yongquan; Zheng, Hongqing
2013-01-01
To enrich the information carried by nest individuals, the idea of complex-valued encoding is introduced into cuckoo search (PCS): each gene of an individual is represented by a complex number, so a diploid swarm is formed by a sequence of complex values. The values of the objective function's independent variables are determined by the moduli, and their signs by the angles. The position of a nest is divided into two parts, namely a real-part gene and an imaginary-part gene. The updating relations for the complex-valued swarm are presented. Six typical functions are tested, and the results are compared with cuckoo search based on real-valued encoding, verifying the usefulness of the proposed algorithm. PMID:23766699
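The modulus/angle decoding described in the abstract can be illustrated with a short sketch. The mapping below is a hypothetical reading of the scheme (the clipping rule and the half-plane sign convention are my assumptions, not the authors' exact equations):

```python
import cmath

def decode(gene: complex, lo: float, hi: float) -> float:
    """Map a complex-valued gene to a real decision variable:
    the modulus fixes the magnitude and the angle fixes the sign
    (illustrative convention: right half-plane -> positive)."""
    magnitude = abs(gene)                    # modulus -> |value|
    sign = 1.0 if -cmath.pi / 2 <= cmath.phase(gene) <= cmath.pi / 2 else -1.0
    bound = max(abs(lo), abs(hi))            # clip into the search interval
    return sign * min(magnitude, bound)

print(decode(complex(3.0, 1.0), -5.0, 5.0))   # positive: angle in right half-plane
print(decode(complex(-3.0, 1.0), -5.0, 5.0))  # negative: angle in left half-plane
```

Separating magnitude and sign this way lets the real and imaginary parts of each gene evolve independently, which is the stated motivation for the diploid swarm.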
Systolic systems: algorithms and complexity
Chang, J.H.
1986-01-01
This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.
Predicting complex mineral structures using genetic algorithms.
Mohn, Chris E; Kob, Walter
2015-10-28
We show that symmetry-adapted genetic algorithms are capable of finding the ground state of a range of complex crystalline phases including layered- and incommensurate super-structures. This opens the way for the atomistic prediction of complex crystal structures of functional materials and mineral phases. PMID:26441052
Pinning impulsive control algorithms for complex network
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-15
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is shown that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
Unifying Complexity and Information
NASA Astrophysics Data System (ADS)
Ke, Da-Guan
2013-04-01
Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally accepted complexity measure playing a fundamental role like the Shannon entropy H in statistical mechanics. Superficially conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and the adaptability remain unclear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of the H for random processes, clearly redefining the boundary of different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately resolves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics, and real-world living organisms.
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Information Complexity and Biology
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Bignone, Franco A.; Cecconi, Fabio; Politi, Antonio
Kolmogorov contributed directly to Biology in essentially three problems: the analysis of population dynamics (Lotka-Volterra equations), the reaction-diffusion formulation of gene spreading (FKPP equation), and some discussions about Mendel's laws. However, the widely recognized importance of his contribution arises from his work on algorithmic complexity. In fact, the limited direct intervention in Biology reflects the generally slow growth of interest of mathematicians in biological issues. From the early work of Vito Volterra on species competition, through the slow growth of dynamical systems theory, to contributions to the study of matter and the physiology of the nervous system, the first 50-60 years witnessed important contributions, but as scattered, apparently uncorrelated pieces, often in branches far from Biology. Up to the 1940s it is hard to see the initial loose build-up of a convergence of those theories that would become mainstream research by the end of the century, connected by the study of biological systems per se.
Information communication on complex networks
NASA Astrophysics Data System (ADS)
Igarashi, Akito; Kawamoto, Hiroki; Maruyama, Takahiro; Morioka, Atsushi; Naganuma, Yuki
2013-02-01
Since communication networks such as the Internet, which is regarded as a complex network, have recently grown to a huge scale and a lot of data pass through them, the improvement of packet routing strategies for transport is one of the most significant themes in the study of computer networks. It is especially important to find routing strategies which can carry as much traffic as possible without congestion in complex networks. First, using neural networks, we introduce a strategy for packet routing on complex networks, where path lengths and queue lengths in nodes are taken into account within a framework of statistical physics. Secondly, instead of using shortest paths, we propose efficient paths which avoid hubs, i.e. nodes with very high degree, on scale-free networks with a weight on each node. We improve the heuristic algorithm proposed by Danila et al., which optimizes routing properties step by step using the information of betweenness, the probability of paths passing through a node over all optimal paths defined according to a rule, and mitigates congestion. We confirm that the new heuristic algorithm balances traffic on networks by minimizing the maximum betweenness in a much smaller number of iteration steps. Finally, we model virus spreading and data transfer on peer-to-peer (P2P) networks. Using mean-field approximation, we obtain an analytical formulation, emulate virus spreading on the network, and compare the results with those of simulation. Moreover, we investigate the mitigation of information traffic congestion in P2P networks.
Advanced Algorithms for Local Routing Strategy on Complex Networks.
Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K; Dong, Chuanfei; Miao, Lixin; Wang, Binghong
2016-01-01
Despite the significant improvement in network performance provided by global routing strategies, their application is still limited to small-scale networks, owing to the need to acquire global information of the network, which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm, and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than tenfold and the average transmission time 〈T〉 decreases by 70-90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502
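A local routing step of this flavor can be sketched as follows. This is a simplified rule combining two of the paper's ideas (avoid revisiting nodes, prefer less-congested neighbors) on a toy graph with hypothetical queue lengths; the paper's three algorithms differ in their exact rules:

```python
import random

def local_route(adj, queues, src, dst, max_hops=50):
    """Route one packet using only local information: the current node's
    neighbor list and the neighbors' queue lengths. Avoid revisiting
    nodes when possible; among candidates, pick the shortest queue."""
    path, visited, node = [src], {src}, src
    for _ in range(max_hops):
        if node == dst:
            return path
        if dst in adj[node]:                       # deliver if dst is adjacent
            path.append(dst)
            return path
        fresh = [n for n in adj[node] if n not in visited] or adj[node]
        node = min(fresh, key=lambda n: (queues[n], random.random()))
        visited.add(node)
        path.append(node)
    return None                                    # hop limit hit: congestion loss

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
queues = {0: 0, 1: 5, 2: 0, 3: 0}
print(local_route(adj, queues, 0, 3))   # routes via idle node 2, not busy node 1
```

Only neighbor lists and neighbor queue lengths are consulted, which is what makes such strategies cheap on large networks.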
Query-answering algorithms for information agents
Levy, A.Y.; Rajaraman, A.; Ordille, J.J.
1996-12-31
We describe the architecture and query-answering algorithms used in the Information Manifold, an implemented information gathering system that provides uniform access to structured information sources on the World-Wide Web. Our architecture provides an expressive language for describing information sources, which makes it easy to add new sources and to model the fine-grained distinctions between their contents. The query-answering algorithm guarantees that the descriptions of the sources are exploited to access only sources that are relevant to a given query. Accessing only relevant sources is crucial to scale up such a system to large numbers of sources. In addition, our algorithm can exploit run-time information to further prune information sources and to reduce the cost of query planning.
A fast DFT algorithm using complex integer transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
Winograd's algorithm for computing the discrete Fourier transform is extended considerably for certain large transform lengths. This is accomplished by performing the cyclic convolution, required by Winograd's method, by a fast transform over certain complex integer fields. This algorithm requires fewer multiplications than either the standard fast Fourier transform or Winograd's more conventional algorithms.
Algorithms For Segmentation Of Complex-Amplitude SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Chellappa, Ramalingam
1993-01-01
Several algorithms implement an improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions within each of which backscattering characteristics are similar or homogeneous from place to place. Method provides approximate, deterministic solution by two alternative algorithms almost always converging to local minimums: one, the Iterative Conditional Modes (ICM) algorithm, locally maximizes the posterior probability density of region labels; the other, the Maximum Posterior Marginal (MPM) algorithm, maximizes the posterior marginal density of region labels at each pixel location. ICM algorithm optimizes reconstruction of underlying scene. MPM algorithm minimizes expected number of misclassified pixels, possibly better in remote sensing of natural scenes.
Complexity of the Quantum Adiabatic Algorithm
NASA Technical Reports Server (NTRS)
Hen, Itay
2013-01-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.
Accessing complexity from genome information
NASA Astrophysics Data System (ADS)
Tenreiro Machado, J. A.
2012-06-01
This paper studies the information content of the chromosomes of 24 species. In the first phase, a scheme inspired by dynamical-system state-space representation is developed. For each chromosome the state-space dynamical evolution is projected onto a two-dimensional chart. The plots are then analyzed and characterized from the perspective of fractal dimension. This information is integrated into two measures of the species' complexity addressing its average and variability. The results are in close accordance with phylogenetics, pointing to quantitative aspects of the species' genomic complexity.
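Fractal dimension of such two-dimensional charts is commonly estimated by box counting. A minimal sketch (the point set, grid scales, and least-squares fit are illustrative choices, not the paper's exact procedure):

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D point set in the unit
    square: count occupied boxes N(s) at several grid scales s and fit
    log N(s) against log s by least squares. The slope is the estimate."""
    xs, ys = [], []
    for s in scales:
        size = 1.0 / s
        boxes = {(int(x / size), int(y / size)) for x, y in points}
        xs.append(math.log(s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return slope

# Sanity check: a filled unit square sampled on a fine grid is 2-dimensional.
grid = [(i / 64, j / 64) for i in range(64) for j in range(64)]
print(box_counting_dimension(grid))   # close to 2.0
```

A genuinely fractal chart would yield a non-integer slope between 1 and 2.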
Improved motion information-based infrared dim target tracking algorithms
NASA Astrophysics Data System (ADS)
Lei, Liu; Zhijian, Huang
2014-11-01
Accurate and fast tracking of infrared (IR) dim targets is very important for infrared precise guidance, early warning, video surveillance, etc. However, under complex backgrounds, such as clutter, varying illumination, and occlusion, traditional tracking methods often converge to a local maximum and lose the real infrared target. To cope with these problems, three improved tracking algorithms based on motion information are proposed in this paper, namely an improved mean-shift algorithm, an improved optical-flow method, and an improved particle-filter method. The basic principles and the implementation procedures of these modified algorithms for target tracking are described. Using these algorithms, experiments on some real-life IR and color images are performed. The whole implementation process and results are analyzed, and the algorithms are evaluated both subjectively and objectively. The results prove that the proposed methods have satisfactory tracking effectiveness and robustness. Meanwhile, they have high tracking efficiency and can be used for real-time tracking.
Entropy, complexity, and spatial information
NASA Astrophysics Data System (ADS)
Batty, Michael; Morphet, Robin; Masucci, Paolo; Stanilov, Kiril
2014-10-01
We pose the central problem of defining a measure of complexity, specifically for spatial systems in general, city systems in particular. The measures we adopt are based on Shannon's (in Bell Syst Tech J 27:379-423, 623-656, 1948) definition of information. We introduce this measure and argue that increasing information is equivalent to increasing complexity, and we show that for spatial distributions, this involves a trade-off between the density of the distribution and the number of events that characterize it; as cities get bigger and are characterized by more events (more places or locations), information increases, all other things being equal. But sometimes the distribution changes at a faster rate than the number of events, and thus information can decrease even if a city grows. We develop these ideas using various information measures. We first demonstrate their applicability to various distributions of population in London over the last 100 years, then to a wider region of London divided into bands of zones at increasing distances from the core, and finally to the evolution of the street system that characterizes the built-up area of London from 1786 to the present day. We conclude by arguing that we need to relate these measures to other measures of complexity, to choose a wider array of examples, and to extend the analysis to two-dimensional spatial systems.
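The trade-off described above can be made concrete with Shannon's measure applied to zone counts. A minimal sketch (the zone counts are hypothetical, chosen only to show how concentration reduces information):

```python
import math

def shannon_entropy(counts):
    """Shannon information H = -sum p_i log2 p_i of a spatial
    distribution given as event counts per zone."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

even   = [100, 100, 100, 100]   # population spread evenly over 4 zones
skewed = [370, 10, 10, 10]      # same total, concentrated in one zone

print(shannon_entropy(even))    # log2(4) = 2.0 bits, the maximum for 4 zones
print(shannon_entropy(skewed))  # lower: concentration reduces information
```

With the number of events fixed, flattening the distribution raises H; conversely, a city can grow while H falls if growth concentrates into few zones, which is the paper's point.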
A Simple Quality Triangulation Algorithm for Complex Geometries
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper presents a new and simple algorithm for quality triangulation in complex geometries. The proposed algorithm is based on an initial equilateral triangle mesh covering the whole domain. The mesh nodes close to the boundary edges satisfy the so-called non-encroaching criterion: the distance ...
Evader Interdiction: Algorithms, Complexity and Collateral Damage
Johnson, Matthew P.; Gutfraind, Alexander; Ahmadizadeh, Kiyan
2013-01-01
In network interdiction problems, evaders (e.g., hostile agents or data packets) are moving through a network toward targets and we wish to choose locations for sensors in order to intercept the evaders. The evaders might follow deterministic routes or Markov chains, or they may be reactive, i.e., able to change their routes in order to avoid the sensors. The challenge in such problems is to choose sensor locations economically, balancing interdiction gains with costs, including the inconvenience sensors inflict upon innocent travelers. We study the objectives of (1) maximizing the number of evaders captured when limited by a budget on sensing cost and, (2) capturing all evaders as cheaply as possible. We give algorithms for optimal sensor placement in several classes of special graphs and hardness and approximation results for general graphs, including evaders who are deterministic, Markov chain-based, reactive and unreactive. A similar-sounding but fundamentally different problem setting was posed by Glazer and Rubinstein where both evaders and innocent travelers are reactive. We again give optimal algorithms for special cases and hardness and approximation results on general graphs. PMID:25332514
Algorithmic complexity of real financial markets
NASA Astrophysics Data System (ADS)
Mansilla, R.
2001-12-01
A new approach to understanding the complex behavior of financial market indices using tools from thermodynamics and statistical physics is developed. Physical complexity, a quantity rooted in Kolmogorov-Chaitin theory, is applied to binary sequences built from real time series of financial market indices. The study is based on NASDAQ and Mexican IPC data. Different behaviors of this quantity are shown when it is applied to intervals of the series preceding crashes and to intervals in which no financial turbulence is observed. The connection between our results and the efficient market hypothesis is discussed.
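True Kolmogorov-Chaitin complexity is uncomputable, so practical studies fall back on proxies; a common stand-in (only an upper bound, and not the paper's physical-complexity estimator) is compressed size relative to raw size:

```python
import random
import zlib

def compress_ratio(bits: str) -> float:
    """Upper-bound proxy for algorithmic complexity: compressed size over
    raw size. A ratio near 1.0 means nearly incompressible, i.e. the
    sequence looks random; a low ratio means the sequence is regular."""
    data = bits.encode()
    return len(zlib.compress(data, 9)) / len(data)

regular = "01" * 500                     # periodic: low complexity
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))
print(compress_ratio(regular) < compress_ratio(noisy))   # True
```

The same idea, applied to binary sequences derived from index returns, lets one compare the apparent randomness of pre-crash and calm intervals.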
Resampling Algorithms for Particle Filters: A Computational Complexity Perspective
NASA Astrophysics Data System (ADS)
Bolić, Miodrag; Djurić, Petar M.; Hong, Sangjin
2004-12-01
Newly developed resampling algorithms for particle filters suitable for real-time implementation are described and their analysis is presented. The new algorithms reduce the complexity of both hardware and DSP realization by addressing common issues such as reducing the number of operations and memory accesses. Moreover, the algorithms allow for the use of higher sampling frequencies by overlapping the resampling step in time with the other particle filtering steps. Since resampling is not dependent on any particular application, the analysis is appropriate for all types of particle filters that use resampling. The performance of the algorithms is evaluated on particle filters applied to bearings-only tracking and to joint detection and estimation in wireless communications. We demonstrate that the proposed algorithms reduce the complexity without performance degradation.
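Systematic resampling is one of the standard schemes such hardware-oriented work builds on; a minimal O(N) sketch (the generic textbook form, not the authors' variants):

```python
import random

def systematic_resample(weights):
    """O(N) systematic resampling: a single uniform draw yields N evenly
    spaced thresholds, so the cumulative-weight scan is sequential and
    branch-light, which is what makes it attractive for hardware."""
    n = len(weights)
    u0 = random.random() / n
    cumsum, s = [], 0.0
    for w in weights:               # normalized weights assumed
        s += w
        cumsum.append(s)
    indexes, i = [], 0
    for k in range(n):
        u = u0 + k / n
        while i < n - 1 and cumsum[i] < u:
            i += 1
        indexes.append(i)
    return indexes

# a particle carrying all the weight is replicated N times
print(systematic_resample([0.0, 1.0, 0.0]))   # [1, 1, 1]
```

Because only one random number is drawn per resampling step, the loop body is deterministic, which simplifies pipelining against the other filter stages.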
Adaptive clustering algorithm for community detection in complex networks.
Ye, Zhenqing; Hu, Songnian; Yu, Jun
2008-10-01
Community structure is common in various real-world networks; methods and algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce an adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior, where vertices always travel toward their preferred neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process in which vertices constantly regroup. In addition, we show through intensive evaluation that our algorithm is advantageous over other competing methods (e.g., the Newman fast algorithm). Applications to three real-world networks demonstrate the ability of our algorithm to find communities that correspond to the actual organization observed in reality. PMID:18999501
Algorithm and program for information processing with the Filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and attempts to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared through tests with that of classical intelligent algorithms. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve effective fusion of information. PMID:23956699
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
Information Horizons in Complex Networks
NASA Astrophysics Data System (ADS)
Sneppen, Kim
2005-03-01
We investigate how structure constrains specific communication in social, man-made, and biological networks. We find that human networks of governance and collaboration are predictable on a tête-à-tête level, reflecting well-defined pathways, but globally inefficient (1). In contrast, the Internet tends to have better overall communication abilities and more alternative pathways, and is therefore more robust. Between these extremes are the molecular networks of living organisms. Further, for most real-world networks we find that communication ability is favored by topology at small distances but disfavored at larger distances (2,3,4). We discuss the topological implications in terms of modularity and the positioning of hubs in the networks (5,6). Finally, we introduce some simple models which demonstrate how communication may shape the structure of, in particular, man-made networks (7,8). 1) K. Sneppen, A. Trusina, M. Rosvall (2004). Hide and seek on complex networks [cond-mat/0407055]. 2) M. Rosvall, A. Trusina, P. Minnhagen and K. Sneppen (2004). Networks and Cities: An Information Perspective [cond-mat/0407054]. In PRL. 3) A. Trusina, M. Rosvall, K. Sneppen (2004). Information Horizons in Networks [cond-mat/0412064]. 4) M. Rosvall, P. Minnhagen, K. Sneppen (2004). Navigating Networks with Limited Information [cond-mat/0412051]. 5) S. Maslov and K. Sneppen (2002). Specificity and stability in topology of protein networks. Science 296, 910-913 [cond-mat/0205380]. 6) A. Trusina, S. Maslov, P. Minnhagen, K. Sneppen. Hierarchy Measures in Complex Networks. Phys. Rev. Lett. 92, 178702 [cond-mat/0308339]. 7) M. Rosvall and K. Sneppen (2003). Modeling Dynamics of Information Networks. Phys. Rev. Lett. 91, 178701 [cond-mat/0308399]. 8) B.-J. Kim, A. Trusina, P. Minnhagen, K. Sneppen (2003). Self Organized Scale-Free Networks from Merging and Regeneration. nlin.AO/0403006. In European Journal of Physics.
Petri net model for analysis of concurrently processed complex algorithms
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1986-01-01
This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
Distributed learning automata-based algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza
2016-03-01
Community structure is an important and universal topological property of many complex networks such as social and information networks. The detection of the communities of a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updating of the action probabilities of each automaton, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max-cut and coverage.
Rate control algorithm based on frame complexity estimation for MVC
NASA Astrophysics Data System (ADS)
Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang
2010-07-01
Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit rate among views based on correlation analysis. The proposed algorithm consists of four levels to control the bit rate more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to coding parameters.
Biclustering Protein Complex Interactions with a Biclique Finding Algorithm
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from the L_1 constraint to the L_p constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
NASA Astrophysics Data System (ADS)
Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng
2016-02-01
The complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in a massive computational load. To mitigate this problem, in this work we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm are demonstrated using both flow phantom and live animal experiments.
A New Algorithm to Optimize Maximal Information Coefficient.
Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical power of MIC calculated by ChiMIC is higher than that of ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by MIC values from ChiMIC. PMID:27333001
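The quantity being optimized is grid-based mutual information normalized as in MIC; a toy sketch for one fixed grid (the grid search and ChiMIC's chi-square stopping rule are omitted, and the edge placement here is hypothetical):

```python
import math
from collections import Counter

def grid_mic_score(xs, ys, x_edges, y_edges):
    """Mutual information of a fixed 2-D grid partition, normalized
    MIC-style by log(min(#rows, #cols)) so a noiseless functional
    relationship can reach 1.0."""
    def bin_of(v, edges):
        return sum(v > e for e in edges)
    n = len(xs)
    cells = Counter((bin_of(x, x_edges), bin_of(y, y_edges))
                    for x, y in zip(xs, ys))
    px, py = Counter(), Counter()
    for (i, j), c in cells.items():
        px[i] += c
        py[j] += c
    mi = sum(c / n * math.log(c * n / (px[i] * py[j]))
             for (i, j), c in cells.items())
    return mi / math.log(min(len(x_edges) + 1, len(y_edges) + 1))

# a noiseless monotone relationship saturates the score at 1.0
print(grid_mic_score([0, 1, 2, 3], [0, 2, 4, 6], [1.5], [3.0]))  # 1.0
```

MIC itself maximizes this score over all grids up to a size limit; ChiMIC replaces that fixed limit with a statistical termination test.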
A novel hybrid color image encryption algorithm using two complex chaotic systems
NASA Astrophysics Data System (ADS)
Wang, Leyuan; Song, Hongjun; Liu, Ping
2016-02-01
Based on the complex Chen and complex Lorenz systems, a novel color image encryption algorithm is proposed. Compared with real chaotic systems, the larger chaotic ranges and more complex behaviors of complex chaotic systems can additionally enhance the security and enlarge the key space of color image encryption. The encryption algorithm comprises three steps. In the permutation process, the pixels of the plain image are scrambled via two-dimensional and one-dimensional permutation processes among the RGB channels individually. In the diffusion process, the exclusive-or (XOR for short) operation is employed to conceal pixel information. Finally, the RGB channels are mixed to achieve a multilevel encryption. The security analysis and experimental simulations demonstrate that the key space of the proposed algorithm is large enough to resist brute-force attack and that the algorithm has excellent encryption performance.
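The XOR diffusion step is an involution, so the same operation decrypts; a minimal sketch using a logistic-map keystream as a stand-in for the paper's complex Chen/Lorenz trajectories (the map and its parameters are illustrative assumptions):

```python
def xor_diffuse(pixels, x0=0.7, r=3.99):
    """Diffusion by XOR with a chaotic keystream. A logistic map is used
    here as a stand-in; the paper derives its keystream from complex
    Chen/Lorenz systems. XOR is an involution, so the same call decrypts."""
    out, x = [], x0
    for p in pixels:
        x = r * x * (1 - x)                    # iterate the chaotic map
        out.append(p ^ (int(x * 1e6) % 256))   # quantize state to a key byte
    return out

plain = [12, 200, 7, 99]
cipher = xor_diffuse(plain)
print(xor_diffuse(cipher) == plain)   # True
```

The key space comes from the sensitivity of the chaotic trajectory to the initial condition and parameters, which act as the secret key.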
Data bank homology search algorithm with linear computation complexity.
Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A
1994-06-01
A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local region homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require k-tuple coordinate tabulation or in-memory placement for database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and has been tested on the PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given. PMID:7922689
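The indicative-matrix idea can be sketched with a membership set of query k-tuples; this toy version (an assumed k and scoring rule, not the paper's exact windowed scheme) keeps both the build and the database scan linear:

```python
def ktuple_set(seq, k=4):
    """Indicative set of all k-tuples present in the query; O(len(seq))."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_ktuples(query, subject, k=4):
    """Count subject k-tuples also present in the query. Once the query
    set exists, this is linear in the subject length, so scanning a whole
    database is linear in its total length."""
    qset = ktuple_set(query, k)
    return sum(1 for i in range(len(subject) - k + 1)
               if subject[i:i + k] in qset)

print(shared_ktuples("ACGTACGT", "ACGTACGT"))   # 5
print(shared_ktuples("AAAAAAAA", "CCCCCCCC"))   # 0
```

A local homologous region produces a run of matching k-tuples within a narrow range of positions, which is the signal the full algorithm thresholds.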
Exploring a new best information algorithm for Iliad.
Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.
1991-01-01
Iliad is a diagnostic expert system for internal medicine. One important feature that Iliad offers is the ability to analyze a particular patient case and to determine the most cost-effective method for pursuing the work-up. Iliad's current "best information" algorithm has not been previously validated and compared to other potential algorithms. Therefore, this paper presents a comparison of four new algorithms to the current algorithm. The basis for this comparison was eighteen "vignette" cases derived from real patient cases from the University of Utah Medical Center. The results indicated that the current algorithm can be significantly improved. More promising algorithms are suggested for future investigation. PMID:1807677
Supergravity, complex parameters and the Janis-Newman algorithm
NASA Astrophysics Data System (ADS)
Erbin, Harold; Heurtier, Lucien
2015-08-01
The Demiański-Janis-Newman (DJN) algorithm is an original solution generating technique. For a long time it was limited to producing rotating solutions, restricted to the case of a metric and real scalar fields, despite the fact that Demiański extended it to include more parameters such as a NUT charge. Recently, two independent prescriptions have been given for extending the algorithm to gauge fields and thus to electrically charged configurations. In this paper we aim to complete the algorithm by providing a missing but important piece: how the transformation is applied to complex scalar fields. We illustrate our proposal through several examples taken from N = 2 supergravity, including the stationary BPS solutions of Behrndt et al. and Sen's axion-dilaton rotating black hole. Moreover, we discuss solutions that include pairs of complex parameters, such as the mass and the NUT charge, or the electric and magnetic charges, and we explain how to perform the algorithm in this context (with the examples of the Kerr-Newman-Taub-NUT and dyonic Kerr-Newman black holes). The final formulation of the DJN algorithm can possibly handle solutions with five of the six Plebański-Demiański parameters along with any type of bosonic fields with spin less than two (exemplified with the stationary Israel-Wilson-Perjes solutions). This provides all the necessary tools for applications to general matter-coupled gravity and to (gauged) supergravity.
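For orientation, the core step of the original Janis-Newman trick (the standard textbook form taking Schwarzschild to Kerr, not the paper's full DJN extension to NUT, dyonic and complex-scalar sectors) is a complex shift of the null and radial coordinates together with a rule for transmuting functions of r:

```latex
% Standard Janis-Newman complexification (Schwarzschild -> Kerr example);
% the DJN extension applies analogous rules to gauge and complex scalar fields.
u \;\to\; u' = u - i a \cos\theta, \qquad
r \;\to\; r' = r + i a \cos\theta,
\qquad
\frac{2m}{r} \;\to\; \frac{2m\,\mathrm{Re}(r')}{|r'|^{2}}
            = \frac{2mr}{r^{2} + a^{2}\cos^{2}\theta}.
```

The nontrivial content of prescriptions like the one in this paper is precisely how fields other than the metric (gauge potentials, complex scalars) must transform under this shift so that the result still solves the equations of motion.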
Information dynamics algorithm for detecting communities in networks
NASA Astrophysics Data System (ADS)
Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro
2012-11-01
The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. making them domain-inspired. We have focused on a psychology- and social network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster algorithm (MCL) [4] by considering a network's nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, namely the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and fine tuning of the community detectability with respect to prior knowledge of the data. Finally, we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
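The van Dongen MCL method that the authors start from alternates expansion and inflation on a column-stochastic matrix; a vanilla sketch (without the paper's memory/oblivion extension):

```python
import numpy as np

def mcl_clusters(adj, inflation=2.0, iters=50):
    """Vanilla Markov Cluster (MCL): alternate expansion (squaring the
    column-stochastic matrix, i.e. spreading random-walk flow) with
    inflation (entrywise power, boosting strong edges), then read the
    clusters off the attractor rows of the limit matrix."""
    m = adj.astype(float) + np.eye(len(adj))   # self-loops aid convergence
    m /= m.sum(axis=0)                         # column-stochastic
    for _ in range(iters):
        m = m @ m                              # expansion
        m = m ** inflation                     # inflation
        m /= m.sum(axis=0)
    clusters = {frozenset(np.flatnonzero(row > 1e-6)) for row in m}
    return [sorted(c) for c in clusters if c]

# two triangles joined by a single edge separate into two communities
tri = np.zeros((6, 6), dtype=int)
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    tri[a, b] = tri[b, a] = 1
print(sorted(mcl_clusters(tri)))
```

The paper's modification intervenes in how each node updates its incoming flow, adding a memory term, but the expansion/inflation backbone is the same.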
Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding
Liu, Pengyu; Jia, Kebin
2013-01-01
A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the basis of the visual perception analysis. Firstly, the algorithm employs motion vectors (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the analysis of visual perception characteristics; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a video standard codec at medium-to-low bit rates or combined with other algorithms in fast video coding. PMID:24489495
Information filtering via weighted heat conduction algorithm
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
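The standard HC process that WHC reweights can be written as two averaging passes over the bipartite network; a minimal unweighted sketch (the edge weighting, which is the paper's contribution, is omitted):

```python
def heat_conduction(adj, target_user):
    """Unweighted HC on a user-object bipartite network (adj[u][o] = 1 if
    user u collected object o). Objects collected by the target user start
    at temperature 1; users then average the temperatures of their objects,
    and objects average the temperatures of their users, which yields the
    recommendation scores."""
    n_users, n_objs = len(adj), len(adj[0])
    obj_t = [float(adj[target_user][o]) for o in range(n_objs)]
    user_t = []
    for u in range(n_users):
        objs = [o for o in range(n_objs) if adj[u][o]]
        user_t.append(sum(obj_t[o] for o in objs) / len(objs) if objs else 0.0)
    scores = []
    for o in range(n_objs):
        users = [u for u in range(n_users) if adj[u][o]]
        scores.append(sum(user_t[u] for u in users) / len(users) if users else 0.0)
    return scores

# two users, three objects; user 0 collected objects 0 and 1
print(heat_conduction([[1, 1, 0], [0, 1, 1]], 0))   # [1.0, 0.75, 0.5]
```

WHC replaces the uniform averages with edge-weighted ones, which is what shifts the edge-weight distribution toward the Poisson form the authors observe in the optimal case.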
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
A Modified Tactile Brush Algorithm for Complex Touch Gestures
Ragan, Eric
2015-01-01
Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
Tsallis information dimension of complex networks
NASA Astrophysics Data System (ADS)
Zhang, Qi; Luo, Chuanhai; Li, Meizhu; Deng, Yong; Mahadevan, Sankaran
2015-02-01
The fractal and self-similarity properties are revealed in many complex networks. The information dimension is a useful method for describing the fractal and self-similarity properties of complex networks. In order to show the influence of different parts of a complex network on the information dimension, we propose a new information dimension based on the Tsallis entropy, namely the Tsallis information dimension. The proposed information dimension changes with the scale, which is described by the nonextensivity parameter q, and varies inversely with q. The existing information dimension is a special case of the Tsallis information dimension with q = 1. The Tsallis information dimension is thus a generalized information dimension for complex networks.
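The underlying entropy is Tsallis's one-parameter generalization; a minimal sketch showing that the limit q -> 1 recovers the Shannon entropy used by the existing information dimension (the box-covering step that turns entropies into a dimension is omitted):

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1);
    the limit q -> 1 gives the Shannon entropy -sum_i p_i ln p_i."""
    if abs(q - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

uniform = [0.25] * 4
print(tsallis_entropy(uniform, 2.0))                               # 0.75
print(abs(tsallis_entropy(uniform, 1.0) - math.log(4)) < 1e-12)    # True
```

In the information-dimension construction, probs would be the box-occupation probabilities of the network at a given covering scale.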
Forest Height Retrieval Algorithm Using a Complex Visibility Function Approach
NASA Astrophysics Data System (ADS)
Chu, T.; Zebker, H. A.
2011-12-01
Vegetation structure and biomass on the earth's terrestrial surface are critical parameters that influence the global carbon cycle, habitat, climate, and resources of economic value. Space-borne and air-borne remote sensing instruments are the most practical means of obtaining information such as tree height and biomass on a large scale. SAR (synthetic aperture radar), and especially InSAR (interferometric SAR), has been utilized in recent years to quantify vegetation parameters such as height and biomass. However, methods used to quantify global vegetation have yet to produce accurate results. It is the goal of this study to develop a signal-processing algorithm, through simulation, to determine vegetation heights that would lead to accurate height and biomass retrievals. A standard SAR image represents a projection of the 3D distributed backscatter onto a 2D plane. InSAR is capable of determining topography or the height of vegetation. Vegetation height is determined from the mean scattering phase center of all scatterers within a resolution cell. InSAR is capable of generating a 3D height surface, but the distribution of scatterers in height is under-determined and cannot be resolved by a single-baseline measurement. One interferogram is therefore insufficient to uniquely determine the vertical characteristics of even a simple 3D forest. An aperture synthesis technique in the height or vertical dimension would enable improved resolution capability to distinguish scatterers at different locations in the vertical dimension. Repeat-pass observations allow us to use differential interferometry to populate the frequency domain, from which we can use the Fourier transform relation to get to the brightness or backscatter domain. Ryle and Hewish first introduced this technique of aperture synthesis in the 1960s for large radio telescope arrays. This technique would allow us to focus the antenna beam pattern in the vertical direction and increase vertical resolving power. It enables us to
Information content of ozone retrieval algorithms
NASA Technical Reports Server (NTRS)
Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.
1989-01-01
The algorithms that were used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
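The per-step linear cost of the original method is easy to see in code. Below is a minimal sketch of Gillespie's direct SSA, not the constant-complexity table-based method of this paper; the birth-death model and all parameter values are illustrative:

```python
import random

def gillespie_ssa(x0, reactions, rates, t_max, seed=0):
    """Direct-method SSA sketch (after Gillespie). Each step sums all M
    propensities and scans them to pick a channel -- the O(M) work per
    step that the paper's table-based method reduces to O(1) for weakly
    coupled networks. `reactions` is a list of (propensity_fn, state_change)."""
    rng = random.Random(seed)
    t, x, path = 0.0, dict(x0), [(0.0, dict(x0))]
    while t < t_max:
        props = [k * fn(x) for (fn, _), k in zip(reactions, rates)]
        a0 = sum(props)                      # O(M) work per step
        if a0 == 0:
            break
        t += rng.expovariate(a0)             # exponential waiting time
        r, acc = rng.uniform(0, a0), 0.0
        for (fn, change), p in zip(reactions, props):   # O(M) channel scan
            acc += p
            if r <= acc:
                for species, delta in change.items():
                    x[species] += delta
                break
        path.append((t, dict(x)))
    return path

# Toy birth-death model: 0 -> A at rate 5, A -> 0 at rate 0.1 * #A
reactions = [(lambda x: 1.0, {"A": +1}), (lambda x: x["A"], {"A": -1})]
traj = gillespie_ssa({"A": 0}, reactions, rates=[5.0, 0.1], t_max=100.0)
```

The trajectory relaxes toward the steady-state mean of 50 molecules with Poisson-like fluctuations, illustrating the exact stochastic dynamics the SSA samples.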
A Generative Statistical Algorithm for Automatic Detection of Complex Postures
Amit, Yali; Biron, David
2015-01-01
This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a “big data” workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species. PMID:26439258
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2011-12-22
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
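At the microscopic level the model is a system of stochastic differential equations. As a hedged illustration of that level only, here is an Euler-Maruyama integration of a Hookean dumbbell, a bead-spring stand-in for the constrained bead-rod chain; the implicit constraint handling of the actual model is not shown, and all parameters are arbitrary:

```python
import math
import random

def hookean_dumbbell(n_steps=200000, dt=1e-3, k=1.0, kT=1.0, zeta=1.0, seed=0):
    """Euler-Maruyama sketch of a microscopic-level SDE: the connector
    length q of a two-bead Hookean dumbbell with drag zeta. Each bead
    feels independent thermal noise sqrt(2*kT/zeta), so the connector's
    noise variance is doubled. Returns the time-averaged <q^2>, which
    should approach the equilibrium value kT/k."""
    rng = random.Random(seed)
    q = 0.0
    second_moment = 0.0
    noise_amp = math.sqrt(4.0 * kT * dt / zeta)    # thermal kicks (2 beads)
    for _ in range(n_steps):
        drift = -(2.0 * k / zeta) * q              # spring relaxation
        q += drift * dt + noise_amp * rng.gauss(0.0, 1.0)
        second_moment += q * q
    return second_moment / n_steps

mean_sq = hookean_dumbbell()  # ~ kT/k = 1.0 at equilibrium
```

In the paper's formulation the rod length is rigidly constrained via Lagrange multipliers rather than a spring; this sketch only shows the stochastic time-stepping machinery.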
Current Algorithms for the Diagnosis of wide QRS Complex Tachycardias
Vereckei, András
2014-01-01
The differential diagnosis of a regular, monomorphic wide QRS complex tachycardia (WCT) mechanism represents a great diagnostic dilemma commonly encountered by the practicing physician, which has important implications for acute arrhythmia management, further work-up, prognosis and chronic management as well. This comprehensive review discusses the causes and differential diagnosis of WCT, and since the ECG remains the cornerstone of WCT differential diagnosis, focuses on the application and diagnostic value of different ECG criteria and algorithms in this setting and also provides a practical clinical approach to patients with WCTs. PMID:24827795
Binarization algorithm for document image with complex background
NASA Astrophysics Data System (ADS)
Miao, Shaojun; Lu, Tongwei; Min, Feng
2015-12-01
The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Because of complex backgrounds and varying illumination in text images, binarization is a difficult problem. This paper presents an improved binarization algorithm that proceeds in several steps. First, a background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of illumination and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to obtain an average threshold value, and edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is the most frequent distance in the histogram. Fifth, the window size is calculated from the final stroke width, and a local threshold estimation approach binarizes the image. Finally, small noise is removed with morphological operators. Experimental results show that the proposed method effectively removes the noise caused by complex backgrounds and varying illumination.
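The steps above can be sketched as follows. This is a simplified reconstruction, not the authors' code: it shows the polynomial background fit, contrast compensation, and a local threshold, while omitting the bilateral filter, edge-based stroke-width estimation, and morphological cleanup; the fixed window size and weight `k` are arbitrary stand-ins for the stroke-width-derived values:

```python
import numpy as np

def binarize_document(img, window=15, k=0.2):
    """Simplified sketch of the pipeline:
    1) approximate the background with a quadratic least-squares surface,
    2) compensate contrast by subtracting that background,
    3) binarize with a local (windowed) mean/std threshold."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Step 1: least-squares quadratic surface as the background model
    A = np.stack([np.ones_like(xx), xx, yy, xx * yy, xx ** 2, yy ** 2],
                 axis=-1).reshape(-1, 6).astype(float)
    coef, *_ = np.linalg.lstsq(A, img.reshape(-1), rcond=None)
    background = (A @ coef).reshape(h, w)
    # Step 2: contrast compensation relative to the fitted background
    comp = img - background
    # Step 3: local threshold (window size would come from stroke width)
    pad = window // 2
    padded = np.pad(comp, pad, mode="reflect")
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local = padded[i:i + window, j:j + window]
            out[i, j] = comp[i, j] < local.mean() - k * local.std()
    return out  # True marks dark (text) pixels

# Synthetic page: smooth illumination gradient plus a dark 10x10 "stroke"
yy, xx = np.mgrid[0:40, 0:40]
page = 100.0 + 0.5 * xx
page[10:20, 10:20] -= 60.0
mask = binarize_document(page)
```

The gradient is absorbed by the background fit, so only the dark patch is flagged as text.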
Toward the quality evaluation of complex information systems
NASA Astrophysics Data System (ADS)
Todoran, Ion-George; Lecornu, Laurent; Khenchaf, Ali; Le Caillec, Jean-Marc
2014-06-01
Recent technological evolutions and developments allow the gathering of huge amounts of data from different types of sensors, social networks, intelligence reports, distributed databases, etc. The quantity and heterogeneity of these data have forced information systems to evolve: nowadays they rely on complex information processing techniques at multiple processing stages. Unfortunately, possessing large quantities of data and being able to implement complex algorithms do not guarantee that the extracted information will be of good quality. Decision-makers need good quality information in the process of decision-making. We emphasize that, for a decision-maker, both the information and its quality, viewed as meta-information, are of great importance: a system that does not report information quality to its user risks being used incorrectly or, in more dramatic cases, not being used at all. The literature, especially in organizational management and information retrieval, offers some information quality evaluation methodologies, but none of them supports information quality evaluation in complex and changing environments. We propose a new information quality methodology capable of estimating information quality dynamically as the data and/or the information system's inner workings change; it can instantaneously update the system's output quality. To capture how information quality changes through the system, we introduce the notion of a quality transfer function. It is the analogue of a signal-processing transfer function, but operating at the quality level: it describes the influence of a processing module on information quality. We also present two views of information quality: a global one, characterizing the entire system, and a local one, for each processing module.
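A quality transfer function can be pictured as a per-module map from input quality to output quality, composed along the processing chain just as cascaded transfer functions act on a signal. A toy sketch; the module gains and clipping rule are hypothetical, not the paper's formulation:

```python
# Hypothetical sketch: propagate an information-quality score in [0, 1]
# through a chain of processing modules, each with its own
# "quality transfer function".
def make_module(gain, floor=0.0):
    """Return a quality transfer function: a module that degrades
    (gain < 1) or enhances (gain > 1) incoming quality, clipped to
    [floor, 1]."""
    return lambda q: min(1.0, max(floor, gain * q))

def system_quality(input_quality, modules):
    """Compose the modules' quality transfer functions to obtain the
    global (system-level) quality from the local (module-level) ones."""
    q = input_quality
    for f in modules:
        q = f(q)
    return q

pipeline = [make_module(0.9), make_module(1.1), make_module(0.8)]
q_out = system_quality(0.75, pipeline)  # 0.75 * 0.9 * 1.1 * 0.8 = 0.594
```

The two views in the abstract map directly onto this sketch: `system_quality` is the global characterization, each `make_module` result the local one.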
Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.
ERIC Educational Resources Information Center
Petry, Frederick E.; And Others
1993-01-01
Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…
Genetic algorithms applied to nonlinear and complex domains
Barash, D; Woodin, A E
1999-06-01
The dissertation, titled ''Genetic Algorithms Applied to Nonlinear and Complex Domains'', describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems in which many parameters interact to produce a ''final'' result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation explaining what happens during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrodinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems, comparing GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation should give a clear idea of where GAs are applicable, especially in the physical sciences, to problems which are nonlinear and complex, i.e. difficult to analyze by other means.
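As a concrete illustration of the kind of search a GA performs, here is a minimal real-coded genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation, maximizing a toy landscape. It is not the dissertation's implementation, and all parameters are arbitrary:

```python
import random

def genetic_maximize(fitness, n_params, pop_size=60, n_gens=80,
                     lo=-1.0, hi=1.0, sigma=0.1, seed=0):
    """Minimal real-coded GA sketch: keep the fitter half of the
    population, breed children by uniform crossover, and perturb them
    with Gaussian mutation clipped to the search box."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(hi, max(lo, g + rng.gauss(0.0, sigma)))
                     for g in child]            # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy landscape with its global maximum at (0.5, 0.5)
peak = lambda p: -sum((g - 0.5) ** 2 for g in p)
best = genetic_maximize(peak, n_params=2)
```

Because the elite survive unmutated, the best fitness found is non-decreasing across generations, a common design choice (elitism) in practical GAs.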
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures. PMID:27411128
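The parameter-estimation step can be illustrated for the 2D (Rayleigh) case. This sketch fits a two-component Rayleigh mixture by plain EM with a moment-based start, whereas the paper initializes and drives the search with a genetic algorithm; the data and component scales below are synthetic:

```python
import numpy as np

def rayleigh_mixture_em(x, n_iter=200):
    """EM for a two-component Rayleigh mixture, the 2D speckle model
    named in the abstract. A moment-based initialization stands in for
    the paper's genetic-algorithm search over mixture parameters."""
    s2 = np.array([0.2 * np.var(x), 2.0 * np.var(x)])  # component scales sigma^2
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under Rayleigh densities
        # f(x; s2) = (x / s2) * exp(-x^2 / (2 * s2))
        pdf = (x[:, None] / s2) * np.exp(-x[:, None] ** 2 / (2.0 * s2))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form Rayleigh scale and weight updates
        w = r.mean(axis=0)
        s2 = (r * x[:, None] ** 2).sum(axis=0) / (2.0 * r.sum(axis=0))
    return w, np.sqrt(s2)

# Synthetic speckle-like amplitudes from two Rayleigh components
rng = np.random.default_rng(1)
samples = np.concatenate([rng.rayleigh(1.0, 2000), rng.rayleigh(4.0, 2000)])
weights, sigmas = rayleigh_mixture_em(samples)
```

The closed-form M-step exists because the Rayleigh maximum-likelihood scale is half the responsibility-weighted mean of the squared amplitudes; the 3D (Maxwell) case would change only the density and its update.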
The guitar chord-generating algorithm based on complex network
NASA Astrophysics Data System (ADS)
Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais
2016-02-01
This paper aims to generate chords for popular songs automatically based on complex network. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all networks are concluded. By analyzing the diverse chord networks, the accompaniment regulations and features are shown, with which the chords can be generated automatically. Secondly, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed respectively with the random walk algorithm. Thirdly, the musical motif is considered for generating chords, with which the bad chord progressions can be revised. This method can make the accompaniments sound more melodious. Finally, a popular song is chosen for generating chords and the new generated accompaniment sounds better than those done by the composers.
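The chord-generation idea, a weighted random walk on a chord-transition network, can be sketched as follows. The chord set and edge weights here are invented for illustration, not taken from the paper's tablature corpora:

```python
import random

# Hypothetical chord network: nodes are chords, directed edges carry
# transition weights that would be learned from guitar tablature.
transitions = {
    "C":  {"G": 3, "Am": 2, "F": 2},
    "G":  {"Am": 2, "C": 3, "Em": 1},
    "Am": {"F": 3, "G": 2},
    "F":  {"C": 3, "G": 2},
    "Em": {"Am": 2, "F": 1},
}

def random_walk_chords(start, length, seed=0):
    """Generate an accompaniment by a weighted random walk on the chord
    network, the core generation step described in the abstract."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        options = transitions[seq[-1]]
        nxt = rng.choices(list(options), weights=list(options.values()))[0]
        seq.append(nxt)
    return seq

progression = random_walk_chords("C", 8)
```

In the paper, separate verse and chorus networks would each be walked this way, and a motif-based revision pass would then correct poor progressions.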
An Algorithm for Inferring Complex Haplotypes in a Region of Copy-Number Variation
Kato, Mamoru; Nakamura, Yusuke; Tsunoda, Tatsuhiko
2008-01-01
Recent studies have extensively examined the large-scale genetic variants in the human genome known as copy-number variations (CNVs), and the universality of CNVs in normal individuals, along with their functional importance, has been increasingly recognized. However, the absence of a method to accurately infer alleles or haplotypes within a CNV region from high-throughput experimental data hampers the finer analyses of CNV properties and applications to disease-association studies. Here we developed an algorithm to infer complex haplotypes within a CNV region by using data obtained from high-throughput experimental platforms. We applied this algorithm to experimental data and estimated the population frequencies of haplotypes that can yield information on both sequences and numbers of DNA copies. These results suggested that the analysis of such complex haplotypes is essential for accurately detecting genetic differences within a CNV region between population groups. PMID:18639202
Hidden Behavior Prediction of Complex Systems Based on Hybrid Information.
Zhou, Zhi-Jie; Hu, Chang-Hua; Zhang, Bang-Cheng; Xu, Dong-Ling; Chen, Yu-Wang
2013-04-01
It is important to predict both observable and hidden behaviors in complex engineering systems. However, compared with observable behavior, it is often difficult to establish a forecasting model for hidden behavior. The existing methods for predicting hidden behavior cannot effectively and simultaneously use hybrid information with uncertainties that includes qualitative knowledge and quantitative data. Although the belief rule base (BRB) has been employed to predict observable behavior using such hybrid information, it is not directly applicable to predicting hidden behavior. As such, in this paper, a new BRB-based model is proposed to predict hidden behavior. In the proposed BRB-based model, the initial values of parameters are usually given by experts, so some of them may not be accurate, which can lead to inaccurate prediction results. To solve this problem, a parameter estimation algorithm for training the parameters of the forecasting model is further proposed on the basis of the maximum likelihood algorithm. Using hybrid information with uncertainties, the proposed model can be combined with the parameter estimation algorithm to improve forecasting precision in an integrated and effective manner. A case study is conducted to demonstrate the capability and potential applications of the proposed forecasting model with the parameter estimation algorithm. PMID:22907969
Improving the trust algorithm of information in semantic web
NASA Astrophysics Data System (ADS)
Wan, Zong-bao; Min, Jiang
2012-01-01
With the rapid development of computer networks, and especially with the introduction of the Semantic Web, trust computation in networks has become an important topic in computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, semantic trust is defined in two parts: the content trust of the information itself and the trust between nodes. By calculating the content trust of the information and the trust between nodes, a final credibility value for information in the Semantic Web is obtained. We improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can evaluate the trust of information more accurately.
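One simple way to picture the two-part combination described above is to blend a content-trust score with a node-trust score derived from past interactions. The smoothing rule and weight below are hypothetical illustrations, not the paper's formulas:

```python
def node_trust(ratings):
    """Node trust from past interaction ratings in [0, 1]. Hypothetical
    smoothed average: a node with no history starts at 0.5 rather than
    at an extreme value."""
    return (1.0 + sum(ratings)) / (2.0 + len(ratings))

def credibility(content_trust, ratings, alpha=0.6):
    """Final credibility as a weighted blend of the content trust of the
    information and the trust of the node that published it."""
    return alpha * content_trust + (1.0 - alpha) * node_trust(ratings)

# Content judged 0.8 trustworthy, published by a well-rated node
score = credibility(0.8, [1.0, 0.9, 1.0])
```

The smoothing in `node_trust` is a deliberate design choice: it prevents a single rating from driving a new node's trust to 0 or 1.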
Imaging for dismantlement verification: information management and analysis algorithms
Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.
2010-09-01
The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.
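The attribute idea, reducing an image to a single non-sensitive yes/no answer, can be sketched as follows. The size-based feature and thresholds are illustrative inventions, not the algorithms studied in this work:

```python
import numpy as np

def feature_present(image, threshold, expected_area, tolerance):
    """Reduce a radiograph-like image to one yes/no attribute: does a
    bright object of roughly the expected size appear? Only the boolean
    would cross the information barrier; the image and even the derived
    area are discarded, since the area itself could be sensitive."""
    area = int((image > threshold).sum())    # crude object-size feature
    return abs(area - expected_area) <= tolerance

# Dynamically generated image behind the barrier: a 100-pixel "object"
img = np.zeros((64, 64))
img[20:30, 20:30] = 5.0
answer = feature_present(img, threshold=1.0, expected_area=100, tolerance=20)
```

A real scheme would need the tolerance to be wide enough to accept legitimate variation yet narrow enough to reject spoofed objects, which is precisely the trade-off the paper's evaluation addresses.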
Local algorithm for computing complex travel time based on the complex eikonal equation
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing
2016-04-01
The traditional algorithm for computing the complex travel time, e.g., the dynamic ray tracing method, is based on the paraxial ray approximation, which exploits a second-order Taylor expansion. Consequently, the computed results depend strongly on the width of the ray tube, and in regions with dramatic velocity variations it is difficult for the method to account for those variations. Solving the complex eikonal equation avoids the paraxial ray approximation and requires no second-order Taylor expansion, but it is time consuming. We therefore replace the global computation over the whole model with a local computation, taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray; to satisfy this condition, the central ray is taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve this curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno technique to obtain the imaginary slowness used to compute the complex travel time. Numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm by comparing its results with those of the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.
Imaging for dismantlement verification: information management and analysis algorithms
Robinson, Sean M.; Jarman, Kenneth D.; Pitts, W. Karl; Seifert, Allen; Misner, Alex C.; Woodring, Mitchell L.; Myjak, Mitchell J.
2012-01-11
The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.
Algorithmic complexity of growth hormone release in humans
Prank, K.; Wagner, M.; Brabant, G.
1996-12-31
Most hormones are secreted in a pulsatile rather than constant manner. This temporal pattern of pulsatile hormone release plays an important role in the regulation of cellular function and structure. In healthy humans, growth hormone (GH) secretion is characterized by distinct pulses, whereas patients bearing a GH-producing tumor with excessive secretion (acromegaly) exhibit a highly irregular pattern of GH release. It has been hypothesized that this highly disorderly pattern of GH release in acromegaly arises from random events in the GH-producing tumor under decreased normal control of GH secretion. Using a context-free grammar complexity measure (algorithmic complexity) in conjunction with random surrogate data sets, we demonstrate that the temporal pattern of GH release in acromegaly is not significantly different from a variety of stochastic processes. In contrast, normal subjects clearly exhibit deterministic structure in their temporal patterns of GH secretion. Our results support the hypothesis that GH release in acromegaly is due to random events in the GH-producing tumorous cells, which may become independent of hypothalamic regulation. 17 refs., 1 fig., 2 tabs.
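The surrogate-data logic can be illustrated with a simple complexity measure. Here an LZ78-style phrase count stands in for the paper's context-free grammar complexity, and shuffled surrogates preserve the symbol counts while destroying temporal structure, so a deterministic series scores well below its surrogates:

```python
import random

def lz78_complexity(s):
    """Number of distinct phrases in an LZ78-style left-to-right parse;
    a simple stand-in for the paper's grammar-based complexity measure."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def surrogate_comparison(bits, n_surrogates=50, seed=0):
    """Compare a binary series' complexity against shuffled surrogates
    that keep symbol counts but destroy temporal ordering."""
    rng = random.Random(seed)
    observed = lz78_complexity(bits)
    pool, totals = list(bits), []
    for _ in range(n_surrogates):
        rng.shuffle(pool)
        totals.append(lz78_complexity("".join(pool)))
    return observed, sum(totals) / len(totals)

# A strongly periodic ("deterministic") series, like normal pulsatile GH
obs, surrogate_mean = surrogate_comparison("01" * 512)
```

A disorderly series, like the acromegalic pattern in the study, would instead score indistinguishably from its surrogates.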
Approach to complex upper extremity injury: an algorithm.
Ng, Zhi Yang; Askari, Morad; Chim, Harvey
2015-02-01
Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems, with the potential for systemic compromise due to the underlying mechanism of injury and resultant sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes, with the primary goals of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, and intractable pain to, most devastatingly, limb amputation. In this article, the authors present an algorithmic approach toward complex injuries of the upper extremity with due consideration for the various reconstructive modalities and timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098
Recording information on protein complexes in an information management system
Savitsky, Marc; Diprose, Jonathan M.; Morris, Chris; Griffiths, Susanne L.; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S.; Blake, Richard; Stuart, David I.; Esnouf, Robert M.
2011-01-01
The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. Like most LIMS, the underlying PiMS data model originally had no support for protein–protein complexes. To support the SPINE2-Complexes project the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with SPINE2-Complexes Target Tracker application is also described. PMID:21605682
Pediatric Medical Complexity Algorithm: A New Method to Stratify Children by Medical Complexity
Cawthon, Mary Lawrence; Stanford, Susan; Popalisky, Jean; Lyons, Dorothy; Woodcox, Peter; Hood, Margaret; Chen, Alex Y.; Mangione-Smith, Rita
2014-01-01
OBJECTIVES: The goal of this study was to develop an algorithm based on International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), codes for classifying children with chronic disease (CD) according to level of medical complexity and to assess the algorithm’s sensitivity and specificity. METHODS: A retrospective observational study was conducted among 700 children insured by Washington State Medicaid with ≥1 Seattle Children’s Hospital emergency department and/or inpatient encounter in 2010. The gold standard population included 350 children with complex chronic disease (C-CD), 100 with noncomplex chronic disease (NC-CD), and 250 without CD. An existing ICD-9-CM–based algorithm called the Chronic Disability Payment System was modified to develop a new algorithm called the Pediatric Medical Complexity Algorithm (PMCA). The sensitivity and specificity of PMCA were assessed. RESULTS: Using hospital discharge data, PMCA’s sensitivity for correctly classifying children was 84% for C-CD, 41% for NC-CD, and 96% for those without CD. Using Medicaid claims data, PMCA’s sensitivity was 89% for C-CD, 45% for NC-CD, and 80% for those without CD. Specificity was 90% to 92% in hospital discharge data and 85% to 91% in Medicaid claims data for all 3 groups. CONCLUSIONS: PMCA identified children with C-CD (who have accessed tertiary hospital care) with good sensitivity and good to excellent specificity when applied to hospital discharge or Medicaid claims data. PMCA may be useful for targeting resources such as care coordination to children with C-CD. PMID:24819580
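The sensitivity and specificity figures reported above follow from standard confusion-matrix counts of algorithm labels against the gold standard. A small sketch with invented labels (the class names mirror the study's groups; the data are not from the study):

```python
def sensitivity_specificity(gold, predicted, positive):
    """Sensitivity and specificity of `predicted` against the gold
    standard for one class, computed from confusion-matrix counts."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, predicted))
    fn = sum(g == positive and p != positive for g, p in zip(gold, predicted))
    tn = sum(g != positive and p != positive for g, p in zip(gold, predicted))
    fp = sum(g != positive and p == positive for g, p in zip(gold, predicted))
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: 8 C-CD, 4 NC-CD, 8 without chronic disease
gold = ["C-CD"] * 8 + ["NC-CD"] * 4 + ["no-CD"] * 8
pred = ["C-CD"] * 7 + ["NC-CD"] * 3 + ["no-CD"] * 10
sens, spec = sensitivity_specificity(gold, pred, "C-CD")
```

Reporting both per class, as the study does for C-CD, NC-CD, and no-CD, guards against an algorithm that looks accurate only because one class dominates.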
Dynamic information routing in complex networks
Kirst, Christoph; Timme, Marc; Battaglia, Demian
2016-01-01
Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how such systems may specifically communicate and dynamically route information is not well understood. Here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks. Switching between collective dynamics induces flexible reorganization of information sharing and routing patterns, as quantified by delayed mutual information and transfer entropy measures between activities of a network's units. We demonstrate the power of this mechanism specifically for oscillatory dynamics and analyse how individual unit properties, the network topology and external inputs co-act to systematically organize information routing. For multi-scale, modular architectures, we resolve routing patterns at all levels. Interestingly, local interventions within one sub-network may remotely determine nonlocal network-wide communication. These results help understanding and designing information routing patterns across systems where collective dynamics co-occurs with a communication function. PMID:27067257
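Delayed mutual information, one of the two measures used here, can be estimated from binned histograms of two units' activities. A sketch with synthetic signals; the bin count and lags are arbitrary choices:

```python
import numpy as np

def delayed_mutual_information(x, y, delay, bins=8):
    """Histogram estimate of I(x(t); y(t+delay)), the delayed mutual
    information used to quantify directed information sharing between
    two units (sketch; the bin count is an arbitrary choice)."""
    a, b = x[:-delay], y[delay:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic pair: unit y receives unit x with a 3-step delay plus weak noise
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=5000)
mi_lag3 = delayed_mutual_information(x, y, 3)
mi_lag1 = delayed_mutual_information(x, y, 1)
```

Scanning the delay reveals the routing pattern: the mutual information peaks at the true transmission lag and stays near zero elsewhere.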
Crossover Improvement for the Genetic Algorithm in Information Retrieval.
ERIC Educational Resources Information Center
Vrajitoru, Dana
1998-01-01
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge document collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
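The improved crossover itself is not detailed in this abstract; for reference, the baseline one-point crossover it would be compared against looks like this (a hedged sketch, treating a vector of query term weights as the chromosome):

```python
import random

def one_point_crossover(parent_a, parent_b, rng=None):
    """Classic one-point crossover: children swap tails after a random cut.
    In GA-based IR, a chromosome could be a vector of query term weights."""
    rng = rng or random.Random(0)
    point = rng.randrange(1, len(parent_a))   # cut strictly inside the string
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2
```

Whatever the cut point, the pair of children conserves the parents' genes as a multiset per position.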
A Motion Detection Algorithm Using Local Phase Information
Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
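The first building block, temporal change of local phase, can be illustrated with the phase of complex filter responses between two frames (a simplification, not the paper's Volterra/Radon formulation; the threshold is an assumed parameter):

```python
import cmath

def phase_change(r_prev, r_curr):
    """Wrapped temporal change of local phase between two complex filter
    responses: the angle of r_curr * conj(r_prev)."""
    return cmath.phase(r_curr * r_prev.conjugate())

def detects_motion(responses_prev, responses_curr, threshold=0.1):
    """Flag motion when the mean absolute phase change across local filters
    exceeds a threshold (a toy stand-in for the paper's Radon-based test)."""
    deltas = [abs(phase_change(p, c))
              for p, c in zip(responses_prev, responses_curr)]
    return sum(deltas) / len(deltas) > threshold
```

A static scene leaves every local phase unchanged; a coherent shift rotates all the phases and trips the detector.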
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
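The task being parallelized, eigenvalues of a general (non-Hermitian) complex matrix, has a simple sequential baseline against which such algorithms are checked (matrix values here are illustrative):

```python
import numpy as np

# Baseline sequential eigenvalue computation for a general complex matrix,
# the task the parallel Jacobi-like algorithm targets.
a = np.array([[1 + 2j, 0 + 1j],
              [3 + 0j, 2 - 1j]])
vals, vecs = np.linalg.eig(a)

# Each eigenpair satisfies A v = lambda v to numerical precision.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(a @ v, lam * v)
```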
Exploiting Complexity Information for Brain Activation Detection
Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui
2016-01-01
We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of the voxel complexity. Under this hypothesis the voxel complexity could be modulated in pertinent cognitive tasks, and it changes through experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model-based Statistical Parametric Mapping package (SPM12), where a decided difference has been observed. This is because the SampEn method detects brain complexity changes in two experiments of different conditions and the data-driven SampEn method evaluates just the complexity of specific sequential fMRI data. Also, the larger and smaller SampEn values correspond to different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered as a complementary method to the existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838
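Sample entropy itself is easy to state: SampEn(m, r) = −ln(A/B), where B counts pairs of length-m templates within tolerance r and A the corresponding length-(m+1) counts. A brute-force sketch (O(N²) per voxel, fine for short fMRI series; the default parameters are assumptions, not the paper's settings):

```python
from math import log

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): A, B count template pairs of length m+1 and m
    whose Chebyshev distance is below r (self-matches excluded)."""
    def count_pairs(k):
        templates = [series[i:i + k] for i in range(len(series) - k + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r
        )
    b = count_pairs(m)
    a = count_pairs(m + 1)
    return float("inf") if a == 0 or b == 0 else -log(a / b)
```

A perfectly regular series is highly predictable, so its SampEn is near zero; noisier voxel time courses score higher.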
Decision making algorithm for development strategy of information systems
NASA Astrophysics Data System (ADS)
Derman, Galyna Y.; Nikitenko, Olena D.; Kotyra, Andrzej; Bazarova, Madina; Kassymkhanova, Dana
2015-12-01
The paper presents an algorithm of decision making for the development strategy of information systems. The process of development is planned taking into account the internal and external factors of the enterprise which affect the prospects of development of both the information system and the whole enterprise. The initial state of the system must be taken into account. The total risk is the criterion for selecting the strategy. The risk is calculated using statistical and fuzzy data on the system's parameters. These data are summarized by means of the function of uncertainty. The software realizing the algorithm of decision making for choosing the development strategy of an information system is developed in this paper.
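The selection rule, minimize total risk combining statistical and fuzzy estimates, can be sketched as follows (the weighting and the candidate strategies are hypothetical, not from the paper):

```python
def select_strategy(strategies, w_stat=0.5, w_fuzzy=0.5):
    """Pick the development strategy with the lowest total risk.
    Each entry: (name, statistical_risk, fuzzy_risk); weights are illustrative."""
    def total_risk(entry):
        _, stat, fuzzy = entry
        return w_stat * stat + w_fuzzy * fuzzy
    return min(strategies, key=total_risk)[0]
```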
An Iterative Decoding Algorithm for Fusion of Multimodal Information
NASA Astrophysics Data System (ADS)
Shivappa, Shankar T.; Rao, Bhaskar D.; Trivedi, Mohan M.
2007-12-01
Human activity analysis in an intelligent space is typically based on multimodal informational cues. Use of multiple modalities offers many advantages, but information fusion from different sources is a problem that has to be addressed. In this paper, we propose an iterative algorithm to fuse information from multimodal sources. We draw inspiration from the theory of turbo codes, making an analogy between the redundant parity bits of the constituent codes of a turbo code and the information from different sensors in a multimodal system. A hidden Markov model is used to model the sequence of observations of individual modalities. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities. This procedure is repeated until a certain convergence criterion is met. The resulting iterative algorithm is shown to have lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.
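The exchange of decoded likelihoods can be caricatured without the HMM machinery: each modality re-weights its likelihoods by the other's current posterior, loosely mimicking the extrinsic-information exchange between a turbo code's constituent decoders (a toy analogy, not the paper's algorithm):

```python
def fuse_iteratively(lik_a, lik_b, iters=10):
    """Toy two-modality fusion over a common discrete state space:
    each modality's posterior is refined using the other's as a prior."""
    def normalize(v):
        s = sum(v)
        return [x / s for x in v]
    post_a, post_b = normalize(lik_a), normalize(lik_b)
    for _ in range(iters):
        post_a = normalize([la * pb for la, pb in zip(lik_a, post_b)])
        post_b = normalize([lb * pa for lb, pa in zip(lik_b, post_a)])
    return normalize([pa * pb for pa, pb in zip(post_a, post_b)])
```

When both modalities weakly favour the same state, iteration amplifies that agreement.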
Presentation Media, Information Complexity, and Learning Outcomes
ERIC Educational Resources Information Center
Andres, Hayward P.; Petersen, Candice
2002-01-01
Cognitive processing limitations restrict the number of complex information items held and processed in human working memory. To overcome such limitations, a verbal working memory channel is used to construct an if-then proposition representation of facts and a visual working memory channel is used to construct a visual imagery of geometric…
Darwinian demons, evolutionary complexity, and information maximization.
Krakauer, David C
2011-09-01
Natural selection is shown to be an extended instance of a Maxwell's demon device. A demonic selection principle is introduced that states that organisms cannot exceed the complexity of their selective environment. Thermodynamic constraints on error repair impose a fundamental limit to the rate that information can be transferred from the environment (via the selective demon) to the genome. Evolved mechanisms of learning and inference can overcome this limitation, but remain subject to the same fundamental constraint, such that plastic behaviors cannot exceed the complexity of reward signals. A natural measure of evolutionary complexity is provided by mutual information, and niche construction activity--the organismal contribution to the construction of selection pressures--might in principle lead to its increase, bounded by thermodynamic free energy required for error correction. PMID:21974673
Retaining local image information in gamut mapping algorithms.
Zolliker, Peter; Simon, Klaus
2007-03-01
Our topic is the potential of combining global gamut mapping with spatial methods to retain the perceived local image information in gamut mapping algorithms. The main goal is to recover the original local contrast between neighboring pixels in addition to the usual optimization of preserving lightness, saturation, and global contrast. Special emphasis is placed on avoiding artifacts introduced by the gamut mapping algorithm itself. We present an unsharp masking technique based on an edge-preserving smoothing algorithm that avoids halo artifacts. The good performance of the presented approach is verified by a psycho-visual experiment using newspaper printing as a representative of a small destination gamut application. Furthermore, the improved mapping properties are documented with local mapping histograms. PMID:17357727
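The halo-free unsharp masking idea can be sketched in one dimension: smooth with an edge-preserving filter, then add the lost detail back. A hedged toy version (the paper works on full images with a psycho-visually tuned filter; radius and threshold here are assumed):

```python
def edge_preserving_smooth(signal, radius=1, edge_thresh=0.5):
    """Local averaging that ignores neighbours differing by more than
    edge_thresh, so edges are not blurred (and produce no halo later)."""
    out = []
    for i, v in enumerate(signal):
        neigh = [signal[j]
                 for j in range(max(0, i - radius), min(len(signal), i + radius + 1))
                 if abs(signal[j] - v) <= edge_thresh]
        out.append(sum(neigh) / len(neigh))
    return out

def unsharp_restore(mapped, amount=1.0):
    """Add back high-frequency detail lost in gamut mapping."""
    base = edge_preserving_smooth(mapped)
    return [m + amount * (m - b) for m, b in zip(mapped, base)]
```

Flat regions and clean edges pass through unchanged, which is exactly the no-artifact property the authors emphasize.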
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Pande, S.
2009-12-01
Pattern analysis deals with the automatic detection of patterns in the data and there are a variety of algorithms available for the purpose. These algorithms are commonly called Artificial Intelligence (AI) or data driven algorithms, and have been applied lately to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question of which one is the “best” arises. Some algorithms may be preferred because of their lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam’s razor principle that a simple classifier performs better. Popper has argued, however, that Occam’s razor is without operational value because there is no clear measure or criterion for simplicity. Examples of measures that can be used for this purpose are: the so-called algorithmic complexity - also known as Kolmogorov complexity or Kolmogorov (algorithmic) entropy; the Bayesian information criterion; and the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should be noted also that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with it. Thus, there is a compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML), of which support vector machines and relevant vector machines are the most commonly known algorithms. We present some applications of such algorithms for distributed hydrologic modeling; and introduce an example of how the complexity measure
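Of the complexity measures listed, the Bayesian information criterion is the simplest to make operational: BIC = k ln n − 2 ln L̂, trading goodness of fit against model complexity.

```python
from math import log

def bic(log_likelihood, k_params, n_obs):
    """Bayesian information criterion: lower is better.
    k_params * ln(n_obs) penalizes complexity; -2 * ln(L) rewards fit."""
    return k_params * log(n_obs) - 2.0 * log_likelihood
```

With equal fit, the model with fewer parameters wins: an operational form of Occam's razor.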
Artificial Bee Colony Algorithm Based on Information Learning.
Gao, Wei-Feng; Huang, Ling-Ling; Liu, San-Yang; Dai, Cai
2015-12-01
Inspired by the fact that the division of labor and cooperation have played extremely important roles in the development of human history, this paper develops a novel artificial bee colony algorithm based on information learning (ILABC, for short). In ILABC, at each generation, the whole population is divided into several subpopulations by the clustering partition and the size of each subpopulation is dynamically adjusted based on the last search experience, which results in a clear division of labor. Furthermore, two search mechanisms are designed to facilitate the exchange of information within each subpopulation and between different subpopulations, respectively, which acts as the cooperation. Finally, the comparison results on a number of benchmark functions demonstrate that the proposed method performs competitively and effectively when compared to the selected state-of-the-art algorithms. PMID:25594992
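ILABC's clustering and dynamic subpopulation sizing are beyond this abstract, but the baseline artificial bee colony it builds on can be sketched compactly (employed, onlooker and scout phases; all parameter values here are illustrative, and the fitness transform assumes a non-negative objective):

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony: greedy local moves (employed bees),
    fitness-proportional re-sampling (onlookers), random restarts (scouts)."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_f = None, float("inf")

    def try_improve(i):
        k = rng.randrange(n_food - 1)
        k = k if k < i else k + 1                   # partner food source != i
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(hi, max(lo, cand[j]))
        fc = f(cand)
        if fc < fits[i]:
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_improve(i)
        total = sum(1.0 / (1.0 + ft) for ft in fits)
        for _ in range(n_food):                     # onlooker bee phase
            r, acc = rng.uniform(0, total), 0.0
            for i, ft in enumerate(fits):
                acc += 1.0 / (1.0 + ft)
                if acc >= r:
                    break
            try_improve(i)
        for i in range(n_food):                     # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0
        i = min(range(n_food), key=lambda t: fits[t])
        if fits[i] < best_f:
            best_x, best_f = foods[i][:], fits[i]
    return best_x, best_f
```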
MIRA: mutual information-based reporter algorithm for metabolic networks
Cicek, A. Ercument; Roeder, Kathryn; Ozsoyoglu, Gultekin
2014-01-01
Motivation: Discovering the transcriptional regulatory architecture of the metabolism has been an important topic to understand the implications of transcriptional fluctuations on metabolism. The reporter algorithm (RA) was proposed to determine the hot spots in metabolic networks, around which transcriptional regulation is focused owing to a disease or a genetic perturbation. Using a z-score-based scoring scheme, RA calculates the average statistical change in the expression levels of genes that are neighbors to a target metabolite in the metabolic network. The RA approach has been used in numerous studies to analyze cellular responses to the downstream genetic changes. In this article, we propose a mutual information-based multivariate reporter algorithm (MIRA) with the goal of eliminating the following problems in detecting reporter metabolites: (i) conventional statistical methods suffer from small sample sizes, (ii) as z-score ranges from minus to plus infinity, calculating average scores can lead to canceling out opposite effects and (iii) analyzing genes one by one, then aggregating results can lead to information loss. MIRA is a multivariate and combinatorial algorithm that calculates the aggregate transcriptional response around a metabolite using mutual information. We show that MIRA’s results are biologically sound, empirically significant and more reliable than RA. Results: We apply MIRA to gene expression analysis of six knockout strains of Escherichia coli and show that MIRA captures the underlying metabolic dynamics of the switch from aerobic to anaerobic respiration. We also apply MIRA to an Autism Spectrum Disorder gene expression dataset. Results indicate that MIRA reports metabolites that highly overlap with recently found metabolic biomarkers in the autism literature. Overall, MIRA is a promising algorithm for detecting metabolic drug targets and understanding the relation between gene expression and metabolic activity. Availability and
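Problem (ii) above, averaged z-scores cancelling, is easy to demonstrate; MIRA's mutual-information score avoids it because mutual information is non-negative regardless of the direction of the expression change:

```python
def reporter_score(neighbor_z):
    """RA-style aggregate around a metabolite: mean z-score of neighbour genes."""
    return sum(neighbor_z) / len(neighbor_z)

# Four strongly perturbed neighbour genes, two up- and two down-regulated
# (made-up values): the average is ~0, so a z-score-based reporter score
# misses the regulation entirely.
strong_but_opposed = [3.2, -3.1, 2.9, -3.0]
```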
Information, complexity and efficiency: The automobile model
Allenby, B. |
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Hyperbolic mapping of complex networks based on community information
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Li, Qingguang; Jin, Fengdong; Xiong, Wei; Wu, Yao
2016-08-01
To improve the hyperbolic mapping methods both in terms of accuracy and running time, a novel mapping method called Community and Hyperbolic Mapping (CHM) is proposed based on community information in this paper. Firstly, an index called Community Intimacy (CI) is presented to measure the adjacency relationship between the communities, based on which a community ordering algorithm is introduced. According to the proposed Community-Sector hypothesis, which supposes that most nodes of one community gather in the same sector in hyperbolic space, CHM maps the ordered communities into hyperbolic space, and then the angular coordinates of nodes are randomly initialized within the sector that they belong to. Therefore, all the network nodes are so far mapped to hyperbolic space, and then the initialized angular coordinates can be optimized by employing the information of all nodes, which can greatly improve the algorithm precision. By applying the proposed dual-layer angle sampling method in the optimization procedure, CHM reduces the time complexity to O(n²). The experiments show that our algorithm outperforms the state-of-the-art methods.
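Methods like CHM optimize node coordinates under the hyperbolic distance in the native disk representation; the standard formula for two nodes with polar coordinates (r, θ) is assumed here, as the abstract does not restate it:

```python
from math import cosh, sinh, acosh, cos, pi

def hyperbolic_distance(r1, theta1, r2, theta2):
    """Distance in the native hyperbolic-disk representation:
    cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(dtheta)."""
    dtheta = pi - abs(pi - abs(theta1 - theta2) % (2 * pi))  # angular separation
    arg = cosh(r1) * cosh(r2) - sinh(r1) * sinh(r2) * cos(dtheta)
    return acosh(max(1.0, arg))  # clamp guards against rounding below 1
```

Along a common ray (equal angles) the distance reduces to the radial difference |r1 − r2|.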
Maximizing information exchange between complex networks
NASA Astrophysics Data System (ADS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
modern research overarching all of the traditional scientific disciplines. The transportation networks of planes, highways and railroads; the economic networks of global finance and stock markets; the social networks of terrorism, governments, businesses and churches; the physical networks of telephones, the Internet, earthquakes and global warming and the biological networks of gene regulation, the human body, clusters of neurons and food webs, share a number of apparently universal properties as the networks become increasingly complex. Ubiquitous aspects of such complex networks are the appearance of non-stationary and non-ergodic statistical processes and inverse power-law statistical distributions. Herein we review the traditional dynamical and phase-space methods for modeling such networks as their complexity increases and focus on the limitations of these procedures in explaining complex networks. Of course we will not be able to review the entire nascent field of network science, so we limit ourselves to a review of how certain complexity barriers have been surmounted using newly applied theoretical concepts such as aging, renewal, non-ergodic statistics and the fractional calculus. One emphasis of this review is information transport between complex networks, which requires a fundamental change in perception that we express as a transition from the familiar stochastic resonance to the new concept of complexity matching.
NASA Astrophysics Data System (ADS)
Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang
2012-12-01
In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. Proper users' precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power together as the new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has closed-form solution with an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. As the first part, the algorithm iterates with Householder transformation to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one dimensional optimization subproblems. From any initial point, this algorithm obtains precoders and decoders with low leakage interference in short time. In the second part, to exploit the advantage of MAMA, it continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that in one iteration generally both proposed two algorithms have lower computational complexity than the existed maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve similar performances as the MSP algorithm with less executing time, and show better performances than the existed alternating minimization algorithm in terms of sum rate. Besides, from the view of convergence rate, simulation results show that the MAMA enjoys fastest speed with respect to a certain sum rate value, while hybrid algorithm converges fastest to eliminate interference.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
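Size-bounded learning can be sketched as a nogood store that only records conflicts over at most k variables, keeping memory polynomial at the cost of forgetting large conflicts (an illustrative data structure, not the paper's full algorithm):

```python
class SizeBoundedNogoodStore:
    """Record learned nogoods (partial assignments proven unsatisfiable)
    only if they involve at most k variables."""

    def __init__(self, k):
        self.k = k
        self.nogoods = set()

    def learn(self, assignment):
        if len(assignment) <= self.k:
            self.nogoods.add(frozenset(assignment.items()))

    def violated(self, assignment):
        """True if the current partial assignment repeats a recorded conflict."""
        items = set(assignment.items())
        return any(ng <= items for ng in self.nogoods)
```

A relevance-bounded store would instead keep arbitrarily large nogoods and evict those no longer relevant to the current search-space region.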
Optical tomographic memories: algorithms for the efficient information readout
NASA Astrophysics Data System (ADS)
Pantelic, Dejan V.
1990-07-01
Tomographic algorithms are modified in order to reconstruct the information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES Tomographic principles can be used to store and reconstruct the information artificially stored in a bulk of a photosensitive media. The information is stored by changing some characteristics of a memory material (e.g., refractive index). Radiation from two independent light sources (e.g., lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only in the localized point where the light rays intersect. By scanning the material the information can be stored in binary or n-ary format. When the information is stored it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem. Here a lot of a priori information is present regarding the positions of the bits of information, the profile representing a single bit and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for the information readout, together with the simulation results, will be presented. Special attention will be given to the noise considerations. Two different
Informational analysis involving application of complex information system
NASA Astrophysics Data System (ADS)
Ciupak, Clébia; Vanti, Adolfo Alberto; Balloni, Antonio José; Espin, Rafael
The aim of the present research is to perform an informational analysis for internal audit involving the application of a complex information system based on fuzzy logic. The analysis has been applied in internal audit involving the integration of the accounting field into the information systems field. Technological advancements can improve the work performed by internal audit. Thus we aim to find, in complex information systems, priorities for the internal audit work of a highly important Private Institution of Higher Education. The applied method is quali-quantitative: from the definition of strategic linguistic variables it was possible to transform them into quantitative variables with the matrix intersection. By means of a case study, in which data were collected via an interview with the Administrative Pro-Rector, who takes part in the elaboration of the strategic planning of the institution, it was possible to infer which points must be prioritized in the internal audit work. We emphasize that the priorities were identified when processed in a system (of academic use). From the study we can conclude that, starting from these information systems, audit can identify priorities in its work program. Along with the plans and strategic objectives of the enterprise, the internal auditor can define operational procedures that work in favor of the attainment of the objectives of the organization.
Reconstruction of complex signals using minimum Rényi information.
Frieden, B R; Bajkova, A T
1995-07-10
An information divergence, such as Shannon mutual information, measures the distance between two probability-density functions (or images). A wide class of such measures, called α divergences, with desirable properties such as convexity over all space, was defined by Amari. Rényi's information Dα is an α divergence. Because of its convexity property, the minimum of Dα is easily attained. Minimization accomplishes minimum distance (maximum resemblance) between an unknown image and a known reference image. Such a biasing effect permits complex images, such as occur in inverse synthetic-aperture-radar imaging, to be well reconstructed. The algorithm permits complex amplitudes to replace the probabilities in the Rényi form. The bias image may be constructed as a smooth version of the linear, Fourier reconstruction of the data. Examples on simulated complex image data with and without noise indicate that the Rényi reconstruction approach permits superresolution in low-noise cases and higher fidelity than ordinary, linear reconstructions in higher-noise cases. PMID:21052233
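For discrete distributions the Rényi divergence has a closed form (the continuous, complex-amplitude variant used in the paper follows the same algebra); α = 1 is excluded, where the limit is the Kullback-Leibler divergence:

```python
from math import log

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(p || q) for discrete distributions,
    alpha > 0, alpha != 1:  (1/(alpha-1)) * log sum_i p_i^alpha q_i^(1-alpha)."""
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return log(s) / (alpha - 1.0)
```

It vanishes when the two distributions coincide and is positive otherwise, which is what makes minimizing it a "maximum resemblance" criterion.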
Three subsets of sequence complexity and their relevance to biopolymeric information.
Abel, David L; Trevors, Jack T
2005-01-01
Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction. Random and Ordered Sequence Complexities lie at opposite ends of the same bi-directional sequence complexity vector. Randomness in sequence space is defined by a lack of Kolmogorov algorithmic compressibility. A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order. Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). PMID:16095527
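Kolmogorov compressibility is uncomputable, but a standard compressor gives a usable upper bound that separates ordered (OSC-like) from random (RSC-like) strings:

```python
import random
import zlib

def compressibility(s):
    """Compressed size as a fraction of original size: a practical upper-bound
    proxy for Kolmogorov complexity (low = ordered, near 1 = incompressible)."""
    raw = s.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

ordered = "AB" * 5000                      # law-like repetition: OSC-like
rng = random.Random(0)
random_seq = "".join(rng.choice("ACGT") for _ in range(10000))  # RSC-like
```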
Network algorithms for information analysis using the Titan Toolkit.
McLendon, William Clarence, III; Baumes, Jeffrey; Wilson, Andrew T.; Wylie, Brian Neil; Shead, Timothy M.
2010-07-01
The analysis of networked activities is dramatically more challenging than many traditional kinds of analysis. A network is defined by a set of entities (people, organizations, banks, computers, etc.) linked by various types of relationships. These entities and relationships are often uninteresting alone, and only become significant in aggregate. The analysis and visualization of these networks is one of the driving factors behind the creation of the Titan Toolkit. Given the broad set of problem domains and the wide ranging databases in use by the information analysis community, the Titan Toolkit's flexible, component based pipeline provides an excellent platform for constructing specific combinations of network algorithms and visualizations.
Patent information - towards simplicity or complexity?
NASA Astrophysics Data System (ADS)
Shenton, Kathleen; Norton, Peter; Onodera, Natsuo (translator)
Since the advent of online services, the ability to search and find chemical patent information has improved immeasurably. Recently, the integration of a multitude of files (through file merging as well as cross-file/simultaneous searches), 'intelligent' interfaces and optical technology for large amounts of data appear to bring greater simplicity and convenience to the retrieval of patent information. In spite of this progress, a more fundamental problem is increasing complexity: the tendency to expand indefinitely the range of claims for chemical substances through ultra-generic descriptions of structure (overuse of optional substituents, variable divalent groups, repeating groups, etc.) and long listings of prophetic examples. Not only does this tendency worry the producers and searchers of patent databases, it may also hinder truly worthy inventions in the future.
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for the gamma and log-gamma functions of complex arguments are presented. The methods and algorithms reported include Chebyshev approximations, Padé expansions and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual use or for inclusion in subroutine libraries.
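Algorithm 421 itself is built on Stirling's series; purely as an illustrative sketch (not Kuki's program), the Lanczos approximation below is another standard way to evaluate the gamma function for complex arguments. The g = 7, n = 9 coefficients are the commonly tabulated ones:

```python
import cmath

_G = 7  # Lanczos parameter; commonly tabulated coefficient set (assumption, not from the paper)
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z: complex) -> complex:
    """Gamma function for complex arguments via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:
        # Reflection formula extends the approximation to the left half-plane.
        return cmath.pi / (cmath.sin(cmath.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0]
    for i in range(1, len(_C)):
        x += _C[i] / (z + i)
    t = z + _G + 0.5
    return cmath.sqrt(2 * cmath.pi) * t ** (z + 0.5) * cmath.exp(-t) * x
```

For real integer and half-integer arguments this reproduces the familiar values (e.g. Γ(5) = 24, Γ(1/2) = √π) to near machine precision.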
NASA Astrophysics Data System (ADS)
Ma, Xiaoke; Gao, Lin
2011-05-01
The detection of community structure in complex networks is crucial since it provides insight into the substructures of the whole network. Spectral clustering algorithms that employ the eigenvalues and eigenvectors of an appropriate input matrix have been successfully applied in this field. Despite its empirical success in community detection, spectral clustering has been criticized for its inefficiency when dealing with large scale data sets. This is confirmed by the fact that the time complexity for spectral clustering is cubic with respect to the number of instances; even the memory efficient iterative eigensolvers, such as the power method, may converge slowly to the desired solutions. In efforts to improve the complexity and performance, many non-traditional spectral clustering algorithms have been proposed. Rather than using the real eigenvalues and eigenvectors as in the traditional methods, the non-traditional clusterings employ additional topological structure information characterized by the spectrum of a matrix associated with the network involved, such as the complex eigenvalues and their corresponding complex eigenvectors, eigenspaces and semi-supervised labels. However, to the best of our knowledge, no work has been devoted to comparison among these newly developed approaches. This is the main goal of this paper, through evaluating the effectiveness of these spectral algorithms against some benchmark networks. The experimental results demonstrate that the spectral algorithm based on the eigenspaces achieves the best performance but is the slowest algorithm; the semi-supervised spectral algorithm is the fastest but its performance largely depends on the prior knowledge; and the spectral method based on the complement network shows similar performance to the conventional ones.
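As a minimal illustration of the traditional spectral approach the abstract contrasts against, the sketch below bisects a network by the sign pattern of the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian). It is a textbook baseline, not one of the surveyed non-traditional algorithms:

```python
import numpy as np

def spectral_bisect(adj: np.ndarray) -> np.ndarray:
    """Split a graph into two communities by the sign of the Fiedler vector."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj          # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(lap)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]              # eigenvector of the second-smallest eigenvalue
    return fiedler >= 0

# toy network: two 4-node cliques joined by a single edge
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1
labels = spectral_bisect(A)
```

The full eigendecomposition here is what gives traditional spectral clustering its cubic cost in the number of vertices, which is exactly the bottleneck the surveyed alternatives try to avoid.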
On the Time Complexity of Dijkstra's Three-State Mutual Exclusion Algorithm
NASA Astrophysics Data System (ADS)
Kimoto, Masahiro; Tsuchiya, Tatsuhiro; Kikuno, Tohru
In this letter we give a lower bound on the worst-case time complexity of Dijkstra's three-state mutual exclusion algorithm by specifying a concrete behavior of the algorithm. We also show that our bound is tighter than the best previously known bound.
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
A New Algorithm for Complex Non-Orthogonal Joint Diagonalization Based on Shear and Givens Rotations
NASA Astrophysics Data System (ADS)
Mesloub, Ammar; Abed-Meraim, Karim; Belouchrani, Adel
2014-04-01
This paper introduces a new algorithm to approximate the non-orthogonal joint diagonalization (NOJD) of a set of complex matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization (JD) problem and combines Givens and Shear rotations to attempt the approximate JD. It represents a nontrivial generalization of the JDi (Joint Diagonalization) algorithm (Souloumiac 2009) to the complex case. The JDi is first slightly modified, then generalized to the CJDi (i.e. Complex JDi) using a complex-to-real matrix transformation. Since several methods already exist in the literature, we also provide a brief overview of existing NOJD algorithms, followed by an extensive comparative study illustrating the effectiveness and stability of the CJDi with respect to various system parameters and application contexts.
A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks
Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan
2015-01-01
Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, limiting its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm approximately points toward the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time. PMID:26230697
Ravari, Alireza Norouzzadeh; Taghirad, Hamid D
2014-10-01
In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image obtained as a mobile robot observation. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurements' representations is reduced. Although loop closure detection can be considered a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed so that it is invariant under geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare complexity-based representations of the image models. Finally, a distinctive property of the normalized compression distance is exploited to detect similar places and reject incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods. PMID:24968363
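The normalized compression distance at the heart of this approach can be approximated with any real-world compressor standing in for Kolmogorov complexity. The sketch below uses zlib on byte strings and is an illustration, not the paper's implementation:

```python
import zlib

def clen(b: bytes) -> int:
    """Compressed length as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar data, near 1 for unrelated data."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

scene = b"the quick brown fox jumps over the lazy dog " * 50
other = bytes(range(256)) * 10   # unrelated content
d_same = ncd(scene, scene)       # revisited place: low distance
d_diff = ncd(scene, other)       # different place: high distance
```

In the loop-closure setting, `scene` and `other` would be (sparse models of) sensor observations; candidates with a high NCD to the current observation are rejected.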
An algorithm for automatic reduction of complex signal flow graphs
NASA Technical Reports Server (NTRS)
Young, K. R.; Hoberock, L. L.; Thompson, J. G.
1976-01-01
A computer algorithm is developed that provides efficient means to compute transmittances directly from a signal flow graph or a block diagram. Signal flow graphs are cast as directed graphs described by adjacency matrices. Nonsearch computation, designed for compilers without symbolic capability, is used to identify all arcs that are members of simple cycles for use with Mason's gain formula. The routine does not require the visual acumen of an interpreter to reduce the topology of the graph, and it is particularly useful for analyzing control systems described for computer analyses by means of interactive graphics.
Price-Based Information Routing in Complex Satellite Networks for
NASA Astrophysics Data System (ADS)
Hussein, I.; Su, J.; Wang, Y.; Wyglinski, A.
Future space-based situational awareness and space surveillance systems are envisioned to include a large array of satellites that seek to cooperatively achieve full awareness over given space and terrestrial domains. Given the complexity of the communication network architecture of such a system, in this paper we build on the system architecture that was proposed by the presenting author in the 2008 AMOS conference and propose an efficient, adaptable and scalable price-based routing and bandwidth allocation algorithm for the generation, routing and delivery of surveillance information in distributed wireless satellite networks. Due to the potentially large deployments of these satellites, the access points employed in a centralized network control scheme would easily be overwhelmed due to lack of spectral bandwidth, synchronization issues, and multiple access coordination. Alternatively, decentralized schemes could facilitate the flow and transference of information between data gatherers and data collectors via mechanisms such as (multi-hop) routing, allocation of spectral bandwidths per relaying node, and coordination between adjacent nodes. Although there are numerous techniques and concepts focusing on the network operations, control, and management of sensor networks, existing solution approaches require the use of information for routing, allocation, and decision-making that may not be readily available to the satellites in a timely fashion. This is especially true in the literature on price-based routing, where the approach is almost always game theoretic or relies on optimization techniques. Instead of seeking such techniques, in this paper we present algorithms that will (1) be energy-aware, (2) be highly adaptable and responsive to demands and seek delivery of information to desired nodes despite the fact that the source and destination are not globally known, (3) be secure, (4) be efficient in allocating bandwidth, (5) be decentralized and allow for
An Algorithm Combining for Objective Prediction with Subjective Forecast Information
NASA Astrophysics Data System (ADS)
Choi, JunTae; Kim, SooHyun
2016-04-01
As direct or post-processed output from numerical weather prediction (NWP) models has begun to show acceptable performance compared with the predictions of human forecasters, many national weather centers have become interested in automatic forecasting systems based on NWP products alone, without intervention from human forecasters. The Korea Meteorological Administration (KMA) is now developing an automatic forecasting system for dry variables. The forecasts are automatically generated from NWP predictions using a model output statistics (MOS) post-processing model. However, MOS cannot always produce acceptable predictions, and sometimes its predictions are rejected by human forecasters. In such cases, a human forecaster should be able to modify the prediction consistently at points surrounding their corrections, using some kind of smart tool to incorporate the forecaster's opinion. This study introduces an algorithm that revises MOS predictions by adding a forecaster's subjective forecast information at neighbouring points. A statistical relation between two forecast points - a neighbouring point and a dependent point - was derived for the difference between a MOS prediction and that of a human forecaster. If the MOS prediction at a neighbouring point is updated by a human forecaster, the value at a dependent point is modified using a statistical relationship based on linear regression, with parameters obtained from a one-year dataset of MOS predictions and official forecasts issued by the KMA. The best sets of neighbouring points and dependent points are statistically selected. According to verification, the RMSE of temperature predictions produced by the new algorithm was slightly lower than that of the original MOS predictions, and close to the RMSE of the subjective forecasts. For wind speed and relative humidity, the new algorithm outperformed the human forecasters.
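A minimal sketch of the statistical relationship described above: fit, by least squares, the forecaster-minus-MOS difference at a dependent point as a linear function of the difference at a neighbouring point, then use that fit to propagate a forecaster's correction. The training data here are synthetic and the one-predictor form is an illustrative assumption:

```python
import numpy as np

def fit_neighbor_relation(diff_neighbor, diff_dependent):
    """Least-squares fit: dependent-point correction ~ slope * neighbour correction + intercept."""
    A = np.vstack([diff_neighbor, np.ones_like(diff_neighbor)]).T
    slope, intercept = np.linalg.lstsq(A, diff_dependent, rcond=None)[0]
    return slope, intercept

def revise(mos_dependent, diff_neighbor, slope, intercept):
    """Propagate a forecaster's edit at the neighbouring point to the dependent point."""
    return mos_dependent + slope * diff_neighbor + intercept

# hypothetical one-year history of (forecaster - MOS) differences at the two points
rng = np.random.default_rng(1)
dn = rng.normal(0.0, 2.0, 365)
dd = 0.8 * dn + rng.normal(0.0, 0.1, 365)
s, b = fit_neighbor_relation(dn, dd)
```

If a forecaster raises the neighbouring point's temperature by 1.0 degree, `revise` adjusts the dependent point by roughly the fitted slope times that edit.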
Information filtering in complex weighted networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo
2011-04-01
Many systems in nature, society, and technology can be described as networks, where the vertices are the system’s elements, and edges between vertices indicate the interactions between the corresponding elements. Edges may be weighted if the interaction strength is measurable. However, the full network information is often redundant because tools and techniques from network analysis do not work or become very inefficient if the network is too dense, and some weights may just reflect measurement errors and need to be discarded. Moreover, since weight distributions in many complex weighted networks are broad, most of the weight is concentrated among a small fraction of all edges. It is then crucial to properly detect relevant edges. Simple thresholding would leave only the largest weights, disrupting the multiscale structure of the system, which is at the basis of the structure of complex networks and ought to be kept. In this paper we propose a weight-filtering technique based on a global null model [Global Statistical Significance (GloSS) filter], keeping both the weight distribution and the full topological structure of the network. The method correctly quantifies the statistical significance of weights assigned independently to the edges from a given distribution. Applications to real networks reveal that the GloSS filter is indeed able to identify relevant connections between vertices.
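As a toy illustration of significance-based edge filtering (not the actual GloSS null model, which fits a parametric weight distribution and preserves topology), one can score each edge by the empirical probability of drawing a weight at least as large from the global weight distribution, and keep only the significant ones:

```python
import numpy as np

def significance_filter(weights, alpha=0.1):
    """Keep edges whose weight is globally significant: empirical
    p-value = fraction of all weights at least as large as this edge's."""
    w = np.asarray(weights, dtype=float)
    pvals = np.array([(w >= wi).mean() for wi in w])
    return pvals <= alpha

# 90 ordinary edges and 10 heavy edges
weights = [1.0] * 90 + [10.0] * 10
mask = significance_filter(weights, alpha=0.1)
```

Unlike a hard weight threshold, a null-model approach lets edges of modest weight survive when they are unlikely under the null, which is how GloSS preserves multiscale structure.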
Information Technology in Complex Health Services
Southon, Frank Charles Gray; Sauer, Chris; Dampney, Christopher Noel Grant (Kit)
1997-01-01
Abstract Objective: To identify impediments to the successful transfer and implementation of packaged information systems through large, divisionalized health services. Design: A case analysis of the failure of an implementation of a critical application in the Public Health System of the State of New South Wales, Australia, was carried out. This application had been proven in the United States environment. Measurements: Interviews involving over 60 staff at all levels of the service were undertaken by a team of three. The interviews were recorded and analyzed for key themes, and the results were shared and compared to enable a continuing critical assessment. Results: Two components of the transfer of the system were considered: the transfer from a different environment, and the diffusion throughout a large, divisionalized organization. The analyses were based on the Scott-Morton organizational fit framework. In relation to the first, it was found that there was a lack of fit in the business environments and strategies, organizational structures and strategy-structure pairing as well as the management process-roles pairing. The diffusion process experienced problems because of the lack of fit in the strategy-structure, strategy-structure-management processes, and strategy-structure-role relationships. Conclusion: The large-scale developments of integrated health services present great challenges to the efficient and reliable implementation of information technology, especially in large, divisionalized organizations. There is a need to take a more sophisticated approach to understanding the complexities of organizational factors than has traditionally been the case. PMID:9067877
Measurement and Information Extraction in Complex Dynamics Quantum Computation
NASA Astrophysics Data System (ADS)
Casati, Giulio; Montangero, Simone
Quantum information processing has several different applications: some of them can be performed controlling only a few qubits simultaneously (e.g. quantum teleportation or quantum cryptography) [1]. Usually, the transmission of a large amount of information is performed by repeating several times the scheme implemented for a few qubits. However, to exploit the advantages of quantum computation, the simultaneous control of many qubits is unavoidable [2]. This situation increases the experimental difficulties of quantum computing: maintaining quantum coherence in a large quantum system is a difficult task. Indeed a quantum computer is a many-body complex system, and decoherence, due to the interaction with the external world, will eventually corrupt any quantum computation. Moreover, internal static imperfections can lead to quantum chaos in the quantum register, thus destroying computer operability [3]. Indeed, as has been shown in [4], a critical imperfection strength exists above which the quantum register thermalizes and quantum computation becomes impossible. We showed such effects on a quantum computer performing an efficient algorithm to simulate complex quantum dynamics [5,6].
Zhang, Jingdan; Jiang, Wuhan; Wang, Ruichun; Wang, Le
2014-09-01
In brain MR images, noise and low contrast significantly deteriorate segmentation results. In this paper, we propose an automatic unsupervised segmentation method integrating the dual-tree complex wavelet transform (DT-CWT) with the K-means algorithm for brain MR images. Firstly, a multi-dimensional feature vector is constructed based on the intensity, the low-frequency subband of the DT-CWT and spatial position information. Then, a spatially constrained K-means algorithm is presented as the segmentation system. The proposed method is validated by extensive experiments using both simulated and real T1-weighted MR images, and compared with state-of-the-art algorithms. PMID:24994513
NASA Astrophysics Data System (ADS)
Sahu, Swagatika; Mohanty, Saumendra; Srivastav, Richa
2013-01-01
Orthogonal Frequency Division Multiplexing (OFDM) is an emerging multi-carrier modulation scheme, which has been adopted for several wireless standards such as IEEE 802.11a and HiperLAN2. A well-known problem of OFDM is its sensitivity to frequency offset between the transmitted and received carrier frequencies. In an OFDM system, carrier frequency offsets (CFOs) between the transmitter and the receiver destroy the orthogonality between carriers and degrade system performance significantly; the main problem with frequency offset is that it introduces interference among the multiplicity of carriers in the OFDM signal. The conventional algorithms given by P. Moose and Schmidl describe how the carrier frequency offset of an OFDM system can be estimated using training sequences. Simulation results show that the improved carrier frequency offset estimation algorithm, which uses a complex training sequence for frequency offset estimation, performs better than the conventional Moose and Schmidl algorithms: it effectively improves the frequency estimation accuracy and provides a wide acquisition range for the carrier frequency offset with low complexity. This paper presents BER comparisons of the different algorithms against the improved algorithm for various real and complex modulation schemes, considering random carrier offsets, and also reports BER performance at different CFOs for the improved algorithm.
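A Moose-style estimator recovers the CFO from the phase rotation between two identical training halves. The sketch below is the textbook version, not the improved algorithm of the paper; note its acquisition range is limited to |f0| < 1/(2N) cycles per sample, which is the limitation wider-range schemes address:

```python
import numpy as np

def estimate_cfo(rx: np.ndarray, N: int) -> float:
    """Moose-style CFO estimate (cycles/sample) from two repeated length-N training halves."""
    r1, r2 = rx[:N], rx[N:2 * N]
    # A CFO of f0 rotates the second half by exp(j*2*pi*f0*N) relative to the first.
    return np.angle(np.sum(np.conj(r1) * r2)) / (2 * np.pi * N)

# synthetic check: QPSK-like training half repeated twice, known offset applied
rng = np.random.default_rng(0)
N = 64
train = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, N))
f0 = 0.001                                # true offset in cycles per sample (assumed value)
n = np.arange(2 * N)
rx = np.tile(train, 2) * np.exp(2j * np.pi * f0 * n)
est = estimate_cfo(rx, N)
```

With no noise the estimate is exact up to floating-point error; in practice the accuracy degrades with SNR, which is what the BER comparisons in the paper quantify.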
A novel algorithm for detecting protein complexes with the breadth first search.
Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi
2014-01-01
Most biological processes are carried out by protein complexes. A substantial number of false positives in protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliability (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on weighted density, ClusterBFS detects protein complexes in the weighted network by breadth-first search, starting from a given seed protein. The experimental results show that ClusterBFS performs significantly better than other computational approaches in terms of the identification of protein complexes. PMID:24818139
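A simplified sketch of seed-based growth in the spirit of ClusterBFS: expand a cluster outward by breadth-first search, admitting a neighbour only if the cluster's weighted density stays above a threshold. The density definition and threshold here are illustrative assumptions, not the published criteria:

```python
from collections import deque

def weighted_density(nodes, wgraph):
    """Sum of internal edge weights, normalized by the number of possible pairs."""
    n = len(nodes)
    if n < 2:
        return 0.0
    w = sum(wgraph[u][v] for u in nodes for v in wgraph[u] if v in nodes) / 2
    return 2 * w / (n * (n - 1))

def cluster_bfs_like(seed, wgraph, min_density=0.5):
    """Grow a cluster from a seed protein by BFS, keeping weighted density high."""
    cluster = {seed}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in wgraph[u]:
            if v not in cluster and weighted_density(cluster | {v}, wgraph) >= min_density:
                cluster.add(v)
                queue.append(v)
    return cluster

# toy weighted PPI network: a confident triangle plus one weakly attached protein
wg = {
    "a": {"b": 0.9, "c": 0.9},
    "b": {"a": 0.9, "c": 0.9},
    "c": {"a": 0.9, "b": 0.9, "d": 0.1},
    "d": {"c": 0.1},
}
complex_found = cluster_bfs_like("a", wg)
```

The weakly supported protein "d" is excluded because admitting it would drop the cluster's weighted density below the threshold, which is how false-positive interactions are kept out.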
On Distribution Reduction and Algorithm Implementation in Inconsistent Ordered Information Systems
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems. PMID:25258721
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling with complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think-alouds.
Combining algorithms in automatic detection of QRS complexes in ECG signals.
Meyer, Carsten; Fernández Gavela, José; Harris, Matthew
2006-07-01
QRS complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that balance the contributions of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
Zaneveld, Jesse R. R.; Thurber, Rebecca L. V.
2014-01-01
Complex symbioses between animal or plant hosts and their associated microbiotas can involve thousands of species and millions of genes. Because of the number of interacting partners, it is often impractical to study all organisms or genes in these host-microbe symbioses individually. Yet new phylogenetic predictive methods can use the wealth of accumulated data on diverse model organisms to make inferences into the properties of less well-studied species and gene families. Predictive functional profiling methods use evolutionary models based on the properties of studied relatives to put bounds on the likely characteristics of an organism or gene that has not yet been studied in detail. These techniques have been applied to predict diverse features of host-associated microbial communities ranging from the enzymatic function of uncharacterized genes to the gene content of uncultured microorganisms. We consider these phylogenetically informed predictive techniques from disparate fields as examples of a general class of algorithms for Hidden State Prediction (HSP), and argue that HSP methods have broad value in predicting organismal traits in a variety of contexts, including the study of complex host-microbe symbioses. PMID:25202302
Information-driven modeling of protein-peptide complexes.
Trellet, Mikael; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2015-01-01
Despite their biological importance in many regulatory processes, protein-peptide recognition mechanisms are difficult to study experimentally at the structural level because of the inherent flexibility of peptides and the often transient interactions on which they rely. Complementary methods like biomolecular docking are therefore required. The prediction of the three-dimensional structure of protein-peptide complexes raises unique challenges for computational algorithms, as exemplified by the recent introduction of protein-peptide targets in the blind international experiment CAPRI (Critical Assessment of PRedicted Interactions). Conventional protein-protein docking approaches often struggle with the high flexibility of peptides, whose short sizes impede protocols and scoring functions developed for larger interfaces. On the other hand, protein-small-ligand docking methods are unable to cope with the larger number of degrees of freedom in peptides compared to small molecules and the typically reduced information available to define the binding site. In this chapter, we describe a protocol to model protein-peptide complexes using the HADDOCK web server, working through a test case to illustrate every step. The flexibility challenge that peptides represent is dealt with by combining elements of the conformational selection and induced fit theories of molecular recognition. PMID:25555727
A fast D.F.T. algorithm using complex integer transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
Winograd (1976) has developed a new class of algorithms which depend heavily on the computation of a cyclic convolution for computing the conventional DFT (discrete Fourier transform); this new algorithm, for a few hundred transform points, requires substantially fewer multiplications than the conventional FFT algorithm. Reed and Truong have defined a special class of finite Fourier-like transforms over GF(q^2), where q = 2^p - 1 is a Mersenne prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 61. In the present paper it is shown that Winograd's algorithm can be combined with the aforementioned Fourier-like transform to yield a new algorithm for computing the DFT. A fast method for accurately computing the DFT of a sequence of complex numbers of very long transform lengths is thus obtained.
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Novel algorithm by low complexity filter on retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Rostampour, Samad
2011-10-01
This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research was done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background, giving good contrast between vessels and background. The complexity is very low and extraneous image content is eliminated. The second phase, processing, uses a Bayesian method, a supervised classification method that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. The method achieves an average accuracy of 95 percent. The method was also applied to a retinopathy sample from outside the DRIVE database, and a correct result was obtained.
Wan, Xinwang; Liang, Juan
2013-01-01
This article introduces a biologically inspired localization algorithm using two microphones, for a mobile robot. The proposed algorithm has two steps. First, the coarse azimuth angle of the sound source is estimated by cross-correlation algorithm based on interaural time difference. Then, the accurate azimuth angle is obtained by cross-channel algorithm based on head-related impulse responses. The proposed algorithm has lower computational complexity compared to the cross-channel algorithm. Experimental results illustrate that the localization performance of the proposed algorithm is better than those of the cross-correlation and cross-channel algorithms. PMID:23298016
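The coarse step described above amounts to locating the cross-correlation peak between the two microphone channels to recover the interaural time difference. A minimal sketch on synthetic delayed noise (not HRIR data; the lag convention follows NumPy's `correlate`):

```python
import numpy as np

def estimate_delay(left: np.ndarray, right: np.ndarray) -> int:
    """Samples by which the right channel lags the left, via the cross-correlation peak.
    The ITD in seconds is this lag divided by the sampling rate; the azimuth then
    follows from arcsin(ITD * sound_speed / mic_spacing)."""
    corr = np.correlate(left, right, mode="full")
    return (len(right) - 1) - int(np.argmax(corr))

# synthetic two-channel capture: right channel is the left delayed by 7 samples
rng = np.random.default_rng(2)
d = 7
left = rng.normal(size=200)
right = np.concatenate([np.zeros(d), left[:-d]])
lag = estimate_delay(left, right)
```

Broadband (noise-like) signals give a sharp correlation peak; narrowband signals make the peak ambiguous, which is one reason the paper refines this coarse estimate with a second, HRIR-based stage.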
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
Renyi complexities and information planes: Atomic structure in conjugated spaces
NASA Astrophysics Data System (ADS)
Antolín, J.; López-Rosa, S.; Angulo, J. C.
2009-05-01
Generalized Renyi complexity measures are defined and numerically analyzed for atomic one-particle densities in both conjugated spaces. These complexities provide, as particular cases, the previously known statistical and Fisher-Shannon complexities. The generalized complexities provide information on the atomic shell structure and shell-filling patterns, allowing different regions of the electronic cloud to be weighted appropriately.
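For a discrete density, a Rényi entropy of order alpha and a generalized complexity built as the exponential of an entropy difference can be sketched as follows. This follows the common LMC-style product form with two Rényi orders; the precise definition used in the paper may differ:

```python
import numpy as np

def renyi_entropy(p, alpha: float) -> float:
    """Renyi entropy of order alpha for a discrete probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))            # Shannon limit as alpha -> 1
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_complexity(p, a: float, b: float) -> float:
    """Generalized complexity: exp of the entropy difference between orders a < b.
    Equals 1 for the uniform distribution, and exceeds 1 for structured densities."""
    return np.exp(renyi_entropy(p, a) - renyi_entropy(p, b))
```

Because Rényi entropy is non-increasing in the order, this quantity is at least 1, and larger values signal structure (for atoms, the shell-filling patterns the abstract describes).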
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.
An Introduction to Genetic Algorithms and to Their Use in Information Retrieval.
ERIC Educational Resources Information Center
Jones, Gareth; And Others
1994-01-01
Genetic algorithms, a class of nondeterministic algorithms in which the role of chance makes the precise nature of a solution impossible to guarantee, seem to be well suited to combinatorial-optimization problems in information retrieval. Provides an introduction to techniques and characteristics of genetic algorithms and illustrates their…
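The operator cycle such introductions describe (selection, crossover, mutation) can be sketched as a minimal generational GA on the classic one-max problem. All parameter values here are illustrative assumptions, not recommendations from the article:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        def tournament():
            a, b = rng.sample(scored, 2)      # size-2 tournament
            return (a if a[0] >= b[0] else b)[1]
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)  # one-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1           # bit-flip mutation
                next_pop.append(child)
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # one-max: maximize the number of 1 bits
```

Because chance drives selection and mutation, different seeds give different (near-optimal) solutions, which is exactly the nondeterminism the abstract points out.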
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2016-02-01
To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.
NASA Astrophysics Data System (ADS)
Chen, Lei; Li, Dehua; Yang, Jie
2007-12-01
Constructing a virtual international strategy environment requires many kinds of information, covering economics, politics, military affairs, diplomacy, culture, science, etc. It is therefore very important to build a high-efficiency system for automatic information extraction, classification, recombination and analysis as the foundation and a component of a military strategy hall. This paper first uses an improved Boost algorithm to classify the obtained initial information, and then applies a strategy intelligence extraction algorithm to extract strategic intelligence from the initial information, helping strategists to analyze it.
Do the Visual Complexity Algorithms Match the Generalization Process in Geographical Displays?
NASA Astrophysics Data System (ADS)
Brychtová, A.; Çöltekin, A.; Pászto, V.
2016-06-01
In this study, we first develop the hypothesis that existing quantitative visual complexity measures will overall reflect the level of cartographic generalization, and we test this hypothesis. Specifically, we first selected common geovisualization types (i.e., cartographic maps, hybrid maps, satellite images and shaded relief maps) and retrieved examples as provided by Google Maps, OpenStreetMap and SchweizMobil by swisstopo. The selected geovisualizations vary in cartographic design choices, scene contents and levels of generalization. Following this, we applied one of Rosenholtz et al.'s (2007) visual clutter algorithms to obtain quantitative visual complexity scores for screenshots of the selected maps. We hypothesized that visual complexity should be constant across generalization levels; however, the algorithm suggested that the complexity of small-scale (less detailed) displays is higher than that of large-scale (highly detailed) ones. We also observed vast differences in visual complexity among map providers, which we attribute to their varying approaches to cartographic design and the generalization process. Our efforts will contribute towards creating recommendations as to how visual complexity algorithms could be optimized for cartographic products, and eventually utilized as part of the cartographic design process to assess visual complexity.
NASA Astrophysics Data System (ADS)
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-04-01
Quantum steganography can solve some problems that are considered inefficient in image information concealing, and research on quantum image information concealing has been widely exploited in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), an information concealing algorithm for quantum image steganography is presented that clusters uniform image blocks around the least significant Qu-block (LSQB). Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Thirdly, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can secure the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
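For contrast, the classical LSB embedding that the quantum LSQu-block scheme generalizes can be sketched in a few lines. The helper names are hypothetical and grayscale integer pixel values are assumed:

```python
def lsb_embed(pixels, bits):
    """Hide a bit sequence in the least significant bits of pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set the secret bit
    return out

def lsb_extract(pixels, n_bits):
    """Recover the first n_bits hidden bits."""
    return [p & 1 for p in pixels[:n_bits]]
```

Each cover pixel changes by at most one intensity level, which is why LSB embedding is visually imperceptible; the quantum analogue applies unitary transformations to the least significant Qu-block instead.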
Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm
He, Xiaoqi; Zheng, Zizhao; Hu, Chao
2015-01-01
The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the antinoise ability of the algorithm are compared with those of the Levenberg–Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range for initial guess values. PMID:25914561
Fast algorithm for minutiae matching based on multiple-ridge information
NASA Astrophysics Data System (ADS)
Wang, Guoyou; Hu, Jing
2001-09-01
Autonomous real-time fingerprint verification, i.e., judging whether two fingerprints come from the same finger, is an important and difficult problem in AFIS (Automated Fingerprint Identification Systems). In addition to nonlinear deformation, two fingerprints from the same finger may also be dissimilar due to translation or rotation; all these factors increase the dissimilarity and can lead to misjudgment, so the correct verification rate depends strongly on the degree of deformation. In this paper, we present a new, fast, simple algorithm for fingerprint matching, derived from Chang et al.'s method, to solve the problem of finding optimal matches between two fingerprints under nonlinear deformation. The proposed algorithm uses not only the feature points of the fingerprints but also information from multiple ridges to reduce the computational complexity of fingerprint verification. Experiments with a number of fingerprint images have shown that this algorithm is more efficient than existing methods due to the reduced number of searching operations.
Applications of the complexity space to the General Probabilistic Divide and Conquer Algorithms
NASA Astrophysics Data System (ADS)
García-Raffi, L. M.; Romaguera, S.; Schellekens, M. P.
2008-12-01
Schellekens [M. Schellekens, The Smyth completion: A common foundation for denotational semantics and complexity analysis, in: Proc. MFPS 11, in: Electron. Notes Theor. Comput. Sci., vol. 1, 1995, pp. 535-556], and Romaguera and Schellekens [S. Romaguera, M. Schellekens, Quasi-metric properties of complexity spaces, Topology Appl. 98 (1999) 311-322] introduced a topological foundation to obtain complexity results through the application of semantic techniques to Divide and Conquer Algorithms. This involved the fact that the complexity (quasi-metric) space is Smyth complete and the use of a version of the Banach fixed point theorem and improver functionals. To further bridge the gap between Semantics and Complexity, we show here that these techniques of analysis, based on the theory of complexity spaces, extend to the General Probabilistic Divide and Conquer schema discussed by Flajolet [P. Flajolet, Analytic analysis of algorithms, in: W. Kuich (Ed.), 19th Internat. Colloq. ICALP'92, Vienna, July 1992; Automata, Languages and Programming, in: Lecture Notes in Comput. Sci., vol. 623, 1992, pp. 186-210]. In particular, we obtain a general method which is useful to show that for several recurrence equations based on the recursive structure of General Probabilistic Divide and Conquer Algorithms, the associated functionals have a unique fixed point which is the solution for the corresponding recurrence equation.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data-driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, which communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
A Simple Algorithm to Enforce Dirichlet Boundary Conditions in Complex Geometries
NASA Astrophysics Data System (ADS)
Huber, Christian; Dufek, Josef; Chopard, Bastien
We present a new algorithm to implement Dirichlet boundary conditions for diffusive processes in arbitrarily complex geometries. In this approach, the boundary conditions around the diffusing object are replaced by a fictitious phase transition of a pure substance, where the energy cost of the phase transition largely overwhelms the amount of energy stored in the system. The computing cost of this treatment of the boundary condition is independent of the topology of the boundary. Moreover, the implementation of this new approach is straightforward and follows naturally from enthalpy-based numerical methods. This algorithm is compatible with a wide variety of discretization methods: finite differences, finite volume, lattice Boltzmann methods and finite elements, to cite a few. We show here, using both lattice Boltzmann and finite-volume methods, that our model is in excellent agreement with analytical solutions for high-symmetry geometries. We also illustrate the advantages of the algorithm in handling more complex geometries.
Quantifying networks complexity from information geometry viewpoint
Felice, Domenico; Mancini, Stefano; Pettini, Marco
2014-04-15
We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Next, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of networks complexity. We prove that it is invariant under networks isomorphism. Moreover, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.
Can complexity science inform physician leadership development?
Grady, Colleen Marie
2016-07-01
Purpose The purpose of this paper is to describe research that examined physician leadership development using complexity science principles. Design/methodology/approach Intensive interviewing of 21 participants and document review provided data regarding physician leadership development in health-care organizations using five principles of complexity science (connectivity, interdependence, feedback, exploration-of-the-space-of-possibilities and co-evolution), which were grouped in three areas of inquiry (relationships between agents, patterns of behaviour and enabling functions). Findings Physician leaders are viewed as critical in the transformation of healthcare and in improving patient outcomes, and yet significant challenges exist that limit their development. Leadership in health care continues to be associated with traditional, linear models, which are incongruent with the behaviour of a complex system, such as health care. Physician leadership development remains a low priority for most health-care organizations, although physicians admit to being limited in their capacity to lead. This research was based on five principles of complexity science and used grounded theory methodology to understand how the behaviours of a complex system can provide data regarding leadership development for physicians. The study demonstrated that there is a strong association between physician leadership and patient outcomes and that organizations play a primary role in supporting the development of physician leaders. Findings indicate that a physician's relationship with their patient and their capacity for innovation can be extended as catalytic behaviours in a complex system. The findings also identified limiting factors that impact physicians who choose to lead, such as reimbursement models that do not place value on leadership and medical education that provides minimal opportunity for leadership skill development. Practical Implications This research provides practical
Star pattern recognition algorithm aided by inertial information
NASA Astrophysics Data System (ADS)
Liu, Bao; Wang, Ke-dong; Zhang, Chao
2011-08-01
Star pattern recognition is one of the key problems of celestial navigation. The traditional star pattern recognition approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is slow and whose recognition success rate is not high. Therefore, the real-time performance and reliability of a CNS (Celestial Navigation System) are reduced to some extent, especially for a maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by another navigation system such as an INS (Inertial Navigation System), star pattern recognition can be performed in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include at least improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with INS aiding. Then, the local star catalog for star pattern recognition is established dynamically in real time, and the star images extracted in the camera plane are matched in the local sky. Compared to the traditional all-sky star pattern recognition algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.
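The star angular distance mentioned as a traditional approach reduces to the angle between catalog unit direction vectors. A brute-force all-sky pair match, the very search that INS aiding is meant to shrink to a local catalog, might look like this (names and tolerance are illustrative):

```python
import math

def angular_distance(v1, v2):
    """Angle in radians between two star direction unit vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp rounding noise

def match_pair(observed_angle, catalog, tol=1e-4):
    """Return indices of catalog star pairs whose angular separation
    is within tol of the observed separation (all-sky search)."""
    hits = []
    for i in range(len(catalog)):
        for j in range(i + 1, len(catalog)):
            if abs(angular_distance(catalog[i], catalog[j]) - observed_angle) < tol:
                hits.append((i, j))
    return hits
```

The quadratic pair loop is what makes all-sky matching slow; restricting `catalog` to stars near the INS-predicted optical axis shrinks it dramatically.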
Complex Dynamics in Information Sharing Networks
NASA Astrophysics Data System (ADS)
Cronin, Bruce
This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six-year period. The efficiency of such an implementation is a key business problem in IT systems of this type. Data from usage logs provide the basis for analysis of the dynamic evolution of social networks around the repository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power-law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case demonstrating the utility of the broad methodological approach.
HKC: an algorithm to predict protein complexes in protein-protein interaction networks.
Wang, Xiaomin; Wang, Zhengzhi; Ye, Jun
2011-01-01
With the availability of more and more genome-scale protein-protein interaction (PPI) networks, research interest has gradually shifted to the systematic analysis of these large data sets. A key topic is to predict protein complexes in PPI networks by identifying clusters that are densely connected within themselves but sparsely connected with the rest of the network. In this paper, we present a new topology-based algorithm, HKC, to detect protein complexes in genome-scale PPI networks. HKC mainly uses the concepts of highest k-core and cohesion to predict protein complexes by identifying overlapping clusters. The experiments on two data sets and two benchmarks show that our algorithm has a relatively high F-measure and exhibits better performance compared with some other methods. PMID:22174556
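The highest k-core concept underlying HKC can be illustrated with the standard peeling procedure: repeatedly delete vertices of degree below k until none remain. This is a generic sketch, not the authors' full cohesion-based method:

```python
def k_core(adj, k):
    """Largest subgraph in which every vertex has degree >= k.

    adj: dict mapping vertex -> set of neighbours.
    """
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if len(adj[v] & alive) < k:  # degree within surviving subgraph
                alive.discard(v)
                changed = True
    return alive

def highest_k_core(adj):
    """Largest k for which the k-core is non-empty, with that core."""
    k = 0
    while k_core(adj, k + 1):
        k += 1
    return k, k_core(adj, k)
```

Dense regions of a PPI network survive peeling at high k, which is why the highest k-core is a natural seed for complex prediction.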
Pitschner, H F; Berkowitsch, A
2001-01-01
Symbolic dynamics, as a nonlinear method, and computation of the normalized algorithmic complexity (Cα) were applied to basket-catheter mapping of atrial fibrillation (AF) in the right human atrium. The resulting degrees of organisation of AF were compared with the conventional Wells classification. The short-time temporal and spatial distribution of Cα during AF, and the effects of propafenone on this distribution, were investigated in 30 patients. Cα was calculated for a moving window and analyzed within 10 minutes before and after administration of propafenone. The inter-regional Cα distribution was statistically analyzed. Inter-regional Cα differences were found in all patients (p < 0.001). The right atrium could be divided into high- and low-complexity areas according to individual patterns. A significant Cα increase in the cranio-caudal direction was confirmed inter-individually (p < 0.01). The administration of propafenone enlarged the areas of low complexity. PMID:11889958
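Normalized algorithmic complexity of a symbol sequence is commonly estimated through a Lempel-Ziv parsing. The sketch below is one standard formulation (phrase counting with normalization by the asymptotic value n / log_k(n)) and may differ in detail from the Cα definition used in the study:

```python
import math

def lempel_ziv_complexity(s):
    """Number of phrases in an LZ76-style parsing of string s.

    Each phrase is extended while it already occurs in the preceding
    text (overlap allowed), then terminated at the first new symbol.
    """
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def normalized_complexity(s, alphabet_size=2):
    """LZ phrase count divided by its asymptotic value n / log_k(n)."""
    n = len(s)
    return lempel_ziv_complexity(s) * math.log(n, alphabet_size) / n
```

Regular sequences (e.g. organized atrial activation) parse into few phrases and score near 0, while disorganized sequences approach 1, which is the sense in which such a measure grades the organization of AF electrograms.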
Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani
2015-03-01
In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm's convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison was carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two-Tank System (CTTS). PMID:25442399
Face detection in complex background based on Adaboost algorithm and YCbCr skin color model
NASA Astrophysics Data System (ADS)
Ge, Wei; Han, Chunling; Quan, Wei
2015-12-01
Face detection is a fundamental and important research theme in the topics of Pattern Recognition and Computer Vision, and remarkable results have been achieved. Among the available methods, statistics-based methods hold a dominant position. In this paper, an Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin-model detection with Adaboost is investigated: the skin detection is used to validate the detection results obtained by the Adaboost algorithm, which overcomes Adaboost's false-detection problem. Experimental results show that nearly all non-face areas are removed and the detection rate is improved.
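The YCbCr skin test used to validate Adaboost detections can be sketched as a Cb/Cr box threshold. The threshold values below are common illustrative choices from the skin-detection literature, not the paper's calibrated values:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Box test in the Cb/Cr plane; luminance Y is deliberately
    ignored, which gives some robustness to lighting changes."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

A candidate face window from the Adaboost cascade would be accepted only if a sufficient fraction of its pixels pass `is_skin`, filtering out skin-free false positives.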
Li, Peng; He, Tingting; Hu, Xiaohua; Zhao, Junmin; Shen, Xianjun; Zhang, Ming; Wang, Yan
2014-06-01
A novel algorithm based on Connected Affinity Clique Extension (CACE) for mining overlapping functional modules in protein interaction networks is proposed in this paper. In this approach, the value of the protein connected affinity, which is inferred from protein complexes, is interpreted as the reliability and likelihood of interaction. The protein interaction network is constructed as a weighted graph, with the weight dependent on the connected affinity coefficient. The experimental results of our CACE on two test data sets and two benchmarks show that CACE can detect functional modules much more effectively and accurately when compared with the state-of-the-art algorithms CPM and IPC-MCE. PMID:24803142
Chiou, Yih-Shyh; Tsai, Fuan
2014-06-01
This paper presents a low-complexity and high-accuracy algorithm to reduce the computational load of the traditional data-fusion algorithm with heterogeneous observations for location tracking. For the location-estimation technique with the data fusion of radio-based ranging measurement and speed-based sensing measurement, the proposed tracking scheme, based on the Bayesian filtering concept, is handled by a state space model. The location tracking problem is divided into many mutual-interaction local constraints with the inherent message-passing features of factor graphs. During each iteration cycle, the messages with reliable information are passed efficiently between the prediction phase and the correction phase to simplify the data-fusion implementation for tracking the location of the mobile terminal. Numerical simulations show that the proposed forward and one-step backward refining tracking approach that combines radio ranging with speed sensing measurements for data fusion not only can achieve an accurate location close to that of the traditional Kalman filtering data-fusion algorithm, but also has much lower computational complexity. PMID:24013831
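The predict/correct cycle that the factor-graph scheme approximates can be illustrated with a single scalar Kalman filter step. This is a textbook sketch with assumed noise variances and a random-walk state model, not the paper's data-fusion algorithm:

```python
def kalman_step(x, p, z, q=0.1, r=1.0):
    """One predict/correct cycle of a scalar Kalman filter.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (assumed values)
    """
    # prediction phase: random-walk model, state unchanged, uncertainty grows
    x_pred, p_pred = x, p + q
    # correction phase: blend prediction and measurement via the Kalman gain
    k = p_pred / (p_pred + r)          # gain in [0, 1]
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Iterating this over incoming range measurements converges to the true position; the paper's contribution is obtaining comparable accuracy while replacing the matrix operations of the full filter with cheaper local message passing.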
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best of the investigated low-complexity algorithms, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
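Soft demapping computes one LLR per bit of the received complex symbol. A max-log sketch for Gray-mapped QPSK illustrates the principle (the paper targets higher-order QAM and a hardware architecture; constellation labeling and scaling here are illustrative assumptions):

```python
import math

# Gray-mapped QPSK: bit pair (b0, b1) -> unit-energy constellation point
A = 1 / math.sqrt(2)
CONSTELLATION = {
    (0, 0): complex( A,  A), (0, 1): complex( A, -A),
    (1, 0): complex(-A,  A), (1, 1): complex(-A, -A),
}

def max_log_llr(r, noise_var):
    """Max-log LLR per bit for received symbol r.

    LLR_i = (min over bit_i=1 of |r - s|^2
             - min over bit_i=0 of |r - s|^2) / noise_var,
    so positive values favour bit value 0.
    """
    llrs = []
    for i in range(2):
        d0 = min(abs(r - s) ** 2 for bits, s in CONSTELLATION.items() if bits[i] == 0)
        d1 = min(abs(r - s) ** 2 for bits, s in CONSTELLATION.items() if bits[i] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs
```

The max-log approximation replaces a log-sum-exp over all constellation points with two nearest-point searches per bit, which is the kind of complexity reduction the surveyed algorithms exploit in hardware.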
Lee, S. H.; van der Werf, J. H. J.
2016-01-01
Summary: We have developed an algorithm for genetic analysis of complex traits using genome-wide SNPs in a linear mixed model framework. Compared to current standard REML software based on the mixed model equation, our method is substantially faster. The advantage is largest when there is only a single genetic covariance structure. The method is particularly useful for multivariate analysis, including multi-trait models and random regression models for studying reaction norms. We applied our proposed method to publicly available mice and human data and discuss the advantages and limitations. Availability and implementation: MTG2 is available in https://sites.google.com/site/honglee0707/mtg2. Contact: hong.lee@une.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755623
Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.
Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S
2013-01-01
The use of Geographic Information Systems has increased considerably since the eighties and nineties, and shortest-path search is one of their most demanding applications. Several studies of shortest-path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest-path search algorithms, but it is not well suited for shortest-path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest-path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest-path search algorithm for reduced graphs. It shows that the cost of the path found by this approach is equal to the cost of the path found using Dijkstra's algorithm on the original graph. The results of finding the shortest path by applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms. PMID:24010024
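For reference, the classic Dijkstra algorithm that the article modifies can be written with a binary heap as follows; the reduced-graph preprocessing itself is not shown:

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra shortest-path search with a binary heap.

    graph: dict mapping node -> list of (neighbour, weight) pairs,
    with non-negative weights. Returns shortest distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The O((V + E) log V) cost grows quickly on road-network-sized graphs, which motivates both A*'s heuristic pruning and the article's reduced-graph alternative.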
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
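The abstract does not give the exact consensus measure, but one simple reading of "a score for each symbol based on agreement across pattern alignments" can be sketched as follows. This is illustrative only, not the authors' formula:

```python
def consensus_scores(text, pattern):
    """Per-symbol scores: for each text position, count the pattern
    alignments covering it in which the aligned symbols agree.
    High-scoring runs suggest approximate occurrences of the pattern."""
    n, m = len(text), len(pattern)
    scores = [0] * n
    for start in range(n - m + 1):        # every alignment of the pattern
        for j in range(m):
            if text[start + j] == pattern[j]:
                scores[start + j] += 1
    return scores

text = "xxabcbxabdxx"
print(consensus_scores(text, "abc"))  # [0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
```

A thresholding step on the score array would then mark candidate match regions, analogous to marking QRS complexes in an ECG trace.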
NASA Technical Reports Server (NTRS)
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
A Survey of Stemming Algorithms in Information Retrieval
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angélica; Imbert, Ricardo; Ramírez, Jaime
2014-01-01
Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…
Infrared image non-rigid registration based on regional information entropy demons algorithm
NASA Astrophysics Data System (ADS)
Lu, Chaoliang; Ma, Lihua; Yu, Ming; Cui, Shumin; Wu, Qingrong
2015-02-01
Infrared imaging fault detection, an ideal non-contact, non-destructive testing method, is applied to circuit board fault detection. Infrared images obtained by a handheld infrared camera with a wide-angle lens exhibit both rigid and non-rigid deformations. To solve this problem, a new demons algorithm based on regional information entropy is proposed. The new method overcomes the shortcoming of the traditional demons algorithm of being sensitive to intensity. First, an information entropy image is obtained by computing the regional information entropy of the image. Then, the deformation between the two images is calculated in the same way as in the demons algorithm. Experimental results demonstrate that the proposed algorithm is more robust than the traditional demons algorithm when registering images with inconsistent intensities. Achieving accurate registration between intensity-inconsistent infrared images provides strong support for temperature contrast analysis.
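The first step the authors describe, turning an intensity image into a regional information entropy image, can be sketched directly. Window radius and bin count below are assumed values, and the demons iteration itself is omitted:

```python
import numpy as np

def regional_entropy(img, radius=2, bins=16):
    """Shannon entropy of the grey-level histogram in a sliding window,
    one value per pixel (border pixels use a clipped window)."""
    # Quantize intensities into a small number of grey levels first.
    q = np.digitize(img, np.linspace(img.min(), img.max(), bins))
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = q[max(0, y - radius):y + radius + 1,
                    max(0, x - radius):x + radius + 1]
            _, counts = np.unique(win, return_counts=True)
            p = counts / counts.sum()
            out[y, x] = -np.sum(p * np.log2(p))
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))
ent = regional_entropy(img)
print(ent.shape)  # (16, 16)
```

Entropy images of the fixed and moving frames would then be fed to a standard demons iteration, making the match less sensitive to absolute intensity differences.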
A novel algorithm for simplification of complex gene classifiers in cancer
Wilson, Raphael A.; Teng, Ling; Bachmeyer, Karen M.; Bissonnette, Mei Lin Z.; Husain, Aliya N.; Parham, David M.; Triche, Timothy J.; Wing, Michele R.; Gastier-Foster, Julie M.; Barr, Frederic G.; Hawkins, Douglas S.; Anderson, James R.; Skapek, Stephen X.; Volchenboum, Samuel L.
2013-01-01
The clinical application of complex molecular classifiers as diagnostic or prognostic tools has been limited by the time and cost needed to apply them to patients. Using an existing fifty-gene expression signature known to separate two molecular subtypes of the pediatric cancer rhabdomyosarcoma, we show that an exhaustive iterative search algorithm can distill this complex classifier down to two or three features with equal discrimination. We validated the two-gene signatures using three separate and distinct data sets, including one that uses degraded RNA extracted from formalin-fixed, paraffin-embedded material. Finally, to demonstrate the generalizability of our algorithm, we applied it to a lung cancer data set to find minimal gene signatures that can distinguish survival. Our approach can easily be generalized and coupled to existing technical platforms to facilitate the discovery of simplified signatures that are ready for routine clinical use. PMID:23913937
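The core idea, exhaustively scoring every small feature combination, can be sketched with a toy two-gene search. The "expression of gene i exceeds gene j" decision rule and the synthetic data are stand-ins, not the paper's actual classifier:

```python
from itertools import combinations
import numpy as np

def best_gene_pairs(X, y, top=3):
    """Exhaustively score every gene pair with the toy rule
    'expression of gene i exceeds that of gene j' (a stand-in for the
    paper's discrimination criterion)."""
    scored = []
    for i, j in combinations(range(X.shape[1]), 2):
        pred = X[:, i] > X[:, j]
        acc = max(np.mean(pred == y), np.mean(pred != y))  # either polarity
        scored.append((acc, i, j))
    scored.sort(reverse=True)
    return scored[:top]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 60).astype(bool)
z = rng.normal(size=60)                   # shared nuisance variation
X = rng.normal(size=(60, 20))
X[:, 4] = z + np.where(y, 1.0, -1.0)      # plant a perfectly
X[:, 5] = z - np.where(y, 1.0, -1.0)      # discriminative pair
print(best_gene_pairs(X, y)[0][1:])  # (4, 5)
```

For fifty genes there are only 1225 pairs and 19600 triples, so the exhaustive search the authors describe is cheap even on degraded clinical samples.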
Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding
NASA Astrophysics Data System (ADS)
Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan
2011-01-01
This paper proposes a low-complexity forward adaptive lossy compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, then estimates and quantizes the gain within each frame in order to enable quantization by a forward adaptive piecewise linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm not only provides a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the algorithm fully satisfies the G.712 standard, since it exceeds the curve defined by the G.712 standard over the whole variance range. Accordingly, we can reasonably expect our algorithm to find practical use in high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.
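The frame-by-frame gain adaptation can be sketched as follows. Note this uses a plain uniform quantizer in place of the paper's optimal piecewise linear compandor, so it only illustrates the forward-adaptive structure; frame length and bit depth are assumed values:

```python
import numpy as np

def frame_adaptive_quantize(x, frame=240, bits=7):
    """Forward-adaptive sketch: per frame, estimate a gain, normalize,
    and quantize uniformly; the gain would be sent as side information."""
    y = np.empty_like(x)
    half_levels = 2 ** bits / 2 - 1
    for s in range(0, len(x), frame):
        f = x[s:s + frame]
        gain = np.max(np.abs(f)) + 1e-12      # crude per-frame gain estimate
        q = np.round((f / gain) * half_levels) / half_levels
        y[s:s + frame] = q * gain
    return y

rng = np.random.default_rng(0)
speech = rng.laplace(scale=0.3, size=2400)    # Laplacian source, like speech
rec = frame_adaptive_quantize(speech)
sqnr_db = 10 * np.log10(np.sum(speech**2) / np.sum((speech - rec)**2))
print(round(sqnr_db, 1))
```

Replacing the uniform step with an optimal compandor matched to the Laplacian density is what buys the extra SQNR the abstract reports.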
ERIC Educational Resources Information Center
Chen, Hsinchun
1995-01-01
Presents an overview of artificial-intelligence-based inductive learning techniques and their use in information science research. Three methods are discussed: the connectionist Hopfield network; the symbolic ID3/ID5R; evolution-based genetic algorithms. The knowledge representations and algorithms of these methods are examined in the context of…
Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.
Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector
2016-03-01
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology. PMID:25761393
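The coding theorem method rests on one small formula: the algorithmic complexity of a string s is approximated by -log2 m(s), where m(s) is the frequency with which short random programs output s. A sketch with invented frequency counts (the real ACSS tables come from enumerating small Turing machines):

```python
from math import log2

def ctm_complexity(freq):
    """Coding-theorem-method estimate: K(s) ≈ -log2 m(s), where m(s) is
    the observed output frequency of s among small random programs."""
    total = sum(freq.values())
    return {s: -log2(c / total) for s, c in freq.items()}

# Hypothetical output counts from enumerating small Turing machines:
freq = {"0101": 400, "0000": 1500, "0110": 100}
k = ctm_complexity(freq)
print(k["0000"] < k["0110"])  # True: the more frequent string is simpler
```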
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit
Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph
2015-09-01
The RISMC approach is developing an advanced set of methodologies and algorithms to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes coupled with stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. First, the space of possible solutions (a.k.a. the issue space, or the response surface) can be sampled only very sparsely, which precludes a full analysis of the impact of uncertainties on the system dynamics. Second, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and, in particular, employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. These algorithms infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how adaptive sampling can be performed using the RISMC toolkit and highlight the advantages over more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform these statistical analyses using both analytical cases and another RISMC code: RELAP-7.
Mutual information image registration based on improved bee evolutionary genetic algorithm
NASA Astrophysics Data System (ADS)
Xu, Gang; Tu, Jingzhi
2009-07-01
In recent years, mutual information has come to be regarded as one of the most effective similarity metrics in image registration. In line with the features of mutual information image registration, the Bee Evolutionary Genetic Algorithm (BEGA), which imitates swarm mating, is chosen for optimizing the registration parameters. In addition, we adaptively set the initial parameters to improve the BEGA. Experimental results show the high precision of the algorithm.
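The fitness function such a genetic algorithm maximizes, mutual information between the two images, can be sketched from a joint grey-level histogram. The bin count is an assumed parameter:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images from their joint grey-level histogram,
    the similarity metric a registration GA would maximize."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal of image a
    py = p.sum(axis=0, keepdims=True)      # marginal of image b
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, 3, axis=1)
# Perfect alignment maximizes MI; misregistration lowers it.
print(mutual_information(img, img) > mutual_information(img, shifted))  # True
```

The optimizer's job is then to search over transform parameters (shifts, rotations) for the settings that maximize this score.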
NASA Technical Reports Server (NTRS)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space-based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, so the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient (in terms of wall clock time) and scalable parallel implementations of the algorithms.
An ant colony based algorithm for overlapping community detection in complex networks
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di
2015-06-01
Community detection is of great importance for understanding the structures and functions of networks. Overlap is a significant feature of networks, and overlapping community detection has attracted increasing attention; many algorithms have been presented to detect overlapping communities. In this paper, we present an ant-colony-based overlapping community detection algorithm that mainly includes ants' location initialization, ants' movement, and post-processing phases. An initialization strategy is designed to identify the initial locations of the ants and to initialize the label list stored in each node. During the movement phase, the ants move according to the transition probability matrix, and a new heuristic information computation approach is defined to measure the similarity between two nodes. Every node keeps a label list, maintained through the cooperation of the ants, until a termination criterion is reached. A post-processing phase is executed on the label lists to obtain the final overlapping community structure. We illustrate the capability of our algorithm with experiments on both synthetic and real-world networks. The results demonstrate that our algorithm performs better at finding overlapping communities and overlapping nodes than state-of-the-art algorithms on both synthetic and real-world datasets.
Measurement of Information-Based Complexity in Listening.
ERIC Educational Resources Information Center
Bishop, Walton B.
When people say that what they hear is "over their heads," they are describing a severe information-based complexity (I-BC) problem. They cannot understand what is said because some of the information needed is missing, contaminated, and/or costly to obtain. Students often face these I-BC problems, and teachers often exacerbate them. Yet listeners…
A novel approach to characterize information radiation in complex networks
NASA Astrophysics Data System (ADS)
Wang, Xiaoyang; Wang, Ying; Zhu, Lin; Li, Chao
2016-06-01
The traditional research on information dissemination is mostly based on virus spreading models, in which information is spread with some probability. This does not match reality very well, because the information we receive is always more or less than what was sent. In order to quantitatively describe variations in the amount of information during the spreading process, this article proposes a safety information radiation model on the basis of communication theory, combined with relevant theories of complex networks. This model comprehensively considers the various factors that influence safety information as it radiates through the network, and introduces concepts from the communication theory perspective, such as the radiation gain function, receiving gain function, information retaining capacity and information second reception capacity, to describe the safety information radiation process between nodes and dynamically investigate the states of network nodes. On a micro level, this article analyzes the influence of various initial conditions and parameters on safety information radiation through simulation of the new model. The simulation reveals that this novel approach can reflect the variation of safety information quantity at each node in the complex network, and that the scale-free network has better "radiation explosive power", while the small-world network has better "radiation staying power". The results also show that it is efficient to improve the overall performance of network security by selecting nodes with high degrees as the information source, refining and simplifying the information, increasing the information second reception capacity and decreasing the noise. In short, this article lays the foundation for further research on the interactions of information and energy between internal components within complex systems.
NASA Astrophysics Data System (ADS)
Yan, Menglong; Blaschke, Thomas; Tang, Hongzhao; Xiao, Chenchao; Sun, Xian; Zhang, Daobing; Fu, Kun
2016-03-01
Airborne laser scanning (ALS) is a technique used to obtain Digital Surface Models (DSM) and Digital Terrain Models (DTM) efficiently, and filtering is the key procedure used to derive a DTM from point clouds. Generating seed points is an initial step in most filtering algorithms, and existing algorithms usually define a regular window size to generate seed points. This may lead to an inadequate density of seed points, and further introduce type I errors, especially in steep terrain and forested areas. In this study, we propose the use of object-based analysis to derive surface complexity information from ALS datasets, which can then be used to improve seed point generation. We assume that an area is complex if it is composed of many small objects, with no buildings within the area. Using these assumptions, we propose and implement a new segmentation algorithm based on a grid index, which we call the Edge and Slope Restricted Region Growing (ESRGG) algorithm. Surface complexity information is obtained by statistical analysis of the number of objects derived by segmentation in each area. Then, for complex areas, a smaller window size is defined to generate seed points. Experimental results show that the proposed algorithm greatly improves the filtering results in complex areas, especially in steep terrain and forested areas.
NASA Astrophysics Data System (ADS)
Khorasanizade, Sh.; Sousa, J. M. M.
2016-03-01
A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). Boundaries are formed in this algorithm with chains of lines obtained from the decomposition of two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed at steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, has been carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithms, namely in cases involving complex geometries and moving bodies.
NASA Astrophysics Data System (ADS)
Schwenk, Kurt; Huber, Felix
2015-10-01
Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
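The classic two-pass CCL algorithm that the FPGA design builds on can be sketched in software. This is the textbook union-find formulation with 4-connectivity, not the authors' hardware pipeline:

```python
import numpy as np

def two_pass_label(binary):
    """Classic two-pass connected-component labeling (4-connectivity)
    with union-find, producing a label mask."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                       # parent[i] = representative of label i
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    nxt = 1
    for y in range(h):                 # pass 1: provisional labels + merges
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            neighbors = [l for l in (up, left) if l]
            if not neighbors:
                parent.append(nxt)
                labels[y, x] = nxt
                nxt += 1
            else:
                labels[y, x] = min(neighbors)
                if len(neighbors) == 2:
                    a, b = find(up), find(left)
                    parent[max(a, b)] = min(a, b)
    for y in range(h):                 # pass 2: resolve equivalences
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [1, 0, 0, 1]], dtype=bool)
print(len(np.unique(two_pass_label(img))) - 1)  # 3 components
```

The data-dependent merge table is exactly the part that resists parallelization, which is why the authors resort to a stop-and-go pipeline on the FPGA.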
An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network
Wang, Zhixiao; Zhao, Ya; Chen, Zhaotong; Niu, Qiang
2014-01-01
Topology potential theory is a new community detection theory on complex network, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore node difference and assume that all nodes have the same mass. This hypothesis leads to inaccuracy of topology potential calculation and then decreases the precision of community detection. Inspired by the idea of PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method can effectively reflect its importance and influence in complex network. The more important the node is, the bigger its mass is. Simulation experiment results showed that, after taking node mass into consideration, the topology potential of node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise. PMID:24600319
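The PageRank-inspired mass calculation can be sketched with a standard power iteration. The damping factor and the coupling into the topology-potential formula are assumptions, since the abstract gives no parameters:

```python
import numpy as np

def pagerank_mass(adj, d=0.85, iters=100):
    """PageRank-style node mass: important nodes receive larger mass,
    which would then feed the topology-potential calculation."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    out[out == 0] = 1                      # avoid division by zero for sinks
    m = np.full(n, 1.0 / n)
    for _ in range(iters):
        m = (1 - d) / n + d * (adj.T @ (m / out))
    return m

# Star graph: hub 0 connected to nodes 1..4 (undirected)
adj = np.zeros((5, 5))
adj[0, 1:] = adj[1:, 0] = 1
mass = pagerank_mass(adj)
print(mass.argmax())  # 0: the hub carries the largest mass
```

Assigning the hub a larger mass raises its topology potential, making it more likely to be picked as a local maximum around which a community forms.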
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
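For the real square case, QR-decomposition by Givens rotations can be sketched directly; the paper's heap-transform formulation and the complex case are not reproduced here:

```python
import numpy as np

def givens_qr(A):
    """QR-decomposition of a real square matrix by Givens rotations:
    zero out subdiagonal entries one at a time with 2x2 plane rotations."""
    R = A.astype(float).copy()
    n = R.shape[0]
    Q = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0:
                continue
            c, s = a / r, b / r
            G = np.eye(n)                 # rotation in the (j, i) plane
            G[[j, i], [j, i]] = c
            G[j, i], G[i, j] = s, -s
            R = G @ R                     # annihilates R[i, j]
            Q = Q @ G.T                   # accumulate the orthogonal factor
    return Q, R

A = np.array([[4.0, 1.0], [3.0, 2.0]])
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))  # True True
```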
Information Center Complex publications and presentations, 1971-1980
Gill, A.B.; Hawthorne, S.W.
1981-08-01
This indexed bibliography lists publications and presentations of the Information Center Complex, Information Division, Oak Ridge National Laboratory, from 1971 through 1980. The 659 entries cover such topics as toxicology, air and water pollution, management and transportation of hazardous wastes, energy resources and conservation, and information science. Publications range in length from 1 page to 3502 pages and include topical reports, books, journal articles, fact sheets, and newsletters. Author, title, and group indexes are provided. Annual updates are planned.
Non-algorithmic access to calendar information in a calendar calculator with autism.
Mottron, L; Lemmens, K; Gagnon, L; Seron, X
2006-02-01
The possible use of a calendar algorithm was assessed in DBC, an autistic "savant" of normal measured intelligence. Testing of all the dates in a year revealed a random distribution of errors. Re-testing DBC on the same dates one year later shows that his errors were not stable across time. Finally, DBC was able to answer "reversed" questions that cannot be solved by a classical algorithm. These findings favor a non-algorithmic retrieval of calendar information. It is proposed that multidirectional, non-hierarchical retrieval of information, and solving problems in a non-algorithmic way, are involved in savant performances. The possible role of a functional rededication of low-level perceptual systems to the processing of symbolic information in savants is discussed. PMID:16453069
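For contrast, the kind of classical calendar algorithm the study argues DBC does not use is Zeller's congruence:

```python
def zeller_weekday(year, month, day):
    """Zeller's congruence for the Gregorian calendar: the classical
    algorithm a rote calendar calculator would apply.
    Returns 0 = Saturday, 1 = Sunday, ..., 6 = Friday."""
    if month < 3:                 # January/February count as months 13/14
        month += 12
        year -= 1
    k, j = year % 100, year // 100
    return (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7

names = ["Saturday", "Sunday", "Monday", "Tuesday",
         "Wednesday", "Thursday", "Friday"]
print(names[zeller_weekday(2006, 2, 1)])  # Wednesday
```

Note the formula runs only in one direction (date to weekday), which is why "reversed" questions answered correctly are evidence against algorithmic retrieval.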
Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity
Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.
2013-01-01
Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of
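One of the better-performing families of measures, Hurst-exponent estimation, can be sketched with the classical rescaled-range (R/S) method. Window sizes here are assumed, and real fMRI use would need the modality-specific signal processing the paper discusses:

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) versus log(window size) over dyadic window sizes."""
    n = len(x)
    sizes = [s for s in (2 ** k for k in range(3, 20)) if min_n <= s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            r = dev.max() - dev.min()           # range
            sd = seg.std()
            if sd > 0:
                rs.append(r / sd)               # rescaled range
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_s, log_rs, 1)[0]      # slope ≈ Hurst exponent

rng = np.random.default_rng(0)
h = hurst_rs(rng.normal(size=4096))             # white noise: H near 0.5
print(0.3 < h < 0.7)  # True
```

Rescaled range was among the poorer performers in the study; the point of the sketch is only the common structure of these estimators, a log-log regression of a fluctuation statistic against scale.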
NASA Astrophysics Data System (ADS)
Jo, Sunhwan; Jiang, Wei
2015-12-01
Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is written in the NAMD Tcl script interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 is within communication-enabled Tcl script built on top of Charm++, thus the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
Research on a non-rigid registration algorithm for DCE-MRI based on mutual information and optical flow
NASA Astrophysics Data System (ADS)
Yu, Shihua; Wang, Rui; Wang, Kaiyu; Xi, Mengmeng; Zheng, Jiashuo; Liu, Hui
2015-07-01
Image matching plays a very important role in medical imaging, and the two image registration methods based on mutual information and on optical flow are both effective. Experimental results show that the two methods have complementary advantages: the method based on mutual information handles overall displacement well, while the method based on optical flow is very sensitive to small deformations. In the breast DCE-MRI images studied in this paper, there is not only overall displacement caused by patient motion, but also non-rigid small deformation caused by respiration. In view of this situation, single-method registration algorithms cannot meet the actual needs of such complex cases. After a comprehensive analysis of the advantages and disadvantages of the two methods, this paper proposes a registration algorithm combining mutual information with the optical flow field. It applies the subtraction image of the reference image and the floating image as the main criterion for evaluating the registration effect, and uses the mutual information between image sequence values as an auxiliary criterion. In tests on example data, the algorithm obtained better accuracy and reliability on breast DCE-MRI image sequences.
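The mutual-information criterion used as the auxiliary measure above can be estimated from a joint intensity histogram. A rough sketch (the binning and names are illustrative, not the paper's implementation):

```python
import math
import random

def mutual_information(img_a, img_b, bins=8, max_val=256):
    """Mutual information (in nats) of two equally sized images given as
    flat lists of integer intensities, from a joint intensity histogram."""
    joint = {}
    for a, b in zip(img_a, img_b):
        key = (a * bins // max_val, b * bins // max_val)
        joint[key] = joint.get(key, 0) + 1
    n = len(img_a)
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    # MI = sum over cells of p(i,j) * log( p(i,j) / (p(i) * p(j)) )
    return sum((c / n) * math.log(c * n / (pa[i] * pb[j]))
               for (i, j), c in joint.items())

rng = random.Random(0)
a = [rng.randrange(256) for _ in range(5000)]
noise = [rng.randrange(256) for _ in range(5000)]
mi_same = mutual_information(a, a)      # high: an image predicts itself
mi_diff = mutual_information(a, noise)  # near zero for independent images
```

Registration then amounts to searching for the transform parameters that maximize this quantity between the reference and floating images.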
Complexity transitions in global algorithms for sparse linear systems over finite fields
NASA Astrophysics Data System (ADS)
Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.
2002-09-01
We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations over a finite Galois field modulo q. Using tools from statistical mechanics, we are able to identify phase transitions in the structure of the solution space and to connect them to changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in the memory and CPU requirements of the algorithm. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
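For prime q, the global algorithm studied here is plain Gauss-Jordan elimination over GF(q), with pivot inverses obtained from Fermat's little theorem. A compact sketch:

```python
def solve_mod_q(A, b, q):
    """Gauss-Jordan elimination over GF(q), q prime: returns one solution
    of A x = b (mod q), or None if the system is inconsistent."""
    n, m = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(m):
        piv = next((i for i in range(r, n) if M[i][c] % q), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], q - 2, q)              # Fermat inverse (q prime)
        M[r] = [v * inv % q for v in M[r]]
        for i in range(n):
            if i != r and M[i][c] % q:
                f = M[i][c]
                M[i] = [(vi - f * vr) % q for vi, vr in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(M[i][m] % q for i in range(r, n)):     # a row reading 0 = nonzero
        return None
    x = [0] * m
    for i, c in enumerate(pivots):
        x[c] = M[i][m]
    return x

# 2x + y = 1 and x + 2y = 0 over GF(5) gives (x, y) = (4, 3)
solution = solve_mod_q([[2, 1], [1, 2]], [1, 0], 5)
```

The memory blow-up discussed in the abstract comes from fill-in: sparse rows become dense as elimination proceeds past the phase boundary.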
2014-01-01
Background Developing suitable methods for the identification of protein complexes remains an active research area. It is important because it allows a better understanding of cellular functions and malfunctions, and consequently leads to more effective cures for diseases. In this context, various computational approaches have been introduced to complement high-throughput experimental methods, which typically involve large datasets, are expensive in terms of time and cost, and are usually subject to spurious interactions. Results In this paper, we propose ProRank+, a method which detects protein complexes in protein interaction networks. The presented approach is mainly based on a ranking algorithm which sorts proteins according to their importance in the interaction network, and a merging procedure which refines the detected complexes in terms of their protein members. ProRank+ was compared to several state-of-the-art approaches in order to show its effectiveness. It was able to detect more protein complexes with higher quality scores. Conclusions The experimental results achieved by ProRank+ show its ability to detect protein complexes in protein interaction networks. Eventually, the method could potentially identify previously undiscovered protein complexes. The datasets and source codes are freely available for academic purposes at http://faculty.uaeu.ac.ae/nzaki/Research.htm. PMID:24944073
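The ranking stage sorts proteins by importance in the interaction network; a PageRank-style power iteration is one way to picture such a ranking (a sketch under that assumption, not the authors' exact algorithm; the toy network is ours):

```python
def rank_proteins(edges, damping=0.85, iters=50):
    """PageRank-style importance scores on an undirected protein
    interaction network given as (u, v) pairs."""
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    nodes = list(neighbors)
    n = len(nodes)
    score = dict.fromkeys(nodes, 1.0 / n)
    for _ in range(iters):
        # each protein redistributes its score evenly over its partners
        score = {v: (1 - damping) / n
                 + damping * sum(score[u] / len(neighbors[u])
                                 for u in neighbors[v])
                 for v in nodes}
    return sorted(nodes, key=score.get, reverse=True)

# a hub protein interacting with every other protein should rank first
edges = [("hub", p) for p in ("a", "b", "c", "d")] + [("a", "b")]
ranking = rank_proteins(edges)  # "hub" comes out on top
```

Highly ranked proteins then seed candidate complexes, which the merging procedure refines.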
Using multiple perspectives to suppress information and complexity
Kelsey, R.L. |; Webster, R.B.; Hartley, R.T.
1998-09-01
Dissemination of battlespace information involves getting information to particular warfighters that is both useful and in a form that facilitates the tasks of those particular warfighters. There are two issues which motivate this problem of dissemination. The first issue deals with disseminating pertinent information to a particular warfighter. This can be thought of as information suppression. The second issue deals with facilitating the use of the information by tailoring the computer interface to the specific tasks of an individual warfighter. This can be thought of as interface complexity suppression. This paper presents a framework for suppressing information using an object-based knowledge representation methodology. This methodology has the ability to represent knowledge and information in multiple perspectives. Information can be suppressed by creating a perspective specific to an individual warfighter. In this way, only the information pertinent and useful to a warfighter is made available to that warfighter. Information is not removed, lost, or changed, but spread among multiple perspectives. Interface complexity is managed in a similar manner. Rather than have one generalized computer interface to access all information, the computer interface can be divided into interface elements. Interface elements can then be selected and arranged into a perspective-specific interface. This is done in a manner to facilitate completion of tasks contained in that perspective. A basic battlespace domain containing ground and air elements and associated warfighters is used to exercise the methodology.
Kim, Jinkwon; Shin, Hangsik
2016-01-01
The purpose of this research is to develop an intuitive and robust realtime QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation or thresholding without complex and computational operations like a wavelet transformation. The QRS detection performance is evaluated by using both an MIT-BIH arrhythmia database and an AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictivity value (PPV) were 99.85% and 99.86%, respectively. According to the database, the SE and PPV were 99.90% and 99.91% in the MIT-BIH database and 99.84% and 99.84% in the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back searching process. PMID:26943949
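The dual amplitude/duration criterion can be illustrated with a toy detector: a short FIR smoothing filter, differencing, an amplitude threshold, and a refractory period enforcing a plausible minimum spacing between detections (a deliberately simplified sketch, not the published algorithm; parameters are illustrative):

```python
def detect_qrs(signal, fs, amp_frac=0.5, min_gap=0.3):
    """Toy dual-criteria QRS detector: moving-average FIR smoothing,
    differencing, then an amplitude threshold plus a refractory period
    that enforces a minimum plausible spacing between detections."""
    k = max(1, int(0.02 * fs))                # ~20 ms smoothing window
    smooth = [sum(signal[max(0, i - k):i + 1]) / (i + 1 - max(0, i - k))
              for i in range(len(signal))]
    slope = [abs(smooth[i] - smooth[i - 1]) for i in range(1, len(smooth))]
    thresh = amp_frac * max(slope)
    beats, last = [], -min_gap * fs
    for i, s in enumerate(slope):
        if s >= thresh and i - last >= min_gap * fs:
            beats.append(i)
            last = i
    return beats

# synthetic ECG: narrow unit spikes once per second, sampled at 250 Hz
fs = 250
sig = [0.0] * (5 * fs)
for b in range(5):
    sig[b * fs + 100] = 1.0
beats = detect_qrs(sig, fs)  # one detection per spike
```

The refractory gap is the "duration" half of the dual criterion: it rejects threshold crossings that come physiologically too soon after the previous beat.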
Development and evaluation of a predictive algorithm for telerobotic task complexity
NASA Technical Reports Server (NTRS)
Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.
1993-01-01
There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
Representing Uncertain Geographical Information with Algorithmic Map Caricatures
NASA Astrophysics Data System (ADS)
Brunsdon, Chris
2016-04-01
A great deal of geographical information - including the results of data analysis - is imprecise in some way. For example, the results of geostatistical interpolation should consist not only of point estimates of the value of some quantity at points in space, but also of confidence intervals or standard errors of these estimates. Similarly, mappings of contour lines derived from such interpolations will also be characterised by uncertainty. However, most computerized cartography tools are designed to provide 'crisp' representations of geographical information, such as sharply drawn lines or clearly delineated areas. In this talk, the use of 'fuzzy' or 'sketchy' cartographic tools will be demonstrated - where maps have a hand-drawn appearance and the degree of 'roughness' and other related characteristics can be used to convey the degree of uncertainty associated with the estimated quantities being mapped. The tools used to do this are available as an R package, which will be described in the talk.
A Lip Extraction Algorithm by Using Color Information Considering Obscurity
NASA Astrophysics Data System (ADS)
Shirasawa, Yoichi; Nishida, Makoto
This paper proposes a method for extracting lip shape and location from sequential facial images by using color information. The proposed method needs no prior information on position or shape. It is also carried out without special conditions such as lipstick or controlled lighting. Psychometric quantities of the metric hue angle, the metric hue difference and the rectangular coordinates, which are defined in the CIE 1976 L*a*b* color space, are used for the extraction. The method employs fuzzy reasoning in order to handle obscurity in the image data, such as shade on the face. The experimental results indicate the effectiveness of the proposed method: the lip position was estimated in 100 percent of the facial image data, and the lip shape was extracted in about 94 percent.
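The psychometric quantities used for the extraction follow directly from the CIE 1976 L*a*b* definitions: the metric hue angle is the angle of the (a*, b*) vector and the metric chroma its length. For example:

```python
import math

def metric_hue_angle(a_star, b_star):
    """CIE 1976 metric hue angle h_ab in degrees, from the rectangular
    coordinates a*, b* of the L*a*b* colour space."""
    return math.degrees(math.atan2(b_star, a_star)) % 360.0

def metric_chroma(a_star, b_star):
    """CIE 1976 metric chroma C*_ab: length of the (a*, b*) vector."""
    return math.hypot(a_star, b_star)

# reddish lip pixels have positive a*, giving a small positive hue angle
h = metric_hue_angle(40.0, 20.0)  # about 26.6 degrees
c = metric_chroma(40.0, 20.0)
```

Fuzzy membership functions over these quantities can then grade how "lip-like" a pixel is even under shading, rather than forcing a hard threshold.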
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
Feature weighted naïve Bayes algorithm for information retrieval of enterprise systems
NASA Astrophysics Data System (ADS)
Wang, Li; Ji, Ping; Qi, Jing; Shan, Siqing; Bi, Zhuming; Deng, Weiguo; Zhang, Naijing
2014-01-01
Automated information retrieval is critical for enterprise information systems to acquire knowledge from the vast amount of data sets. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm due to its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption on the independence of terms is the main reason for unsatisfactory classification in many real-world applications. To overcome the limitations, the dependent factors are considered by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm into the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted, and the comparisons with other classification methods have shown that the proposed method outperforms other existing algorithms in terms of precision and recall rate.
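The idea of folding TF-IDF weights into the naive Bayes likelihood can be sketched as follows (a minimal illustration with made-up documents; the paper's improved TF-IDF additionally incorporates distribution information):

```python
import math
from collections import Counter, defaultdict

def train_weighted_nb(docs, labels):
    """Naive Bayes with TF-IDF term weighting: term counts entering the
    class likelihoods are multiplied by inverse document frequency, so
    frequent but uninformative terms carry less weight."""
    n_docs = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n_docs / df[t]) + 1.0 for t in df}
    class_w = defaultdict(lambda: defaultdict(float))
    for d, y in zip(docs, labels):
        for t, c in Counter(d).items():
            class_w[y][t] += c * idf[t]
    return Counter(labels), class_w, idf, list(df), n_docs

def classify(doc, model):
    prior, class_w, idf, vocab, n_docs = model
    best, best_lp = None, float("-inf")
    for y in prior:
        total = sum(class_w[y].values())
        lp = math.log(prior[y] / n_docs)
        for t in doc:
            w = class_w[y].get(t, 0.0)
            lp += math.log((w + 1.0) / (total + len(vocab)))  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best

docs = [["price", "order", "invoice"], ["invoice", "payment"],
        ["meeting", "agenda"], ["agenda", "minutes", "meeting"]]
labels = ["finance", "finance", "admin", "admin"]
model = train_weighted_nb(docs, labels)
```

Replacing raw counts with IDF-weighted counts leaves the naive Bayes machinery intact while softening the damage done by the independence assumption.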
Manikandan, P; Ramyachitra, D; Banupriya, D
2016-04-15
Proteins exhibit their functional activity by interacting with other proteins to form protein complexes, which play an important role in cellular organization and function. To understand higher-order protein organization, detecting overlapping complexes is an important step towards unveiling the functional and evolutionary mechanisms behind biological networks. Most clustering algorithms consider neither weighted nor overlapping complexes. In this research, a ProRank-based fuzzy algorithm is proposed to find overlapping protein complexes. The fuzzy detection algorithm is incorporated into the ProRank algorithm after the ranking step to find the overlapping communities. The proposed algorithm executes in an iterative manner to compute the probability of robust clusters. The proposed and existing algorithms were tested on different datasets, such as the protein-protein interaction datasets PPI-D1, PPI-D2, Collins, DIP, Krogan Core and Krogan Extended; the gene expression datasets GSE7645, GSE22269 and GSE26923; the Meiosis, MAPK and Cell Cycle pathways; and the Yeast Heterogeneous and Yeast Homogeneous phenotype datasets. The experimental results show that the proposed algorithm predicts protein complexes with better accuracy than other state-of-the-art algorithms. PMID:26809099
Deconvolution of complex spectra into components by the bee swarm algorithm
NASA Astrophysics Data System (ADS)
Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh
2016-05-01
The bee swarm algorithm is adapted for solving the problem of deconvolution of complex spectral contours into components. A comparison is carried out between biological concepts relating to the behaviour of bees in a colony and mathematical concepts relating to the quality of the obtained solutions (mean square error, random solutions in each iteration). Model experiments, realized on the example of a signal representing a sum of three Lorentz contours of various intensity and half-width, confirm the efficiency of the offered approach.
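A minimal version of the setup: a sum-of-Lorentzians model, a mean-square-error fitness, and a bees-style search alternating random scouts with neighbourhood refinement of the best site (an illustrative sketch with a single component, not the authors' algorithm):

```python
import random

def lorentzian(x, amp, x0, hw):
    """Single Lorentz contour with amplitude amp, centre x0, half-width hw."""
    return amp * hw ** 2 / ((x - x0) ** 2 + hw ** 2)

def mse(params, xs, ys):
    """Mean square error of a sum-of-Lorentzians model; params is a flat
    list of (amp, x0, hw) triples."""
    err = 0.0
    for x, y in zip(xs, ys):
        model = sum(lorentzian(x, *params[i:i + 3])
                    for i in range(0, len(params), 3))
        err += (model - y) ** 2
    return err / len(xs)

def bee_search(xs, ys, n_components, n_bees=30, iters=200, seed=1):
    """Bees-style search: random scouts explore the parameter space while
    worker bees refine the best site within a shrinking neighbourhood."""
    rng = random.Random(seed)
    dim = 3 * n_components
    scout = lambda: [rng.uniform(0.1, 2.0) for _ in range(dim)]
    best = min((scout() for _ in range(n_bees)), key=lambda p: mse(p, xs, ys))
    for it in range(iters):
        radius = 0.5 * (1 - it / iters)
        for _ in range(n_bees):
            cand = [max(1e-3, g + rng.uniform(-radius, radius)) for g in best]
            if mse(cand, xs, ys) < mse(best, xs, ys):
                best = cand
        fresh = scout()   # occasional fresh scouts keep exploring globally
        if mse(fresh, xs, ys) < mse(best, xs, ys):
            best = fresh
    return best

xs = [i * 0.05 for i in range(40)]
ys = [lorentzian(x, 1.0, 1.0, 0.2) for x in xs]
fit = bee_search(xs, ys, 1)  # recovers roughly (amp, x0, hw) = (1, 1, 0.2)
```

The scouts play the role of exploration in the colony, the shrinking neighbourhood that of exploitation around the best food source.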
Simple algorithm for computing the communication complexity of quantum communication processes
NASA Astrophysics Data System (ADS)
Hansen, A.; Montina, A.; Wolf, S.
2016-04-01
A two-party quantum communication process with classical inputs and outcomes can be simulated by replacing the quantum channel with a classical one. The minimal amount of classical communication required to reproduce the statistics of the quantum process is called its communication complexity. In the case of many instances simulated in parallel, the minimal communication cost per instance is called the asymptotic communication complexity. Previously, we reduced the computation of the asymptotic communication complexity to a convex minimization problem. In most cases, the objective function does not have an explicit analytic form, as the function is defined as the maximum over an infinite set of convex functions. Therefore, the overall problem takes the form of a minimax problem and cannot directly be solved by standard optimization methods. In this paper, we introduce a simple algorithm to compute the asymptotic communication complexity. For some special cases with an analytic objective function one can employ available convex-optimization libraries. In the tested cases our method turned out to be notably faster. Finally, using our method we obtain 1.238 bits as a lower bound on the asymptotic communication complexity of a noiseless quantum channel with the capacity of 1 qubit. This improves the previous bound of 1.208 bits.
A Selective Encryption Algorithm Based on AES for Medical Information
Oh, Ju-Young; Chon, Ki-Hwan
2010-01-01
Objectives The transmission of medical information is currently a daily routine. Medical information needs efficient, robust and secure encryption modes, but cryptography is primarily a computationally intensive process. Towards this direction, we design a selective encryption scheme for critical data transmission. Methods We expand the advanced encryption standard (AES)-Rijndael with five criteria: the first is the compression of plain data, the second is the variable size of the block, the third is the selectable round, the fourth is the optimization of software implementation and the fifth is the selective function of the whole routine. We implemented our selective encryption scheme in C++ and compiled it with Code::Blocks using a MinGW GCC compiler. Results The experimental results showed that our selective encryption scheme achieves a faster execution speed of encryption/decryption. In future work, we intend to use resource optimization to enhance the round operations, such as SubByte/InvSubByte, by exploiting similarities between encryption and decryption. Conclusions As encryption schemes become more widely used, the concept of hardware and software co-design is also a growing new area of interest. PMID:21818420
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm
Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin
2015-01-01
With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of the classification models and thus reduce their efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and prevent local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10 and the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071
IR and visual image registration based on mutual information and PSO-Powell algorithm
NASA Astrophysics Data System (ADS)
Zhuang, Youwen; Gao, Kun; Miu, Xianghu
2014-11-01
Infrared and visual image registration has wide application in the fields of remote sensing and military. Mutual information (MI) has proved effective and successful in the infrared and visual image registration process. To find the most appropriate registration parameters, optimization algorithms, such as the Particle Swarm Optimization (PSO) algorithm or the Powell search method, are often used. The PSO algorithm has strong global search ability and searches quickly at the beginning, but its search performance is low in the late search stage. In the image registration process, it often spends a lot of time on useless search, and the precision of its solutions is low. The Powell search method has strong local search ability. However, its search performance and time are more sensitive to initial values. In image registration, it is often obstructed by local maxima and gives wrong results. In this paper, a novel hybrid algorithm combining the PSO algorithm and the Powell search method is proposed. It combines the advantages of both: it avoids obstruction by local maxima and achieves higher precision. Firstly, the PSO algorithm obtains a registration parameter close to the global optimum. Based on this result, the Powell search method is then used to find a more precise registration parameter. The experimental results show that the algorithm can effectively correct the scale, rotation and translation difference of two images. It can be a good solution for registering infrared and visible images and achieves greater performance in time and precision than traditional optimization algorithms.
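The two-stage structure, global PSO followed by a local direction-set refinement, can be sketched as below; for brevity a simple shrinking-step coordinate search stands in for Powell's method, and the similarity surface is a toy function rather than a mutual-information map:

```python
import random

def pso_then_local(f, dim, bounds, n_particles=20, iters=100, seed=3):
    """Hybrid optimizer: a PSO stage supplies a good global starting point,
    then a shrinking-step coordinate search (standing in for Powell's
    direction-set method) polishes it to high precision."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    step = 0.1            # local stage: coordinate search from the PSO optimum
    while step > 1e-6:
        improved = False
        for d in range(dim):
            for delta in (step, -step):
                cand = gbest[:]
                cand[d] += delta
                if f(cand) < f(gbest):
                    gbest, improved = cand, True
        if not improved:
            step *= 0.5
    return gbest

# toy registration-similarity surface with its optimum at (3, -1)
f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
best = pso_then_local(f, 2, (-10.0, 10.0))
```

In the paper's setting f would be the negative mutual information between the transformed floating image and the reference image.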
2015-01-01
Background Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanism of the interactions plays a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structural and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. Results This work proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, better than existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (Pkd) of 200 heterodimeric protein complexes. Prediction performance in a jackknife test was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyze three informative physicochemical properties according to their contribution to prediction performance. Results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. Conclusions The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict the binding affinity of heterodimeric protein complexes.
A multi-agent genetic algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Li, Zhangtao; Liu, Jing
2016-05-01
Complex networks are widely used to represent many practical systems in the domains of biology and sociology, and community structure is one of the most important network attributes, which has received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best known quality functions measuring the quality of communities of networks. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, coded as a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators are designed, namely a split-and-merging-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation and a self-learning operator, to increase the modularity value. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. Systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms, and can detect communities with high speed, accuracy and stability.
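The quality function being optimized, Newman's modularity Q, compares the fraction of intra-community edges with its expectation under a degree-preserving random rewiring. A small sketch:

```python
def modularity(edges, communities):
    """Newman modularity Q: for each community, the fraction of edges
    inside it minus the expected fraction under degree-preserving
    random rewiring."""
    m = len(edges)
    comm_of = {v: c for c, nodes in enumerate(communities) for v in nodes}
    intra = [0] * len(communities)
    degree_sum = [0] * len(communities)
    for u, v in edges:
        degree_sum[comm_of[u]] += 1
        degree_sum[comm_of[v]] += 1
        if comm_of[u] == comm_of[v]:
            intra[comm_of[u]] += 1
    return sum(intra[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in range(len(communities)))

# two triangles joined by one bridge edge: clear community structure
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q_split = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # 5/14, about 0.357
q_whole = modularity(edges, [{0, 1, 2, 3, 4, 5}])    # 0 for the trivial split
```

Each MAGA-Net agent encodes one such partition, and the operators described above search for divisions with higher Q.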
Toward a Deterministic Polynomial Time Algorithm with Optimal Additive Query Complexity
NASA Astrophysics Data System (ADS)
Bshouty, Nader H.; Mazzawi, Hanna
In this paper, we study two combinatorial search problems: the coin weighing problem with a spring scale (also known as the vector reconstruction problem using additive queries) and the problem of reconstructing weighted graphs using additive queries. Suppose we are given n identical-looking coins, m of which are counterfeit and the rest authentic. Assume that we are allowed to weigh subsets of coins with a spring scale. It is known that the optimal number of weighings for identifying the counterfeit coins and their weights is at least Ω(m log n / log m). We give a deterministic polynomial time adaptive algorithm for identifying the counterfeit coins and their weights using O(m log n / log m + m log log m) weighings, assuming that the weights of the counterfeit coins are greater than the weight of an authentic coin. This algorithm is optimal when m ≤ n^(c/log log n), where c is any constant. Also, our weighing complexity is within log log m times the optimal complexity for all m.
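The flavor of spring-scale (additive) queries is easiest to see in the single-counterfeit special case, where weighing bit-indexed subsets recovers both the coin and its weight in ceil(log2 n) weighings (the paper's algorithm handles general m; this sketch covers only m = 1, with illustrative names):

```python
def find_counterfeit(weigh, n):
    """Locate a single heavier counterfeit among n coins of nominal weight 1
    using a spring scale. Weighing the subset of coins whose index has bit k
    set, any excess over the nominal weight reveals bit k of the counterfeit's
    index, so ceil(log2 n) weighings suffice."""
    index, delta, k = 0, 0.0, 0
    while (1 << k) < n:
        subset = [i for i in range(n) if i >> k & 1]
        excess = weigh(subset) - len(subset)   # additive query on the subset
        if excess > 1e-9:
            index |= 1 << k
            delta = excess
        k += 1
    return index, 1.0 + delta

# simulated scale: coin 11 of 16 is counterfeit and weighs 1.3
weights = [1.0] * 16
weights[11] = 1.3
idx, w = find_counterfeit(lambda s: sum(weights[i] for i in s), 16)
```

Each weighing yields one bit of the unknown index, which is exactly the information-theoretic picture behind the Ω(m log n / log m) lower bound.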
An algorithm to find critical execution paths of software based on complex network
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Bing; Ren, Rong; Ren, Jiadong
2015-01-01
Critical execution paths play an important role in software systems in terms of reducing the amount of test data, detecting vulnerabilities in the software structure, and analyzing software reliability. However, there are no efficient methods to discover them so far. Thus, in this paper, a complex network-based software algorithm is put forward to find critical execution paths (FCEP) in the software execution network. First, by analyzing the number of sources and sinks in FCEP, the software execution network is divided into AOE subgraphs, and meanwhile a Software Execution Network Serialization (SENS) approach is designed to generate the execution path set in each AOE subgraph, which not only reduces the influence of ring structures on path generation, but also guarantees the integrity of nodes in the network. Second, according to a novel path similarity metric, a similarity matrix is created to calculate the similarity among sets of path sequences. Third, an efficient method is used to cluster paths through the similarity matrices, and the maximum-length path in each cluster is extracted as the critical execution path. At last, a set of critical execution paths is derived. The experimental results show that the FCEP algorithm is efficient in mining critical execution paths in a software complex network.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
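The genetic-algorithm ordering added to DeMAID can be illustrated with a toy version: score an ordering by its number of feedback couplings (a consumer scheduled before its producer) and evolve permutations to minimize that count (a sketch, not DeMAID's implementation; the process names and couplings are made up):

```python
import random

def feedback_cost(order, coupling):
    """Count feedback couplings: (producer, consumer) pairs where the
    consumer is scheduled before its producer."""
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for src, dst in coupling if pos[dst] < pos[src])

def ga_order(processes, coupling, pop=40, gens=150, seed=7):
    """Tiny permutation GA: binary tournament selection plus swap mutation,
    minimizing the number of feedback couplings in the execution order."""
    rng = random.Random(seed)
    cost = lambda o: feedback_cost(o, coupling)
    population = [rng.sample(processes, len(processes)) for _ in range(pop)]
    best = min(population, key=cost)
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(population, 2)          # binary tournament
            child = min(a, b, key=cost)[:]
            i, j = rng.randrange(len(child)), rng.randrange(len(child))
            child[i], child[j] = child[j], child[i]   # swap mutation
            nxt.append(child)
        population = nxt
        best = min(population + [best], key=cost)     # keep the best-so-far
    return best

# couplings as (producer, consumer); A-B-C-D is a zero-feedback order
coupling = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")]
best = ga_order(list("ABCD"), coupling)
```

In DeMAID the fitness would additionally weigh cost, time, and iteration requirements rather than the feedback count alone.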
Algorithmic information theory and the hidden variable question
NASA Astrophysics Data System (ADS)
Fuchs, Christopher
1992-02-01
The admissibility of certain nonlocal hidden-variable theories is examined via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exist a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l, r). From the associated probability measure, the Shannon entropies H_n and H̄_n for strings l_n and pairs (l_n, r_n) may be formed. It is shown that such a theory requires the absolute value of H̄_n - H_n to be bounded, in contrast to the quantum mechanical prediction that it grows with n.
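The entropy gap at the heart of the argument is easy to estimate empirically. The sketch below is an illustration, not the note's construction: it draws the two outcome strings independently, so the joint entropy of the prefix pairs exceeds the marginal entropy of l_n by roughly n bits, mimicking the unbounded growth that a recursive transformation between l_n and r_n would forbid:

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(samples):
    """H = -sum p*log2(p) over the empirical distribution of `samples`."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Toy ensemble of (l_n, r_n) prefix pairs with r independent of l.
rng = random.Random(1)
n = 4
pairs = [("".join(rng.choice("01") for _ in range(n)),
          "".join(rng.choice("01") for _ in range(n)))
         for _ in range(2000)]
H_l = shannon_entropy(p[0] for p in pairs)          # marginal entropy of l_n
H_joint = shannon_entropy(pairs)                    # joint entropy of (l_n, r_n)
print(round(H_l, 2), round(H_joint - H_l, 2))       # gap is on the order of n bits
```

If r_n were a recursive function of l_n, the gap H̄_n - H_n would instead stay bounded in n.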
Deciphering the Minimal Algorithm for Development and Information-genesis
NASA Astrophysics Data System (ADS)
Li, Zhiyuan; Tang, Chao; Li, Hao
During development, cells with identical genomes acquire different fates in a highly organized manner. To decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a ``standard worm'' in silico: from the single zygotic cell to the roughly 500-cell stage, the lineage, position, cell-cell contacts and gene expression dynamics were quantified for each cell. Next, we reverse-engineered the possible gene-gene and cell-cell interaction rules capable of driving a dynamic model that recapitulates the early fate decisions of C. elegans development. We further formalized C. elegans embryogenesis in the language of information genesis. Analysis of the data and model uncovered the global landscape of development in cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as essential trade-offs in development, and demonstrated general strategies for building multicellular organisms.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby improving convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER whenever the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information: the former guarantees SER convergence, while the latter improves SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and the dual mode constant modulus algorithm, in terms of both convergence and SER performance for nonlinear equalization. PMID:25706894
Dynamics of rumor-like information dissemination in complex networks
NASA Astrophysics Data System (ADS)
Nekovee, Maziar; Moreno, Yamir; Bianconi, Ginestra
2005-03-01
An important dynamic process that takes place in complex networks is the spreading of information via rumor-like mechanisms. In addition to their relevance to the propagation of rumors and fads in human society, such mechanisms are also the basis of an important class of collective communication protocols in complex computer networks, such as the Internet and peer-to-peer systems. In this talk we present results of our analytical, numerical and large-scale Monte Carlo simulation studies of this process on several classes of complex networks, including random graphs, scale-free networks, and random and small-world topological graphs. Our studies point to important differences between the dynamics of rumor spreading and that of virus spreading in such networks, and provide new insights into the complex interplay between spreading phenomena and network topology.
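A minimal Maki-Thompson-style rumor simulation conveys the mechanism: spreaders inform ignorants, but a spreader who contacts someone already informed loses interest and becomes a stifler, so the rumor dies out before reaching everyone. The contact rules and graph parameters below are standard textbook choices, not necessarily those of the study:

```python
import random

def spread_rumor(adj, seed_node=0, seed=42):
    """Ignorant/spreader/stifler dynamics on a graph given as an
    adjacency dict; returns the fraction of nodes that heard the rumor."""
    rng = random.Random(seed)
    state = {v: "ignorant" for v in adj}
    state[seed_node] = "spreader"
    spreaders = [seed_node]
    while spreaders:
        s = rng.choice(spreaders)
        neighbour = rng.choice(adj[s])
        if state[neighbour] == "ignorant":
            state[neighbour] = "spreader"       # rumor passed on
            spreaders.append(neighbour)
        else:                                   # contact already knows it:
            state[s] = "stifler"                # spreader loses interest
            spreaders.remove(s)
    return sum(1 for v in adj if state[v] != "ignorant") / len(adj)

# Erdos-Renyi-style random graph on 200 nodes, mean degree ~10
rng = random.Random(0)
n = 200
adj = {v: [] for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < 0.05:
            adj[u].append(v)
            adj[v].append(u)
print(spread_rumor(adj))
```

Unlike SIR virus models, the final informed fraction here is limited by the stifling rule rather than by a transmission-rate threshold, which is one of the qualitative differences the abstract alludes to.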
NASA Astrophysics Data System (ADS)
Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu
2016-09-01
The genetic algorithm (GA) significantly affects band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands reaches the optimal solution more rapidly, effectively improves measurement accuracy, and reduces the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for hyperspectral imaging in nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established to assess the feasibility of genetic-algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the wavelengths corresponding to these 12 characteristic bands as variables, a PLS model for the SPAD values of the corn leaves was established, yielding r = 0.7825. These results were better than those of the PLS models built on the full spectrum and on the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by PLS, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
NASA Astrophysics Data System (ADS)
Vijayan, S.; Vani, K.; Sanjeevi, S.
2013-09-01
This study presents the development and implementation of an algorithm for automatic detection, classification and extraction of contextual information, such as ejecta and degradation status, of lunar craters using SELENE panchromatic images. The algorithm works in three steps. First, it detects simple lunar craters and classifies them as round- or flat-floored using the structural profile pattern. Second, it extracts contextual information (ejecta), notes its presence if any, and associates it with the corresponding crater using the adjacency rule and Markov random field theory. Finally, the algorithm examines each detected crater and assesses its state of degradation from the intensity variation over the crater edge. We applied the algorithm to 16 technically demanding test sites, chosen to represent all possible lunar surface conditions. The crater detection algorithm was evaluated by manual analysis of its accuracy in detection, classification, ejecta and degraded-state identification, along with a detailed qualitative assessment. The manual analysis agrees with the detections, while the overall statistical results give a detection performance of Q ∼ 75% and a precision of ∼0.83. The detection and classification results reveal that simple lunar craters are dominated by the round-floor type rather than the flat-floor type, and that the lunar surface is predominantly covered with sub-kilometer craters of lesser depth.
PREFACE: Complex Networks: from Biology to Information Technology
NASA Astrophysics Data System (ADS)
Barrat, A.; Boccaletti, S.; Caldarelli, G.; Chessa, A.; Latora, V.; Motter, A. E.
2008-06-01
The field of complex networks is one of the most active areas in contemporary statistical physics. Ten years after seminal work initiated the modern study of networks, interest in the field is in fact still growing, as indicated by the ever increasing number of publications in network science. The reason for such a resounding success is most likely the simplicity and broad significance of the approach that, through graph theory, allows researchers to address a variety of different complex systems within a common framework. This special issue comprises a selection of contributions presented at the workshop 'Complex Networks: from Biology to Information Technology' held in July 2007 in Pula (Cagliari), Italy as a satellite of the general conference STATPHYS23. The contributions cover a wide range of problems that are currently among the most important questions in the area of complex networks and that are likely to stimulate future research. The issue is organised into four sections. The first two sections describe 'methods' to study the structure and the dynamics of complex networks, respectively. After this methodological part, the issue proceeds with a section on applications to biological systems. The issue closes with a section concentrating on applications to the study of social and technological networks. The first section, entitled Methods: The Structure, consists of six contributions focused on the characterisation and analysis of structural properties of complex networks: The paper Motif-based communities in complex networks by Arenas et al is a study of the occurrence of characteristic small subgraphs in complex networks. These subgraphs, known as motifs, are used to define general classes of nodes and their communities by extending the mathematical expression of the Newman-Girvan modularity. The same line of research, aimed at characterising network structure through the analysis of particular subgraphs, is explored by Bianconi and Gulbahce in Algorithm
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among components, a multi-objective optimization mathematical model is set up, taking the cost of fault reasoning as the objective function to be minimized. Since the problem is NP-hard, a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix constrains the ants' feasible search nodes, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint, multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for the problem of multi-constraint and multi-objective fault diagnosis and
Leliaert, Frederik; Verbruggen, Heroen; Wysor, Brian; De Clerck, Olivier
2009-10-01
DNA-based taxonomy provides a convenient and reliable tool for species delimitation, especially in organisms in which morphological discrimination is difficult or impossible, such as many algal taxa. A group with a long history of confusing species circumscriptions is the morphologically plastic Boodlea complex, comprising the marine green algal genera Boodlea, Cladophoropsis, Phyllodictyon and Struveopsis. In this study, we elucidate species boundaries in the Boodlea complex by analysing nrDNA internal transcribed spacer sequences from 175 specimens collected from a wide geographical range. Algorithmic methods of sequence-based species delineation were applied, including statistical parsimony network analysis, and a maximum likelihood approach that uses a mixed Yule-coalescent model and detects species boundaries based on differences in branching rates at the level of species and populations. Sequence analyses resulted in the recognition of 13 phylogenetic species, although we failed to detect sharp species boundaries, possibly as a result of incomplete reproductive isolation. We found considerable conflict between traditional and phylogenetic species definitions. Identical morphological forms were distributed in different clades (cryptic diversity), and at the same time most of the phylogenetic species contained a mixture of different morphologies (indicating intraspecific morphological variation). Sampling outside the morphological range of the Boodlea complex revealed that the enigmatic, sponge-associated Cladophoropsis (Spongocladia) vaucheriiformis, also falls within the Boodlea complex. Given the observed evolutionary complexity and nomenclatural problems associated with establishing a Linnaean taxonomy for this group, we propose to discard provisionally the misleading morphospecies and genus names, and refer to clade numbers within a single genus, Boodlea. PMID:19524052
BiCAMWI: A Genetic-Based Biclustering Algorithm for Detecting Dynamic Protein Complexes.
Lakizadeh, Amir; Jalili, Saeed
2016-01-01
Considering the roles of protein complexes in many biological processes in the cell, detection of protein complexes from available protein-protein interaction (PPI) networks is a key challenge in the post-genome era. Despite the high dynamicity of cellular systems and the dynamic interactions between proteins in a cell, most computational methods have focused on static networks, which cannot represent the inherent dynamicity of protein interactions. Recently, some researchers have tried to exploit the dynamicity of PPI networks by constructing a set of dynamic PPI subnetworks corresponding to each time point (column) in gene expression data. However, many genes participate in multiple biological processes, and cellular processes are not necessarily related to every sample but may be relevant only to a subset of samples. It is therefore more informative to explore each subnetwork based on a subset of genes and conditions (i.e., biclusters) in the gene expression data. Here, we present a new method, called BiCAMWI, that employs dynamicity in detecting protein complexes. The preprocessing phase of the proposed method is based on a novel genetic algorithm that extracts sets of genes that are co-regulated under some conditions from the input gene expression data. Each extracted gene set is called a bicluster. In the detection phase, based on the biclusters, dynamic PPI subnetworks are extracted from the input static PPI network. Protein complexes are then identified by applying a detection method to each dynamic PPI subnetwork and aggregating the results. Experimental results confirm that BiCAMWI effectively models the dynamicity inherent in static PPI networks and achieves significantly better results than state-of-the-art methods. We therefore suggest BiCAMWI as a more reliable method for protein complex detection. PMID:27462706
Information and complexity measures in molecular reactivity studies.
Welearegay, Meressa A; Balawender, Robert; Holas, Andrzej
2014-07-28
The analysis of information and complexity measures as tools for the investigation of chemical reactivity has been carried out in the spin-position and position spaces, for the density and shape representations. The concepts of transferability and additivity of atoms or functional groups were used as "checkpoints" in the analysis of the results. The shape function as an argument of the various measures reveals less information than the spinor density. Use of the shape function can yield wrong conclusions when information measures such as the Shannon entropy (SE, S), the Fisher information (FI, I), the Onicescu information (OI, D), and complexities based on them are applied to systems with different electron numbers. Results obtained in the spinor-density representation show transferability and additivity (lacking in the shape representation). Group transferability is well illustrated by the example of the X-Y molecules and their benzene derivatives. Another example is the methyl group transferability presented on the alkane-alkene-alkyne set. Analysis of the results displayed on planes between the three information-theoretical (IT) measures has shown that the S-I plane provides "richer" information about the pattern, organization and similarity of the studied molecules than the I-D and D-S planes. A linear relation of high accuracy is noted between the kinetic energy and the FI and OI measures. Another interesting regression was found between the atomization total energy and the atomization entropy. Unfortunately, the lack of group electronic energy transferability indicates that no general relations between the IT measures and the chemical reactivity indices are observed. The molecular set chosen for the study includes different types of molecules with various functional groups (19 groups). The set used is large enough (more than 700 molecules) and diverse to improve the previous understanding of molecular complexities
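The three measures can be evaluated numerically for a model density. The sketch below computes S, I and D for a discretized unit Gaussian, for which the closed-form values are known; this is a generic numerical illustration (uniform grid, central differences), not the paper's molecular calculation:

```python
from math import log, exp, pi

def info_measures(density, dx):
    """Shannon entropy S, Fisher information I and Onicescu information D
    for a normalised 1-D density sampled on a uniform grid of spacing dx.
    The gradient in I uses central finite differences."""
    S = -sum(p * log(p) for p in density if p > 0) * dx
    D = sum(p * p for p in density) * dx
    I = sum((density[i + 1] - density[i - 1]) ** 2 / (4 * dx * dx) / density[i]
            for i in range(1, len(density) - 1) if density[i] > 0) * dx
    return S, I, D

# Unit-variance Gaussian: exact values are
#   S = 0.5*ln(2*pi*e) ~ 1.4189,  I = 1,  D = 1/(2*sqrt(pi)) ~ 0.2821
dx = 0.01
grid = [i * dx for i in range(-600, 601)]
rho = [exp(-x * x / 2) / (2 * pi) ** 0.5 for x in grid]
S, I, D = info_measures(rho, dx)
print(round(S, 3), round(I, 3), round(D, 3))
```

The same routine applied to densities of systems with different electron numbers would reproduce the normalisation sensitivity that makes the shape-function results misleading.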
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Li, Bin
2016-03-01
Detecting communities is a challenging task in analyzing networks, and solving the community detection problem with evolutionary algorithms has been a hot topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, termed negative ratio association and ratio cut, are to be minimized; together they can break through the limitation of modularity. In the proposed algorithm, the nest location updating strategy and abandon operator of the cuckoo are redefined in discrete form. A local search strategy and a clone operator are proposed to obtain an optimal initial population. Experimental results on synthetic and real-world networks show that the proposed algorithm performs better than other algorithms and can discover higher-quality community structure without prior information.
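The two objectives are straightforward to evaluate for a candidate partition. The sketch below implements only the objective functions, not the cuckoo search itself, and shows that a good community split dominates a bad one on both criteria (lower is better for each):

```python
def cut_and_association(adj, communities):
    """Negative ratio association (intra-community density, negated) and
    ratio cut (inter-community links), each normalised by community size."""
    neg_ratio_assoc = 0.0
    ratio_cut = 0.0
    for comm in communities:
        comm = set(comm)
        # intra counts each internal edge twice (once from each endpoint)
        intra = sum(1 for u in comm for v in adj[u] if v in comm)
        inter = sum(1 for u in comm for v in adj[u] if v not in comm)
        neg_ratio_assoc -= intra / len(comm)
        ratio_cut += inter / len(comm)
    return neg_ratio_assoc, ratio_cut

# Two triangles joined by a single bridge edge (2-3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
good = [(0, 1, 2), (3, 4, 5)]   # the natural split along the bridge
bad = [(0, 1, 3), (2, 4, 5)]    # a split that cuts through both triangles
print(cut_and_association(adj, good))
print(cut_and_association(adj, bad))
```

A multi-objective search such as MDCL would keep the Pareto front over exactly these two quantities.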
Developments of global greenhouse gas retrieval algorithm using Aerosol information from GOSAT-CAI
NASA Astrophysics Data System (ADS)
Kim, Woogyung; kim, Jhoon; Jung, Yeonjin; lee, Hanlim; Boesch, Hartmut
2014-05-01
Human activities have increased the atmospheric CO2 concentration since the beginning of the Industrial Revolution, with CO2 reaching 400 ppm at the Mauna Loa observatory for the first time (IPCC, 2007). However, our current knowledge of the carbon cycle is still insufficient due to a lack of observations. Satellite measurement is one of the most effective approaches to improving the accuracy of carbon source and sink estimates, monitoring global CO2 distributions with high spatio-temporal resolution (Rayner and O'Brien, 2001; Houweling et al., 2004). GOSAT has provided valuable information on the global CO2 trend, extending our understanding of CO2 and informing future satellite missions. However, due to physical limitations, GOSAT CO2 retrievals have low spatial resolution and cannot cover wide areas. Another obstacle to GOSAT CO2 retrieval is low data availability, mainly due to contamination by clouds and aerosols. In East Asia in particular, one of the most important aerosol source regions, successful retrievals are difficult because of high aerosol concentrations. The main purpose of this study is to improve the data availability of GOSAT CO2 retrieval. The current state of the CO2 retrieval algorithm development is introduced and preliminary results are shown. The algorithm is based on the optimal estimation method and utilizes VLIDORT, a vector discrete ordinate radiative transfer model. This prototype algorithm, developed from various combinations of state vectors to find accurate CO2 concentrations, shows reasonable results. In particular, the aerosol retrieval algorithm using GOSAT-CAI measurements, which provide aerosol information for the same area as the GOSAT-FTS measurements, supplies the input data for the CO2 retrieval. Other CO2 retrieval algorithms use chemical transport model results or climatologically expected values as aerosol information, which is the main reason for low data availability. With
A Patched-Grid Algorithm for Complex Configurations Directed Towards the F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Thomas, James L.; Walters, Robert W.; Reu, Taekyu; Ghaffari, Farhad; Weston, Robert P.; Luckring, James M.
1989-01-01
A patched-grid algorithm for the analysis of complex configurations with an implicit, upwind-biased Navier-Stokes solver is presented. Results from both a spatial-flux and a time-flux conservation approach to patching across zonal boundaries are presented. A generalized coordinate transformation with a biquadratic geometric element is used at the zonal interface in order to treat highly stretched viscous grids and arbitrarily-shaped zonal boundaries. Applications are made to the F-18 forebody-strake configuration at subsonic, high-alpha conditions. Computed surface flow patterns compare well with ground-based and flight-test results; the large effect of Reynolds number on the forebody flow-field is shown.
A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries
Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P
2003-12-15
We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.
Analysis of the initial values in split-complex backpropagation algorithm.
Yang, Sheng-Sung; Siu, Sammy; Ho, Chia-Lu
2008-09-01
When a multilayer perceptron (MLP) is trained with the split-complex backpropagation (SCBP) algorithm, one observes a relatively strong dependence of the performance on the initial values. For effective adjustment of the weights and biases in SCBP, we propose that the range of the initial values should be greater than that of the adjustment quantities. This criterion can reduce the misadjustment of the weights and biases, and based on it, the suitable range of the initial values can be estimated. The results show that the suitable range of the initial values depends on the properties of the communication channel used and on the structure of the MLP (the number of layers and the number of nodes in each layer). The results are studied using equalizer scenarios. The simulation results show that the estimated range of the initial values gives significantly improved performance. PMID:18779088
Information processing using a single dynamical node as complex system
Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.
2011-01-01
Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
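The time-multiplexing idea behind the single-node reservoir can be sketched in a few lines: one input sample is spread over many "virtual nodes" along the delay line via a random mask, and the delayed state feeds back into the node. The mask, feedback strength and tanh nonlinearity below are common modeling assumptions, not the paper's electronic implementation:

```python
import math
import random

def delay_reservoir(inputs, n_virtual=20, feedback=0.8, scale=0.5, seed=0):
    """Single nonlinear node with delayed feedback: each input sample is
    time-multiplexed over `n_virtual` virtual nodes via a random mask,
    and each virtual node's delayed state feeds back into itself."""
    rng = random.Random(seed)
    mask = [rng.uniform(-1, 1) for _ in range(n_virtual)]
    state = [0.0] * n_virtual              # the delay line
    states = []
    for u in inputs:
        state = [math.tanh(scale * m * u + feedback * x)
                 for m, x in zip(mask, state)]
        states.append(list(state))
    return states                          # one state vector per input sample

states = delay_reservoir([math.sin(0.3 * t) for t in range(50)])
print(len(states), len(states[0]))
```

In reservoir computing only a linear readout over these state vectors is trained, which is what makes the single-node architecture so resource-efficient.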
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...
A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.
ERIC Educational Resources Information Center
Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind
1999-01-01
Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)
Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.
1992-01-01
Iliad is a diagnostic expert system for internal medicine. Iliad's "best information" mode is used to determine the most cost-effective findings to pursue next at any stage of a work-up. The "best information" algorithm combines an information content calculation together with a cost factor. The calculations then provide a rank-ordering of the alternative patient findings according to cost-effectiveness. The authors evaluated five information content models under two different strategies. The first, the single-frame strategy, considers findings only within the context of each individual disease frame. The second, the across-frame strategy, considers the information that a single finding could provide across several diseases. The study found that (1) a version of Shannon's information model performed the best under both strategies---this finding confirms the result of a previous independent study, (2) the across-frame strategy was preferred over the single-frame strategy. PMID:1482918
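A toy version of such a "best information" ranking can be built from Shannon entropy: score each candidate finding by its expected reduction in diagnostic entropy per unit cost. The two-disease example below is illustrative only, not Iliad's actual frame model or its information content calculation:

```python
from math import log2

def entropy(dist):
    """Shannon entropy of a probability distribution (iterable of p)."""
    return -sum(p * log2(p) for p in dist if p > 0)

def best_finding(prior, likelihoods, costs):
    """Rank candidate findings by information gain per unit cost, where
    gain = H(prior) - expected posterior entropy after the finding."""
    ranked = []
    for name, like in likelihoods.items():
        p_pos = sum(prior[d] * like[d] for d in prior)
        post_pos = {d: prior[d] * like[d] / p_pos for d in prior}
        post_neg = {d: prior[d] * (1 - like[d]) / (1 - p_pos) for d in prior}
        expected_H = (p_pos * entropy(post_pos.values())
                      + (1 - p_pos) * entropy(post_neg.values()))
        gain = entropy(prior.values()) - expected_H
        ranked.append((gain / costs[name], name))
    return sorted(ranked, reverse=True)

prior = {"flu": 0.5, "cold": 0.5}
likelihoods = {"fever": {"flu": 0.9, "cold": 0.2},   # discriminative finding
               "cough": {"flu": 0.8, "cold": 0.8}}   # uninformative finding
costs = {"fever": 1.0, "cough": 1.0}
print(best_finding(prior, likelihoods, costs)[0][1])
```

The uninformative finding gets zero gain regardless of cost, so the discriminative one is pursued first; varying the costs reweights the ranking exactly as the cost factor does in the abstract.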
Link Prediction in Complex Networks: A Mutual Information Perspective
Tan, Fei; Xia, Yongxiang; Zhu, Boyao
2014-01-01
Topological properties of networks have recently been widely applied to the link-prediction problem. Common Neighbors, for example, is a natural yet efficient framework, and many variants of it have been proposed to further boost the discriminative resolution of candidate links. In this paper, we reexamine the role of network topology in predicting missing links from the perspective of information theory, and present a practical approach based on the mutual information of network structures. It not only improves the prediction accuracy substantially, but also incurs reasonable computational complexity. PMID:25207920
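The flavour of the approach can be sketched by weighting each common neighbour by its self-information, so that rare (low-degree) shared neighbours carry more evidence than hubs. This is a loose stand-in for the paper's mutual-information estimator, not its exact formula:

```python
from math import log2

def mi_style_scores(adj):
    """Score each non-adjacent node pair by summing, over their common
    neighbours z, the self-information -log2(deg(z)/total_degree)."""
    total_deg = sum(len(v) for v in adj.values())
    scores = {}
    nodes = sorted(adj)
    for i, x in enumerate(nodes):
        for y in nodes[i + 1:]:
            if y in adj[x]:
                continue                       # only score missing links
            common = set(adj[x]) & set(adj[y])
            scores[(x, y)] = sum(-log2(len(adj[z]) / total_deg)
                                 for z in common)
    return scores

# Small graph: edges 0-1, 0-2, 1-2, 1-3, 2-3; the only missing link is 0-3
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
scores = mi_style_scores(adj)
print(max(scores, key=scores.get))
```

Plain Common Neighbors would give every shared neighbour equal weight; the log-degree weighting is what the information-theoretic view adds.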
Information processing in neural networks with the complex dynamic thresholds
NASA Astrophysics Data System (ADS)
Kirillov, S. Yu.; Nekorkin, V. I.
2016-06-01
A control mechanism of information processing in neural networks is investigated, based on the complex dynamic threshold of neural excitation. The threshold properties are controlled by the slowly varying synaptic current. The dynamic threshold shows high sensitivity to the rate of synaptic current variation. This allows both flexible selective tuning of the network elements and nontrivial regimes of neural coding.
Combining spatial and spectral information to improve crop/weed discrimination algorithms
NASA Astrophysics Data System (ADS)
Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.
2012-01-01
Reducing herbicide spraying is key to improving weed management both environmentally and economically. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information can detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms. The classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images generated by the SimAField model was used and combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method, showing a marked improvement in weed detection from 86% to more than 95%.
A new bio-optical algorithm for the remote sensing of algal blooms in complex ocean waters
NASA Astrophysics Data System (ADS)
Shanmugam, Palanisamy
2011-04-01
A new bio-optical algorithm has been developed to provide accurate assessments of chlorophyll a (Chl a) concentration for detection and mapping of algal blooms from satellite data in optically complex waters, where the presence of suspended sediments and dissolved substances can interfere with the phytoplankton signal and thus confound conventional band ratio algorithms. A global data set of concurrent measurements of pigment concentration and radiometric reflectance was compiled and used to develop this algorithm, which uses normalized water-leaving radiance ratios along with an algal bloom index (ABI) between three visible bands to determine Chl a concentrations. The algorithm is derived using Sea-viewing Wide Field-of-view Sensor bands, and it is subsequently tuned to be applicable to Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua data. When compared with large in situ data sets and satellite matchups in a variety of coastal and ocean waters, the present algorithm retrieves Chl a concentrations well and shows statistically significant improvement over current global algorithms (e.g., OC3 and OC4v4). An examination of the performance of these algorithms on several MODIS/Aqua images in complex waters of the Arabian Sea and west Florida shelf shows that the new algorithm provides a better means for detecting and differentiating algal blooms from other turbid features, whereas the OC3 algorithm has significant errors in these waters although it yields relatively consistent results in clear waters. These findings imply that, provided an accurate atmospheric correction scheme is available to deal with complex waters, current MODIS/Aqua, MERIS, and OCM data could be used extensively for quantitative and operational monitoring of algal blooms in various regional and global waters.
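For context, the conventional band ratio algorithms the new method is compared against (the OC4 family) compute Chl a as a polynomial in the log of a maximum blue-to-green reflectance ratio. A minimal sketch; the coefficients are the commonly quoted OC4v4 values, reproduced here from memory and best treated as illustrative:

```python
import math

# OC4v4-style coefficients; verify against the official ocean-color
# documentation before any real use
A = [0.366, -3.067, 1.930, 0.649, -1.532]

def band_ratio_chl(r443, r490, r510, r555):
    """Maximum band ratio chlorophyll a estimate (mg m^-3): a 4th-order
    polynomial in log10 of the largest blue/green reflectance ratio."""
    r = math.log10(max(r443, r490, r510) / r555)
    return 10.0 ** sum(a * r ** i for i, a in enumerate(A))
```

This ratio structure is exactly what suspended sediments and dissolved substances perturb in complex waters, motivating the additional algal bloom index.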
NASA Astrophysics Data System (ADS)
Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen
Traditional genetic algorithms (GAs) suffer from premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA aimed at multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to deal with complex task scheduling optimization.
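The abstract does not give the adaptation rule; one classic self-adaptive scheme in this spirit (Srinivas and Patnaik's adaptive GA, shown here as an illustrative stand-in, not this paper's method) lowers crossover and mutation probabilities for above-average individuals to counter premature convergence:

```python
def adaptive_rates(fitness, f_avg, f_max, k1=1.0, k2=0.5):
    """Self-adaptive crossover (pc) and mutation (pm) probabilities:
    individuals fitter than the population average get proportionally
    lower rates, preserving good solutions while keeping diversity."""
    if fitness >= f_avg and f_max > f_avg:
        scale = (f_max - fitness) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k1, k2
```

The best individual (fitness = f_max) is never disrupted, while below-average individuals are recombined and mutated at the full baseline rates.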
The impact of reconstruction algorithms and time of flight information on PET/CT image quality
Suljic, Alen; Tomse, Petra; Jensterle, Luka; Skrk, Damijan
2015-01-01
Background. The aim of the study was to explore the influence of various time-of-flight (TOF) and non-TOF reconstruction algorithms on positron emission tomography/computed tomography (PET/CT) image quality. Materials and methods. Measurements were performed with a triple line source phantom, consisting of capillaries with an internal diameter of ∼1 mm, and a standard Jaszczak phantom. Each of the data sets was reconstructed using the analytical filtered back projection (FBP) algorithm, the iterative ordered subsets expectation maximization (OSEM) algorithm (4 iterations, 24 subsets), and the iterative True-X algorithm incorporating a specific point spread function (PSF) correction (4 iterations, 21 subsets). Baseline OSEM (2 iterations, 8 subsets) was included for comparison. Procedures were undertaken following the National Electrical Manufacturers Association (NEMA) NU-2-2001 protocol. Results. The measured spatial resolution in full width at half maximum (FWHM) was 5.2 mm, 4.5 mm and 2.9 mm for FBP, OSEM and True-X, and 5.1 mm, 4.5 mm and 2.9 mm for FBP+TOF, OSEM+TOF and True-X+TOF, respectively. Assessment of reconstructed Jaszczak images at different concentration ratios showed that incorporation of TOF information improves cold contrast, and hot contrast only slightly; the most prominent improvement was in background variability, i.e. noise reduction. Conclusions. On the basis of these results we concluded that incorporating TOF information in the reconstruction algorithm mainly reduces background variability (the level of noise in the image), while the improvement in spatial resolution due to TOF information is negligible. Comparison of traditional and modern reconstruction algorithms showed that analytical FBP yields comparable results for some parameters, such as cold contrast and relative count error, while the iterative methods show the highest levels of hot contrast when TOF and PSF corrections are applied.
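The FWHM figures above come from line-source profiles; a minimal sketch of how FWHM can be read off a sampled 1-D profile with linear interpolation (a simplified stand-in for the NEMA NU-2 procedure, which additionally prescribes sampling and fitting details):

```python
def fwhm(profile, pixel_mm):
    """Full width at half maximum of a sampled 1-D line-source profile,
    using linear interpolation between samples. Assumes the profile
    falls below half maximum on both sides of the peak."""
    peak = max(profile)
    half = peak / 2.0
    i_peak = profile.index(peak)
    # walk left from the peak to the half-maximum crossing
    i = i_peak
    while profile[i] > half:
        i -= 1
    left = i + (half - profile[i]) / (profile[i + 1] - profile[i])
    # walk right from the peak to the other crossing
    j = i_peak
    while profile[j] > half:
        j += 1
    right = j - (half - profile[j]) / (profile[j - 1] - profile[j])
    return (right - left) * pixel_mm
```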
The algorithmic complexity of neural spike trains increases during focal seizures.
Rapp, P E; Zimmerman, I D; Vining, E P; Cohen, N; Albano, A M; Jiménez-Montaño, M A
1994-08-01
The interspike interval spike trains of spontaneously active cortical neurons can display nonrandom internal structure. The degree of nonrandom structure can be quantified and was found to decrease during focal epileptic seizures. Greater statistical discrimination between the two physiological conditions (normal vs seizure) was obtained with measurements of context-free grammar complexity than by measures of the distribution of the interspike intervals such as the mean interval, its standard deviation, skewness, or kurtosis. An examination of fixed epoch data sets showed that two factors contribute to the complexity: the firing rate and the internal structure of the spike train. However, calculations with randomly shuffled surrogates of the original data sets showed that the complexity is not completely determined by the firing rate. The sequence-sensitive structure of the spike train is a significant contributor. By combining complexity measurements with statistically related surrogate data sets, it is possible to classify neurons according to the dynamical structure of their spike trains. This classification could not have been made on the basis of conventional distribution-determined measures. Computations with more sophisticated kinds of surrogate data show that the structure observed using complexity measures cannot be attributed to linearly correlated noise or to linearly correlated noise transformed by a static monotonic nonlinearity. The patterns in spike trains appear to reflect genuine nonlinear structure. The limitations of these results are also discussed. The results presented in this article do not, of themselves, establish the presence of a fine-structure encoding of neural information. PMID:8046447
Daneshmand, Hadi; Gomez-Rodriguez, Manuel; Song, Le; Schölkopf, Bernhard
2015-01-01
Information spreads across social and technological networks, but often the network structures are hidden from us and we only observe the traces left by the diffusion processes, called cascades. Can we recover the hidden network structures from these observed cascades? What kind of cascades and how many cascades do we need? Are there some network structures which are more difficult than others to recover? Can we design efficient inference algorithms with provable guarantees? Despite the increasing availability of cascade-data and methods for inferring networks from these data, a thorough theoretical understanding of the above questions remains largely unexplored in the literature. In this paper, we investigate the network structure inference problem for a general family of continuous-time diffusion models using an ℓ1-regularized likelihood maximization framework. We show that, as long as the cascade sampling process satisfies a natural incoherence condition, our framework can recover the correct network structure with high probability if we observe O(d3 log N) cascades, where d is the maximum number of parents of a node and N is the total number of nodes. Moreover, we develop a simple and efficient soft-thresholding inference algorithm, which we use to illustrate the consequences of our theoretical results, and show that our framework outperforms other alternatives in practice. PMID:25932466
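The soft-thresholding operation named in the abstract is the proximal operator of the ℓ1 penalty, the basic building block of ℓ1-regularized (sparse) estimation; a minimal scalar sketch:

```python
def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink x toward zero by lam,
    zeroing entries with |x| <= lam. Applied coordinate-wise, this
    yields the sparse estimates used in l1-regularized likelihood
    maximization."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

Coordinates whose (unregularized) estimate stays below the threshold are set exactly to zero, which is how the inferred network acquires a sparse set of parent edges per node.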
Neural network and genetic algorithm technology in data mining of manufacturing quality information
NASA Astrophysics Data System (ADS)
Song, Limei; Qu, Xing-Hua; Ye, Shenghua
2002-03-01
Data mining of Manufacturing Quality Information (MQI) is a key technology in quality lead control. Among data mining methods, neural networks and genetic algorithms are widely used for their strong advantages, such as nonlinearity, parallelism, and accuracy. Used individually, however, each has limitations, such as slow convergence and blind searching. This paper combines their merits and applies a genetic BP algorithm to data mining of MQI. The approach has been successfully used in a key project of the Natural Science Foundation of China (NSFC), Quality Control and Zero-defect Engineering (Project No. 59735120).
Information Driven Self-Organization of Complex Robotic Behaviors
Martius, Georg; Der, Ralf; Ay, Nihat
2013-01-01
Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predicting information (TiPI) which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems to scale well. PMID:23723979
Integrated computational and conceptual solutions for complex environmental information management
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2016-06-01
This paper presents the recent results of the integration of computational and conceptual solutions for the complex case of environmental information management. The solution for the major goal of creating and developing long-term multi-disciplinary knowledge resources with conceptual and computational support was achieved by implementing and integrating key components. The key components are: long-term knowledge resources providing the required structures for universal knowledge creation, documentation, and preservation; universal multi-disciplinary and multi-lingual conceptual knowledge and classification, especially references to the Universal Decimal Classification (UDC); sustainable workflows for environmental information management; and computational support for dynamical use, processing, and advanced scientific computing with Integrated Information and Computing System (IICS) components and High End Computing (HEC) resources.
Encoding techniques for complex information structures in connectionist systems
NASA Technical Reports Server (NTRS)
Barnden, John; Srinivas, Kankanahalli
1990-01-01
Two general information encoding techniques, called relative position encoding and pattern similarity association, are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short-term information processing of the sort needed in common sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high-level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. The particular, simple forms that the relative position encoding and pattern similarity association techniques take in the authors' own connectionist system, Conposit, are discussed in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro
2010-12-01
The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced thermodynamic quantities, such as the partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical rigor as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which realizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
Statistical physics of networks, information and complex systems
Ecke, Robert E
2009-01-01
In this project we explore the mathematical methods and concepts of statistical physics that are finding abundant applications across the scientific and technological spectrum, from soft condensed matter systems and bioinformatics to economic and social systems. Our approach exploits the considerable similarity of concepts between statistical physics and computer science, allowing for a powerful multi-disciplinary approach that draws its strength from cross-fertilization and multiple interactions of researchers with different backgrounds. The work on this project takes advantage of the newly appreciated connection between computer science and statistics and addresses important problems in data storage, decoding, optimization, the information processing properties of the brain, the interface between quantum and classical information science, the verification of large software programs, modeling of complex systems including disease epidemiology, resource distribution issues, and the nature of highly fluctuating complex systems. Common themes that the project has been emphasizing are (i) neural computation, (ii) network theory and its applications, and (iii) a statistical physics approach to information theory. The project's efforts focus on the general problem of optimization and variational techniques, algorithm development, and information theoretic approaches to quantum systems. These efforts are responsible for fruitful collaborations and the nucleation of science efforts that span multiple divisions such as EES, CCS, D, T, ISR, and P. This project supports the DOE mission in Energy Security and Nuclear Non-Proliferation by developing novel information science tools for communication, sensing, and interacting complex networks such as the internet or energy distribution system. The work also supports programs in Threat Reduction and Homeland Security.
SNP Markers as Additional Information to Resolve Complex Kinship Cases
Pontes, M. Lurdes; Fondevila, Manuel; Laréu, Maria Victoria; Medeiros, Rui
2015-01-01
Summary. Background. DNA profiling with sets of highly polymorphic autosomal short tandem repeat (STR) markers has been applied in various aspects of human identification in forensic casework for nearly 20 years. However, in some cases of complex kinship investigation, the information provided by the conventionally used STR markers is not enough, often resulting in low likelihood ratio (LR) calculations. In these cases, it becomes necessary to increase the number of loci under analysis to reach adequate LRs. Recently, it has been proposed that single nucleotide polymorphisms (SNPs) could be used as a supportive tool to STR typing, eventually even replacing the methods/markers now employed. Methods. In this work, we describe the results obtained in 7 revised complex paternity cases when applying a battery of STRs as well as 52 human identification SNPs (the SNPforID 52plex identification panel) using a SNaPshot methodology followed by capillary electrophoresis. Results. Our results show that the analysis of SNPs as a complement to STR typing in forensic casework would increase total PI values, and the corresponding Essen-Möller W values, by at least a factor of 4. Conclusions. We demonstrated that SNP genotyping could be a key complement to STR information in challenging casework of disputed paternity, such as close-relative individualization or complex pedigrees subject to endogamous relations. PMID:26733770
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.
2010-07-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. Accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET at limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover in concert with time-varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was used to produce robust estimates of 24 h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and seasonal vegetation cover compound the ET calculations. The SEBTA was validated against measured data at the ground level; the consistency index reached 0.92 and the correlation coefficient was 0.87.
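The "consistency index" reported in validation is commonly Willmott's index of agreement; a minimal sketch under that assumption (the abstract does not name the exact index):

```python
def willmott_d(obs, pred):
    """Willmott's index of agreement d in [0, 1], a standard consistency
    index for validating model estimates against ground measurements:
    d = 1 - sum((P-O)^2) / sum((|P-Obar| + |O-Obar|)^2)."""
    mean_o = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2
              for p, o in zip(pred, obs))
    return 1.0 - num / den if den else 1.0
```

A value of 1 indicates perfect agreement; predicting the observed mean everywhere scores 0.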
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.
Bae, Juhee; Watson, Benjamin
2014-12-01
In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver viewers those cues using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator. PMID:26356911
FctClus: A Fast Clustering Algorithm for Heterogeneous Information Networks.
Yang, Jing; Chen, Limin; Zhang, Jianpei
2015-01-01
It is important to cluster heterogeneous information networks. A fast clustering algorithm based on an approximate commute time embedding for heterogeneous information networks with a star network schema is proposed in this paper by utilizing the sparsity of heterogeneous information networks. First, a heterogeneous information network is transformed into multiple compatible bipartite graphs from the compatibility point of view. Second, the approximate commute time embedding of each bipartite graph is computed using random mapping and a linear time solver. All of the indicator subsets in each embedding simultaneously determine the target dataset. Finally, a general model is formulated by these indicator subsets, and a fast algorithm is derived by simultaneously clustering all of the indicator subsets using the sum of the weighted distances for all indicators for an identical target object. The proposed fast algorithm, FctClus, is shown to be efficient and generalizable and exhibits high clustering accuracy and fast computation speed based on a theoretic analysis and experimental verification. PMID:26090857
Information Geometry of Complex Hamiltonians and Exceptional Points
NASA Astrophysics Data System (ADS)
Brody, Dorje; Graefe, Eva-Maria
2013-08-01
Information geometry provides a tool to systematically investigate parameter sensitivity of the state of a system. If a physical system is described by a linear combination of eigenstates of a complex (that is, non-Hermitian) Hamiltonian, then there can be phase transitions where dynamical properties of the system change abruptly. In the vicinities of the transition points, the state of the system becomes highly sensitive to the changes of the parameters in the Hamiltonian. The parameter sensitivity can then be measured in terms of the Fisher-Rao metric and the associated curvature of the parameter-space manifold. A general scheme for the geometric study of parameter-space manifolds of eigenstates of complex Hamiltonians is outlined here, leading to generic expressions for the metric.
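For reference, the Fisher-Rao metric on a family of probability densities p(x|θ) mentioned above is the expected outer product of the score function:

```latex
g_{ij}(\theta) = \int p(x \mid \theta)\,
  \frac{\partial \ln p(x \mid \theta)}{\partial \theta^i}\,
  \frac{\partial \ln p(x \mid \theta)}{\partial \theta^j}\, \mathrm{d}x
```

Large components of this metric signal directions in parameter space along which the state changes rapidly, which is how the heightened sensitivity near the transition points manifests geometrically.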
Tactile information processing in the trigeminal complex of the rat
NASA Astrophysics Data System (ADS)
Pavlov, Alexey N.; Tupitsyn, Anatoly N.; Makarov, Valery A.; Panetsos, Fivos; Moreno, Angel; Garcia-Gonzalez, Victor; Sanchez-Jimenez, Abel
2007-02-01
We study mechanisms of information processing in the principalis (Pr5), oralis (Sp5o) and interpolaris (Sp5i) nuclei of the trigeminal sensory complex of the rat under whisker stimulation by short air puffs. After the standard electrophysiological description of the neural spiking activity we apply a novel wavelet based method quantifying the structural stability of firing patterns evoked by a periodic whisker stimulation. We show that the response stability depends on the puff duration delivered to the vibrissae and differs among the analyzed nuclei. Pr5 and Sp5i exhibit the maximal stability to an intermediate stimulus duration, whereas Sp5o shows "preference" for short stimuli.
Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido
2015-12-01
The quest for a quantitative characterization of community and modular structure of complex networks has produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we contribute by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks. PMID:26764762
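As a baseline for the hierarchical generalization, the traditional normalized mutual information between two flat partitions can be sketched as follows (a standard definition, not this paper's hierarchical measure):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two flat partitions,
    given as per-node community labels: 2*I(A;B) / (H(A) + H(B)).
    Returns 1 for identical partitions, 0 for independent ones."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = sum((n_ab / n) * math.log2(n * n_ab / (ca[a] * cb[b]))
             for (a, b), n_ab in joint.items())
    ha = -sum((c / n) * math.log2(c / n) for c in ca.values())
    hb = -sum((c / n) * math.log2(c / n) for c in cb.values())
    if ha == 0.0 and hb == 0.0:
        return 1.0
    return 2.0 * mi / (ha + hb)
```

The hierarchical mutual information replaces these flat label comparisons with a recursion over the levels of two hierarchical partitions.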