Science.gov

Sample records for algorithmic information complexity

  1. Thermodynamic cost of computation, algorithmic complexity and the information metric

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
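    The abstract does not reproduce the distance itself; a standard algorithmic-information distance between binary strings, stated here for orientation rather than as a quotation from the paper, is the symmetrized conditional complexity:

```latex
\[
  d(x, y) \;=\; K(x \mid y) \;+\; K(y \mid x),
\]
where $K(x \mid y)$ denotes the conditional Kolmogorov (algorithmic) complexity of $x$ given $y$.
```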

  2. Algorithmic complexity of a protein

    NASA Astrophysics Data System (ADS)

    Dewey, T. Gregory

    1996-07-01

    The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.

  3. Algorithms, complexity, and the sciences.

    PubMed

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
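    As a concrete illustration of the multiplicative weights update (MWU) rule mentioned above, the following minimal Python sketch scales allele frequencies by their fitness and renormalizes; the fitness values, selection strength and population setup are illustrative assumptions, not data from the paper.

```python
import numpy as np

def mwu_step(freqs, fitness, eps=0.05):
    """One multiplicative weights update: each allele's frequency is scaled
    by (1 + eps * fitness) and then renormalized. Here `eps` plays the role
    of a weak-selection strength."""
    new = freqs * (1.0 + eps * fitness)
    return new / new.sum()

# Illustrative example: three alleles at one locus with hypothetical fitnesses.
freqs = np.array([0.2, 0.3, 0.5])      # current allele frequencies
fitness = np.array([1.0, 0.8, 0.9])    # hypothetical per-allele fitness
for _ in range(100):
    freqs = mwu_step(freqs, fitness)
print(freqs)                           # frequencies shift toward the fitter allele
```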

  4. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382

  5. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied by using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero, and the cluster algorithm is therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown not to be complex and an algorithm with polylog parallel running time is found. The growth of the network with super-linear kernels (gamma ≥ 2) can be realized by a randomized parallel algorithm with polylog expected running time.
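    The thesis studies specialized two-replica and path-sampling cluster updates; as a generic reminder of how a cluster algorithm for the Ising model operates, here is a minimal Wolff single-cluster sketch for the zero-field square-lattice model (a textbook algorithm offered for illustration, not the two-replica algorithm analyzed above).

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff cluster update for the 2D Ising model (J = 1, no field):
    grow a cluster of aligned spins, adding each aligned neighbor with
    probability 1 - exp(-2*beta), then flip the whole cluster."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(L), rng.integers(L)
    seed_spin = spins[i, j]
    cluster, stack = {(i, j)}, [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L                      # periodic boundaries
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin:
                if rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
    for x, y in cluster:
        spins[x, y] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(1000):
    wolff_update(spins, beta=0.44, rng=rng)   # near the critical coupling
print(spins.sum())
```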

  6. Communication complexity and information complexity

    NASA Astrophysics Data System (ADS)

    Pankratov, Denis

    Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it more exact, but also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information
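    For orientation, the (internal) information cost of a protocol and the information complexity of a function are commonly defined along the following lines; this is the standard formulation of the field, paraphrased rather than quoted from the thesis:

```latex
\[
  \mathrm{IC}_\mu(\pi) \;=\; I(\Pi; X \mid Y) \;+\; I(\Pi; Y \mid X),
  \qquad
  \mathrm{IC}_\mu(f) \;=\; \inf_{\pi \text{ computing } f} \mathrm{IC}_\mu(\pi),
\]
where $(X, Y) \sim \mu$ are the two players' inputs and $\Pi$ denotes the protocol transcript.
```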

  7. Algorithmic complexity of thermal ratchet motion

    NASA Astrophysics Data System (ADS)

    Sanchez, J. R.; Family, F.; Arizmendi, C. M.

    1998-12-01

    Thermal ratchets are Brownian models where time-correlated fluctuations coming from a nonequilibrium bath interacting with a spatial asymmetry are sufficient conditions to give rise to transport. The nonequilibrium bath acts as a source of negentropy (physical information). In order to quantitate the transfer of information that occurs in thermal ratchet motion, the Kolmogorov information entropy or algorithmic complexity is investigated. The complexity is measured in terms of the average number of bits per time unit necessary to specify the sequence generated by the system.
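    The "average number of bits per time unit" can be estimated in practice from a symbolized trajectory; the sketch below uses compressed size as a computable stand-in for algorithmic complexity (a common practical proxy; the binary symbolization and the use of zlib are assumptions, not the procedure of the paper).

```python
import zlib
import numpy as np

def bits_per_symbol(trajectory, threshold=0.0):
    """Rough upper bound on the complexity rate of a trajectory: symbolize the
    signal into a binary string (1 if above threshold, else 0) and divide the
    compressed size, in bits, by the number of symbols. Compression length is
    only a computable proxy for Kolmogorov complexity."""
    symbols = bytes(1 if x > threshold else 0 for x in trajectory)
    compressed = zlib.compress(symbols, 9)
    return 8.0 * len(compressed) / len(symbols)

rng = np.random.default_rng(1)
noisy = rng.normal(size=100_000)                           # uncorrelated noise: near 1 bit/symbol
periodic = np.sin(np.linspace(0, 200 * np.pi, 100_000))    # regular, ratchet-like signal
print(bits_per_symbol(noisy), bits_per_symbol(periodic))   # high vs. low complexity rate
```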

  8. A new algorithm for essential proteins identification based on the integration of protein complex co-expression information and edge clustering coefficient.

    PubMed

    Luo, Jiawei; Wu, Juan

    2015-01-01

    Essential proteins provide valuable information for the development of biology and medical research at the system level. The accuracy of methods based only on topological centrality is deeply affected by noise in the network. Therefore, exploring efficient methods for identifying essential proteins would be of great value. Using biological features to identify essential proteins is efficient in reducing the noise in the PPI network. In this paper, based on the consideration that essential proteins evolve slowly and play a central role within a network, a new algorithm, named CED, is proposed. CED mainly employs gene expression level, protein complex information and edge clustering coefficient to predict essential proteins. The performance of CED is validated on the yeast Protein-Protein Interaction (PPI) network obtained from the DIP and BioGRID databases. The prediction accuracy of CED outperforms that of seven other algorithms when applied to the two databases. PMID:26510286
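    Of the three ingredients of CED, the edge clustering coefficient is the simplest to state; one commonly used form (an assumption here, since the abstract does not spell out the formula) divides the number of shared interaction partners of an edge by the maximum possible:

```python
from collections import defaultdict

def edge_clustering_coefficient(adj, u, v):
    """Edge clustering coefficient of edge (u, v) in an undirected PPI graph,
    using one common definition: shared neighbors over min(deg(u)-1, deg(v)-1).
    `adj` maps each protein to the set of its interaction partners."""
    shared = len(adj[u] & adj[v])                  # triangles through (u, v)
    denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
    return shared / denom if denom > 0 else 0.0

# Toy PPI network (hypothetical proteins, not taken from DIP/BioGRID).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
print(edge_clustering_coefficient(adj, "B", "C"))  # shares A and D -> 2 / min(2, 2) = 1.0
```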

  9. A novel complex valued cuckoo search algorithm.

    PubMed

    Zhou, Yongquan; Zheng, Hongqing

    2013-01-01

    To expand the information of nest individuals, the idea of complex-valued encoding is used in cuckoo search (PCS); the genes of individuals are encoded as complex numbers, so a diploid swarm is structured from a sequence of complex values. The value of each independent variable of the objective function is determined by the modulus, and its sign is determined by the angle. The position of a nest is divided into two parts, namely a real-part gene and an imaginary-part gene. The updating relation of the complex-valued swarm is presented. Six typical functions are tested. The results are compared with cuckoo search based on real-valued encoding; the usefulness of the proposed algorithm is verified. PMID:23766699
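    A minimal sketch of how such a complex-valued encoding can be decoded into a real decision variable, with the modulus fixing the magnitude and the angle fixing the sign; the scaling into the search range is an illustrative assumption rather than the exact rule of the paper.

```python
import numpy as np

def decode_complex_gene(z, lo, hi, rho_max):
    """Decode one complex gene z into a real decision variable: the modulus |z|
    (capped at rho_max) sets the magnitude within [lo, hi], and the phase angle
    sets the sign. Illustrative decoding, as hedged in the text above."""
    rho = min(abs(z), rho_max)
    magnitude = lo + (hi - lo) * rho / rho_max     # map modulus into the variable range
    sign = 1.0 if np.cos(np.angle(z)) >= 0 else -1.0
    return sign * magnitude

# Example: a nest position with two complex genes decoded into variables in [-5, 5].
nest = np.array([1.5 + 2.0j, -0.3 - 0.7j])
x = [decode_complex_gene(z, lo=0.0, hi=5.0, rho_max=5.0) for z in nest]
print(x)
```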

  10. A Novel Complex Valued Cuckoo Search Algorithm

    PubMed Central

    Zhou, Yongquan; Zheng, Hongqing

    2013-01-01

    To expand the information of nest individuals, the idea of complex-valued encoding is used in cuckoo search (PCS); the genes of individuals are encoded as complex numbers, so a diploid swarm is structured from a sequence of complex values. The value of each independent variable of the objective function is determined by the modulus, and its sign is determined by the angle. The position of a nest is divided into two parts, namely a real-part gene and an imaginary-part gene. The updating relation of the complex-valued swarm is presented. Six typical functions are tested. The results are compared with cuckoo search based on real-valued encoding; the usefulness of the proposed algorithm is verified. PMID:23766699

  11. Systolic systems: algorithms and complexity

    SciTech Connect

    Chang, J.H.

    1986-01-01

    This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.

  12. Predicting complex mineral structures using genetic algorithms.

    PubMed

    Mohn, Chris E; Kob, Walter

    2015-10-28

    We show that symmetry-adapted genetic algorithms are capable of finding the ground state of a range of complex crystalline phases including layered- and incommensurate super-structures. This opens the way for the atomistic prediction of complex crystal structures of functional materials and mineral phases. PMID:26441052

  13. Pinning impulsive control algorithms for complex network

    SciTech Connect

    Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo

    2014-03-15

    In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only the impulsive signals at discrete time instants, which may greatly improve the communication channel efficiency and reduce the control cost. Two classes of algorithms are designed, one for strongly connected complex networks and the other for non-strongly connected complex networks. It is suggested that in the strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.

  14. Unifying Complexity and Information

    NASA Astrophysics Data System (ADS)

    Ke, Da-Guan

    2013-04-01

    Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally-accepted complexity measure playing a fundamental role analogous to that of the Shannon entropy H in statistical mechanics. Superficially-conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability are not entirely clear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of H for random processes, clearly redefining the boundary between different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately solves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics and real-world living organisms.

  15. Advanced algorithms for information science

    SciTech Connect

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  16. Information Complexity and Biology

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Bignone, Franco A.; Cecconi, Fabio; Politi, Antonio

    Kolmogorov contributed directly to Biology in essentially three problems: the analysis of population dynamics (Lotka-Volterra equations), the reaction-diffusion formulation of gene spreading (FKPP equation), and some discussions about Mendel's laws. However, the widely recognized importance of his contribution arises from his work on algorithmic complexity. In fact, the limited direct intervention in Biology reflects the generally slow growth of interest of mathematicians towards biological issues. From the early work of Vito Volterra on species competition, to the slow growth of dynamical systems theory, to contributions to the study of matter and the physiology of the nervous system, the first 50-60 years witnessed important contributions, but as scattered, apparently uncorrelated pieces, often in branches far away from Biology. Up to the 1940s it is hard to see the initial loose build-up of a convergence for those theories that would become mainstream research by the end of the century, connected by the study of biological systems per se.

  17. Information communication on complex networks

    NASA Astrophysics Data System (ADS)

    Igarashi, Akito; Kawamoto, Hiroki; Maruyama, Takahiro; Morioka, Atsushi; Naganuma, Yuki

    2013-02-01

    Since communication networks such as the Internet, which is regarded as a complex network, have recently grown to a huge scale and a lot of data pass through them, the improvement of packet routing strategies for transport is one of the most significant themes in the study of computer networks. It is especially important to find routing strategies which can bear as much traffic as possible without congestion in complex networks. First, using neural networks, we introduce a strategy for packet routing on complex networks, where path lengths and queue lengths in nodes are taken into account within a framework of statistical physics. Secondly, instead of using shortest paths, we propose efficient paths which avoid hubs, nodes with very high degree, on scale-free networks with a weight assigned to each node. We improve the heuristic algorithm proposed by Danila et al., which optimizes routing properties step by step by using the information of betweenness, the probability of paths passing through a node among all optimal paths defined according to a rule, and thereby mitigates the congestion. We confirm that the new heuristic algorithm balances traffic on networks by minimizing the maximum betweenness in a much smaller number of iteration steps. Finally, we model virus spreading and data transfer on peer-to-peer (P2P) networks. Using mean-field approximation, we obtain an analytical formulation and emulate virus spreading on the network and compare the results with those of simulation. Moreover, we investigate the mitigation of information traffic congestion in the P2P networks.
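    The "efficient paths which avoid hubs" idea can be illustrated by shortest-path routing on an effective length in which every edge is charged according to the degrees of its endpoints; the degree exponent and the use of networkx below are illustrative assumptions, not the exact cost function of the paper.

```python
import networkx as nx

def efficient_path(G, src, dst, beta=1.0):
    """Shortest path under an 'efficient routing' cost in which every edge is
    charged according to the degrees of its endpoints, so that high-degree hubs
    become expensive to traverse. The edge cost below (half the sum of endpoint
    degrees raised to beta) is an illustrative choice."""
    for u, v in G.edges():
        G[u][v]["cost"] = 0.5 * (G.degree(u) ** beta + G.degree(v) ** beta)
    return nx.shortest_path(G, src, dst, weight="cost")

G = nx.barabasi_albert_graph(200, 2, seed=42)       # scale-free test network
hop_path = nx.shortest_path(G, 0, 150)              # ordinary shortest (hop) path
hub_avoiding = efficient_path(G, 0, 150, beta=1.5)  # often longer in hops, but avoids hubs
print(len(hop_path) - 1, len(hub_avoiding) - 1)
```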

  18. Advanced Algorithms for Local Routing Strategy on Complex Networks.

    PubMed

    Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K; Dong, Chuanfei; Miao, Lixin; Wang, Binghong

    2016-01-01

    Despite the significant improvement in network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need for acquiring global information of the network which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70-90 percent, both of which are key physical quantities to assess the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502

  19. Advanced Algorithms for Local Routing Strategy on Complex Networks

    PubMed Central

    Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong

    2016-01-01

    Despite the significant improvement in network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need for acquiring global information of the network which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities to assess the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502

  20. Query-answering algorithms for information agents

    SciTech Connect

    Levy, A.Y.; Rajaraman, A.; Ordille, J.J.

    1996-12-31

    We describe the architecture and query-answering algorithms used in the Information Manifold, an implemented information gathering system that provides uniform access to structured information sources on the World-Wide Web. Our architecture provides an expressive language for describing information sources, which makes it easy to add new sources and to model the fine-grained distinctions between their contents. The query-answering algorithm guarantees that the descriptions of the sources are exploited to access only sources that are relevant to a given query. Accessing only relevant sources is crucial to scale up such a system to large numbers of sources. In addition, our algorithm can exploit run-time information to further prune information sources and to reduce the cost of query planning.

  1. A fast DFT algorithm using complex integer transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd's algorithm for computing the discrete Fourier transform is extended considerably for certain large transform lengths. This is accomplished by performing the cyclic convolution, required by Winograd's method, by a fast transform over certain complex integer fields. This algorithm requires fewer multiplications than either the standard fast Fourier transform or Winograd's more conventional algorithms.

  2. Algorithms For Segmentation Of Complex-Amplitude SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Ramalingam

    1993-01-01

    Several algorithms implement improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions, within each of which backscattering characteristics are similar or homogeneous from place to place. Method provides for approximate, deterministic solution by two alternative algorithms almost always converging to local minima: one, Iterative Conditional Modes (ICM) algorithm, which locally maximizes posterior probability density of region labels; other, Maximum Posterior Marginal (MPM) algorithm, which maximizes posterior marginal density of region labels at each pixel location. ICM algorithm optimizes reconstruction of underlying scene. MPM algorithm minimizes expected number of misclassified pixels, possibly better in remote sensing of natural scenes.
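    A minimal sketch of the ICM idea for label images: each pixel is repeatedly reassigned the label that maximizes its local posterior, combining a Gaussian data term with a Potts smoothness prior over the 4-neighborhood. The class means, noise level and smoothness weight are illustrative assumptions, not the SAR speckle model of the paper.

```python
import numpy as np

def icm_segment(img, means, sigma=0.1, beta=1.0, n_iter=10):
    """Iterated Conditional Modes for a simple Gaussian + Potts model. Each
    sweep greedily relabels every pixel to minimize its local energy: a
    Gaussian data cost plus beta for every disagreeing 4-neighbor."""
    labels = np.abs(img[..., None] - np.array(means)).argmin(-1)  # init: nearest class mean
    H, W = img.shape
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                best_l, best_e = labels[y, x], np.inf
                for l, m in enumerate(means):
                    e = (img[y, x] - m) ** 2 / (2 * sigma ** 2)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny_, nx_ = y + dy, x + dx
                        if 0 <= ny_ < H and 0 <= nx_ < W and labels[ny_, nx_] != l:
                            e += beta                    # Potts penalty for disagreement
                    if e < best_e:
                        best_l, best_e = l, e
                labels[y, x] = best_l
    return labels

rng = np.random.default_rng(0)
truth = (np.arange(32)[:, None] + np.arange(32)[None, :] > 32).astype(float)
noisy = truth + rng.normal(scale=0.3, size=truth.shape)
seg = icm_segment(noisy, means=[0.0, 1.0], sigma=0.3, beta=1.5)
print((seg == truth).mean())   # fraction of correctly labeled pixels
```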

  3. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  4. Accessing complexity from genome information

    NASA Astrophysics Data System (ADS)

    Tenreiro Machado, J. A.

    2012-06-01

    This paper studies the information content of the chromosomes of 24 species. In a first phase, a scheme inspired by dynamical-system state-space representation is developed. For each chromosome, the state-space dynamical evolution is projected onto a two-dimensional chart. The plots are then analyzed and characterized from the perspective of fractal dimension. This information is integrated into two measures of the species' complexity addressing its average and variability. The results are in close accordance with phylogenetics, pointing to quantitative aspects of the species' genomic complexity.
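    The fractal dimension of such two-dimensional charts can be estimated by box counting; the sketch below fits the box-count scaling for a synthetic point cloud (the test data and box sizes are illustrative, not the chromosome charts of the paper).

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal (box-counting) dimension of a 2D point set: count
    occupied boxes N(s) for each box size s and fit the slope of
    log N(s) versus log(1/s)."""
    pts = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-12)  # normalize to unit square
    counts = []
    for s in box_sizes:
        boxes = np.unique(np.floor(pts / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
cloud = rng.random((20_000, 2))                     # space-filling set: dimension close to 2
print(box_counting_dimension(cloud, [0.2, 0.1, 0.05, 0.025, 0.0125]))
```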

  5. Improved motion information-based infrared dim target tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lei, Liu; Zhijian, Huang

    2014-11-01

    Accurate and fast tracking of infrared (IR) dim targets is very important for infrared precise guidance, early warning, video surveillance, etc. However, under complex backgrounds, such as clutter, varying illumination, and occlusion, traditional tracking methods often converge to a local maximum and lose the real infrared target. To cope with these problems, three improved tracking algorithms based on motion information are proposed in this paper, namely an improved mean shift algorithm, an improved optical flow method and an improved particle filter method. The basic principles and the implementation procedures of these modified algorithms for target tracking are described. Using these algorithms, experiments on some real-life IR and color images are performed. The whole implementation process and the results are analyzed, and the algorithms for tracking targets are evaluated both subjectively and objectively. The results show that the proposed method has satisfactory tracking effectiveness and robustness. Meanwhile, it has high tracking efficiency and can be used for real-time tracking.

  6. Entropy, complexity, and spatial information

    NASA Astrophysics Data System (ADS)

    Batty, Michael; Morphet, Robin; Masucci, Paolo; Stanilov, Kiril

    2014-10-01

    We pose the central problem of defining a measure of complexity, specifically for spatial systems in general, city systems in particular. The measures we adopt are based on Shannon's (in Bell Syst Tech J 27:379-423, 623-656, 1948) definition of information. We introduce this measure and argue that increasing information is equivalent to increasing complexity, and we show that for spatial distributions, this involves a trade-off between the density of the distribution and the number of events that characterize it; as cities get bigger and are characterized by more events—more places or locations, information increases, all other things being equal. But sometimes the distribution changes at a faster rate than the number of events and thus information can decrease even if a city grows. We develop these ideas using various information measures. We first demonstrate their applicability to various distributions of population in London over the last 100 years, then to a wider region of London which is divided into bands of zones at increasing distances from the core, and finally to the evolution of the street system that characterizes the built-up area of London from 1786 to the present day. We conclude by arguing that we need to relate these measures to other measures of complexity, to choose a wider array of examples, and to extend the analysis to two-dimensional spatial systems.
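    A minimal example of the information measure at work: the Shannon entropy of a population distributed over spatial zones rises as the population spreads over more zones (the zone counts below are made up for illustration).

```python
import numpy as np

def spatial_entropy(counts):
    """Shannon entropy H = -sum p_i log2 p_i of a population distributed over
    spatial zones; zones with zero population contribute nothing."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

concentrated = [9000, 500, 300, 200]        # most people in one zone
dispersed = [2500, 2500, 2500, 2500]        # evenly spread population
print(spatial_entropy(concentrated), spatial_entropy(dispersed))  # about 0.6 vs 2.0 bits
```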

  7. Entropy, complexity, and spatial information

    NASA Astrophysics Data System (ADS)

    Batty, Michael; Morphet, Robin; Masucci, Paolo; Stanilov, Kiril

    2014-09-01

    We pose the central problem of defining a measure of complexity, specifically for spatial systems in general, city systems in particular. The measures we adopt are based on Shannon's (in Bell Syst Tech J 27:379-423, 623-656, 1948) definition of information. We introduce this measure and argue that increasing information is equivalent to increasing complexity, and we show that for spatial distributions, this involves a trade-off between the density of the distribution and the number of events that characterize it; as cities get bigger and are characterized by more events—more places or locations, information increases, all other things being equal. But sometimes the distribution changes at a faster rate than the number of events and thus information can decrease even if a city grows. We develop these ideas using various information measures. We first demonstrate their applicability to various distributions of population in London over the last 100 years, then to a wider region of London which is divided into bands of zones at increasing distances from the core, and finally to the evolution of the street system that characterizes the built-up area of London from 1786 to the present day. We conclude by arguing that we need to relate these measures to other measures of complexity, to choose a wider array of examples, and to extend the analysis to two-dimensional spatial systems.

  8. A Simple Quality Triangulation Algorithm for Complex Geometries

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents a new and simple algorithm for quality triangulation in complex geometries. The proposed algorithm is based on an initial equilateral triangle mesh covering the whole domain. The mesh nodes close to the boundary edges satisfy the so-called non-encroaching criterion: the distance ...

  9. Evader Interdiction: Algorithms, Complexity and Collateral Damage

    PubMed Central

    Johnson, Matthew P.; Gutfraind, Alexander; Ahmadizadeh, Kiyan

    2013-01-01

    In network interdiction problems, evaders (e.g., hostile agents or data packets) are moving through a network toward targets and we wish to choose locations for sensors in order to intercept the evaders. The evaders might follow deterministic routes or Markov chains, or they may be reactive, i.e., able to change their routes in order to avoid the sensors. The challenge in such problems is to choose sensor locations economically, balancing interdiction gains with costs, including the inconvenience sensors inflict upon innocent travelers. We study the objectives of (1) maximizing the number of evaders captured when limited by a budget on sensing cost and, (2) capturing all evaders as cheaply as possible. We give algorithms for optimal sensor placement in several classes of special graphs and hardness and approximation results for general graphs, including evaders who are deterministic, Markov chain-based, reactive and unreactive. A similar-sounding but fundamentally different problem setting was posed by Glazer and Rubinstein where both evaders and innocent travelers are reactive. We again give optimal algorithms for special cases and hardness and approximation results on general graphs. PMID:25332514

  10. Algorithmic complexity of real financial markets

    NASA Astrophysics Data System (ADS)

    Mansilla, R.

    2001-12-01

    A new approach to understanding the complex behavior of financial market indices using tools from thermodynamics and statistical physics is developed. Physical complexity, a quantity rooted in the Kolmogorov-Chaitin theory, is applied to binary sequences built up from real time series of financial market indices. The study is based on NASDAQ and Mexican IPC data. Different behaviors of this quantity are shown when it is applied to intervals of the series preceding crashes and to intervals in which no financial turbulence is observed. The connection between our results and the efficient market hypothesis is discussed.

  11. Resampling Algorithms for Particle Filters: A Computational Complexity Perspective

    NASA Astrophysics Data System (ADS)

    Bolić, Miodrag; Djurić, Petar M.; Hong, Sangjin

    2004-12-01

    Newly developed resampling algorithms for particle filters suitable for real-time implementation are described and their analysis is presented. The new algorithms reduce the complexity of both hardware and DSP realization through addressing common issues such as decreasing the number of operations and memory access. Moreover, the algorithms allow for use of higher sampling frequencies by overlapping in time the resampling step with the other particle filtering steps. Since resampling is not dependent on any particular application, the analysis is appropriate for all types of particle filters that use resampling. The performance of the algorithms is evaluated on particle filters applied to bearings-only tracking and joint detection and estimation in wireless communications. We have demonstrated that the proposed algorithms reduce the complexity without performance degradation.
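    Among the resampling schemes analyzed in this line of work, systematic resampling is one of the simplest to state and the most hardware-friendly; the sketch below is the generic single-pass O(N) textbook version, not the specific architecture of the paper.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling for a particle filter: a single uniform offset
    generates N evenly spaced positions on the cumulative-weight axis, and a
    single pass maps each position to a particle index (O(N), one random draw)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                       # guard against round-off
    indexes = np.zeros(n, dtype=int)
    i = j = 0
    while i < n:
        if positions[i] < cumulative[j]:
            indexes[i] = j
            i += 1
        else:
            j += 1
    return indexes

rng = np.random.default_rng(0)
w = np.array([0.05, 0.05, 0.7, 0.1, 0.1])      # normalized particle weights
print(systematic_resample(w, rng))             # the heavy particle (index 2) is replicated
```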

  12. Adaptive clustering algorithm for community detection in complex networks.

    PubMed

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce a different adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior, where vertices always travel toward their preferable neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process where vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman-fast algorithm) through intensive evaluation. The applications to three real-world networks demonstrate the superiority of our algorithm in finding communities that parallel the actual organization of these systems. PMID:18999501

  13. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  14. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion with reference to the relevant research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimal problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699

  15. Information Theory, Inference and Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Mackay, David J. C.

    2003-10-01

    Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, are developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.

  16. Information Horizons in Complex Networks

    NASA Astrophysics Data System (ADS)

    Sneppen, Kim

    2005-03-01

    We investigate how network structure constrains specific communication in social, man-made and biological networks. We find that human networks of governance and collaboration are predictable on a tête-à-tête level, reflecting well-defined pathways, but globally inefficient (1). In contrast, the Internet tends to have better overall communication abilities, more alternative pathways, and is therefore more robust. Between these extremes are the molecular networks of living organisms. Further, for most real-world networks we find that communication ability is favored by topology at small distances, but disfavored at larger distances (2,3,4). We discuss the topological implications in terms of modularity and the positioning of hubs in the networks (5,6). Finally we introduce some simple models which demonstrate how communication may shape the structure of, in particular, man-made networks (7,8). 1) K. Sneppen, A. Trusina, M. Rosvall (2004). Hide and seek on complex networks [cond-mat/0407055] 2) M. Rosvall, A. Trusina, P. Minnhagen and K. Sneppen (2004). Networks and Cities: An Information Perspective [cond-mat/0407054]. In PRL. 3) A. Trusina, M. Rosvall, K. Sneppen (2004). Information Horizons in Networks. [cond-mat/0412064] 4) M. Rosvall, P. Minnhagen, K. Sneppen (2004). Navigating Networks with Limited Information. [cond-mat/0412051] 5) S. Maslov and K. Sneppen (2002). Specificity and stability in topology of protein networks Science 296, 910-913 [cond-mat/0205380]. 6) A. Trusina, S. Maslov, P. Minnhagen, K. Sneppen Hierarchy Measures in Complex Networks. Phys. Rev. Lett. 92, 178702 [cond-mat/0308339]. 7) M. Rosvall and K. Sneppen (2003). Modeling Dynamics of Information Networks. Phys. Rev. Lett. 91, 178701 [cond-mat/0308399]. 8) B-J. Kim, A. Trusina, P. Minnhagen, K. Sneppen (2003). Self Organized Scale-Free Networks from Merging and Regeneration. nlin.AO/0403006. In European Journal of Physics.

  17. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.

  18. Distributed learning automata-based algorithm for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza

    2016-03-01

    Community structure is an important and universal topological property of many complex networks such as social and information networks. The detection of the communities of a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updating of the action probabilities of each automaton, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max-cut and coverage.

  19. Rate control algorithm based on frame complexity estimation for MVC

    NASA Astrophysics Data System (ADS)

    Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang

    2010-07-01

    Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC by improving the quadratic rate-distortion (R-D) model; the algorithm reasonably allocates bit-rate among the views based on correlation analysis. The proposed algorithm consists of four levels to control the bit rate more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to the coding parameters.
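    The quadratic R-D model underlying such schemes relates texture bits to the quantization step through two model parameters; a minimal sketch of solving it for the quantization step, given a frame's bit budget and complexity (MAD), is shown below. The parameter values are illustrative, and this is the generic single-view model rather than the exact multi-view allocation of the paper.

```python
import math

def q_step_from_budget(target_bits, mad, c1, c2):
    """Solve the quadratic rate-distortion model
        R = c1 * MAD / Q + c2 * MAD / Q^2
    for the quantization step Q, given a target bit budget R for the frame.
    Rearranged: R*Q^2 - c1*MAD*Q - c2*MAD = 0; keep the positive root."""
    a, b, c = target_bits, -c1 * mad, -c2 * mad
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Hypothetical numbers: a frame with MAD 6.0 and a 48 kbit budget.
q = q_step_from_budget(target_bits=48_000, mad=6.0, c1=4000.0, c2=150_000.0)
print(round(q, 3))
```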

  20. Biclustering Protein Complex Interactions with a Biclique Finding Algorithm

    SciTech Connect

    Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen

    2006-12-01

    Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L1 constraint to an Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
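    For reference, the classical Motzkin-Straus result that the paper generalizes states that maximizing the adjacency quadratic form over the probability simplex recovers the clique number; the Lp and biclique generalizations themselves are not reproduced here:

```latex
\[
  \max_{x \in \Delta_n} x^{\mathsf T} A\, x \;=\; 1 - \frac{1}{\omega(G)},
  \qquad
  \Delta_n = \Bigl\{ x \in \mathbb{R}^n_{\ge 0} \,:\, \textstyle\sum_i x_i = 1 \Bigr\},
\]
where $A$ is the adjacency matrix of the graph $G$ and $\omega(G)$ is its clique number.
```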

  1. Improved motion contrast and processing efficiency in OCT angiography using complex-correlation algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng

    2016-02-01

    Complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in a massive computational load. To mitigate this problem, in this work, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm are demonstrated using both flow phantom and live animal experiments.

  2. A New Algorithm to Optimize Maximal Information Coefficient.

    PubMed

    Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming

    2016-01-01

    The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thus removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much less than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001

  3. A New Algorithm to Optimize Maximal Information Coefficient

    PubMed Central

    Luo, Feng; Yuan, Zheming

    2016-01-01

    The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thus removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much less than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001

  4. A novel hybrid color image encryption algorithm using two complex chaotic systems

    NASA Astrophysics Data System (ADS)

    Wang, Leyuan; Song, Hongjun; Liu, Ping

    2016-02-01

    Based on the complex Chen and complex Lorenz systems, a novel color image encryption algorithm is proposed. Compared with real chaotic systems, the larger chaotic ranges and more complex behaviors of complex chaotic systems additionally enhance the security and enlarge the key space of color image encryption. The encryption algorithm comprises three steps. In the permutation process, the pixels of the plain image are scrambled via two-dimensional and one-dimensional permutation processes among the RGB channels individually. In the diffusion process, the exclusive-or (XOR for short) operation is employed to conceal pixel information. Finally, mixing the RGB channels is used to achieve a multilevel encryption. The security analysis and experimental simulations demonstrate that the key space of the proposed algorithm is large enough to resist brute-force attacks and that the algorithm has excellent encryption performance.
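    The permutation-plus-XOR-diffusion structure is easy to sketch: scramble pixel positions, then XOR them with a chaotic keystream. In the sketch below a logistic map stands in for the complex Chen/Lorenz systems purely for brevity, so it illustrates the structure only, not the actual cipher of the paper.

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.9999):
    """Keystream bytes from a logistic map (a simple stand-in for the complex
    chaotic systems used in the paper)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt_channel(channel, seed=0.3141):
    """Permutation then XOR diffusion on one color channel (uint8 array)."""
    flat = channel.ravel()
    perm = np.argsort(logistic_keystream(flat.size, x0=seed))  # chaotic permutation
    shuffled = flat[perm]
    keystream = logistic_keystream(flat.size, x0=seed / 2)
    return (shuffled ^ keystream).reshape(channel.shape), perm

rng = np.random.default_rng(0)
red = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy 8x8 "R" channel
cipher, perm = encrypt_channel(red)
print(cipher[:2])
```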

  5. Data bank homology search algorithm with linear computation complexity.

    PubMed

    Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A

    1994-06-01

    A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local regions of homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require tabulation of k-tuple coordinates or in-memory placement of database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and tested on the PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given. PMID:7922689
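    The core of such a linear-time scan is an indicator of which k-tuples occur in the query, against which each database sequence is streamed once; in the sketch below the window length and match threshold are illustrative parameters, not the tuned values of the original program.

```python
def query_kmer_set(query, k=4):
    """Indicator set of all k-tuples present in the query sequence."""
    return {query[i:i + k] for i in range(len(query) - k + 1)}

def scan_database(seqs, kmers, k=4, window=20, min_hits=10):
    """Stream each database sequence once: maintain a running count of query
    k-tuples inside a sliding window and report window starts whose hit count
    reaches the threshold as candidate local homologies (roughly linear time)."""
    hits = []
    for name, seq in seqs.items():
        is_hit = [seq[i:i + k] in kmers for i in range(len(seq) - k + 1)]
        if len(is_hit) < window:
            continue
        running = sum(is_hit[:window])
        for start in range(len(is_hit) - window + 1):
            if running >= min_hits:
                hits.append((name, start))
            if start + window < len(is_hit):
                running += is_hit[start + window] - is_hit[start]
    return hits

# Toy protein-like sequences (hypothetical, not from PIR/GenBank).
query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
db = {"seq1": "GGG" + query[5:30] + "AAAAPLK", "seq2": "MNDLLLLKKKPPPQQQWWWEEE"}
hits = scan_database(db, query_kmer_set(query))
print(len(hits), hits[:3])
```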

  6. Exploring a new best information algorithm for Iliad.

    PubMed Central

    Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.

    1991-01-01

    Iliad is a diagnostic expert system for internal medicine. One important feature that Iliad offers is the ability to analyze a particular patient case and to determine the most cost-effective method for pursuing the work-up. Iliad's current "best information" algorithm has not been previously validated and compared to other potential algorithms. Therefore, this paper presents a comparison of four new algorithms to the current algorithm. The basis for this comparison was eighteen "vignette" cases derived from real patient cases from the University of Utah Medical Center. The results indicated that the current algorithm can be significantly improved. More promising algorithms are suggested for future investigation. PMID:1807677

  7. Supergravity, complex parameters and the Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Demiański-Janis-Newman (DJN) algorithm is an original solution generating technique. For a long time it has been limited to producing rotating solutions, restricted to the case of a metric and real scalar fields, despite the fact that Demiański extended it to include more parameters such as a NUT charge. Recently two independent prescriptions have been given for extending the algorithm to gauge fields and thus electrically charged configurations. In this paper we aim to finish setting up the algorithm by providing a missing but important piece: how the transformation is applied to complex scalar fields. We illustrate our proposal through several examples taken from N = 2 supergravity, including the stationary BPS solutions from Behrndt et al and Sen's axion-dilaton rotating black hole. Moreover we discuss solutions that include pairs of complex parameters, such as the mass and the NUT charge, or the electric and magnetic charges, and we explain how to perform the algorithm in this context (with the example of Kerr-Newman-Taub-NUT and dyonic Kerr-Newman black holes). The final formulation of the DJN algorithm can possibly handle solutions with five of the six Plebański-Demiański parameters along with any type of bosonic fields with spin less than two (exemplified with the stationary Israel-Wilson-Perjes solutions). This provides all the necessary tools for applications to general matter-coupled gravity and to (gauged) supergravity.

  8. Information dynamics algorithm for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

    2012-11-01

    The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. domain-inspired. We have focused on a psychology- and social network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster algorithm (MCL) [4] by considering network nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior such as the oblivion effect. The method is based on information diffusion and it includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability of identifying communities from an individual point of view, and the fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
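    The MCL process that this algorithm builds on alternates two matrix operations, expansion and inflation, on a column-stochastic flow matrix; a minimal sketch (without the memory/oblivion factor introduced in the paper) is given below.

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, n_iter=50):
    """Markov Cluster (MCL) process: repeatedly take the matrix power
    (expansion, spreading flow) and the elementwise power with column
    renormalization (inflation, strengthening strong flows); the surviving
    nonzero rows act as attractors that define the communities."""
    M = adjacency + np.eye(adjacency.shape[0])          # self-loops stabilize the process
    M = M / M.sum(axis=0, keepdims=True)                # make columns stochastic
    for _ in range(n_iter):
        M = np.linalg.matrix_power(M, expansion)
        M = M ** inflation
        M = M / M.sum(axis=0, keepdims=True)
    return M

# Two obvious communities: a pair of triangles joined by a single edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
M = mcl(A)
clusters = [tuple(np.nonzero(col > 1e-6)[0]) for col in M.T]
print(sorted(set(clusters)))   # two distinct attractors, one per community
```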

  9. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, the algorithm employs motion vectors (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results show that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis; it also has better saliency detection performance for videos and can realize fast saliency detection. It can be used as part of a video standard codec at medium-to-low bit rates or combined with other algorithms in fast video coding. PMID:24489495

  10. Information filtering via weighted heat conduction algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm performance can be improved. This work highlights the effect of edge weight on personalized recommendation studies, as it may be an important factor affecting personalized recommendation performance.
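
    For orientation, here is a minimal sketch of the standard (unweighted) heat-conduction scoring on a user-object bipartite network, which the WHC algorithm extends with edge weights; the formulation below follows the commonly used HeatS definition, the toy data are illustrative, and the weighted algorithm evaluated in the entry is not reproduced.

      import numpy as np

      def heats_scores(A, user):
          """Standard heat-conduction (HeatS) scores on a user-object bipartite network.

          A    : binary user x object matrix, A[l, j] = 1 if user l collected object j
          user : index of the target user
          """
          k_obj = A.sum(axis=0)             # object degrees k(o_j)
          k_usr = A.sum(axis=1)             # user degrees  k(u_l)
          # W[i, j] = (1 / k(o_i)) * sum_l A[l, i] * A[l, j] / k(u_l)
          W = (A / k_usr[:, None]).T @ A    # sum over users, each weighted by 1 / k(u_l)
          W = W / k_obj[:, None]            # normalise by the degree of the receiving object
          f = W @ A[user]                   # initial resource = items the user collected
          f[A[user] == 1] = -np.inf         # do not re-recommend collected items
          return f

      A = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1]], dtype=float)
      ranking = np.argsort(heats_scores(A, user=0))[::-1]
      print(ranking)   # objects ordered by heat-conduction score for user 0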

  11. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  12. A Modified Tactile Brush Algorithm for Complex Touch Gestures

    SciTech Connect

    Ragan, Eric

    2015-01-01

    Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.

  13. Tsallis information dimension of complex networks

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Luo, Chuanhai; Li, Meizhu; Deng, Yong; Mahadevan, Sankaran

    2015-02-01

    Fractal and self-similarity properties are revealed in many complex networks. The information dimension is a useful method to describe the fractal and self-similarity properties of complex networks. In order to show the influence of different parts of a complex network on the information dimension, we propose a new information dimension based on the Tsallis entropy, namely the Tsallis information dimension. The proposed information dimension changes according to the scale described by the nonextensivity parameter q, and it varies inversely with q. The existing information dimension is a special case of the Tsallis information dimension when q = 1. The Tsallis information dimension is thus a generalized information dimension for complex networks.
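
    As a sketch of the construction (assuming the usual box-covering definition of the network information dimension; the entry's exact derivation may differ), the Tsallis generalization can be written as follows, with the Shannon case recovered as q tends to 1.

      \documentclass{article}
      \usepackage{amsmath}
      \begin{document}
      Let $p_i(\varepsilon)$ be the fraction of nodes covered by box $i$ when the
      network is covered with boxes of size $\varepsilon$. The Tsallis entropy of
      this distribution is
      \[
        S_q(\varepsilon) = \frac{1-\sum_i p_i(\varepsilon)^{\,q}}{q-1},
        \qquad
        \lim_{q\to 1} S_q(\varepsilon) = -\sum_i p_i(\varepsilon)\ln p_i(\varepsilon),
      \]
      and the corresponding (Tsallis) information dimension is obtained from the
      scaling of $S_q$ with the box size,
      \[
        d_q = -\lim_{\varepsilon\to 0}\frac{S_q(\varepsilon)}{\ln\varepsilon},
      \]
      so the ordinary information dimension is recovered in the limit $q\to 1$.
      (For a finite network the limit is replaced in practice by a fit over box sizes.)
      \end{document}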

  14. Forest Height Retrieval Algorithm Using a Complex Visibility Function Approach

    NASA Astrophysics Data System (ADS)

    Chu, T.; Zebker, H. A.

    2011-12-01

    Vegetation structure and biomass on the earth's terrestrial surface are critical parameters that influence the global carbon cycle, habitat, climate, and resources of economic value. Space-borne and air-borne remote sensing instruments are the most practical means of obtaining information such as tree height and biomass on a large scale. Synthetic aperture radar (SAR), especially interferometric SAR (InSAR), has been utilized in recent years to quantify vegetation parameters such as height and biomass. However, methods used to quantify global vegetation have yet to produce accurate results. The goal of this study is to develop, through simulation, a signal-processing algorithm to determine vegetation heights that would lead to accurate height and biomass retrievals. A standard SAR image represents a projection of the 3D distributed backscatter onto a 2D plane. InSAR is capable of determining topography or the height of vegetation. Vegetation height is determined from the mean scattering phase center of all scatterers within a resolution cell. InSAR can generate a 3D height surface, but the distribution of scatterers in height is under-determined and cannot be resolved by a single-baseline measurement. One interferogram is therefore insufficient to uniquely determine the vertical characteristics of even a simple 3D forest. An aperture synthesis technique in the height or vertical dimension would enable improved resolution to distinguish scatterers at different locations in the vertical dimension. Repeat-pass observations allow us to use differential interferometry to populate the frequency domain, from which the Fourier transform relation leads to the brightness or backscatter domain. Ryle and Hewish first introduced this technique of aperture synthesis in the 1960s for large radio telescope arrays. This technique would allow us to focus the antenna beam pattern in the vertical direction and increase vertical resolving power. It enables us to

  15. Information content of ozone retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.

    1989-01-01

    The algorithms used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.

  16. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
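
    For context, the sketch below implements the original direct-method SSA, whose per-step cost is linear in the number of reaction channels; it is the baseline that constant-complexity formulations such as the one described above improve on. The birth-death model and parameter values are illustrative.

      import numpy as np

      def ssa_direct(x0, stoich, propensities, t_end, rng=None):
          """Gillespie's direct-method SSA (per-step cost linear in #reactions)."""
          rng = np.random.default_rng() if rng is None else rng
          t, x = 0.0, np.array(x0, dtype=float)
          times, states = [t], [x.copy()]
          while t < t_end:
              a = np.array([p(x) for p in propensities])   # propensity of each channel
              a0 = a.sum()
              if a0 <= 0:
                  break                                    # no reaction can fire
              t += rng.exponential(1.0 / a0)               # time to next reaction
              j = rng.choice(len(a), p=a / a0)             # which channel fires
              x += stoich[j]
              times.append(t)
              states.append(x.copy())
          return np.array(times), np.array(states)

      # birth-death example: 0 -> X at rate k1, X -> 0 at rate k2 * X
      k1, k2 = 10.0, 0.1
      stoich = np.array([[+1], [-1]])
      props = [lambda x: k1, lambda x: k2 * x[0]]
      t, s = ssa_direct([0], stoich, props, t_end=100.0)
      print(s[-1])   # copy number near the stationary mean k1 / k2 = 100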

  17. A Generative Statistical Algorithm for Automatic Detection of Complex Postures

    PubMed Central

    Amit, Yali; Biron, David

    2015-01-01

    This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a “big data” workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species. PMID:26439258

  18. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    SciTech Connect

    Miller, Gregory H.; Forest, Gregory

    2011-12-22

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  19. Current Algorithms for the Diagnosis of wide QRS Complex Tachycardias

    PubMed Central

    Vereckei, András

    2014-01-01

    The differential diagnosis of a regular, monomorphic wide QRS complex tachycardia (WCT) mechanism represents a great diagnostic dilemma commonly encountered by the practicing physician, which has important implications for acute arrhythmia management, further work-up, prognosis and chronic management as well. This comprehensive review discusses the causes and differential diagnosis of WCT, and since the ECG remains the cornerstone of WCT differential diagnosis, focuses on the application and diagnostic value of different ECG criteria and algorithms in this setting and also provides a practical clinical approach to patients with WCTs. PMID:24827795

  20. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in text images, binarization is a very difficult problem. This paper presents an improved binarization algorithm, which can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average threshold value, and the edges are detected. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the final stroke width, the window size is calculated and a local threshold estimation approach binarizes the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex backgrounds and varying light.
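
    The sketch below illustrates only the local-thresholding step (here in the common Sauvola form), with the window size tied to an estimated stroke width as the entry describes; the background fitting, contrast compensation, edge-based stroke-width estimation, and morphological cleanup steps are not reproduced, and the parameter values are assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_binarize(gray, stroke_width, k=0.2):
          """Sauvola-style local threshold with a window derived from the stroke width.

          gray         : 2-D array of intensities in [0, 1]
          stroke_width : estimated text stroke width in pixels (sets the window size)
          """
          win = max(3, 3 * int(stroke_width)) | 1           # odd window, a few strokes wide
          mean = uniform_filter(gray, size=win)             # local mean
          mean_sq = uniform_filter(gray ** 2, size=win)
          std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
          thresh = mean * (1.0 + k * (std / 0.5 - 1.0))     # Sauvola threshold, R = 0.5
          return (gray > thresh).astype(np.uint8)           # 1 = background, 0 = ink

      # hypothetical usage: binary = local_binarize(page_image, stroke_width=4)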

  1. Toward the quality evaluation of complex information systems

    NASA Astrophysics Data System (ADS)

    Todoran, Ion-George; Lecornu, Laurent; Khenchaf, Ali; Le Caillec, Jean-Marc

    2014-06-01

    Recent technological evolutions and developments allow gathering huge amounts of data stemming from different types of sensors, social networks, intelligence reports, distributed databases, etc. Data quantity and heterogeneity have forced information systems to evolve: nowadays, information systems are based on complex information processing techniques at multiple processing stages. Unfortunately, possessing large quantities of data and being able to implement complex algorithms do not guarantee that the extracted information will be of good quality. Decision-makers need good quality information in the process of decision-making. We insist that for a decision-maker both the information and the information quality, viewed as meta-information, are of great importance. A system that does not report information quality to its user risks being used incorrectly or, in more dramatic cases, not being used at all. Some information quality evaluation methodologies can be found in the literature, especially in organizational management and information retrieval, but none of them allows information quality evaluation in complex and changing environments. We propose a new information quality methodology capable of estimating the information quality dynamically as the data and/or the inner workings of the information system change. Our methodology is able to instantaneously update the system's output quality. To capture how information quality changes through the system, we introduce the notion of a quality transfer function. It is analogous to the transfer function of signal processing but operates at the quality level: the quality transfer function describes the influence of a processing module on the information quality. We also present two different views of the notion of information quality: a global one, characterizing the entire system, and a local one, for each processing module.
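
    Purely as a hypothetical illustration of the quality-transfer-function idea (not the methodology of the entry), each processing module can be represented as a function mapping an input quality description to an output quality description, and chaining these functions propagates quality estimates through the system; the quality attributes and module effects below are invented for the example.

      from typing import Callable, Dict, List

      Quality = Dict[str, float]                        # e.g. {"accuracy": 0.9, "completeness": 0.8}
      QualityTransfer = Callable[[Quality], Quality]    # one module's effect on quality

      def chain(transfers: List[QualityTransfer], q_in: Quality) -> Quality:
          """Propagate input quality through the modules of a processing chain."""
          q = dict(q_in)
          for f in transfers:
              q = f(q)
          return q

      # hypothetical modules: a noisy sensor and a fusion step that partially restores accuracy
      def sensor(q: Quality) -> Quality:
          return {**q, "accuracy": 0.9 * q["accuracy"]}

      def fusion(q: Quality) -> Quality:
          return {**q, "accuracy": min(1.0, 1.1 * q["accuracy"]),
                  "completeness": 0.95 * q["completeness"]}

      print(chain([sensor, fusion], {"accuracy": 1.0, "completeness": 1.0}))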

  2. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

    ERIC Educational Resources Information Center

    Petry, Frederick E.; And Others

    1993-01-01

    Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programing to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…

  3. Genetic algorithms applied to nonlinear and complex domains

    SciTech Connect

    Barash, D; Woodin, A E

    1999-06-01

    The dissertation, titled "Genetic Algorithms Applied to Nonlinear and Complex Domains", describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems where many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems. It compares GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation gives a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.

  4. Genetic algorithms applied to nonlinear and complex domains

    SciTech Connect

    Barash, D; Woodin, A E

    1999-06-01

    The dissertation, titled "Genetic Algorithms Applied to Nonlinear and Complex Domains", describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems where many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems. It compares GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation gives a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.

  5. Speckle-reduction algorithm for ultrasound images in complex wavelet domain using genetic algorithm-based mixture model.

    PubMed

    Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-05-20

    Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures. PMID:27411128

  6. The guitar chord-generating algorithm based on complex network

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all the networks are summarized. By analyzing the diverse chord networks, the accompaniment rules and features are revealed, with which chords can be generated automatically. Secondly, in view of the structure of popular songs, a two-tiered network containing a verse network and a chorus network is constructed; with this network, the verse and chorus can be composed respectively with a random walk algorithm. Thirdly, the musical motif is considered when generating chords, so that poor chord progressions can be revised. This makes the accompaniments sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than those written by the composers.
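
    A minimal sketch of the random-walk step on a chord-transition network is given below; the toy corpus, the single (un-tiered) network, and the absence of the motif-based revision step are simplifications relative to the method described in the entry.

      import random

      def build_transition_graph(progressions):
          """Count chord-to-chord transitions observed in a corpus of progressions."""
          graph = {}
          for prog in progressions:
              for a, b in zip(prog, prog[1:]):
                  graph.setdefault(a, {})
                  graph[a][b] = graph[a].get(b, 0) + 1
          return graph

      def random_walk(graph, start, length, seed=0):
          """Generate a chord sequence by walking the weighted transition network."""
          rng = random.Random(seed)
          seq = [start]
          while len(seq) < length:
              nxt = graph.get(seq[-1])
              if not nxt:
                  break                     # dead end: no observed transition
              chords, weights = zip(*nxt.items())
              seq.append(rng.choices(chords, weights=weights, k=1)[0])
          return seq

      # toy corpus; a full system would keep separate verse and chorus networks
      corpus = [["C", "G", "Am", "F"], ["C", "G", "F", "C"], ["Am", "F", "C", "G"]]
      print(random_walk(build_transition_graph(corpus), start="C", length=8))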

  7. An Algorithm for Inferring Complex Haplotypes in a Region of Copy-Number Variation

    PubMed Central

    Kato, Mamoru; Nakamura, Yusuke; Tsunoda, Tatsuhiko

    2008-01-01

    Recent studies have extensively examined the large-scale genetic variants in the human genome known as copy-number variations (CNVs), and the universality of CNVs in normal individuals, along with their functional importance, has been increasingly recognized. However, the absence of a method to accurately infer alleles or haplotypes within a CNV region from high-throughput experimental data hampers the finer analyses of CNV properties and applications to disease-association studies. Here we developed an algorithm to infer complex haplotypes within a CNV region by using data obtained from high-throughput experimental platforms. We applied this algorithm to experimental data and estimated the population frequencies of haplotypes that can yield information on both sequences and numbers of DNA copies. These results suggested that the analysis of such complex haplotypes is essential for accurately detecting genetic differences within a CNV region between population groups. PMID:18639202

  8. Hidden Behavior Prediction of Complex Systems Based on Hybrid Information.

    PubMed

    Zhou, Zhi-Jie; Hu, Chang-Hua; Zhang, Bang-Cheng; Xu, Dong-Ling; Chen, Yu-Wang

    2013-04-01

    It is important to predict both observable and hidden behaviors in complex engineering systems. However, compared with observable behavior, it is often difficult to establish a forecasting model for hidden behavior. The existing methods for predicting hidden behavior cannot effectively and simultaneously use hybrid information with uncertainties, which includes both qualitative knowledge and quantitative data. Although the belief rule base (BRB) has been employed to predict observable behavior using such hybrid information, it is still not directly applicable to predicting hidden behavior. As such, in this paper, a new BRB-based model is proposed to predict hidden behavior. In the proposed BRB-based model, the initial values of the parameters are usually given by experts, so some of them may not be accurate, which can lead to inaccurate prediction results. To solve this problem, a parameter estimation algorithm for training the parameters of the forecasting model is further proposed on the basis of the maximum likelihood algorithm. Using the hybrid information with uncertainties, the proposed model can be combined with the parameter estimation algorithm to improve the forecasting precision in an integrated and effective manner. A case study is conducted to demonstrate the capability and potential applications of the proposed forecasting model with the parameter estimation algorithm. PMID:22907969

  9. Improving the trust algorithm of information in semantic web

    NASA Astrophysics Data System (ADS)

    Wan, Zong-bao; Min, Jiang

    2012-01-01

    With the rapid development of computer networks, and especially with the introduction of the Semantic Web perspective, the problem of trust computation in networks has become an important part of current research on computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, semantic trust is defined as two parts: the content trust of the information and the node trust between nodes. By calculating the content trust of the information and the trust between nodes, the final credibility value of information in the Semantic Web is obtained. We improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can evaluate the trust of information more accurately.

  10. Improving the trust algorithm of information in semantic web

    NASA Astrophysics Data System (ADS)

    Wan, Zong-Bao; Min, Jiang

    2011-12-01

    With the rapid development of computer networks, and especially with the introduction of the Semantic Web perspective, the problem of trust computation in networks has become an important part of current research on computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, semantic trust is defined as two parts: the content trust of the information and the node trust between nodes. By calculating the content trust of the information and the trust between nodes, the final credibility value of information in the Semantic Web is obtained. We improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can evaluate the trust of information more accurately.

  11. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.

    2010-09-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.
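
    As a hypothetical illustration only (none of the entry's three algorithms is reproduced here), the sketch below reduces a dynamically generated image to a single non-sensitive attribute, the number of bright connected regions, and releases nothing but a yes/no comparison against an expected value; the function name and threshold are assumptions.

      import numpy as np
      from scipy.ndimage import label

      def verify_object_count(radiograph, expected_count, intensity_threshold):
          """Return only yes/no: does the image contain the expected number of
          bright regions?  Neither the image nor detailed features are stored."""
          bright = radiograph > intensity_threshold
          _, n_regions = label(bright)                  # connected bright components
          return bool(n_regions == expected_count)

      # hypothetical usage on an image generated behind the information barrier:
      # ok = verify_object_count(image, expected_count=3, intensity_threshold=0.7)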

  12. Local algorithm for computing complex travel time based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing

    2016-04-01

    The traditional algorithm for computing the complex travel time, e.g., the dynamic ray tracing method, is based on the paraxial ray approximation, which exploits a second-order Taylor expansion. Consequently, the computed results depend strongly on the width of the ray tube and, in regions with dramatic velocity variations, it is difficult for the method to account for those variations. When solving the complex eikonal equation, the paraxial ray approximation can be avoided and no second-order Taylor expansion is required. However, this process is time consuming. In this case, we may replace the global computation over the whole model with a local computation by taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray. To satisfy this condition, the central ray should be taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve the curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to obtain the imaginary slowness used to compute the complex travel time. The numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm for the complex travel time by comparing the results with those from the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.

  13. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Pitts, W. Karl; Seifert, Allen; Misner, Alex C.; Woodring, Mitchell L.; Myjak, Mitchell J.

    2012-01-11

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

  14. Algorithmic complexity of growth hormone release in humans

    SciTech Connect

    Prank, K.; Wagner, M.; Brabant, G.

    1996-12-31

    Most hormones are secreted in a pulsatile rather than a constant manner. This temporal pattern of pulsatile hormone release plays an important role in the regulation of cellular function and structure. In healthy humans, growth hormone (GH) secretion is characterized by distinct pulses, whereas patients bearing a GH-producing tumor accompanied by excessive secretion (acromegaly) exhibit a highly irregular pattern of GH release. It has been hypothesized that this highly disorderly pattern of GH release in acromegaly arises from random events in the GH-producing tumor under decreased normal control of GH secretion. Using a context-free grammar complexity measure (algorithmic complexity) in conjunction with random surrogate data sets, we demonstrate that the temporal pattern of GH release in acromegaly is not significantly different from a variety of stochastic processes. In contrast, normal subjects clearly exhibit deterministic structure in their temporal patterns of GH secretion. Our results support the hypothesis that GH release in acromegaly is due to random events in the GH-producing tumorous cells, which might become independent from hypothalamic regulation. 17 refs., 1 fig., 2 tabs.
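
    The entry's analysis pairs an algorithmic-complexity estimate with random surrogate data; the sketch below follows the same logic but uses the Lempel-Ziv (1976) phrase count as a stand-in for the context-free grammar complexity measure, applied to a median-binarized series, with an illustrative synthetic signal rather than hormone data.

      import numpy as np

      def lz76(seq_bits):
          """Lempel-Ziv (1976) phrase count of a binary sequence, a common
          stand-in estimator of algorithmic complexity."""
          s = "".join("1" if b else "0" for b in seq_bits)
          i, c, n = 0, 0, len(s)
          while i < n:
              l = 1
              # extend the phrase while it has already appeared in the history
              while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                  l += 1
              c += 1
              i += l
          return c

      def surrogate_test(series, n_surrogates=200, rng=None):
          """Compare the complexity of a series with that of shuffled surrogates."""
          rng = np.random.default_rng(0) if rng is None else rng
          bits = series > np.median(series)                  # binarize around the median
          observed = lz76(bits)
          surrogates = [lz76(rng.permutation(bits)) for _ in range(n_surrogates)]
          # fraction of surrogates at least as simple as the data: small values
          # indicate structure beyond what random shuffling produces
          return observed, float(np.mean([s <= observed for s in surrogates]))

      t = np.arange(500)
      structured = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.default_rng(1).standard_normal(500)
      print(surrogate_test(structured))   # observed complexity well below most surrogates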

  15. Approach to complex upper extremity injury: an algorithm.

    PubMed

    Ng, Zhi Yang; Askari, Morad; Chim, Harvey

    2015-02-01

    Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems, with the potential for systemic compromise due to the underlying mechanism of injury and resultant sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes, with the primary goal of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, and intractable pain to, most devastatingly, limb amputation. In this article, the authors present an algorithmic approach toward complex injuries of the upper extremity with due consideration for the various reconstructive modalities and timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098

  16. Recording information on protein complexes in an information management system

    PubMed Central

    Savitsky, Marc; Diprose, Jonathan M.; Morris, Chris; Griffiths, Susanne L.; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S.; Blake, Richard; Stuart, David I.; Esnouf, Robert M.

    2011-01-01

    The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. Like most LIMS, the underlying PiMS data model originally had no support for protein–protein complexes. To support the SPINE2-Complexes project the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with SPINE2-Complexes Target Tracker application is also described. PMID:21605682

  17. Pediatric Medical Complexity Algorithm: A New Method to Stratify Children by Medical Complexity

    PubMed Central

    Cawthon, Mary Lawrence; Stanford, Susan; Popalisky, Jean; Lyons, Dorothy; Woodcox, Peter; Hood, Margaret; Chen, Alex Y.; Mangione-Smith, Rita

    2014-01-01

    OBJECTIVES: The goal of this study was to develop an algorithm based on International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), codes for classifying children with chronic disease (CD) according to level of medical complexity and to assess the algorithm’s sensitivity and specificity. METHODS: A retrospective observational study was conducted among 700 children insured by Washington State Medicaid with ≥1 Seattle Children’s Hospital emergency department and/or inpatient encounter in 2010. The gold standard population included 350 children with complex chronic disease (C-CD), 100 with noncomplex chronic disease (NC-CD), and 250 without CD. An existing ICD-9-CM–based algorithm called the Chronic Disability Payment System was modified to develop a new algorithm called the Pediatric Medical Complexity Algorithm (PMCA). The sensitivity and specificity of PMCA were assessed. RESULTS: Using hospital discharge data, PMCA’s sensitivity for correctly classifying children was 84% for C-CD, 41% for NC-CD, and 96% for those without CD. Using Medicaid claims data, PMCA’s sensitivity was 89% for C-CD, 45% for NC-CD, and 80% for those without CD. Specificity was 90% to 92% in hospital discharge data and 85% to 91% in Medicaid claims data for all 3 groups. CONCLUSIONS: PMCA identified children with C-CD (who have accessed tertiary hospital care) with good sensitivity and good to excellent specificity when applied to hospital discharge or Medicaid claims data. PMCA may be useful for targeting resources such as care coordination to children with C-CD. PMID:24819580

  18. Dynamic information routing in complex networks

    PubMed Central

    Kirst, Christoph; Timme, Marc; Battaglia, Demian

    2016-01-01

    Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how such systems may specifically communicate and dynamically route information is not well understood. Here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks. Switching between collective dynamics induces flexible reorganization of information sharing and routing patterns, as quantified by delayed mutual information and transfer entropy measures between activities of a network's units. We demonstrate the power of this mechanism specifically for oscillatory dynamics and analyse how individual unit properties, the network topology and external inputs co-act to systematically organize information routing. For multi-scale, modular architectures, we resolve routing patterns at all levels. Interestingly, local interventions within one sub-network may remotely determine nonlocal network-wide communication. These results help understanding and designing information routing patterns across systems where collective dynamics co-occurs with a communication function. PMID:27067257

  19. Dynamic information routing in complex networks.

    PubMed

    Kirst, Christoph; Timme, Marc; Battaglia, Demian

    2016-01-01

    Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how such systems may specifically communicate and dynamically route information is not well understood. Here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks. Switching between collective dynamics induces flexible reorganization of information sharing and routing patterns, as quantified by delayed mutual information and transfer entropy measures between activities of a network's units. We demonstrate the power of this mechanism specifically for oscillatory dynamics and analyse how individual unit properties, the network topology and external inputs co-act to systematically organize information routing. For multi-scale, modular architectures, we resolve routing patterns at all levels. Interestingly, local interventions within one sub-network may remotely determine nonlocal network-wide communication. These results help understanding and designing information routing patterns across systems where collective dynamics co-occurs with a communication function. PMID:27067257

  20. Dynamic information routing in complex networks

    NASA Astrophysics Data System (ADS)

    Kirst, Christoph; Timme, Marc; Battaglia, Demian

    2016-04-01

    Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how such systems may specifically communicate and dynamically route information is not well understood. Here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks. Switching between collective dynamics induces flexible reorganization of information sharing and routing patterns, as quantified by delayed mutual information and transfer entropy measures between activities of a network's units. We demonstrate the power of this mechanism specifically for oscillatory dynamics and analyse how individual unit properties, the network topology and external inputs co-act to systematically organize information routing. For multi-scale, modular architectures, we resolve routing patterns at all levels. Interestingly, local interventions within one sub-network may remotely determine nonlocal network-wide communication. These results help understanding and designing information routing patterns across systems where collective dynamics co-occurs with a communication function.
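
    As a small illustration of one of the quantities used in this work, the sketch below estimates the delayed mutual information between two activity time series from a joint histogram; the coupled-oscillator data, bin count, and delays are illustrative, and the full network-level routing analysis is not reproduced.

      import numpy as np

      def delayed_mutual_information(x, y, delay, bins=16):
          """I(x_t ; y_{t+delay}) estimated from a joint histogram (in nats)."""
          x, y = np.asarray(x), np.asarray(y)
          if delay > 0:
              x, y = x[:-delay], y[delay:]
          elif delay < 0:
              x, y = x[-delay:], y[:delay]
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy /= pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      # two coupled noisy oscillators: y lags x by about 10 samples
      rng = np.random.default_rng(0)
      t = np.arange(5000)
      x = np.sin(2 * np.pi * t / 100) + 0.3 * rng.standard_normal(t.size)
      y = np.roll(x, 10) + 0.3 * rng.standard_normal(t.size)
      print([round(delayed_mutual_information(x, y, d), 3) for d in (0, 10, 20)])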

  1. Crossover Improvement for the Genetic Algorithm in Information Retrieval.

    ERIC Educational Resources Information Center

    Vrajitoru, Dana

    1998-01-01

    In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

  2. A Motion Detection Algorithm Using Local Phase Information

    PubMed Central

    Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
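
    A simplified sketch of phase-based motion detection is shown below: local phase is taken from a complex Gabor filter and the wrapped temporal phase difference is thresholded. The paper's Volterra-kernel formulation and Radon-transform detector are not reproduced, and the filter parameters and threshold are assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def gabor_kernel(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
          """Complex Gabor filter; its argument gives the local phase."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
          return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

      def phase_motion(frame_prev, frame_curr, threshold=0.5):
          """Flag pixels whose local phase changed strongly between two frames."""
          g = gabor_kernel()
          p0 = np.angle(fftconvolve(frame_prev, g, mode="same"))
          p1 = np.angle(fftconvolve(frame_curr, g, mode="same"))
          dphi = np.angle(np.exp(1j * (p1 - p0)))       # wrapped phase difference
          return np.abs(dphi) > threshold               # boolean motion mask

      # toy example: a bright bar shifts two pixels to the right between frames
      f0 = np.zeros((64, 64)); f0[:, 20:24] = 1.0
      f1 = np.zeros((64, 64)); f1[:, 22:26] = 1.0
      print(phase_motion(f0, f1).sum(), "pixels flagged as moving")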

  3. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.

  4. Exploiting Complexity Information for Brain Activation Detection

    PubMed Central

    Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui

    2016-01-01

    We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. Under this hypothesis the voxel complexity can be modulated in pertinent cognitive tasks, and it changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model based Statistical Parametric Mapping package (SPM12), and a marked difference is observed. This is because the SampEn method detects brain complexity changes between the two experimental conditions, and as a data-driven method it evaluates only the complexity of the specific sequential fMRI data. Also, larger and smaller SampEn values carry different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered a complementary method to existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838

  5. Exploiting Complexity Information for Brain Activation Detection.

    PubMed

    Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui

    2016-01-01

    We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. Under this hypothesis the voxel complexity can be modulated in pertinent cognitive tasks, and it changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model based Statistical Parametric Mapping package (SPM12), and a marked difference is observed. This is because the SampEn method detects brain complexity changes between the two experimental conditions, and as a data-driven method it evaluates only the complexity of the specific sequential fMRI data. Also, larger and smaller SampEn values carry different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered a complementary method to existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838
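
    For reference, a brute-force implementation of sample entropy following its usual definition (template length m, tolerance r, self-matches excluded) is sketched below; applying it voxel-wise and the subsequent Wilcoxon testing are not shown, and the example signals are synthetic.

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          """SampEn(m, r) = -ln(A / B), where B counts template matches of length m
          and A counts matches of length m + 1 (self-matches excluded)."""
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * x.std()
          n = len(x)

          def count_matches(length):
              # N - m templates for both lengths, per the standard definition
              templates = np.array([x[i:i + length] for i in range(n - m)])
              total = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates - templates[i]), axis=1)
                  total += np.count_nonzero(dist <= r) - 1   # exclude the self-match
              return total

          B = count_matches(m)
          A = count_matches(m + 1)
          return np.inf if A == 0 or B == 0 else -np.log(A / B)

      rng = np.random.default_rng(0)
      regular = np.sin(np.linspace(0, 20 * np.pi, 500))
      noisy = rng.standard_normal(500)
      print(sample_entropy(regular), sample_entropy(noisy))   # regular << noisy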

  6. Decision making algorithm for development strategy of information systems

    NASA Astrophysics Data System (ADS)

    Derman, Galyna Y.; Nikitenko, Olena D.; Kotyra, Andrzej; Bazarova, Madina; Kassymkhanova, Dana

    2015-12-01

    The paper presents a decision-making algorithm for choosing the development strategy of an information system. The development process is planned taking into account the internal and external factors of the enterprise that affect the prospects of development of both the information system and the enterprise as a whole. The initial state of the system must also be taken into account. The total risk is the criterion for selecting the strategy. The risk is calculated using statistical and fuzzy data on the system's parameters; these data are summarized by means of an uncertainty function. Software realizing the decision-making algorithm for choosing the development strategy of an information system is developed in this paper.

  7. An Iterative Decoding Algorithm for Fusion of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Shivappa, Shankar T.; Rao, Bhaskar D.; Trivedi, Mohan M.

    2007-12-01

    Human activity analysis in an intelligent space is typically based on multimodal informational cues. The use of multiple modalities offers many advantages, but fusing information from different sources is a problem that has to be addressed. In this paper, we propose an iterative algorithm to fuse information from multimodal sources. We draw inspiration from the theory of turbo codes, drawing an analogy between the redundant parity bits of the constituent codes of a turbo code and the information from different sensors in a multimodal system. A hidden Markov model is used to model the sequence of observations of each individual modality. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities. This procedure is repeated until a certain convergence criterion is met. The resulting iterative algorithm is shown to have lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.

  8. Presentation Media, Information Complexity, and Learning Outcomes

    ERIC Educational Resources Information Center

    Andres, Hayward P.; Petersen, Candice

    2002-01-01

    Cognitive processing limitations restrict the number of complex information items held and processed in human working memory. To overcome such limitations, a verbal working memory channel is used to construct an if-then proposition representation of facts and a visual working memory channel is used to construct a visual imagery of geometric…

  9. Darwinian demons, evolutionary complexity, and information maximization.

    PubMed

    Krakauer, David C

    2011-09-01

    Natural selection is shown to be an extended instance of a Maxwell's demon device. A demonic selection principle is introduced that states that organisms cannot exceed the complexity of their selective environment. Thermodynamic constraints on error repair impose a fundamental limit to the rate that information can be transferred from the environment (via the selective demon) to the genome. Evolved mechanisms of learning and inference can overcome this limitation, but remain subject to the same fundamental constraint, such that plastic behaviors cannot exceed the complexity of reward signals. A natural measure of evolutionary complexity is provided by mutual information, and niche construction activity--the organismal contribution to the construction of selection pressures--might in principle lead to its increase, bounded by thermodynamic free energy required for error correction. PMID:21974673

  10. Retaining local image information in gamut mapping algorithms.

    PubMed

    Zolliker, Peter; Simon, Klaus

    2007-03-01

    Our topic is the potential of combining global gamut mapping with spatial methods to retain the perceived local image information in gamut mapping algorithms. The main goal is to recover the original local contrast between neighboring pixels in addition to the usual optimization of preserving lightness, saturation, and global contrast. Special emphasis is placed on avoiding artifacts introduced by the gamut mapping algorithm itself. We present an unsharp masking technique based on an edge-preserving smoothing algorithm that avoids halo artifacts. The good performance of the presented approach is verified by a psycho-visual experiment using newspaper printing as a representative of a small destination gamut application. Furthermore, the improved mapping properties are documented with local mapping histograms. PMID:17357727
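
    One simple way to realize the idea, sketched below under the assumption of a single-channel image in [0, 1], is to compute the detail lost to smoothing with an edge-preserving filter (a brute-force bilateral filter here) and add a fraction of it back to the gamut-mapped image; because the smoother does not blur across edges, halos are avoided. This is an illustration of the principle, not the specific technique evaluated in the entry, and the parameters are assumptions.

      import numpy as np

      def bilateral_smooth(img, radius=3, sigma_s=2.0, sigma_r=0.1):
          """Brute-force bilateral filter: an edge-preserving smoother (grayscale, [0, 1])."""
          pad = np.pad(img, radius, mode="reflect")
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
          out = np.empty_like(img)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                  w = spatial * np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
                  out[i, j] = (w * patch).sum() / w.sum()
          return out

      def restore_local_contrast(original, gamut_mapped, amount=0.7):
          """Add back the detail (original minus its edge-preserving smooth) that
          the global gamut mapping removed."""
          detail = original - bilateral_smooth(original)
          return np.clip(gamut_mapped + amount * detail, 0.0, 1.0)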

  11. Artificial Intelligence Methods: Choice of algorithms, their complexity, and appropriateness within the context of hydrology and water resources. (Invited)

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Pande, S.

    2009-12-01

    Pattern analysis deals with the automatic detection of patterns in data, and there are a variety of algorithms available for the purpose. These algorithms are commonly called Artificial Intelligence (AI) or data-driven algorithms; they have lately been applied to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question of which one is the "best" arises. Some algorithms may be preferred because of their lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam's razor principle that a simpler classifier performs better. Popper has argued, however, that Occam's razor is without operational value because there is no clear measure or criterion for simplicity. Examples of measures that can be used for this purpose are the so-called algorithmic complexity, also known as Kolmogorov complexity or Kolmogorov (algorithmic) entropy; the Bayesian information criterion; and the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should also be noted that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with it. Thus, there is a compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML), of which support vector machines and relevance vector machines are the most commonly known algorithms; present some applications of such algorithms for distributed hydrologic modeling; and introduce an example of how the complexity measure

  12. Artificial Bee Colony Algorithm Based on Information Learning.

    PubMed

    Gao, Wei-Feng; Huang, Ling-Ling; Liu, San-Yang; Dai, Cai

    2015-12-01

    Inspired by the fact that the division of labor and cooperation play extremely important roles in the development of human history, this paper develops a novel artificial bee colony algorithm based on information learning (ILABC, for short). In ILABC, at each generation, the whole population is divided into several subpopulations by clustering partition, and the size of each subpopulation is dynamically adjusted based on the last search experience, which results in a clear division of labor. Furthermore, two search mechanisms are designed to facilitate the exchange of information within each subpopulation and between different subpopulations, respectively, which acts as the cooperation. Finally, the comparison results on a number of benchmark functions demonstrate that the proposed method performs competitively and effectively when compared to the selected state-of-the-art algorithms. PMID:25594992

  13. MIRA: mutual information-based reporter algorithm for metabolic networks

    PubMed Central

    Cicek, A. Ercument; Roeder, Kathryn; Ozsoyoglu, Gultekin

    2014-01-01

    Motivation: Discovering the transcriptional regulatory architecture of metabolism has been an important topic for understanding the implications of transcriptional fluctuations on metabolism. The reporter algorithm (RA) was proposed to determine the hot spots in metabolic networks, around which transcriptional regulation is focused owing to a disease or a genetic perturbation. Using a z-score-based scoring scheme, RA calculates the average statistical change in the expression levels of genes that are neighbors to a target metabolite in the metabolic network. The RA approach has been used in numerous studies to analyze cellular responses to downstream genetic changes. In this article, we propose a mutual information-based multivariate reporter algorithm (MIRA) with the goal of eliminating the following problems in detecting reporter metabolites: (i) conventional statistical methods suffer from small sample sizes, (ii) because z-scores range from minus to plus infinity, calculating average scores can cancel out opposite effects and (iii) analyzing genes one by one, then aggregating results can lead to information loss. MIRA is a multivariate and combinatorial algorithm that calculates the aggregate transcriptional response around a metabolite using mutual information. We show that MIRA’s results are biologically sound, empirically significant and more reliable than RA. Results: We apply MIRA to gene expression analysis of six knockout strains of Escherichia coli and show that MIRA captures the underlying metabolic dynamics of the switch from aerobic to anaerobic respiration. We also apply MIRA to an Autism Spectrum Disorder gene expression dataset. Results indicate that MIRA reports metabolites that highly overlap with recently found metabolic biomarkers in the autism literature. Overall, MIRA is a promising algorithm for detecting metabolic drug targets and understanding the relation between gene expression and metabolic activity. Availability and
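
    For illustration, the sketch below shows one way an aggregate mutual-information score around a metabolite could be computed from discretized neighbor-gene expression and condition labels. It is a simplified, univariate-sum stand-in written for this summary, not the authors' multivariate MIRA implementation; all names and the tercile discretization are illustrative assumptions.

    ```python
    # Hypothetical sketch: score a metabolite by the mutual information between
    # the discretized expression of its neighboring genes (one gene at a time,
    # summed) and the condition labels (e.g., knockout vs. wild type).
    import numpy as np

    def mutual_information(x, y):
        """MI (in nats) between two discrete label arrays of equal length."""
        x, y = np.asarray(x), np.asarray(y)
        mi = 0.0
        for xv in np.unique(x):
            for yv in np.unique(y):
                pxy = np.mean((x == xv) & (y == yv))
                px, py = np.mean(x == xv), np.mean(y == yv)
                if pxy > 0:
                    mi += pxy * np.log(pxy / (px * py))
        return mi

    def reporter_score(expression, labels, neighbor_genes):
        """Aggregate MI of a metabolite's neighboring genes with the sample labels.

        expression     : dict gene -> 1D array of expression values across samples
        labels         : 1D array of condition labels per sample
        neighbor_genes : genes adjacent to the metabolite in the metabolic network
        """
        score = 0.0
        for g in neighbor_genes:
            # discretize expression into terciles before computing MI
            bins = np.quantile(expression[g], [1 / 3, 2 / 3])
            disc = np.digitize(expression[g], bins)
            score += mutual_information(disc, labels)
        return score

    # toy usage with random data
    rng = np.random.default_rng(0)
    expr = {"geneA": rng.normal(size=20), "geneB": rng.normal(size=20)}
    cond = np.array([0] * 10 + [1] * 10)
    print(reporter_score(expr, cond, ["geneA", "geneB"]))
    ```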

  14. Information, complexity and efficiency: The automobile model

    SciTech Connect

    Allenby, B. |

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  15. Hyperbolic mapping of complex networks based on community information

    NASA Astrophysics Data System (ADS)

    Wang, Zuxi; Li, Qingguang; Jin, Fengdong; Xiong, Wei; Wu, Yao

    2016-08-01

    To improve hyperbolic mapping methods in terms of both accuracy and running time, a novel mapping method called Community and Hyperbolic Mapping (CHM) is proposed in this paper based on community information. Firstly, an index called Community Intimacy (CI) is presented to measure the adjacency relationship between communities, based on which a community ordering algorithm is introduced. According to the proposed Community-Sector hypothesis, which supposes that most nodes of one community gather in the same sector in hyperbolic space, CHM maps the ordered communities into hyperbolic space, and the angular coordinates of nodes are then randomly initialized within the sector that they belong to. All network nodes are thus mapped to hyperbolic space, and the initialized angular coordinates are then optimized by employing the information of all nodes, which greatly improves the precision of the algorithm. By applying the proposed dual-layer angle sampling method in the optimization procedure, CHM reduces the time complexity to O(n^2). The experiments show that our algorithm outperforms the state-of-the-art methods.
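
    The Community-Sector initialization step described above lends itself to a compact sketch: ordered communities receive contiguous angular sectors proportional to their sizes, and each node draws a random initial angle within its community's sector. This is an illustrative reading of the abstract, not the authors' CHM code; the CI-based ordering, radial coordinates and likelihood optimization are omitted.

    ```python
    # Sketch of a Community-Sector style angular initialization: communities
    # (already ordered) get contiguous sectors sized proportionally to their
    # node counts; each node draws a random angle inside its sector.
    import math, random

    def init_angles(ordered_communities, seed=0):
        """ordered_communities: list of lists of node ids, already ordered."""
        rng = random.Random(seed)
        n_total = sum(len(c) for c in ordered_communities)
        theta, start = {}, 0.0
        for community in ordered_communities:
            width = 2 * math.pi * len(community) / n_total  # sector size ~ community size
            for node in community:
                theta[node] = start + rng.random() * width  # random angle within sector
            start += width
        return theta

    communities = [[0, 1, 2, 3], [4, 5], [6, 7, 8]]
    print(init_angles(communities))
    ```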

  16. Maximizing information exchange between complex networks

    NASA Astrophysics Data System (ADS)

    West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo

    2008-10-01

    modern research overarching all of the traditional scientific disciplines. The transportation networks of planes, highways and railroads; the economic networks of global finance and stock markets; the social networks of terrorism, governments, businesses and churches; the physical networks of telephones, the Internet, earthquakes and global warming and the biological networks of gene regulation, the human body, clusters of neurons and food webs, share a number of apparently universal properties as the networks become increasingly complex. Ubiquitous aspects of such complex networks are the appearance of non-stationary and non-ergodic statistical processes and inverse power-law statistical distributions. Herein we review the traditional dynamical and phase-space methods for modeling such networks as their complexity increases and focus on the limitations of these procedures in explaining complex networks. Of course we will not be able to review the entire nascent field of network science, so we limit ourselves to a review of how certain complexity barriers have been surmounted using newly applied theoretical concepts such as aging, renewal, non-ergodic statistics and the fractional calculus. One emphasis of this review is information transport between complex networks, which requires a fundamental change in perception that we express as a transition from the familiar stochastic resonance to the new concept of complexity matching.

  17. Low complexity interference alignment algorithms for desired signal power maximization problem of MIMO channels

    NASA Astrophysics Data System (ADS)

    Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang

    2012-12-01

    In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. Proper user precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power together as the new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has a closed-form solution via an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. In the first part, the algorithm iterates with Householder transformations to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one-dimensional optimization subproblems. From any initial point, this algorithm obtains precoders and decoders with low leakage interference in a short time. In the second part, to exploit the advantage of MAMA, it continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that, per iteration, both proposed algorithms generally have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve performance similar to the MSP algorithm with less execution time, and show better performance than the existing alternating minimization algorithm in terms of sum rate. Besides, in terms of convergence rate, simulation results show that MAMA converges fastest to a given sum rate value, while the hybrid algorithm converges fastest in eliminating interference.

  18. A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem

    SciTech Connect

    Bayardo, R.J. Jr.; Miranker, D.P.

    1996-12-31

    Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.

  19. Optical tomographic memories: algorithms for the efficient information readout

    NASA Astrophysics Data System (ADS)

    Pantelic, Dejan V.

    1990-07-01

    Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of the bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES Tomographic principles can be used to store and reconstruct information artificially stored in a bulk of photosensitive media 1. The information is stored by changing some characteristics of the memory material (e. g. refractive index). Radiation from two independent light sources (e. g. lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only in the localized point where the light rays intersect. By scanning the material the information can be stored in binary or n-ary format. When the information is stored it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem. Here a lot of a priori information is present regarding the positions of the bits of information, the profile representing a single bit and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for the information readout together with the simulation results will be presented. Special attention will be given to the noise considerations. Two different

  20. Informational analysis involving application of complex information system

    NASA Astrophysics Data System (ADS)

    Ciupak, Clébia; Vanti, Adolfo Alberto; Balloni, Antonio José; Espin, Rafael

    The aim of the present research is to perform an informational analysis for internal audit involving the application of a complex information system based on fuzzy logic. The approach has been applied in internal audit involving the integration of the accounting field into the information systems field. Technological advancements can provide improvements to the work performed by the internal audit. Thus we aim to find, in complex information systems, priorities for the internal audit work of a high-importance private institution of higher education. The applied method is quali-quantitative: from the definition of strategic linguistic variables it was possible to transform them into quantitative variables with the matrix intersection. By means of a case study, in which data were collected via an interview with the Administrative Pro-Rector, who takes part in the elaboration of the strategic planning of the institution, it was possible to infer which points must be prioritized in the internal audit work. We emphasize that the priorities were identified when processed in a system (of academic use). From the study we conclude that, starting from these information systems, audit can identify priorities for its work program. Along with the plans and strategic objectives of the enterprise, the internal auditor can define operational procedures to work in favor of the attainment of the objectives of the organization.

  1. Reconstruction of complex signals using minimum Rényi information.

    PubMed

    Frieden, B R; Bajkova, A T

    1995-07-10

    An information divergence, such as Shannon mutual information, measures the distance between two probability-density functions (or images). A wide class of such measures, called α divergences, with desirable properties such as convexity over all space, was defined by Amari. Rényi's information Dα is an α divergence. Because of its convexity property, the minimum of Dα is easily attained. Minimization accomplishes minimum distance (maximum resemblance) between an unknown image and a known reference image. Such a biasing effect permits complex images, such as occur in inverse synthetic-aperture-radar imaging, to be well reconstructed. The algorithm permits complex amplitudes to replace the probabilities in the Rényi form. The bias image may be constructed as a smooth version of the linear, Fourier reconstruction of the data. Examples on simulated complex image data with and without noise indicate that the Rényi reconstruction approach permits superresolution in low-noise cases and higher fidelity than ordinary, linear reconstructions in higher-noise cases. PMID:21052233
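
    As a minimal illustration of the quantity being minimized, the following sketch evaluates the Rényi alpha-divergence between two discrete distributions; it is written for this summary and is not the authors' reconstruction code.

    ```python
    # Renyi alpha-divergence between discrete distributions p and q:
    #   D_alpha(p || q) = 1/(alpha - 1) * log( sum_i p_i^alpha * q_i^(1 - alpha) )
    # Its convexity is what makes minimum-Renyi-information reconstruction tractable.
    import numpy as np

    def renyi_divergence(p, q, alpha=0.5, eps=1e-12):
        p = np.asarray(p, dtype=float); p = p / p.sum()
        q = np.asarray(q, dtype=float); q = q / q.sum()
        if np.isclose(alpha, 1.0):
            # alpha -> 1 recovers the Kullback-Leibler divergence
            return float(np.sum(p * np.log((p + eps) / (q + eps))))
        s = np.sum(p ** alpha * q ** (1.0 - alpha))
        return float(np.log(s + eps) / (alpha - 1.0))

    # the divergence vanishes when the image equals the reference (bias) image
    p = [0.1, 0.2, 0.3, 0.4]
    print(renyi_divergence(p, p, alpha=0.5))           # ~0.0
    print(renyi_divergence(p, [0.25] * 4, alpha=0.5))  # > 0
    ```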

  2. Three subsets of sequence complexity and their relevance to biopolymeric information.

    PubMed

    Abel, David L; Trevors, Jack T

    2005-01-01

    Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction. Random and Ordered Sequence Complexities lie at opposite ends of the same bi-directional sequence complexity vector. Randomness in sequence space is defined by a lack of Kolmogorov algorithmic compressibility. A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order. Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). PMID:16095527
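
    The notion of Kolmogorov compressibility invoked above can be illustrated with a crude upper bound from a general-purpose compressor: a highly ordered sequence compresses far better than a random one. The sketch below is an illustrative aside, not part of the original work.

    ```python
    # Kolmogorov complexity is uncomputable, but a compressor gives a rough
    # upper bound: ordered (OSC-like) strings compress far more than random
    # (RSC-like) strings of the same length.
    import zlib, random

    def compressed_ratio(s: str) -> float:
        data = s.encode()
        return len(zlib.compress(data, 9)) / len(data)

    random.seed(0)
    ordered = "AT" * 500                                        # redundant, law-like order
    rand = "".join(random.choice("ACGT") for _ in range(1000))  # nearly incompressible

    print("ordered :", compressed_ratio(ordered))  # small ratio -> highly compressible
    print("random  :", compressed_ratio(rand))     # ratio near 1 -> little compression
    ```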

  3. Network algorithms for information analysis using the Titan Toolkit.

    SciTech Connect

    McLendon, William Clarence, III; Baumes, Jeffrey; Wilson, Andrew T.; Wylie, Brian Neil; Shead, Timothy M.

    2010-07-01

    The analysis of networked activities is dramatically more challenging than many traditional kinds of analysis. A network is defined by a set of entities (people, organizations, banks, computers, etc.) linked by various types of relationships. These entities and relationships are often uninteresting alone, and only become significant in aggregate. The analysis and visualization of these networks is one of the driving factors behind the creation of the Titan Toolkit. Given the broad set of problem domains and the wide ranging databases in use by the information analysis community, the Titan Toolkit's flexible, component based pipeline provides an excellent platform for constructing specific combinations of network algorithms and visualizations.

  4. Patent information - towards simplicity or complexity?

    NASA Astrophysics Data System (ADS)

    Shenton, Written By Kathleen; Norton, Peter; Onodera, Translated By Natsuo

    Since the advent of online services, the ability to search and find chemical patent information has improved immeasurably. Recently, the integration of a multitude of files (through file merging as well as cross-file/simultaneous searches), 'intelligent' interfaces and optical technology for large amounts of data seem to achieve greater simplicity and convenience in the retrieval of patent information. In spite of this progress, a more essential problem is increasing complexity: the tendency to expand indefinitely the range of claims for chemical substances through ultra-generic descriptions of structure (overuse of optional substituents, variable divalent groups, repeating groups, etc.) and long listings of prophetic examples. Not only does this tendency worry producers and searchers of patent databases, it also hinders truly worthy inventions in the future.

  5. A comparison of computational methods and algorithms for the complex gamma function

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1974-01-01

    A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421 published in the Communications of ACM by H. Kuki is the best program either for individual application or for the inclusion in subroutine libraries.
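
    For orientation, the sketch below evaluates the complex log-gamma function with Stirling's asymptotic series plus an argument-shifting recurrence, one of the methods compared in the survey. It is an illustrative approximation written for this summary, not Kuki's Algorithm 421; the shift threshold and number of series terms are arbitrary choices.

    ```python
    # Log-gamma of a complex argument via Stirling's asymptotic series, using the
    # recurrence lnGamma(z) = lnGamma(z + n) - sum_{k=0}^{n-1} ln(z + k) to push
    # the argument far enough from the origin for the series to be accurate.
    import cmath, math

    _STIRLING = [1 / 12, -1 / 360, 1 / 1260, -1 / 1680]  # coefficients of z^-1, z^-3, z^-5, z^-7

    def log_gamma(z: complex) -> complex:
        shift = 0.0 + 0.0j
        while abs(z) < 10:                 # shift the argument upward for asymptotic accuracy
            shift += cmath.log(z)
            z += 1
        series = sum(c / z ** (2 * k + 1) for k, c in enumerate(_STIRLING))
        return (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2 * math.pi) + series - shift

    # sanity check against the real log-gamma for a real argument
    print(log_gamma(4.5 + 0j).real, math.lgamma(4.5))
    print(log_gamma(3 + 4j))
    ```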

  6. Non-traditional spectral clustering algorithms for the detection of community structure in complex networks: a comparative analysis

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoke; Gao, Lin

    2011-05-01

    The detection of community structure in complex networks is crucial since it provides insight into the substructures of the whole network. Spectral clustering algorithms that employ the eigenvalues and eigenvectors of an appropriate input matrix have been successfully applied in this field. Despite its empirical success in community detection, spectral clustering has been criticized for its inefficiency when dealing with large scale data sets. This is confirmed by the fact that the time complexity for spectral clustering is cubic with respect to the number of instances; even the memory efficient iterative eigensolvers, such as the power method, may converge slowly to the desired solutions. In efforts to improve the complexity and performance, many non-traditional spectral clustering algorithms have been proposed. Rather than using the real eigenvalues and eigenvectors as in the traditional methods, the non-traditional clusterings employ additional topological structure information characterized by the spectrum of a matrix associated with the network involved, such as the complex eigenvalues and their corresponding complex eigenvectors, eigenspaces and semi-supervised labels. However, to the best of our knowledge, no work has been devoted to comparison among these newly developed approaches. This is the main goal of this paper, through evaluating the effectiveness of these spectral algorithms against some benchmark networks. The experimental results demonstrate that the spectral algorithm based on the eigenspaces achieves the best performance but is the slowest algorithm; the semi-supervised spectral algorithm is the fastest but its performance largely depends on the prior knowledge; and the spectral method based on the complement network shows similar performance to the conventional ones.
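
    As a reference point for the traditional approach discussed above, the sketch below implements basic spectral clustering with a dense eigendecomposition of the normalized adjacency matrix followed by a tiny k-means step; the dense solve is precisely the cubic-cost bottleneck the paper criticizes. The code is illustrative and not taken from any of the compared algorithms.

    ```python
    # Conventional spectral clustering: embed nodes with the leading eigenvectors
    # of the normalized adjacency matrix, then cluster the embedding with k-means.
    # The dense eigendecomposition is the O(n^3) step mentioned in the abstract.
    import numpy as np

    def spectral_communities(adj, k, iters=50, seed=0):
        adj = np.asarray(adj, dtype=float)
        deg = adj.sum(axis=1)
        d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
        m = d_inv_sqrt @ adj @ d_inv_sqrt              # normalized adjacency
        vals, vecs = np.linalg.eigh(m)
        emb = vecs[:, -k:]                             # top-k eigenvectors as features
        emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12

        # tiny k-means on the spectral embedding
        rng = np.random.default_rng(seed)
        centers = emb[rng.choice(len(emb), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((emb[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = emb[labels == c].mean(axis=0)
        return labels

    # two 4-node cliques joined by a single edge -> two communities
    a = np.zeros((8, 8))
    a[:4, :4] = 1; a[4:, 4:] = 1; np.fill_diagonal(a, 0); a[3, 4] = a[4, 3] = 1
    print(spectral_communities(a, 2))
    ```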

  7. On the Time Complexity of Dijkstra's Three-State Mutual Exclusion Algorithm

    NASA Astrophysics Data System (ADS)

    Kimoto, Masahiro; Tsuchiya, Tatsuhiro; Kikuno, Tohru

    In this letter we give a lower bound on the worst-case time complexity of Dijkstra's three-state mutual exclusion algorithm by specifying a concrete behavior of the algorithm. We also show that our result is more accurate than the known best bound.

  8. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    ERIC Educational Resources Information Center

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.

    2015-01-01

    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  9. A New Algorithm for Complex Non-Orthogonal Joint Diagonalization Based on Shear and Givens Rotations

    NASA Astrophysics Data System (ADS)

    Mesloub, Ammar; Abed-Meraim, Karim; Belouchrani, Adel

    2014-04-01

    This paper introduces a new algorithm to approximate the non-orthogonal joint diagonalization (NOJD) of a set of complex matrices. The algorithm is based on the Frobenius-norm formulation of the JD problem and takes advantage of combining Givens and Shear rotations to attempt the approximate joint diagonalization (JD). It represents a non-trivial generalization of the JDi (Joint Diagonalization) algorithm (Souloumiac 2009) to the complex case. The JDi is first slightly modified, then generalized to the CJDi (i.e. Complex JDi) using a complex-to-real matrix transformation. Also, since several methods already exist in the literature, we provide herein a brief overview of existing NOJD algorithms and then an extensive comparative study to illustrate the effectiveness and stability of the CJDi w.r.t. various system parameters and application contexts.

  10. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks

    PubMed Central

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-01-01

    Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time. PMID:26230697

  11. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image obtained as a mobile robot observation. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurements' representations is reduced. Although loop closure detection can be considered a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of the complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods. PMID:24968363
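
    The normalized compression distance (NCD) at the core of the method can be sketched with a general-purpose compressor standing in for Kolmogorov complexity. The snippet below operates on raw byte strings rather than the paper's sparse image models and is only an illustration of the metric itself.

    ```python
    # Normalized compression distance, approximated with zlib:
    #   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    import zlib

    def c(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # similar observations yield a small distance; dissimilar ones a value near 1
    scan_a = bytes(range(256)) * 8
    scan_b = bytes(range(256)) * 8                         # revisited place (loop closure candidate)
    scan_c = bytes([(i * 37 + 11) % 256 for i in range(2048)])
    print(ncd(scan_a, scan_b), ncd(scan_a, scan_c))
    ```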

  12. An algorithm for automatic reduction of complex signal flow graphs

    NASA Technical Reports Server (NTRS)

    Young, K. R.; Hoberock, L. L.; Thompson, J. G.

    1976-01-01

    A computer algorithm is developed that provides efficient means to compute transmittances directly from a signal flow graph or a block diagram. Signal flow graphs are cast as directed graphs described by adjacency matrices. Nonsearch computation, designed for compilers without symbolic capability, is used to identify all arcs that are members of simple cycles for use with Mason's gain formula. The routine does not require the visual acumen of an interpreter to reduce the topology of the graph, and it is particularly useful for analyzing control systems described for computer analyses by means of interactive graphics.

  13. Price-Based Information Routing in Complex Satellite Networks for

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Su, J.; Wang, Y.; Wyglinski, A.

    Future space-based situational awareness and space surveillance systems are envisioned to include a large array of satellites that seek to cooperatively achieve full awareness over given space and terrestrial domains. Given the complexity of the communication network architecture of such a system, in this paper we build on the system architecture that was proposed by the presenting author in the 2008 AMOS conference and propose an efficient, adaptable and scalable price-based routing and bandwidth allocation algorithm for the generation, routing and delivery of surveillance information in distributed wireless satellite networks. Due to the potentially large deployments of these satellites, the access points employed in a centralized network control scheme would easily be overwhelmed due to lack of spectral bandwidth, synchronization issues, and multiple access coordination. Alternatively, decentralized schemes could facilitate the flow and transference of information between data gatherers and data collectors via mechanisms such as (multi-hop) routing, allocation of spectral bandwidths per relaying node, and coordination between adjacent nodes. Although there are numerous techniques and concepts focusing on the network operations, control, and management of sensor networks, existing solution approaches require the use of information for routing, allocation, and decision-making that may not be readily available to the satellites in a timely fashion. This is especially true in the literature on price-based routing, where the approach is almost always game theoretic or relies on optimization techniques. Instead of seeking such techniques, in this paper we present algorithms that will (1) be energy-aware, (2) be highly adaptable and responsive to demands and seek delivery of information to desired nodes despite the fact that the source and destination are not globally known, (3) be secure, (4) be efficient in allocating bandwidth, (5) be decentralized and allow for

  14. An Algorithm Combining for Objective Prediction with Subjective Forecast Information

    NASA Astrophysics Data System (ADS)

    Choi, JunTae; Kim, SooHyun

    2016-04-01

    As direct or post-processed output from numerical weather prediction (NWP) models has begun to show acceptable performance compared with the predictions of human forecasters, many national weather centers have become interested in automatic forecasting systems based on NWP products alone, without intervention from human forecasters. The Korea Meteorological Administration (KMA) is now developing an automatic forecasting system for dry variables. The forecasts are automatically generated from NWP predictions using a post-processing model (MOS). However, MOS cannot always produce acceptable predictions, and sometimes its predictions are rejected by human forecasters. In such cases, a human forecaster should manually modify the prediction consistently at points surrounding their corrections, using some kind of smart tool to incorporate the forecaster's opinion. This study introduces an algorithm to revise MOS predictions by adding a forecaster's subjective forecast information at neighbouring points. A statistical relation between two forecast points - a neighbouring point and a dependent point - was derived for the difference between a MOS prediction and that of a human forecaster. If the MOS prediction at a neighbouring point is updated by a human forecaster, the value at a dependent point is modified using a statistical relationship based on linear regression, with parameters obtained from a one-year dataset of MOS predictions and official forecast data issued by KMA. The best sets of neighbouring points and dependent points are statistically selected. According to verification, the RMSE of temperature predictions produced by the new algorithm was slightly lower than that of the original MOS predictions, and close to the RMSE of subjective forecasts. For wind speed and relative humidity, the new algorithm outperformed human forecasters.
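
    A hypothetical sketch of the described adjustment follows: a linear relation between historical (official minus MOS) differences at a neighbouring point and a dependent point is fitted, and a forecaster's manual correction at the neighbour is then propagated to the dependent point. Function names, the toy history and all parameters are illustrative, not taken from the KMA system.

    ```python
    # Propagate a forecaster's correction at a neighbouring point to a dependent
    # point through a linear relation fitted on historical differences.
    import numpy as np

    def fit_neighbour_relation(diff_neighbour, diff_dependent):
        """Fit diff_dependent ~ a * diff_neighbour + b on historical data,
        where diff_* = official_forecast - MOS_prediction at each point."""
        a, b = np.polyfit(diff_neighbour, diff_dependent, 1)
        return a, b

    def revise_dependent(mos_dependent, mos_neighbour, human_neighbour, a, b):
        """Shift the dependent-point MOS value using the neighbour's correction."""
        correction_neighbour = human_neighbour - mos_neighbour
        return mos_dependent + a * correction_neighbour + b

    # toy one-year history of (official - MOS) differences at the two points
    rng = np.random.default_rng(1)
    dn = rng.normal(0, 1.5, 365)
    dd = 0.8 * dn + rng.normal(0, 0.3, 365)
    a, b = fit_neighbour_relation(dn, dd)
    print(revise_dependent(mos_dependent=12.0, mos_neighbour=11.0,
                           human_neighbour=13.0, a=a, b=b))
    ```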

  15. Information filtering in complex weighted networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo

    2011-04-01

    Many systems in nature, society, and technology can be described as networks, where the vertices are the system’s elements, and edges between vertices indicate the interactions between the corresponding elements. Edges may be weighted if the interaction strength is measurable. However, the full network information is often redundant because tools and techniques from network analysis do not work or become very inefficient if the network is too dense, and some weights may just reflect measurement errors and need to be discarded. Moreover, since weight distributions in many complex weighted networks are broad, most of the weight is concentrated among a small fraction of all edges. It is then crucial to properly detect relevant edges. Simple thresholding would leave only the largest weights, disrupting the multiscale structure of the system, which is at the basis of the structure of complex networks and ought to be kept. In this paper we propose a weight-filtering technique based on a global null model [Global Statistical Significance (GloSS) filter], keeping both the weight distribution and the full topological structure of the network. The method correctly quantifies the statistical significance of weights assigned independently to the edges from a given distribution. Applications to real networks reveal that the GloSS filter is indeed able to identify relevant connections between vertices.

  16. Information Technology in Complex Health Services

    PubMed Central

    Southon, Frank Charles Gray; Sauer, Chris; Dampney, Christopher Noel Grant (Kit)

    1997-01-01

    Abstract Objective: To identify impediments to the successful transfer and implementation of packaged information systems through large, divisionalized health services. Design: A case analysis of the failure of an implementation of a critical application in the Public Health System of the State of New South Wales, Australia, was carried out. This application had been proven in the United States environment. Measurements: Interviews involving over 60 staff at all levels of the service were undertaken by a team of three. The interviews were recorded and analyzed for key themes, and the results were shared and compared to enable a continuing critical assessment. Results: Two components of the transfer of the system were considered: the transfer from a different environment, and the diffusion throughout a large, divisionalized organization. The analyses were based on the Scott-Morton organizational fit framework. In relation to the first, it was found that there was a lack of fit in the business environments and strategies, organizational structures and strategy-structure pairing as well as the management process-roles pairing. The diffusion process experienced problems because of the lack of fit in the strategy-structure, strategy-structure-management processes, and strategy-structure-role relationships. Conclusion: The large-scale developments of integrated health services present great challenges to the efficient and reliable implementation of information technology, especially in large, divisionalized organizations. There is a need to take a more sophisticated approach to understanding the complexities of organizational factors than has traditionally been the case. PMID:9067877

  17. Measurement and Information Extraction in Complex Dynamics Quantum Computation

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Montangero, Simone

    Quantum Information processing has several different applications: some of them can be performed controlling only few qubits simultaneously (e.g. quantum teleportation or quantum cryptography) [1]. Usually, the transmission of large amounts of information is performed by repeating several times the scheme implemented for few qubits. However, to exploit the advantages of quantum computation, the simultaneous control of many qubits is unavoidable [2]. This situation increases the experimental difficulties of quantum computing: maintaining quantum coherence in a large quantum system is a difficult task. Indeed a quantum computer is a many-body complex system and decoherence, due to the interaction with the external world, will eventually corrupt any quantum computation. Moreover, internal static imperfections can lead to quantum chaos in the quantum register thus destroying computer operability [3]. Indeed, as it has been shown in [4], a critical imperfection strength exists above which the quantum register thermalizes and quantum computation becomes impossible. We showed such effects on a quantum computer performing an efficient algorithm to simulate complex quantum dynamics [5,6].

  18. Brain MR image segmentation with spatial constrained K-mean algorithm and dual-tree complex wavelet transform.

    PubMed

    Zhang, Jingdan; Jiang, Wuhan; Wang, Ruichun; Wang, Le

    2014-09-01

    In brain MR images, noise and low contrast significantly deteriorate segmentation results. In this paper, we propose an automatic unsupervised segmentation method integrating the dual-tree complex wavelet transform (DT-CWT) with the K-mean algorithm for brain MR images. Firstly, a multi-dimensional feature vector is constructed based on the intensity, the low-frequency subband of the DT-CWT and spatial position information. Then, a spatial constrained K-mean algorithm is presented as the segmentation system. The proposed method is validated by extensive experiments using both simulated and real T1-weighted MR images, and compared with state-of-the-art algorithms. PMID:24994513
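
    A rough sketch of the feature construction and clustering is given below, assuming the PyWavelets and SciPy packages; for brevity the dual-tree complex wavelet transform is replaced by a single-level Haar DWT, so this is a simplified stand-in rather than the authors' DT-CWT pipeline.

    ```python
    # Feature vector per pixel: intensity, a low-frequency wavelet subband, and
    # (down-weighted) spatial coordinates; clustered with k-means.
    import numpy as np
    import pywt
    from scipy.cluster.vq import kmeans2

    def segment(image, k=3, spatial_weight=0.1):
        image = np.asarray(image, dtype=float)
        h, w = image.shape
        # low-frequency approximation subband, upsampled back to image size
        cA, _ = pywt.dwt2(image, "haar")
        low = np.repeat(np.repeat(cA, 2, axis=0), 2, axis=1)[:h, :w]
        ys, xs = np.mgrid[0:h, 0:w]
        feats = np.stack([image.ravel(),
                          low.ravel(),
                          spatial_weight * ys.ravel(),
                          spatial_weight * xs.ravel()], axis=1)
        feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
        _, labels = kmeans2(feats, k, minit="points")
        return labels.reshape(h, w)

    # toy "MR slice": three intensity bands plus noise
    img = np.vstack([np.full((20, 60), v) for v in (10.0, 120.0, 230.0)])
    img += np.random.default_rng(0).normal(0, 5, img.shape)
    print(np.unique(segment(img, k=3)))
    ```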

  19. A comparative study of frequency offset estimations in real and complex OFDM systems using different algorithms

    NASA Astrophysics Data System (ADS)

    Sahu, Swagatika; Mohanty, Saumendra; Srivastav, Richa

    2013-01-01

    Orthogonal Frequency Division Multiplexing (OFDM) is an emerging multi-carrier modulation scheme, which has been adopted for several wireless standards such as IEEE 802.11a and HiperLAN2. A well-known problem of OFDM is its sensitivity to frequency offset between the transmitted and received carrier frequencies. In an OFDM system, carrier frequency offsets (CFOs) between the transmitter and the receiver destroy the orthogonality between carriers and degrade the system performance significantly. The main problem with frequency offset is that it introduces interference among the multiplicity of carriers in the OFDM signal. The conventional algorithms given by P. Moose and Schmidl describe how the carrier frequency offset of an OFDM system can be estimated using training sequences. Simulation results show that the improved carrier frequency offset estimation algorithm, which uses a complex training sequence for frequency offset estimation, performs better than the conventional P. Moose and Schmidl algorithms; it can effectively improve the frequency estimation accuracy and provides a wide acquisition range for the carrier frequency offset with low complexity. This paper presents BER comparisons of the different algorithms with the improved algorithm for different real and complex modulation schemes, considering random carrier offsets. It also presents the BER performance with different CFOs for different real and complex modulation schemes for the improved algorithm.
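
    The training-sequence idea underlying the compared estimators can be illustrated with the classic Moose-style estimate, where the CFO (in subcarrier spacings) follows from the phase of the correlation between two repeated training symbols. The sketch below is a textbook illustration, not the improved algorithm evaluated in the paper.

    ```python
    # Estimate a carrier frequency offset from two identical length-N training
    # symbols: the CFO appears as a constant phase rotation between them.
    import numpy as np

    def estimate_cfo(rx, n):
        """rx contains two repeated length-n training symbols distorted by a CFO."""
        r1, r2 = rx[:n], rx[n:2 * n]
        return np.angle(np.sum(np.conj(r1) * r2)) / (2 * np.pi)

    # simulate: QPSK training symbol repeated twice, then a fractional CFO applied
    rng = np.random.default_rng(3)
    n = 64
    sym = (rng.choice([1, -1], n) + 1j * rng.choice([1, -1], n)) / np.sqrt(2)
    tx = np.concatenate([sym, sym])
    eps = 0.12                                    # true offset, in subcarrier spacings
    rx = tx * np.exp(2j * np.pi * eps * np.arange(2 * n) / n)
    rx += 0.05 * (rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n))
    print(estimate_cfo(rx, n))                    # close to 0.12
    ```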

  20. A novel algorithm for detecting protein complexes with the breadth first search.

    PubMed

    Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi

    2014-01-01

    Most biological processes are carried out by protein complexes. A substantial number of false positives of the protein-protein interaction (PPI) data can compromise the utility of the datasets for complexes reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. The methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm ClusterBFS (Cluster with Breadth-First Search) is proposed. Based on the weighted density, ClusterBFS detects protein complexes of the weighted network by the breadth first search algorithm, which originates from a given seed protein used as starting-point. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes. PMID:24818139
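
    A hedged sketch of a BFS-based, seed-grown clustering in the spirit of the description above follows; the weighted-density acceptance rule and its threshold are assumptions made for illustration and do not reproduce the exact ClusterBFS scoring.

    ```python
    # Grow a candidate protein complex from a seed with breadth-first search,
    # accepting a neighbor only if the cluster's weighted density stays high.
    from collections import deque

    def weighted_density(nodes, graph):
        nodes = set(nodes)
        w = sum(graph[u][v] for u in nodes for v in graph[u] if v in nodes) / 2.0
        n = len(nodes)
        return 0.0 if n < 2 else 2.0 * w / (n * (n - 1))

    def grow_complex(seed, graph, min_density=0.5):
        """graph: dict node -> dict neighbour -> confidence weight of the PPI edge."""
        cluster, queue, seen = {seed}, deque([seed]), {seed}
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v in seen:
                    continue
                seen.add(v)
                if weighted_density(cluster | {v}, graph) >= min_density:
                    cluster.add(v)
                    queue.append(v)
        return cluster

    ppi = {"A": {"B": 0.9, "C": 0.8}, "B": {"A": 0.9, "C": 0.7, "D": 0.2},
           "C": {"A": 0.8, "B": 0.7}, "D": {"B": 0.2}}
    print(grow_complex("A", ppi))   # {'A', 'B', 'C'}; D's weak edge is rejected
    ```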

  1. On Distribution Reduction and Algorithm Implementation in Inconsistent Ordered Information Systems

    PubMed Central

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research and for applications of ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems. PMID:25258721

  2. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.

  3. Combining algorithms in automatic detection of QRS complexes in ECG signals.

    PubMed

    Meyer, Carsten; Fernández Gavela, José; Harris, Matthew

    2006-07-01

    QRS complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that balance the contributions of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
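
    One simple way such a combination could work is sketched below: R-peak candidates from the two detectors are merged when they agree within a tolerance, and per-detector weights decide whether unmatched candidates are kept. The weights stand in for the balancing parameters mentioned above and would be estimated from annotated data; the fusion rule itself is an assumption for illustration, not the authors' estimator.

    ```python
    # Fuse two lists of R-peak candidates (sample indices) with detector weights.
    def combine_qrs(peaks_a, peaks_b, w_a=0.6, w_b=0.5, tol=50, keep=0.55):
        """peaks_*: sorted sample indices of R-peak candidates; tol in samples."""
        combined, used_b = [], set()
        for p in peaks_a:
            match = next((q for q in peaks_b if abs(q - p) <= tol and q not in used_b), None)
            if match is not None:                       # both detectors agree
                used_b.add(match)
                combined.append(round((w_a * p + w_b * match) / (w_a + w_b)))
            elif w_a >= keep:                           # detector A alone is trusted enough
                combined.append(p)
        combined += [q for q in peaks_b if q not in used_b and w_b >= keep]
        return sorted(combined)

    pan_tompkins = [102, 480, 861, 1240]
    wavelet      = [100, 478, 1243, 1650]
    print(combine_qrs(pan_tompkins, wavelet))   # [101, 479, 861, 1241]
    ```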

  4. Hidden state prediction: a modification of classic ancestral state reconstruction algorithms helps unravel complex symbioses

    PubMed Central

    Zaneveld, Jesse R. R.; Thurber, Rebecca L. V.

    2014-01-01

    Complex symbioses between animal or plant hosts and their associated microbiotas can involve thousands of species and millions of genes. Because of the number of interacting partners, it is often impractical to study all organisms or genes in these host-microbe symbioses individually. Yet new phylogenetic predictive methods can use the wealth of accumulated data on diverse model organisms to make inferences into the properties of less well-studied species and gene families. Predictive functional profiling methods use evolutionary models based on the properties of studied relatives to put bounds on the likely characteristics of an organism or gene that has not yet been studied in detail. These techniques have been applied to predict diverse features of host-associated microbial communities ranging from the enzymatic function of uncharacterized genes to the gene content of uncultured microorganisms. We consider these phylogenetically informed predictive techniques from disparate fields as examples of a general class of algorithms for Hidden State Prediction (HSP), and argue that HSP methods have broad value in predicting organismal traits in a variety of contexts, including the study of complex host-microbe symbioses. PMID:25202302

  5. Information-driven modeling of protein-peptide complexes.

    PubMed

    Trellet, Mikael; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2015-01-01

    Despite their biological importance in many regulatory processes, protein-peptide recognition mechanisms are difficult to study experimentally at the structural level because of the inherent flexibility of peptides and the often transient interactions on which they rely. Complementary methods like biomolecular docking are therefore required. The prediction of the three-dimensional structure of protein-peptide complexes raises unique challenges for computational algorithms, as exemplified by the recent introduction of protein-peptide targets in the blind international experiment CAPRI (Critical Assessment of PRedicted Interactions). Conventional protein-protein docking approaches often struggle with the high flexibility of peptides, whose short size impedes protocols and scoring functions developed for larger interfaces. On the other hand, protein-small-ligand docking methods are unable to cope with the larger number of degrees of freedom in peptides compared to small molecules and the typically reduced information available to define the binding site. In this chapter, we describe a protocol to model protein-peptide complexes using the HADDOCK web server, working through a test case to illustrate every step. The flexibility challenge that peptides represent is dealt with by combining elements of the conformational selection and induced fit theories of molecular recognition. PMID:25555727

  6. A fast D.F.T. algorithm using complex integer transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd (1976) has developed a new class of algorithms which depend heavily on the computation of a cyclic convolution for computing the conventional DFT (discrete Fourier transform); this new algorithm, for a few hundred transform points, requires substantially fewer multiplications than the conventional FFT algorithm. Reed and Truong have defined a special class of finite Fourier-like transforms over GF(q^2), where q = 2^p - 1 is a Mersenne prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 61. In the present paper it is shown that Winograd's algorithm can be combined with the aforementioned Fourier-like transform to yield a new algorithm for computing the DFT. A fast method for accurately computing the DFT of a sequence of complex numbers of very long transform-lengths is thus obtained.

  7. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  8. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetic disease, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase consists of applying a new filter that produces a suitable output: it shows vessels in dark color on a white background and makes a good distinction between vessels and background. The complexity is very low and extra images are eliminated. The second phase is processing and uses a method called Bayesian, a supervised classification method. This method uses the mean and variance of pixel intensities to calculate probabilities. Finally, pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a sample outside the DRIVE database that exhibits retinopathy, and a perfect result was obtained.
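
    The Bayesian step can be sketched as a two-class Gaussian classifier in which each class is summarized by the mean and variance of its training intensities; the snippet below is an illustration of that idea only, and the preprocessing filter is not reproduced.

    ```python
    # Two-class Gaussian (naive Bayes) pixel classifier: vessel vs. background,
    # each class modelled by the mean and variance of its training intensities.
    import numpy as np

    def fit_class(samples):
        samples = np.asarray(samples, dtype=float)
        return samples.mean(), samples.var() + 1e-9, len(samples)

    def classify(pixels, vessel_stats, background_stats):
        pixels = np.asarray(pixels, dtype=float)
        total = vessel_stats[2] + background_stats[2]
        log_post = []
        for mean, var, n in (background_stats, vessel_stats):
            prior = n / total
            ll = -0.5 * np.log(2 * np.pi * var) - (pixels - mean) ** 2 / (2 * var)
            log_post.append(np.log(prior) + ll)
        return (log_post[1] > log_post[0]).astype(int)   # 1 = vessel, 0 = background

    vessels = fit_class([30, 35, 40, 28, 33])            # dark vessels after filtering
    background = fit_class([200, 210, 190, 205, 220, 215])
    print(classify([32, 198, 120, 41], vessels, background))
    ```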

  9. Robust and low complexity localization algorithm based on head-related impulse responses and interaural time difference.

    PubMed

    Wan, Xinwang; Liang, Juan

    2013-01-01

    This article introduces a biologically inspired localization algorithm for a mobile robot using two microphones. The proposed algorithm has two steps. First, the coarse azimuth angle of the sound source is estimated by a cross-correlation algorithm based on the interaural time difference. Then, the accurate azimuth angle is obtained by a cross-channel algorithm based on head-related impulse responses. The proposed algorithm has lower computational complexity compared to the cross-channel algorithm. Experimental results illustrate that the localization performance of the proposed algorithm is better than those of the cross-correlation and cross-channel algorithms. PMID:23298016
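
    The coarse first stage can be sketched as follows: estimate the interaural time difference from the peak of the cross-correlation between the two microphone signals and convert the lag to an azimuth with the far-field relation ITD = d*sin(theta)/c. Microphone spacing and sampling rate below are illustrative; the HRIR-based refinement stage is omitted.

    ```python
    # Coarse azimuth from the interaural time difference (ITD).
    import numpy as np

    def coarse_azimuth(left, right, fs, mic_distance=0.2, c=343.0):
        corr = np.correlate(right, left, mode="full")
        lag = np.argmax(corr) - (len(left) - 1)   # samples by which right lags left;
                                                  # positive when the source is on the left
        itd = lag / fs
        s = np.clip(itd * c / mic_distance, -1.0, 1.0)
        return np.degrees(np.arcsin(s))

    # toy signal arriving 3 samples earlier at the left microphone
    fs = 16000
    rng = np.random.default_rng(2)
    src = rng.normal(size=1024)
    left = np.concatenate([src, np.zeros(3)])
    right = np.concatenate([np.zeros(3), src])
    print(coarse_azimuth(left, right, fs))        # about +19 degrees (source to the left)
    ```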

  10. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  11. Renyi complexities and information planes: Atomic structure in conjugated spaces

    NASA Astrophysics Data System (ADS)

    Antolín, J.; López-Rosa, S.; Angulo, J. C.

    2009-05-01

    Generalized Renyi complexity measures are defined and numerically analyzed for atomic one-particle densities in both conjugated spaces. These complexities provide, as particular cases, the previously known statistical and Fisher-Shannon complexities. The generalized complexities provide information on the atomic shell structure and shell-filling patterns, allowing different regions of the electronic cloud to be weighted appropriately.

  12. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, etc., which makes it very appealing for real-time implementation.

  13. An Introduction to Genetic Algorithms and to Their Use in Information Retrieval.

    ERIC Educational Resources Information Center

    Jones, Gareth; And Others

    1994-01-01

    Genetic algorithms, a class of nondeterministic algorithms in which the role of chance makes the precise nature of a solution impossible to guarantee, seem to be well suited to combinatorial-optimization problems in information retrieval. Provides an introduction to techniques and characteristics of genetic algorithms and illustrates their…

  14. Design and demonstration of automated data analysis algorithms for ultrasonic inspection of complex composite panels with bonds

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2016-02-01

    To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.

  15. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategy environment needs many kinds of information, such as economic, political, military, diplomatic, cultural and scientific information. It is therefore very important to build an efficient system for automatic information extraction, classification, recombination and analysis as a foundation and component of the military strategy hall. This paper first uses an improved Boost algorithm to classify the obtained initial information, and then uses a strategy intelligence extraction algorithm to extract strategic intelligence from the initial information, helping strategists analyze it.

  16. Do the Visual Complexity Algorithms Match the Generalization Process in Geographical Displays?

    NASA Astrophysics Data System (ADS)

    Brychtová, A.; Çöltekin, A.; Pászto, V.

    2016-06-01

    In this study, we first develop the hypothesis that existing quantitative visual complexity measures will overall reflect the level of cartographic generalization, and then test this hypothesis. Specifically, we first selected common geovisualization types (i.e., cartographic maps, hybrid maps, satellite images and shaded relief maps) and retrieved examples as provided by Google Maps, OpenStreetMap and SchweizMobil by swisstopo. The selected geovisualizations vary in cartographic design choices, scene contents and levels of generalization. Following this, we applied one of Rosenholtz et al.'s (2007) visual clutter algorithms to obtain quantitative visual complexity scores for screenshots of the selected maps. We hypothesized that visual complexity should be constant across generalization levels; however, the algorithm suggested that the complexity of small-scale displays (less detailed) is higher than that of large-scale (more detailed) displays. We also observed vast differences in visual complexity among map providers, which we attribute to their varying approaches towards cartographic design and the generalization process. Our efforts will contribute towards creating recommendations as to how visual complexity algorithms could be optimized for cartographic products, and eventually be utilized as part of the cartographic design process to assess visual complexity.

  17. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    NASA Astrophysics Data System (ADS)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-04-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing, and research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Thirdly, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can improve the security of image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on Quantum Fourier Transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.

  19. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm

    PubMed Central

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made it possible to examine the whole gastrointestinal tract without much pain. However, some important problems remain to be solved, one of which is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and it depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose a nonlinear optimization approach using a random complex algorithm, applied to the optimization of the nonlinear dipole function, to determine the three-dimensional position parameters and two-dimensional orientation parameters. The stability and noise robustness of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, with respect to the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoising" capacity, with a larger admissible range of initial guess values. PMID:25914561
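
    The sketch below illustrates the underlying localization-by-optimization idea on synthetic data: the magnetic dipole model is fitted to simulated sensor readings with SciPy's least_squares rather than the paper's random complex algorithm, and the sensor layout, noise level, and dipole moment magnitude are assumptions.

```python
# Hedged sketch: fit a magnetic dipole model to simulated sensor-array readings
# with SciPy's least_squares (a stand-in for the paper's random complex algorithm).
# Sensor layout, noise level, and the dipole moment magnitude are assumptions.
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def moment_vector(theta, phi, m_mag):
    """Dipole moment vector from two orientation angles."""
    return m_mag * np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])

def dipole_field(sensor_pos, magnet_pos, m_vec):
    """Magnetic flux density of a point dipole at each sensor position."""
    r = sensor_pos - magnet_pos
    d = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / d
    return MU0_4PI * (3.0 * (r_hat @ m_vec)[:, None] * r_hat - m_vec) / d**3

def residuals(params, sensor_pos, b_meas, m_mag):
    x, y, z, theta, phi = params   # 3 position + 2 orientation parameters
    field = dipole_field(sensor_pos, np.array([x, y, z]),
                         moment_vector(theta, phi, m_mag))
    return (field - b_meas).ravel()

# Synthetic example: a 4x4 planar sensor array observing a hidden magnet.
rng = np.random.default_rng(0)
sensors = np.array([[0.05 * i, 0.05 * j, 0.0] for i in range(4) for j in range(4)])
m_mag = 0.5                          # assumed dipole moment magnitude (A*m^2)
true_params = [0.08, 0.07, 0.12, 0.6, 1.1]
b_true = dipole_field(sensors, np.array(true_params[:3]),
                      moment_vector(true_params[3], true_params[4], m_mag))
b_noisy = b_true + rng.normal(scale=1e-9, size=b_true.shape)

fit = least_squares(residuals, x0=[0.1, 0.1, 0.1, 0.5, 0.5],
                    args=(sensors, b_noisy, m_mag))
print("estimated position:", fit.x[:3])
print("estimated orientation (theta, phi):", fit.x[3:])
```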

  20. Fast algorithm for minutiae matching based on multiple-ridge information

    NASA Astrophysics Data System (ADS)

    Wang, Guoyou; Hu, Jing

    2001-09-01

    Autonomous real-time fingerprint verification, i.e., judging whether or not two fingerprints come from the same finger, is an important and difficult problem in AFIS (Automated Fingerprint Identification Systems). In addition to nonlinear deformation, two fingerprints from the same finger may also appear dissimilar due to translation or rotation; all these factors increase the dissimilarity and can lead to misjudgment, so the correct verification rate depends strongly on the degree of deformation. In this paper, we present a new, fast and simple algorithm for fingerprint matching, derived from Chang et al.'s method, to solve the problem of optimal matching between two fingerprints under nonlinear deformation. The proposed algorithm uses not only the feature points of the fingerprints but also information from multiple ridges to reduce the computational complexity of fingerprint verification. Experiments with a number of fingerprint images have shown that this algorithm is more efficient than existing methods due to the reduced number of searching operations.

  1. Applications of the complexity space to the General Probabilistic Divide and Conquer Algorithms

    NASA Astrophysics Data System (ADS)

    García-Raffi, L. M.; Romaguera, S.; Schellekens, M. P.

    2008-12-01

    Schellekens [M. Schellekens, The Smyth completion: A common foundation for denotational semantics and complexity analysis, in: Proc. MFPS 11, in: Electron. Notes Theor. Comput. Sci., vol. 1, 1995, pp. 535-556], and Romaguera and Schellekens [S. Romaguera, M. Schellekens, Quasi-metric properties of complexity spaces, Topology Appl. 98 (1999) 311-322] introduced a topological foundation to obtain complexity results through the application of Semantic techniques to Divide and Conquer Algorithms. This involved the fact that the complexity (quasi-metric) space is Smyth complete and the use of a version of the Banach fixed point theorem and improver functionals. To further bridge the gap between Semantics and Complexity, we show here that these techniques of analysis, based on the theory of complexity spaces, extend to General Probabilistic Divide and Conquer schema discussed by Flajolet [PE Flajolet, Analytic analysis of algorithms, in: W. Kuich (Ed.), 19th Internat. Colloq. ICALP'92, Vienna, July 1992; Automata, Languages and Programming, in: Lecture Notes in Comput. Sci., vol. 623, 1992, pp. 186-210]. In particular, we obtain a general method which is useful to show that for several recurrence equations based on the recursive structure of General Probabilistic Divide and Conquer Algorithms, the associated functionals have a unique fixed point which is the solution for the corresponding recurrence equation.

  2. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph theoretic model called ATAMM which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  3. a Simple Algorithm to Enforce Dirichlet Boundary Conditions in Complex Geometries

    NASA Astrophysics Data System (ADS)

    Huber, Christian; Dufek, Josef; Chopard, Bastien

    We present a new algorithm to implement Dirichlet boundary conditions for diffusive processes in arbitrarily complex geometries. In this approach, the boundary conditions around the diffusing object are replaced by the fictitious phase transition of a pure substance, where the energy cost of the phase transition largely overwhelms the amount of energy stored in the system. The computing cost of this treatment of the boundary condition is independent of the topology of the boundary. Moreover, the implementation of this new approach is straightforward and follows naturally from enthalpy-based numerical methods. The algorithm is compatible with a wide variety of discretization methods (finite differences, finite volumes, lattice Boltzmann methods and finite elements, to cite a few). We show here, using both lattice Boltzmann and finite-volume methods, that our model is in excellent agreement with analytical solutions for high-symmetry geometries. We also illustrate the advantages of the algorithm in handling more complex geometries.

  4. Quantifying networks complexity from information geometry viewpoint

    SciTech Connect

    Felice, Domenico Mancini, Stefano; Pettini, Marco

    2014-04-15

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of networks complexity. We prove that it is invariant under networks isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  5. Can complexity science inform physician leadership development?

    PubMed

    Grady, Colleen Marie

    2016-07-01

    Purpose: The purpose of this paper is to describe research that examined physician leadership development using complexity science principles. Design/methodology/approach: Intensive interviewing of 21 participants and document review provided data regarding physician leadership development in health-care organizations using five principles of complexity science (connectivity, interdependence, feedback, exploration-of-the-space-of-possibilities and co-evolution), which were grouped in three areas of inquiry (relationships between agents, patterns of behaviour and enabling functions). Findings: Physician leaders are viewed as critical in the transformation of healthcare and in improving patient outcomes, and yet significant challenges exist that limit their development. Leadership in health care continues to be associated with traditional, linear models, which are incongruent with the behaviour of a complex system, such as health care. Physician leadership development remains a low priority for most health-care organizations, although physicians admit to being limited in their capacity to lead. This research was based on five principles of complexity science and used grounded theory methodology to understand how the behaviours of a complex system can provide data regarding leadership development for physicians. The study demonstrated that there is a strong association between physician leadership and patient outcomes and that organizations play a primary role in supporting the development of physician leaders. Findings indicate that a physician's relationship with their patient and their capacity for innovation can be extended as catalytic behaviours in a complex system. The findings also identified limiting factors that impact physicians who choose to lead, such as reimbursement models that do not place value on leadership and medical education that provides minimal opportunity for leadership skill development. Practical implications: This research provides practical

  6. Star pattern recognition algorithm aided by inertial information

    NASA Astrophysics Data System (ADS)

    Liu, Bao; Wang, Ke-dong; Zhang, Chao

    2011-08-01

    Star pattern recognition is one of the key problems of celestial navigation. The traditional star pattern recognition approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is slow and whose recognition success rate is not high. As a result, the real-time performance and reliability of the CNS (Celestial Navigation System) are reduced to some extent, especially for a maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by other navigation systems such as an INS (Inertial Navigation System), star pattern recognition can be performed in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with the aid of the INS. Then, the local star catalog for star pattern recognition is established dynamically in real time. The star images extracted in the camera plane are matched in the local sky. Compared to the traditional all-sky star pattern recognition algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.

  7. Complex Dynamics in Information Sharing Networks

    NASA Astrophysics Data System (ADS)

    Cronin, Bruce

    This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six year period. The efficiency of such implementation is a key business problem in IT systems of this type. Data from usage logs provides the basis for analysis of the dynamic evolution of social networks around the depository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case study demonstrating the utility of the broad methodological approach.

  8. HKC: an algorithm to predict protein complexes in protein-protein interaction networks.

    PubMed

    Wang, Xiaomin; Wang, Zhengzhi; Ye, Jun

    2011-01-01

    With the availability of more and more genome-scale protein-protein interaction (PPI) networks, research interest has gradually shifted to the systematic analysis of these large data sets. A key topic is to predict protein complexes in PPI networks by identifying clusters that are densely connected within themselves but sparsely connected with the rest of the network. In this paper, we present a new topology-based algorithm, HKC, to detect protein complexes in genome-scale PPI networks. HKC mainly uses the concepts of highest k-core and cohesion to predict protein complexes by identifying overlapping clusters. Experiments on two data sets and two benchmarks show that our algorithm has a relatively high F-measure and exhibits better performance compared with some other methods. PMID:22174556
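
    The following minimal sketch illustrates the highest k-core idea used for seeding clusters, using networkx on a toy PPI graph; the cohesion-based extension and overlap handling of the published HKC algorithm are not reproduced.

```python
# A minimal sketch of the highest k-core idea behind cluster seeds (not the full
# HKC algorithm; the cohesion-based extension step is omitted). Uses networkx.
import networkx as nx

def highest_k_core(graph):
    """Return the k-core of the graph with the largest k (the 'highest k-core')."""
    core_numbers = nx.core_number(graph)
    k_max = max(core_numbers.values())
    return graph.subgraph([n for n, k in core_numbers.items() if k == k_max]).copy()

def candidate_complexes(ppi, min_size=3):
    """Seed candidate complexes with the highest k-core of each connected component."""
    candidates = []
    for component in nx.connected_components(ppi):
        core = highest_k_core(ppi.subgraph(component))
        if core.number_of_nodes() >= min_size:
            candidates.append(set(core.nodes()))
    return candidates

# Toy PPI network: a dense 4-clique loosely attached to a sparse chain of proteins.
ppi = nx.Graph()
ppi.add_edges_from([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
                    ("B", "D"), ("C", "D"),                # dense 4-clique
                    ("D", "E"), ("E", "F"), ("F", "G")])   # sparse tail
print(candidate_complexes(ppi))   # expected: the clique {A, B, C, D}
```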

  9. Algorithmic complexity. A new approach of non-linear algorithms for the analysis of atrial signals from multipolar basket catheter.

    PubMed

    Pitschner, H F; Berkowitsch, A

    2001-01-01

    Symbolic dynamics, as a nonlinear method, and computation of the normalized algorithmic complexity (C alpha) were applied to basket-catheter mapping of atrial fibrillation (AF) in the right human atrium. The resulting degrees of organization of AF were compared with the conventional Wells classification. The short-time temporal and spatial distribution of C alpha during AF and the effects of propafenone on this distribution were investigated in 30 patients. C alpha was calculated for a moving window and analyzed within 10 minutes before and after administration of propafenone. The inter-regional C alpha distribution was statistically analyzed. Inter-regional C alpha differences were found in all patients (p < 0.001). The right atrium could be divided into high- and low-complexity areas according to individual patterns. A significant C alpha increase in the cranio-caudal direction was confirmed inter-individually (p < 0.01). The administration of propafenone enlarged the areas of low complexity. PMID:11889958
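
    As a hedged illustration of moving-window complexity analysis of a symbolized signal, the sketch below computes a normalized Lempel-Ziv complexity over sliding windows of a median-binarized signal; the paper's exact symbolization and normalization of C alpha are not specified here, so both are assumptions.

```python
# Hedged sketch: normalized Lempel-Ziv (1976) complexity over sliding windows of a
# binarized signal, as a generic stand-in for the paper's normalized C alpha.
import numpy as np

def lz76_complexity(symbols):
    """Count phrases in the Lempel-Ziv (1976) parsing of a symbol sequence."""
    s = "".join(map(str, symbols))
    i, phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it already occurs earlier in the string
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def normalized_lz(signal, window, step):
    """Normalized LZ complexity of a median-binarized signal in sliding windows."""
    out = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        sym = (seg > np.median(seg)).astype(int)        # assumed symbolization
        n = len(sym)
        out.append(lz76_complexity(sym) * np.log2(n) / n)  # classic normalization
    return np.array(out)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 40 * np.pi, 4000))       # organized activity
irregular = rng.standard_normal(4000)                    # disorganized activity
print(normalized_lz(regular, window=500, step=500).mean(),
      normalized_lz(irregular, window=500, step=500).mean())
```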

  10. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    PubMed

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the third-order SVD-PARAFAC-Volterra model, obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence is proved using the Ordinary Differential Equation (ODE) method. It is noted that convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two-Tank System (CTTS). PMID:25442399

  11. Face detection in complex background based on Adaboost algorithm and YCbCr skin color model

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Face detection is a fundamental and important research topic in Pattern Recognition and Computer Vision, and remarkable results have been achieved. Among the existing methods, statistics-based methods hold a dominant position. In this paper, the Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin-color model detection with Adaboost is investigated: the skin detection is used to validate the detection results obtained by the Adaboost algorithm, which overcomes Adaboost's false detection problem. Experimental results show that nearly all non-face areas are removed, improving the detection rate.
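
    A minimal sketch of the described combination, assuming OpenCV's stock Haar cascade, commonly used YCbCr skin thresholds, and a 30% skin-pixel acceptance ratio (all assumptions, not the paper's exact settings):

```python
# Hedged sketch: Haar-cascade (Adaboost) face detection validated by a YCbCr
# skin-color mask. Skin bounds and the acceptance ratio are common heuristics.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_with_skin_validation(bgr_image, min_skin_ratio=0.3):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    candidates = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin-color bounds in (Y, Cr, Cb) order.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    validated = []
    for (x, y, w, h) in candidates:
        roi = skin_mask[y:y + h, x:x + w]
        if roi.size and np.count_nonzero(roi) / roi.size >= min_skin_ratio:
            validated.append((x, y, w, h))   # keep only skin-consistent detections
    return validated

if __name__ == "__main__":
    image = cv2.imread("scene.jpg")          # hypothetical input image
    for (x, y, w, h) in detect_faces_with_skin_validation(image):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces.jpg", image)
```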

  12. A novel protein complex identification algorithm based on Connected Affinity Clique Extension (CACE).

    PubMed

    Li, Peng; He, Tingting; Hu, Xiaohua; Zhao, Junmin; Shen, Xianjun; Zhang, Ming; Wang, Yan

    2014-06-01

    A novel algorithm based on Connected Affinity Clique Extension (CACE) for mining overlapping functional modules in protein interaction networks is proposed in this paper. In this approach, the value of protein connected affinity, which is inferred from protein complexes, is interpreted as the reliability and likelihood of an interaction. The protein interaction network is constructed as a weighted graph, with the weight dependent on the connected affinity coefficient. The experimental results of CACE on two test data sets show that it can detect functional modules much more effectively and accurately than other state-of-the-art algorithms, CPM and IPC-MCE. PMID:24803142

  13. A reduced-complexity data-fusion algorithm using belief propagation for location tracking in heterogeneous observations.

    PubMed

    Chiou, Yih-Shyh; Tsai, Fuan

    2014-06-01

    This paper presents a low-complexity and high-accuracy algorithm to reduce the computational load of the traditional data-fusion algorithm with heterogeneous observations for location tracking. For location estimation with data fusion of radio-based ranging measurements and speed-based sensing measurements, the proposed tracking scheme, based on the Bayesian filtering concept, is handled by a state-space model. The location tracking problem is divided into many mutually interacting local constraints using the inherent message-passing features of factor graphs. During each iteration cycle, messages carrying reliable information are passed efficiently between the prediction phase and the correction phase to simplify the data-fusion implementation for tracking the location of the mobile terminal. Numerical simulations show that the proposed forward and one-step-backward refining tracking approach, which combines radio ranging with speed sensing measurements for data fusion, not only achieves location accuracy close to that of the traditional Kalman filtering data-fusion algorithm, but also has much lower computational complexity. PMID:24013831
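
    The sketch below illustrates the prediction/correction fusion idea in its simplest form: a one-dimensional Kalman filter in which speed sensing drives the prediction and radio ranging drives the correction. It is not the paper's factor-graph algorithm, and the noise variances and trajectory are assumptions.

```python
# Hedged sketch: 1-D Kalman-style fusion of speed (prediction) and range (correction).
import numpy as np

def fuse_track(ranges, speeds, dt=1.0, q=0.05, r=1.0, x0=0.0, p0=10.0):
    """Fuse speed-based predictions with range-based corrections."""
    x, p = x0, p0               # position estimate and its variance
    track = []
    for z_range, v in zip(ranges, speeds):
        # Prediction phase: dead-reckon with the speed measurement.
        x = x + v * dt
        p = p + q
        # Correction phase: update with the radio-ranging observation.
        k = p / (p + r)         # Kalman gain
        x = x + k * (z_range - x)
        p = (1.0 - k) * p
        track.append(x)
    return np.array(track)

rng = np.random.default_rng(2)
true_pos = np.cumsum(np.full(50, 1.0))                       # constant 1 m/s walk
ranges = true_pos + rng.normal(scale=1.0, size=50)           # noisy radio ranging
speeds = np.full(50, 1.0) + rng.normal(scale=0.1, size=50)   # noisy speed sensor
est = fuse_track(ranges, speeds)
print("RMSE:", np.sqrt(np.mean((est - true_pos) ** 2)))
```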

  14. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
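
    For reference, the sketch below implements the exact max-log LLR demapper for Gray-mapped 16-QAM that low-complexity architectures of this kind approximate; the unit-average-power scaling and noise model are assumptions, and the paper's simplified algorithms are not reproduced.

```python
# Reference max-log LLR soft demapper for Gray-mapped 16-QAM (the baseline that
# low-complexity demappers approximate). Scaling and noise model are assumptions.
import numpy as np
from itertools import product

GRAY_2BIT = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray map per axis

# Build the 16-QAM constellation: bits b0 b1 -> I axis, b2 b3 -> Q axis.
CONSTELLATION = []   # list of (symbol, bits) pairs, unit average power
for bits in product((0, 1), repeat=4):
    sym = complex(GRAY_2BIT[bits[0:2]], GRAY_2BIT[bits[2:4]]) / np.sqrt(10)
    CONSTELLATION.append((sym, bits))

def maxlog_llrs(y, noise_var):
    """Max-log LLRs for one received complex symbol y (positive favors bit = 0)."""
    llrs = []
    for i in range(4):
        d0 = min(abs(y - s) ** 2 for s, b in CONSTELLATION if b[i] == 0)
        d1 = min(abs(y - s) ** 2 for s, b in CONSTELLATION if b[i] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# Transmit a known symbol through an AWGN channel and demap it.
rng = np.random.default_rng(3)
tx_bits = (1, 0, 0, 1)
tx_sym = next(s for s, b in CONSTELLATION if b == tx_bits)
noise_var = 0.05
rx = tx_sym + np.sqrt(noise_var / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
print([round(l, 2) for l in maxlog_llrs(rx, noise_var)])  # negative LLR indicates bit = 1
```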

  15. MTG2: an efficient algorithm for multivariate linear mixed model analysis based on genomic information

    PubMed Central

    Lee, S. H.; van der Werf, J. H. J.

    2016-01-01

    Summary: We have developed an algorithm for genetic analysis of complex traits using genome-wide SNPs in a linear mixed model framework. Compared to current standard REML software based on the mixed model equation, our method is substantially faster. The advantage is largest when there is only a single genetic covariance structure. The method is particularly useful for multivariate analysis, including multi-trait models and random regression models for studying reaction norms. We applied our proposed method to publicly available mice and human data and discuss the advantages and limitations. Availability and implementation: MTG2 is available in https://sites.google.com/site/honglee0707/mtg2. Contact: hong.lee@une.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755623

  16. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. One of their most demanding applications is shortest path search. Several studies of shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest path search. One of the most widely used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path by applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in similar or even less time than when using heuristic algorithms. PMID:24010024
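
    The sketch below contrasts the two baseline algorithms discussed above, Dijkstra and A*, on a small weighted grid graph; the grid, unit weights, and Euclidean heuristic are assumptions, and the paper's reduced-graph construction is not reproduced.

```python
# Hedged sketch: baseline Dijkstra and A* on a small weighted grid graph
# (not the paper's reduced-graph method).
import heapq
import math

def shortest_path(graph, start, goal, heuristic=None):
    """Dijkstra when heuristic is None, A* otherwise. Returns (cost, path)."""
    h = heuristic or (lambda node: 0.0)
    frontier = [(h(start), 0.0, start, [start])]   # (f = g + h, g, node, path)
    settled = {}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for neighbor, weight in graph[node]:
            g = cost + weight
            heapq.heappush(frontier, (g + h(neighbor), g, neighbor, path + [neighbor]))
    return math.inf, []

# Build a 3x3 grid with unit edge weights; node (i, j) doubles as its coordinate.
nodes = [(i, j) for i in range(3) for j in range(3)]
graph = {n: [] for n in nodes}
for i, j in nodes:
    for di, dj in ((1, 0), (0, 1)):
        if (i + di, j + dj) in graph:
            graph[(i, j)].append(((i + di, j + dj), 1.0))
            graph[(i + di, j + dj)].append(((i, j), 1.0))

goal = (2, 2)
euclidean = lambda n: math.dist(n, goal)
print(shortest_path(graph, (0, 0), goal))                        # Dijkstra
print(shortest_path(graph, (0, 0), goal, heuristic=euclidean))   # A*
```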

  17. A consensus algorithm for approximate string matching and its application to QRS complex detection

    NASA Astrophysics Data System (ADS)

    Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.

    2016-08-01

    In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
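
    As a loosely hedged illustration of consensus-style scoring (the paper's exact consensus measure is not reproduced), the sketch below lets every alignment of the pattern vote on the text positions it covers; positions inside approximate occurrences accumulate high scores.

```python
# Hedged sketch of consensus-style scoring for approximate string matching: every
# alignment votes, in proportion to how well it matches, for the positions it covers.
import numpy as np

def consensus_scores(text, pattern):
    """Per-position consensus scores; peaks mark approximate pattern instances."""
    n, m = len(text), len(pattern)
    scores = np.zeros(n)
    for start in range(n - m + 1):
        matches = sum(text[start + k] == pattern[k] for k in range(m))
        scores[start:start + m] += matches / m    # share this alignment's vote
    return scores

text = "xxabcdxxabZdxxacbdxx"
pattern = "abcd"
scores = consensus_scores(text, pattern)
threshold = 0.7 * scores.max()
print(np.round(scores, 2).tolist())
print("high-consensus positions:", np.flatnonzero(scores >= threshold).tolist())
```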

  18. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  19. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  20. A Survey of Stemming Algorithms in Information Retrieval

    ERIC Educational Resources Information Center

    Moral, Cristian; de Antonio, Angélica; Imbert, Ricardo; Ramírez, Jaime

    2014-01-01

    Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…

  1. Infrared image non-rigid registration based on regional information entropy demons algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Chaoliang; Ma, Lihua; Yu, Ming; Cui, Shumin; Wu, Qingrong

    2015-02-01

    Infrared imaging fault detection, an ideal non-contact, non-destructive testing method, is applied to circuit board fault detection. Infrared images obtained by a handheld infrared camera with a wide-angle lens exhibit both rigid and non-rigid deformations. To solve this problem, a new demons algorithm based on regional information entropy is proposed. The new method overcomes the shortcoming of the traditional demons algorithm of being sensitive to intensity. First, an information entropy image is obtained by computing the regional information entropy of the image. Then, the deformation between the two images is calculated in the same way as in the demons algorithm. Experimental results demonstrate that the proposed algorithm is more robust for registering images with inconsistent intensity than the traditional demons algorithm. Achieving accurate registration between intensity-inconsistent infrared images provides strong support for temperature contrast analysis.
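
    A minimal sketch of the regional information entropy image that the method builds on: each pixel is replaced by the Shannon entropy of the grey-level histogram in a surrounding window. The window size and quantization are assumptions.

```python
# Hedged sketch: per-pixel Shannon entropy of local grey-level histograms, the kind
# of "regional information entropy image" the registration metric is built on.
import numpy as np

def regional_entropy(image, window=9, bins=32):
    """Per-pixel Shannon entropy of local grey-level histograms."""
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + window, j:j + window]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out

# Synthetic example: two images of the same structure with different intensities
# produce similar entropy images, which is what makes the metric robust here.
rng = np.random.default_rng(4)
base = np.zeros((64, 64)); base[16:48, 16:48] = 200
img_a = np.clip(base + rng.normal(0, 5, base.shape), 0, 255)
img_b = np.clip(0.5 * base + 40 + rng.normal(0, 5, base.shape), 0, 255)
ea, eb = regional_entropy(img_a), regional_entropy(img_b)
print("entropy-image correlation:", np.corrcoef(ea.ravel(), eb.ravel())[0, 1])
```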

  2. A novel algorithm for simplification of complex gene classifiers in cancer

    PubMed Central

    Wilson, Raphael A.; Teng, Ling; Bachmeyer, Karen M.; Bissonnette, Mei Lin Z.; Husain, Aliya N.; Parham, David M.; Triche, Timothy J.; Wing, Michele R.; Gastier-Foster, Julie M.; Barr, Frederic G.; Hawkins, Douglas S.; Anderson, James R.; Skapek, Stephen X.; Volchenboum, Samuel L.

    2013-01-01

    The clinical application of complex molecular classifiers as diagnostic or prognostic tools has been limited by the time and cost needed to apply them to patients. Using an existing fifty-gene expression signature known to separate two molecular subtypes of the pediatric cancer rhabdomyosarcoma, we show that an exhaustive iterative search algorithm can distill this complex classifier down to two or three features with equal discrimination. We validated the two-gene signatures using three separate and distinct data sets, including one that uses degraded RNA extracted from formalin-fixed, paraffin-embedded material. Finally, to demonstrate the generalizability of our algorithm, we applied it to a lung cancer data set to find minimal gene signatures that can distinguish survival. Our approach can easily be generalized and coupled to existing technical platforms to facilitate the discovery of simplified signatures that are ready for routine clinical use. PMID:23913937
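
    The sketch below illustrates the exhaustive pair-search idea on synthetic data: all two-gene pairs are scanned and scored by how well a simple relative-expression rule separates two classes. The toy data, the decision rule, and the accuracy criterion are assumptions; the paper's scoring details are not reproduced.

```python
# Hedged sketch of the exhaustive pair-search idea: scan every two-gene pair and
# score how well the relative-expression rule "gene i > gene j" separates two classes.
import numpy as np
from itertools import combinations

def best_gene_pairs(expr, labels, top_k=3):
    """expr: (n_samples, n_genes) matrix, labels: 0/1 array. Return top-scoring pairs."""
    results = []
    for i, j in combinations(range(expr.shape[1]), 2):
        pred = (expr[:, i] > expr[:, j]).astype(int)
        acc = max(np.mean(pred == labels), np.mean(pred != labels))  # allow the flipped rule
        results.append((float(acc), i, j))
    return sorted(results, reverse=True)[:top_k]

rng = np.random.default_rng(5)
n_samples, n_genes = 60, 50
labels = np.array([0] * 30 + [1] * 30)
expr = rng.normal(size=(n_samples, n_genes))
expr[labels == 1, 7] += 2.0    # gene 7 is up in subtype 1 ...
expr[labels == 0, 21] += 2.0   # ... and gene 21 is up in subtype 0
print(best_gene_pairs(expr, labels))   # the (7, 21) pair should rank at or near the top
```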

  3. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low-complexity forward adaptive loss compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, and estimates and quantizes the gain within each frame in order to enable quantization by a forward adaptive piecewise-linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm not only provides a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the algorithm fully satisfies the G.712 standard, since it exceeds the curve defined by the G.712 standard over the whole variance range. Accordingly, we can reasonably expect that our algorithm will find practical use in high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.

  4. Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning, and Genetic Algorithms.

    ERIC Educational Resources Information Center

    Chen, Hsinchun

    1995-01-01

    Presents an overview of artificial-intelligence-based inductive learning techniques and their use in information science research. Three methods are discussed: the connectionist Hopfield network; the symbolic ID3/ID5R; evolution-based genetic algorithms. The knowledge representations and algorithms of these methods are examined in the context of…

  5. Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.

    PubMed

    Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector

    2016-03-01

    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology. PMID:25761393

  6. Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids

    SciTech Connect

    Forest, Mark Gregory

    2014-05-06

    The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.

  7. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    SciTech Connect

    Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-09-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how adaptive sampling can be performed using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.

  8. Mutual information image registration based on improved bee evolutionary genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Gang; Tu, Jingzhi

    2009-07-01

    In recent years, mutual information has been regarded as a more efficient similarity metric in image registration. In accordance with the features of mutual information image registration, the Bee Evolution Genetic Algorithm (BEGA), which imitates swarm mating, is chosen for optimizing the parameters. In addition, we adaptively set the initial parameters to improve the BEGA. The programming results show the good precision of the algorithm.
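
    The sketch below computes the similarity metric itself, mutual information from a joint grey-level histogram, and shows that it peaks at the correct alignment; the BEGA optimizer is not reproduced, and the bin count and toy transform are assumptions.

```python
# Hedged sketch: mutual information between two images from their joint histogram,
# evaluated over a set of candidate shifts (the optimizer itself is omitted).
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information of the joint intensity distribution of two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# MI peaks when the moving image is correctly aligned with the fixed image.
rng = np.random.default_rng(6)
fixed = rng.integers(0, 256, size=(128, 128)).astype(float)
moving = np.roll(fixed, shift=5, axis=1)          # a 5-pixel horizontal shift
scores = {dx: mutual_information(fixed, np.roll(moving, -dx, axis=1))
          for dx in range(0, 11)}
print("best shift:", max(scores, key=scores.get))  # expected: 5, the true shift
```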

  9. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    NASA Technical Reports Server (NTRS)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space-based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables; therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.

  10. An ant colony based algorithm for overlapping community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di

    2015-06-01

    Community detection is of great importance for understanding the structures and functions of networks. Overlap is a significant feature of networks, and overlapping community detection has attracted increasing attention. Many algorithms have been proposed to detect overlapping communities. In this paper, we present an ant-colony-based overlapping community detection algorithm which mainly includes ants' location initialization, ants' movement and post-processing phases. An ants' location initialization strategy is designed to identify the initial locations of ants and initialize the label list stored in each node. During the ants' movement phase, all ants move according to the transition probability matrix, and a new heuristic information computation approach is defined to measure the similarity between two nodes. Every node keeps a label list through the cooperation of the ants until a termination criterion is reached. A post-processing phase is executed on the label list to obtain the final overlapping community structure. We illustrate the capability of our algorithm through experiments on both synthetic networks and real-world networks. The results demonstrate that our algorithm has better performance in finding overlapping communities and overlapping nodes in synthetic and real-world datasets compared with state-of-the-art algorithms.

  11. Measurement of Information-Based Complexity in Listening.

    ERIC Educational Resources Information Center

    Bishop, Walton B.

    When people say that what they hear is "over their heads," they are describing a severe information-based complexity (I-BC) problem. They cannot understand what is said because some of the information needed is missing, contaminated, and/or costly to obtain. Students often face these I-BC problems, and teachers often exacerbate them. Yet listeners…

  12. A novel approach to characterize information radiation in complex networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyang; Wang, Ying; Zhu, Lin; Li, Chao

    2016-06-01

    The traditional research of information dissemination is mostly based on the virus spreading model that the information is being spread by probability, which does not match very well to the reality, because the information that we receive is always more or less than what was sent. In order to quantitatively describe variations in the amount of information during the spreading process, this article proposes a safety information radiation model on the basis of communication theory, combining with relevant theories of complex networks. This model comprehensively considers the various influence factors when safety information radiates in the network, and introduces some concepts from the communication theory perspective, such as the radiation gain function, receiving gain function, information retaining capacity and information second reception capacity, to describe the safety information radiation process between nodes and dynamically investigate the states of network nodes. On a micro level, this article analyzes the influence of various initial conditions and parameters on safety information radiation through the new model simulation. The simulation reveals that this novel approach can reflect the variation of safety information quantity of each node in the complex network, and the scale-free network has better "radiation explosive power", while the small-world network has better "radiation staying power". The results also show that it is efficient to improve the overall performance of network security by selecting nodes with high degrees as the information source, refining and simplifying the information, increasing the information second reception capacity and decreasing the noises. In a word, this article lays the foundation for further research on the interactions of information and energy between internal components within complex systems.

  13. Using object-based analysis to derive surface complexity information for improved filtering of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Yan, Menglong; Blaschke, Thomas; Tang, Hongzhao; Xiao, Chenchao; Sun, Xian; Zhang, Daobing; Fu, Kun

    2016-03-01

    Airborne laser scanning (ALS) is a technique used to obtain Digital Surface Models (DSM) and Digital Terrain Models (DTM) efficiently, and filtering is the key procedure used to derive a DTM from point clouds. Generating seed points is an initial step for most filtering algorithms, and existing algorithms usually define a regular window size to generate seed points. This may lead to an inadequate density of seed points and further introduce type I errors, especially in steep terrain and forested areas. In this study, we propose the use of object-based analysis to derive surface complexity information from ALS datasets, which can then be used to improve seed point generation. We assume that an area is complex if it is composed of many small objects, with no buildings within the area. Using these assumptions, we propose and implement a new segmentation algorithm based on a grid index, which we call the Edge and Slope Restricted Region Growing (ESRGG) algorithm. Surface complexity information is obtained by statistical analysis of the number of objects derived by segmentation in each area. Then, for complex areas, a smaller window size is defined to generate seed points. Experimental results show that the proposed algorithm greatly improves the filtering results in complex areas, especially in steep terrain and forested areas.
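
    As a hedged illustration of complexity-adaptive seed-point generation, the sketch below selects the lowest point per grid cell as a ground seed and switches to a smaller cell size where the surface appears complex; local elevation roughness stands in for the paper's object-count measure, and all cell sizes and thresholds are assumptions.

```python
# Hedged sketch: complexity-adaptive seed-point selection for ALS ground filtering.
# The lowest point per grid cell becomes a seed; where the local surface is rough
# (a stand-in for the paper's object-count complexity measure) a finer cell is used.
import numpy as np

def seed_points(points, coarse=20.0, fine=5.0, rough_thresh=2.0):
    """points: (N, 3) array of x, y, z. Returns indices of the selected seed points."""
    seeds = []
    xy_min = points[:, :2].min(axis=0)
    cells = np.floor((points[:, :2] - xy_min) / coarse).astype(int)
    for cell in np.unique(cells, axis=0):
        in_cell = np.flatnonzero((cells == cell).all(axis=1))
        z = points[in_cell, 2]
        if z.std() > rough_thresh:   # complex cell: densify seeds with smaller windows
            sub = np.floor((points[in_cell, :2] - xy_min) / fine).astype(int)
            for sub_cell in np.unique(sub, axis=0):
                idx = in_cell[np.flatnonzero((sub == sub_cell).all(axis=1))]
                seeds.append(idx[np.argmin(points[idx, 2])])
        else:                        # simple cell: one lowest point is enough
            seeds.append(in_cell[np.argmin(z)])
    return np.array(seeds)

# Synthetic cloud: a flat block (x in [0, 40)) next to a steep noisy slope (x in [40, 80)).
rng = np.random.default_rng(7)
flat_xy = rng.uniform(0, 40, size=(500, 2))
flat = np.column_stack([flat_xy, rng.normal(100.0, 0.2, 500)])
steep_xy = rng.uniform(40, 80, size=(500, 2))
steep = np.column_stack([steep_xy, 100.0 + 0.5 * steep_xy[:, 0] + rng.normal(0, 3, 500)])
cloud = np.vstack([flat, steep])
print("seed points selected:", len(seed_points(cloud)))
```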

  14. A two-dimensional Segmented Boundary Algorithm for complex moving solid boundaries in Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Khorasanizade, Sh.; Sousa, J. M. M.

    2016-03-01

    A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). Boundaries are formed in this algorithm with chains of lines obtained from the decomposition of two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed at steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, has been carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithms, namely in cases involving complex geometries and moving bodies.

  15. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
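
    For reference, the sketch below is a plain software implementation of the classic two-pass connected component labeling (4-connectivity with union-find label merging) that the FPGA design builds on; it illustrates the algorithm only, not the paper's hardware architecture.

```python
# Reference sketch: two-pass connected component labeling with union-find
# (4-connectivity), producing a dense label mask.
import numpy as np

def two_pass_ccl(binary):
    """Label 4-connected foreground components of a 0/1 image."""
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                      # union-find forest; index 0 = background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    next_label = 1
    rows, cols = binary.shape
    for r in range(rows):             # first pass: provisional labels + equivalences
        for c in range(cols):
            if not binary[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            neighbors = [l for l in (up, left) if l]
            if not neighbors:
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            else:
                labels[r, c] = min(neighbors)
                if len(neighbors) == 2:       # record the equivalence up ~ left
                    ra, rb = find(up), find(left)
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    # second pass: resolve equivalences into a dense final label mask
    roots = {0: 0}
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                root = find(labels[r, c])
                roots.setdefault(root, len(roots))
                labels[r, c] = roots[root]
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]])
print(two_pass_ccl(img))
```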

  16. An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network

    PubMed Central

    Wang, Zhixiao; Zhao, Ya; Chen, Zhaotong; Niu, Qiang

    2014-01-01

    Topology potential theory is a new community detection theory on complex network, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore node difference and assume that all nodes have the same mass. This hypothesis leads to inaccuracy of topology potential calculation and then decreases the precision of community detection. Inspired by the idea of PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method can effectively reflect its importance and influence in complex network. The more important the node is, the bigger its mass is. Simulation experiment results showed that, after taking node mass into consideration, the topology potential of node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise. PMID:24600319

  17. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
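
    For comparison, the sketch below implements the classical Givens-rotation QR-decomposition for real matrices, the baseline against which the heap-transform method is positioned; the paper's signal-induced transform construction is not reproduced.

```python
# Reference sketch: QR-decomposition of a real matrix by classical Givens rotations.
import numpy as np

def givens_qr(a):
    """Return Q, R with A = Q @ R, using Givens rotations (real matrices)."""
    a = np.asarray(a, dtype=float)
    m, n = a.shape
    q = np.eye(m)
    r = a.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):       # zero out r[i, j] against the pivot r[j, j]
            x, y = r[j, j], r[i, j]
            if abs(y) < 1e-15:
                continue
            norm = np.hypot(x, y)
            c, s = x / norm, y / norm
            g = np.eye(m)
            g[[j, i], [j, i]] = c
            g[j, i], g[i, j] = s, -s
            r = g @ r                        # apply the rotation on the left
            q = q @ g.T                      # accumulate its inverse into Q
    return q, r

a = np.array([[4.0, 1.0, 2.0],
              [2.0, 3.0, 0.0],
              [1.0, 2.0, 5.0]])
q, r = givens_qr(a)
print(np.allclose(q @ r, a), np.allclose(q.T @ q, np.eye(3)),
      np.allclose(np.tril(r, -1), 0.0))
```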

  18. Information Center Complex publications and presentations, 1971-1980

    SciTech Connect

    Gill, A.B.; Hawthorne, S.W.

    1981-08-01

    This indexed bibliography lists publications and presentations of the Information Center Complex, Information Division, Oak Ridge National Laboratory, from 1971 through 1980. The 659 entries cover such topics as toxicology, air and water pollution, management and transportation of hazardous wastes, energy resources and conservation, and information science. Publications range in length from 1 page to 3502 pages and include topical reports, books, journal articles, fact sheets, and newsletters. Author, title, and group indexes are provided. Annual updates are planned.

  19. Non-algorithmic access to calendar information in a calendar calculator with autism.

    PubMed

    Mottron, L; Lemmens, K; Gagnon, L; Seron, X

    2006-02-01

    The possible use of a calendar algorithm was assessed in DBC, an autistic "savant" of normal measured intelligence. Testing of all the dates in a year revealed a random distribution of errors. Re-testing DBC on the same dates one year later showed that his errors were not stable across time. Finally, DBC was able to answer "reversed" questions that cannot be solved by a classical algorithm. These findings favor a non-algorithmic retrieval of calendar information. It is proposed that multidirectional, non-hierarchical retrieval of information, and solving problems in a non-algorithmic way, are involved in savant performances. The possible role of a functional rededication of low-level perceptual systems to the processing of symbolic information in savants is discussed. PMID:16453069

  20. Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity

    PubMed Central

    Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.

    2013-01-01

    Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of
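    Of the estimators compared, Higuchi's fractal dimension is simple enough to sketch. The version below is a hedged plain-numpy illustration of the standard Higuchi procedure (the kmax value is an illustrative placeholder, and the normalisation differs slightly from some published variants): it averages the curve length at scale k over offsets and reads the dimension off the slope of log L(k) versus log(1/k).

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's estimate of the fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                        # k sub-series with offset m
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # normalised curve length of the sub-series at scale k
            norm = (N - 1) / ((len(idx) - 1) * k)
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        L.append(np.mean(lengths))
    k = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(L), 1)
    return slope                                  # ~1 for smooth, ~2 for very rough series

fd = higuchi_fd(np.cumsum(np.random.randn(1024)))  # random-walk test signal
```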

  1. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented into NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl scripting interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 resides within a communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.

  2. Research on non rigid registration algorithm of DCE-MRI based on mutual information and optical flow

    NASA Astrophysics Data System (ADS)

    Yu, Shihua; Wang, Rui; Wang, Kaiyu; Xi, Mengmeng; Zheng, Jiashuo; Liu, Hui

    2015-07-01

    Image matching plays a very important role in the field of medical imaging, and the two image registration methods based on mutual information and on optical flow are both very effective. Experimental results show that the two methods have their own prominent advantages: the mutual information method handles overall displacement well, while the optical flow method is very sensitive to small deformations. In the breast DCE-MRI images studied in this paper, there is not only overall displacement caused by patient movement but also non-rigid small deformation caused by respiration. In view of this, a single registration algorithm cannot meet the actual needs of such complex situations. After a comprehensive analysis of the advantages and disadvantages of the two methods, this paper proposes a registration algorithm combining mutual information with the optical flow field. Subtraction images of the reference image and the floating image are used as the main criterion for evaluating the registration effect, with the mutual information between image sequence values as an auxiliary criterion. Tests on example data show that the algorithm achieves better accuracy and reliability on breast DCE-MRI image sequences.
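    The mutual-information half of such a criterion is straightforward to compute from a joint grey-level histogram; a minimal numpy sketch (the bin count is an illustrative choice, not a value from the paper):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equally sized images via their joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A combined scheme would maximise MI over global transform parameters and then
# let an optical-flow term handle the residual small deformations.
```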

  3. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations over a finite Galois field with q elements. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in the memory and CPU requirements of the algorithm. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
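    For concreteness, row reduction over a prime field GF(q) differs from the real case only in that division becomes multiplication by a modular inverse. A small pure-Python sketch for dense systems (q prime; function and variable names are illustrative):

```python
def gauss_gf(A, b, q):
    """Solve A x = b over GF(q), q prime, by Gaussian elimination.
    A: list of rows, b: list. Returns one solution or None if inconsistent."""
    n, m = len(A), len(A[0])
    M = [[v % q for v in row] + [rhs % q] for row, rhs in zip(A, b)]  # augmented matrix
    row, pivots = 0, []
    for col in range(m):
        piv = next((r for r in range(row, n) if M[r][col]), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], -1, q)            # modular inverse (q prime)
        M[row] = [(v * inv) % q for v in M[row]]
        for r in range(n):
            if r != row and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * c) % q for a, c in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    if any(r[-1] for r in M[row:]):
        return None                              # inconsistent system
    x = [0] * m
    for r, col in enumerate(pivots):
        x[col] = M[r][-1]
    return x

assert gauss_gf([[1, 1], [1, 2]], [0, 1], 3) == [2, 1]   # x + y = 0, x + 2y = 1 over GF(3)
```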

  4. Detecting protein complexes in protein interaction networks using a ranking algorithm with a refined merging procedure

    PubMed Central

    2014-01-01

    Background Developing suitable methods for the identification of protein complexes remains an active research area. It is important since it allows better understanding of cellular functions as well as malfunctions and it consequently leads to producing more effective cures for diseases. In this context, various computational approaches were introduced to complement high-throughput experimental methods which typically involve large datasets, are expensive in terms of time and cost, and are usually subject to spurious interactions. Results In this paper, we propose ProRank+, a method which detects protein complexes in protein interaction networks. The presented approach is mainly based on a ranking algorithm which sorts proteins according to their importance in the interaction network, and a merging procedure which refines the detected complexes in terms of their protein members. ProRank+ was compared to several state-of-the-art approaches in order to show its effectiveness. It was able to detect more protein complexes with higher quality scores. Conclusions The experimental results achieved by ProRank+ show its ability to detect protein complexes in protein interaction networks. Eventually, the method could potentially identify previously-undiscovered protein complexes. The datasets and source codes are freely available for academic purposes at http://faculty.uaeu.ac.ae/nzaki/Research.htm. PMID:24944073

  5. Using multiple perspectives to suppress information and complexity

    SciTech Connect

    Kelsey, R.L. |; Webster, R.B.; Hartley, R.T.

    1998-09-01

    Dissemination of battlespace information involves getting information to particular warfighters that is both useful and in a form that facilitates the tasks of those particular warfighters. There are two issues which motivate this problem of dissemination. The first issue deals with disseminating pertinent information to a particular warfighter. This can be thought of as information suppression. The second issue deals with facilitating the use of the information by tailoring the computer interface to the specific tasks of an individual warfighter. This can be thought of as interface complexity suppression. This paper presents a framework for suppressing information using an object-based knowledge representation methodology. This methodology has the ability to represent knowledge and information in multiple perspectives. Information can be suppressed by creating a perspective specific to an individual warfighter. In this way, only the information pertinent and useful to a warfighter is made available to that warfighter. Information is not removed, lost, or changed, but spread among multiple perspectives. Interface complexity is managed in a similar manner. Rather than have one generalized computer interface to access all information, the computer interface can be divided into interface elements. Interface elements can then be selected and arranged into a perspective-specific interface. This is done in a manner to facilitate completion of tasks contained in that perspective. A basic battlespace domain containing ground and air elements and associated warfighters is used to exercise the methodology.

  6. Simple and Robust Realtime QRS Detection Algorithm Based on Spatiotemporal Characteristic of the QRS Complex.

    PubMed

    Kim, Jinkwon; Shin, Hangsik

    2016-01-01

    The purpose of this research is to develop an intuitive and robust realtime QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation or thresholding without complex and computational operations like a wavelet transformation. The QRS detection performance is evaluated by using both an MIT-BIH arrhythmia database and an AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictivity value (PPV) were 99.85% and 99.86%, respectively. According to the database, the SE and PPV were 99.90% and 99.91% in the MIT-BIH database and 99.84% and 99.84% in the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back searching process. PMID:26943949
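    The dual amplitude/duration criterion lends itself to a very small sketch. The toy detector below (scipy FIR band-pass, differentiation, then amplitude and width thresholds) follows the general recipe described in the abstract; the cut-offs, filter length and width limits are illustrative placeholders, not the paper's tuned values.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def detect_qrs(ecg, fs=360, amp_frac=0.5, min_width=0.01, max_width=0.15):
    """Toy QRS detector: FIR band-pass + differentiation + amplitude/duration test."""
    bp = firwin(numtaps=101, cutoff=[5, 15], fs=fs, pass_zero=False)
    filt = lfilter(bp, 1.0, ecg)
    feature = np.abs(np.diff(filt, prepend=filt[0]))      # slope magnitude
    thr = amp_frac * feature.max()                        # amplitude criterion
    above = feature > thr
    peaks, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            width = (i - start) / fs                      # duration criterion
            if min_width <= width <= max_width:
                peaks.append(start + int(np.argmax(feature[start:i])))
            start = None
    return peaks
```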

  7. Development and evaluation of a predictive algorithm for telerobotic task complexity

    NASA Technical Reports Server (NTRS)

    Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.

    1993-01-01

    There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.

  8. Simple and Robust Realtime QRS Detection Algorithm Based on Spatiotemporal Characteristic of the QRS Complex

    PubMed Central

    Kim, Jinkwon; Shin, Hangsik

    2016-01-01

    The purpose of this research is to develop an intuitive and robust realtime QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation or thresholding without complex and computational operations like a wavelet transformation. The QRS detection performance is evaluated by using both an MIT-BIH arrhythmia database and an AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictivity value (PPV) were 99.85% and 99.86%, respectively. According to the database, the SE and PPV were 99.90% and 99.91% in the MIT-BIH database and 99.84% and 99.84% in the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back searching process. PMID:26943949

  9. Representing Uncertain Geographical Information with Algorithmic Map Caricatures

    NASA Astrophysics Data System (ADS)

    Brunsdon, Chris

    2016-04-01

    A great deal of geographical information - including the results of data analysis - is imprecise in some way. For example, the results of geostatistical interpolation should consist not only of point estimates of the value of some quantity at points in space, but also of confidence intervals or standard errors of these estimates. Similarly, mappings of contour lines derived from such interpolations will also be characterised by uncertainty. However, most computerised cartography tools are designed to provide 'crisp' representations of geographical information, such as sharply drawn lines, or clearly delineated areas. In this talk, the use of 'fuzzy' or 'sketchy' cartographic tools will be demonstrated - where maps have a hand-drawn appearance and the degree of 'roughness' and other related characteristics can be used to convey the degree of uncertainty associated with the estimated quantities being mapped. The tools used to do this are available as an R package, which will be described in the talk.

  10. A Lip Extraction Algorithm by Using Color Information Considering Obscurity

    NASA Astrophysics Data System (ADS)

    Shirasawa, Yoichi; Nishida, Makoto

    This paper proposes a method for extracting the lip shape and its location from sequential facial images by using color information. The proposed method needs no extra information about position or shape in advance, and it requires no special conditions such as lipstick or controlled lighting. Psychometric quantities, namely the metric hue angle, the metric hue difference and the rectangular coordinates defined in the CIE 1976 L*a*b* color space, are used for the extraction. The method employs fuzzy reasoning in order to handle obscurity in the image data, such as shade on the face. The experimental results indicate the effectiveness of the proposed method; the lip position was estimated for 100 percent of the facial image data, and the lip shape was extracted for about 94 percent.

  11. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  12. Feature weighted naïve Bayes algorithm for information retrieval of enterprise systems

    NASA Astrophysics Data System (ADS)

    Wang, Li; Ji, Ping; Qi, Jing; Shan, Siqing; Bi, Zhuming; Deng, Weiguo; Zhang, Naijing

    2014-01-01

    Automated information retrieval is critical for enterprise information systems to acquire knowledge from the vast amount of data sets. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm due to its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption of the independence of terms is the main reason for unsatisfactory classification in many real-world applications. To overcome the limitations, the dependent factors are considered by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm into the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted, and the comparisons with other classification methods show that the proposed method outperforms existing algorithms in terms of precision and recall rate.
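    A rough approximation of the general idea — feeding TF-IDF-weighted term statistics into a multinomial naïve Bayes classifier instead of raw counts — is a two-line pipeline in scikit-learn. This reproduces only the generic weighting strategy, not the paper's modified TF-IDF; the toy corpus and labels are invented for illustration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus; in an enterprise setting the documents would come from the EIS.
docs = ["invoice overdue payment", "server outage network down",
        "purchase order approved", "database connection error"]
labels = ["finance", "it", "finance", "it"]

# TF-IDF features replace raw term counts in the naive Bayes likelihoods.
clf = make_pipeline(TfidfVectorizer(sublinear_tf=True), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["network error on the database server"]))  # expected: ['it']
```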

  13. Detection of overlapping protein complexes in gene expression, phenotype and pathways of Saccharomyces cerevisiae using Prorank based Fuzzy algorithm.

    PubMed

    Manikandan, P; Ramyachitra, D; Banupriya, D

    2016-04-15

    Proteins exhibit their functional activity by interacting with other proteins and forming protein complexes, which play an important role in cellular organization and function. In understanding higher-order protein organization, detecting overlapping complexes is an important step towards unveiling the functional and evolutionary mechanisms behind biological networks. Most clustering algorithms consider neither weighted networks nor overlapping complexes. In this research, a ProRank-based Fuzzy algorithm is proposed to find overlapping protein complexes. The Fuzzy detection algorithm is incorporated into the ProRank algorithm after the ranking step to find the overlapping community structure. The proposed algorithm executes in an iterative manner to compute the probability of robust clusters. The proposed and the existing algorithms were tested on different datasets such as PPI-D1, PPI-D2, Collins, DIP, Krogan Core and Krogan-Extended, gene expression data such as GSE7645, GSE22269, GSE26923, pathways such as Meiosis, MAPK, Cell Cycle, and phenotypes such as the Yeast Heterogeneous and Yeast Homogeneous datasets. The experimental results show that the proposed algorithm predicts protein complexes with better accuracy compared to other state-of-the-art algorithms. PMID:26809099

  14. Deconvolution of complex spectra into components by the bee swarm algorithm

    NASA Astrophysics Data System (ADS)

    Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh

    2016-05-01

    The bee swarm algorithm is adapted to the problem of deconvolving complex spectral contours into components. A comparison is made between biological concepts relating to the behaviour of bees in a colony and mathematical concepts relating to the quality of the obtained solutions (mean square error, random solutions in each iteration). Model experiments, carried out on the example of a signal representing a sum of three Lorentz contours of various intensity and half-width, confirm the efficiency of the proposed approach.
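    A schematic population-based random search in the same spirit — not the authors' exact bee colony algorithm — can be sketched as follows: candidate parameter sets (intensity, centre, half-width per Lorentzian) are scored by mean square error, half of the population explores the neighbourhood of the current best site while the rest keep scouting randomly. All parameter names and ranges below are illustrative assumptions.

```python
import numpy as np

def lorentz(x, a, x0, g):
    """Single Lorentzian contour: intensity a, centre x0, half-width g."""
    return a * g**2 / ((x - x0)**2 + g**2)

def model(x, p):
    return sum(lorentz(x, *p[i:i + 3]) for i in range(0, len(p), 3))

def swarm_fit(x, y, n_peaks=3, n_bees=40, iters=300, seed=0):
    """Population random search minimising the MSE of a sum of Lorentzians."""
    rng = np.random.default_rng(seed)
    lo = np.tile([0.0, x.min(), 0.01], n_peaks)
    hi = np.tile([1.5 * y.max(), x.max(), (x.max() - x.min()) / 2], n_peaks)
    bees = rng.uniform(lo, hi, (n_bees, 3 * n_peaks))
    mse = lambda p: np.mean((model(x, p) - y) ** 2)
    cost = np.array([mse(p) for p in bees])
    for _ in range(iters):
        best = bees[np.argmin(cost)]
        # "employed" bees refine the best site, "scouts" keep sampling randomly
        local = best + rng.normal(0, 0.05, (n_bees // 2, 3 * n_peaks)) * (hi - lo)
        scouts = rng.uniform(lo, hi, (n_bees - n_bees // 2, 3 * n_peaks))
        cand = np.clip(np.vstack([local, scouts]), lo, hi)
        cand_cost = np.array([mse(p) for p in cand])
        better = cand_cost < cost
        bees[better], cost[better] = cand[better], cand_cost[better]
    return bees[np.argmin(cost)]
```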

  15. Simple algorithm for computing the communication complexity of quantum communication processes

    NASA Astrophysics Data System (ADS)

    Hansen, A.; Montina, A.; Wolf, S.

    2016-04-01

    A two-party quantum communication process with classical inputs and outcomes can be simulated by replacing the quantum channel with a classical one. The minimal amount of classical communication required to reproduce the statistics of the quantum process is called its communication complexity. In the case of many instances simulated in parallel, the minimal communication cost per instance is called the asymptotic communication complexity. Previously, we reduced the computation of the asymptotic communication complexity to a convex minimization problem. In most cases, the objective function does not have an explicit analytic form, as the function is defined as the maximum over an infinite set of convex functions. Therefore, the overall problem takes the form of a minimax problem and cannot directly be solved by standard optimization methods. In this paper, we introduce a simple algorithm to compute the asymptotic communication complexity. For some special cases with an analytic objective function one can employ available convex-optimization libraries. In the tested cases our method turned out to be notably faster. Finally, using our method we obtain 1.238 bits as a lower bound on the asymptotic communication complexity of a noiseless quantum channel with the capacity of 1 qubit. This improves the previous bound of 1.208 bits.

  16. A Selective Encryption Algorithm Based on AES for Medical Information

    PubMed Central

    Oh, Ju-Young; Chon, Ki-Hwan

    2010-01-01

    Objectives The transmission of medical information is currently a daily routine. Medical information needs efficient, robust and secure encryption modes, but cryptography is primarily a computationally intensive process. Towards this direction, we design a selective encryption scheme for critical data transmission. Methods We expand the advanced encryption standard (AES)-Rijndael with five criteria: the first is the compression of plain data, the second is the variable size of the block, the third is the selectable round, the fourth is the optimization of the software implementation and the fifth is the selective application of the whole routine. We have tested our selective encryption scheme in C++, compiled with Code::Blocks using a MinGW GCC compiler. Results The experimental results showed that our selective encryption scheme achieves a faster execution speed of encryption/decryption. In future work, we intend to use resource optimization to enhance the round operations, such as SubByte/InvSubByte, by exploiting similarities between encryption and decryption. Conclusions As encryption schemes become more widely used, the concept of hardware and software co-design is also a growing new area of interest. PMID:21818420

  17. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b (sup i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
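    A much-simplified form of the reuse idea can be shown with scipy: project the new right-hand side onto the span of previously computed solutions (a Galerkin preprocessing step) and hand the result to GMRES as its initial guess. The eigenvector enrichment of the Krylov space that the paper also employs is not reproduced here; the test matrix and right-hand sides are invented for illustration.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def gmres_with_reuse(A, b, prev_solutions):
    """Warm-started GMRES: Galerkin projection onto old solutions, then solve."""
    if prev_solutions:
        X = np.column_stack(prev_solutions)          # basis of previous solutions
        AX = A @ X
        y, *_ = np.linalg.lstsq(X.T @ AX, X.T @ b, rcond=None)
        x0 = X @ y                                   # Galerkin initial guess
    else:
        x0 = None
    x, info = gmres(A, b, x0=x0)
    return x

rng = np.random.default_rng(1)
A = np.eye(200) + 0.1 * rng.standard_normal((200, 200))
base = rng.standard_normal(200)
sols = []
for k in range(5):                                   # sequence of closely related RHS
    b = base + 0.05 * rng.standard_normal(200)
    sols.append(gmres_with_reuse(A, b, sols))
```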

  18. Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining

    NASA Astrophysics Data System (ADS)

    Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio

    2013-12-01

    Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.

  19. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm

    PubMed Central

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of the classification commonly used in these models and thus reduce efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve the population diversity and prevent local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10 and that the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071

  20. IR and visual image registration based on mutual information and PSO-Powell algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Youwen; Gao, Kun; Miu, Xianghu

    2014-11-01

    Infrared and visual image registration has wide application in the fields of remote sensing and military surveillance. Mutual information (MI) has proved effective and successful in the infrared and visual image registration process. To find the most appropriate registration parameters, optimization algorithms such as the Particle Swarm Optimization (PSO) algorithm or the Powell search method are often used. The PSO algorithm has strong global search ability and is fast in the early stage of the search, but its search performance deteriorates in the late stage; in image registration it often spends much time on useless search and yields solutions of low precision. The Powell search method has strong local search ability, but its performance and runtime are sensitive to the initial values; in image registration it is often trapped by local maxima and returns wrong results. In this paper, a novel hybrid algorithm combining the PSO algorithm and the Powell search method is proposed. It combines the advantages of both: it avoids being trapped by local maxima and it achieves higher precision. First, the PSO algorithm is used to obtain a registration parameter close to the global optimum; then, starting from this result, the Powell search method is used to find a more precise registration parameter. The experimental results show that the algorithm can effectively correct the scale, rotation and translation differences between infrared and visible images, and that it outperforms the traditional optimization algorithms in both time and precision.
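    The hybrid is easy to prototype outside the registration context: a few PSO iterations land near the global basin, then scipy's Powell method refines locally. The objective below is a stand-in (a Rastrigin test function rather than the negative mutual information of an image pair), and all hyperparameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def pso_then_powell(f, bounds, n_particles=30, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Coarse PSO search followed by Powell refinement of the best particle."""
    lo, hi = np.array(bounds, dtype=float).T
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)]
    return minimize(f, gbest, method="Powell").x     # local refinement

# Stand-in objective with many local minima; in registration it would be the
# negative MI as a function of scale/rotation/translation parameters.
rastrigin = lambda p: 10 * len(p) + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
print(pso_then_powell(rastrigin, bounds=[(-5, 5)] * 3))
```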

  1. Characterizing informative sequence descriptors and predicting binding affinities of heterodimeric protein complexes

    PubMed Central

    2015-01-01

    Background Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanism of the interactions plays a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structural and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. Results This work proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded a training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, outperforming existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (Pkd) of 200 heterodimeric protein complexes. In a jackknife test, the prediction performance was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyze three informative physicochemical properties according to their contribution to prediction performance. Results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. Conclusions The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict binding affinity of heterodimeric protein

  2. A multi-agent genetic algorithm for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Li, Zhangtao; Liu, Jing

    2016-05-01

    Complex networks are widely used to represent many practical systems in the domains of biology and sociology, and community structure is one of the most important network attributes, one which has received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best known quality functions measuring the quality of the communities of a network. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, coded by a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators are designed, namely a split-and-merging-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation and a self-learning operator, to increase the modularity value. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. The systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms, and can detect communities with high speed, accuracy and stability.

  3. Toward a Deterministic Polynomial Time Algorithm with Optimal Additive Query Complexity

    NASA Astrophysics Data System (ADS)

    Bshouty, Nader H.; Mazzawi, Hanna

    In this paper, we study two combinatorial search problems: the coin weighing problem with a spring scale (also known as the vector reconstruction problem using additive queries) and the problem of reconstructing weighted graphs using additive queries. Suppose we are given n identical looking coins. Suppose that m out of the n coins are counterfeit and the rest are authentic. Assume that we are allowed to weigh subsets of coins with a spring scale. It is known that the optimal number of weighings for identifying the counterfeit coins and their weights is at least Ω(m log n / log m). We give a deterministic polynomial time adaptive algorithm for identifying the counterfeit coins and their weights using O(m log n / log m + m log log m) weighings, assuming that the weights of the counterfeit coins are greater than the weight of an authentic coin. This algorithm is optimal when m ≤ n^(c/log log n), where c is any constant. Also, our weighing complexity is within a factor of log log m of the optimal complexity for all m.

  4. An algorithm to find critical execution paths of software based on complex network

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Bing; Ren, Rong; Ren, Jiadong

    2015-01-01

    The critical execution paths play an important role in software systems in terms of reducing the amount of test data, detecting vulnerabilities in the software structure and analyzing software reliability. However, there are no efficient methods to discover them so far. Thus, in this paper, a complex network-based software algorithm is put forward to find critical execution paths (FCEP) in the software execution network. First, by analyzing the number of sources and sinks in FCEP, the software execution network is divided into AOE subgraphs; meanwhile, a Software Execution Network Serialization (SENS) approach is designed to generate the execution path set in each AOE subgraph, which not only reduces the influence of ring structures on path generation but also guarantees the integrity of the nodes in the network. Second, according to a novel path similarity metric, a similarity matrix is created to calculate the similarity among sets of path sequences. Third, an efficient method is used to cluster paths through similarity matrices, and the maximum-length path in each cluster is extracted as a critical execution path. At last, the set of critical execution paths is derived. The experimental results show that the FCEP algorithm is efficient in mining critical execution paths in a software complex network.

  5. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.

  6. Algorithmic information theory and the hidden variable question

    NASA Astrophysics Data System (ADS)

    Fuchs, Christopher

    1992-02-01

    The admissibility of certain nonlocal hidden-variable theories is explained via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exists a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l,r). From the associated probability measure, the Shannon entropies H_n and H̄_n for strings l_n and pairs (l_n, r_n) may be formed. It is shown that such a theory requires that the absolute value of H̄_n - H_n be bounded - contrasting the quantum mechanical prediction that it grows with n.

  7. Algorithmic information theory and the hidden variable question

    NASA Technical Reports Server (NTRS)

    Fuchs, Christopher

    1992-01-01

    The admissibility of certain nonlocal hidden-variable theories is explained via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exists a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l,r). From the associated probability measure, the Shannon entropies H_n and H̄_n for strings l_n and pairs (l_n, r_n) may be formed. It is shown that such a theory requires that the absolute value of H̄_n - H_n be bounded - contrasting the quantum mechanical prediction that it grows with n.

  8. Deciphering the Minimal Algorithm for Development and Information-genesis

    NASA Astrophysics Data System (ADS)

    Li, Zhiyuan; Tang, Chao; Li, Hao

    During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a "standard worm" in silico: from the single zygotic cell to about the 500-cell stage, the lineage, position, cell-cell contacts and gene expression dynamics were quantified for each cell in order to investigate the principles underlying these data. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules that are capable of running a dynamic model recapitulating the early fate decisions during C. elegans development. We further formalized C. elegans embryogenesis in the language of information genesis. Analysis of the data and the model uncovered the global landscape of development in cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as the essential trade-offs in development, and demonstrated general strategies for building multicellular organisms.

  9. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.

    PubMed

    Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay

    2015-12-01

    In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence capability can be achieved: the output symbol error rate (SER) is always less than the input SER if the input SER is below a threshold. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual mode constant modulus algorithm, in terms of both convergence performance and SER performance, for nonlinear equalization. PMID:25706894

  10. Dynamics of rumor-like information dissemination in complex networks

    NASA Astrophysics Data System (ADS)

    Nekovee, Maziar; Moreno, Yamir; Bianconi, Ginestra

    2005-03-01

    An important dynamic process that takes place in complex networks is the spreading of information via rumor-like mechanisms. In addition to their relevance to the propagation of rumors and fads in human society, such mechanisms are also the basis of an important class of collective communication protocols in complex computer networks, such as the Internet and peer-to-peer systems. In this talk we present results of our analytical, numerical and large-scale Monte Carlo simulation studies of this process on several classes of complex networks, including random graphs, scale-free networks, and random and small-world topological graphs. Our studies point to important differences between the dynamics of rumor spreading and that of virus spreading in such networks, and provide new insights into the complex interplay between the spreading phenomena and the network topology.
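    A minimal Monte Carlo version of the generic ignorant-spreader-stifler rumor dynamics studied in this line of work can be run on any networkx graph; the rates and update rule below are illustrative assumptions, not the exact protocol of the talk.

```python
import random
import networkx as nx

def spread_rumor(G, p_accept=0.8, p_stifle=0.2, seed=0):
    """Ignorant -> spreader -> stifler rumor dynamics on graph G."""
    rng = random.Random(seed)
    state = {v: "ignorant" for v in G}
    start = rng.choice(list(G))
    state[start] = "spreader"
    spreaders = {start}
    while spreaders:
        v = rng.choice(list(spreaders))
        u = rng.choice(list(G[v]))                  # random contact of a spreader
        if state[u] == "ignorant" and rng.random() < p_accept:
            state[u] = "spreader"
            spreaders.add(u)
        elif state[u] != "ignorant" and rng.random() < p_stifle:
            state[v] = "stifler"                    # spreader loses interest
            spreaders.discard(v)
    return sum(s != "ignorant" for s in state.values()) / G.number_of_nodes()

G = nx.barabasi_albert_graph(2000, 3)               # scale-free test topology
print("final rumor coverage:", spread_rumor(G))
```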

  11. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    The genetic algorithm (GA) has a significant effect on the band selection for Partial Least Squares (PLS) correction models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy and reduce the number of variables used for modeling. In this study, a genetic algorithm was used as a band selection module for the application of hyperspectral imaging to the nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established in order to assess the feasibility of optimizing wave bands with the genetic algorithm, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of the corn seedling component information at the spectral wavelengths corresponding to the 12 characteristic bands as variables, a PLS model for the SPAD values of the corn leaves was established, with modeling results of r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the experience-based selected bands. The results suggest that the genetic algorithm can be used for data optimization and screening before establishing the corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.

  12. Crater detection, classification and contextual information extraction in lunar images using a novel algorithm

    NASA Astrophysics Data System (ADS)

    Vijayan, S.; Vani, K.; Sanjeevi, S.

    2013-09-01

    This study presents the development and implementation of an algorithm for the automatic detection and classification of lunar craters and the extraction of contextual information, such as ejecta and the state of degradation, using SELENE panchromatic images. The algorithm works in three steps. First, it detects simple lunar craters and classifies them into round-floor and flat-floor types using the structural profile pattern. Second, it extracts contextual information (ejecta), notifies of their presence if any, and associates them with the corresponding crater using an adjacency rule and Markov random field theory. Finally, the algorithm examines each of the detected craters and assesses its state of degradation using the intensity variation over the crater edge. We applied the algorithm to 16 technically demanding test sites, chosen so as to represent all possible lunar surface conditions. The crater detection algorithm was evaluated by means of manual analysis of its accuracy in detection, classification, and identification of ejecta and degradation state, along with a detailed qualitative assessment. The manual analysis shows that the results are in agreement with the detections, while the overall statistical results give the detection performance as Q ∼ 75% and precision ∼0.83. The detection and classification results reveal that the simple lunar craters are dominated by the round-floor type rather than the flat-floor type. In addition, the results also show that the lunar surface is dominated by sub-kilometer craters of lesser depth.

  13. PREFACE: Complex Networks: from Biology to Information Technology

    NASA Astrophysics Data System (ADS)

    Barrat, A.; Boccaletti, S.; Caldarelli, G.; Chessa, A.; Latora, V.; Motter, A. E.

    2008-06-01

    The field of complex networks is one of the most active areas in contemporary statistical physics. Ten years after seminal work initiated the modern study of networks, interest in the field is in fact still growing, as indicated by the ever increasing number of publications in network science. The reason for such a resounding success is most likely the simplicity and broad significance of the approach that, through graph theory, allows researchers to address a variety of different complex systems within a common framework. This special issue comprises a selection of contributions presented at the workshop 'Complex Networks: from Biology to Information Technology' held in July 2007 in Pula (Cagliari), Italy as a satellite of the general conference STATPHYS23. The contributions cover a wide range of problems that are currently among the most important questions in the area of complex networks and that are likely to stimulate future research. The issue is organised into four sections. The first two sections describe 'methods' to study the structure and the dynamics of complex networks, respectively. After this methodological part, the issue proceeds with a section on applications to biological systems. The issue closes with a section concentrating on applications to the study of social and technological networks. The first section, entitled Methods: The Structure, consists of six contributions focused on the characterisation and analysis of structural properties of complex networks: The paper Motif-based communities in complex networks by Arenas et al is a study of the occurrence of characteristic small subgraphs in complex networks. These subgraphs, known as motifs, are used to define general classes of nodes and their communities by extending the mathematical expression of the Newman-Girvan modularity. The same line of research, aimed at characterising network structure through the analysis of particular subgraphs, is explored by Bianconi and Gulbahce in Algorithm

  14. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the fault reasoning cost as the objective function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and

  15. DNA taxonomy in morphologically plastic taxa: algorithmic species delimitation in the Boodlea complex (Chlorophyta: Cladophorales).

    PubMed

    Leliaert, Frederik; Verbruggen, Heroen; Wysor, Brian; De Clerck, Olivier

    2009-10-01

    DNA-based taxonomy provides a convenient and reliable tool for species delimitation, especially in organisms in which morphological discrimination is difficult or impossible, such as many algal taxa. A group with a long history of confusing species circumscriptions is the morphologically plastic Boodlea complex, comprising the marine green algal genera Boodlea, Cladophoropsis, Phyllodictyon and Struveopsis. In this study, we elucidate species boundaries in the Boodlea complex by analysing nrDNA internal transcribed spacer sequences from 175 specimens collected from a wide geographical range. Algorithmic methods of sequence-based species delineation were applied, including statistical parsimony network analysis, and a maximum likelihood approach that uses a mixed Yule-coalescent model and detects species boundaries based on differences in branching rates at the level of species and populations. Sequence analyses resulted in the recognition of 13 phylogenetic species, although we failed to detect sharp species boundaries, possibly as a result of incomplete reproductive isolation. We found considerable conflict between traditional and phylogenetic species definitions. Identical morphological forms were distributed in different clades (cryptic diversity), and at the same time most of the phylogenetic species contained a mixture of different morphologies (indicating intraspecific morphological variation). Sampling outside the morphological range of the Boodlea complex revealed that the enigmatic, sponge-associated Cladophoropsis (Spongocladia) vaucheriiformis, also falls within the Boodlea complex. Given the observed evolutionary complexity and nomenclatural problems associated with establishing a Linnaean taxonomy for this group, we propose to discard provisionally the misleading morphospecies and genus names, and refer to clade numbers within a single genus, Boodlea. PMID:19524052

  16. BiCAMWI: A Genetic-Based Biclustering Algorithm for Detecting Dynamic Protein Complexes.

    PubMed

    Lakizadeh, Amir; Jalili, Saeed

    2016-01-01

    Considering the roles of protein complexes in many biological processes in the cell, detection of protein complexes from available protein-protein interaction (PPI) networks is a key challenge in the post genome era. Despite the high dynamicity of cellular systems and the dynamic interaction between proteins in a cell, most computational methods have focused on static networks, which cannot represent the inherent dynamicity of protein interactions. Recently, some researchers have tried to exploit the dynamicity of PPI networks by constructing a set of dynamic PPI subnetworks corresponding to each time-point (column) in gene expression data. However, many genes can participate in multiple biological processes, and cellular processes are not necessarily related to every sample but might be relevant only for a subset of samples. So, it is more interesting to explore each subnetwork based on a subset of genes and conditions (i.e., biclusters) in gene expression data. Here, we present a new method, called BiCAMWI, to employ dynamicity in detecting protein complexes. The preprocessing phase of the proposed method is based on a novel genetic algorithm that extracts sets of genes that are co-regulated under some conditions from the input gene expression data. Each extracted gene set is called a bicluster. In the detection phase of the proposed method, based on the biclusters, dynamic PPI subnetworks are extracted from the input static PPI network. Protein complexes are identified by applying a detection method on each dynamic PPI subnetwork and aggregating the results. Experimental results confirm that BiCAMWI effectively models the dynamicity inherent in static PPI networks and achieves significantly better results than state-of-the-art methods. So, we suggest BiCAMWI as a more reliable method for protein complex detection. PMID:27462706

  17. BiCAMWI: A Genetic-Based Biclustering Algorithm for Detecting Dynamic Protein Complexes

    PubMed Central

    Lakizadeh, Amir; Jalili, Saeed

    2016-01-01

    Considering the roles of protein complexes in many biological processes in the cell, detection of protein complexes from available protein-protein interaction (PPI) networks is a key challenge in the post-genome era. Despite the high dynamicity of cellular systems and the dynamic interactions between proteins in a cell, most computational methods have focused on static networks, which cannot represent the inherent dynamicity of protein interactions. Recently, some researchers have tried to exploit the dynamicity of PPI networks by constructing a set of dynamic PPI subnetworks corresponding to each time point (column) in a gene expression data set. However, many genes can participate in multiple biological processes, and cellular processes are not necessarily related to every sample but might be relevant only for a subset of samples. It is therefore more informative to explore each subnetwork based on a subset of genes and conditions (i.e., biclusters) in a gene expression data set. Here, we present a new method, called BiCAMWI, to employ dynamicity in detecting protein complexes. The preprocessing phase of the proposed method is based on a novel genetic algorithm that extracts sets of genes that are co-regulated under some conditions from the input gene expression data. Each extracted gene set is called a bicluster. In the detection phase, dynamic PPI subnetworks are then extracted from the input static PPI network based on the biclusters. Protein complexes are identified by applying a detection method on each dynamic PPI subnetwork and aggregating the results. Experimental results confirm that BiCAMWI effectively models the dynamicity inherent in static PPI networks and achieves significantly better results than state-of-the-art methods. We therefore suggest BiCAMWI as a more reliable method for protein complex detection. PMID:27462706

  18. Information and complexity measures in molecular reactivity studies.

    PubMed

    Welearegay, Meressa A; Balawender, Robert; Holas, Andrzej

    2014-07-28

    The analysis of information and complexity measures as tools for investigating chemical reactivity has been carried out in the spin-position and position spaces, for the density and shape representations. The concepts of transferability and additivity of atoms or functional groups were used as "checkpoints" in the analysis of the obtained results. The shape function as an argument of various measures reveals less information than the spinor density. Use of the shape function can yield wrong conclusions when information measures such as the Shannon entropy (SE, S), the Fisher information (FI, I), the Onicescu information (OI, D), and complexities based on them are used for systems with different electron numbers. Results obtained in the spinor-density representation show transferability and additivity (which are lacking in the shape representation). The group transferability is well illustrated by the example of the X-Y molecules and their benzene derivatives. Another example is the methyl group transferability presented on the alkane-alkene-alkyne set. Analysis of the results displayed on planes between the three information-theoretical (IT) based measures has shown that the S-I plane provides "richer" information about the pattern, organization, and similarity of the molecules used than the I-D and D-S planes. A linear relation of high accuracy is noted between the kinetic energy and the FI and OI measures. Another interesting regression was found between the atomization total energy and the atomization entropy. Unfortunately, the lack of group electronic energy transferability indicates that no general relations between the IT measures and the chemical reactivity indices are observed. The molecular set chosen for the study includes different types of molecules with various functional groups (19 groups). The set used is large enough (more than 700 molecules) and diverse to improve the previous understanding of molecular complexities

  19. A multi-objective discrete cuckoo search algorithm with local search for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin

    2016-03-01

    Detecting communities is a challenging task in analyzing networks. Solving the community detection problem with evolutionary algorithms has been an active research topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, termed negative ratio association and ratio cut, are minimized. These two functions can break through the limitation of modularity. In the proposed algorithm, the nest-location updating strategy and abandon operator of the cuckoo search are redefined in discrete form. A local search strategy and a clone operator are proposed to obtain the optimal initial population. The experimental results on synthetic and real-world networks show that the proposed algorithm has better performance than other algorithms and can discover higher-quality community structure without prior information.
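
    The two objectives named here have standard graph-partition definitions; the sketch below is a non-authoritative illustration in plain Python that computes negative ratio association and ratio cut for a candidate partition of an undirected network stored as adjacency sets (the function and variable names are this illustration's own, not the paper's).

      # Sketch: the two partition-quality objectives commonly minimized in
      # multi-objective community detection (assumed standard definitions).

      def link_weight(adj, inside, outside):
          # Number of (ordered) node pairs with an edge from `inside` to `outside`.
          return sum(1 for u in inside for v in adj[u] if v in outside)

      def negative_ratio_association(adj, communities):
          # NRA = -sum_k L(V_k, V_k) / |V_k|   (internal density, negated for minimization)
          return -sum(link_weight(adj, c, c) / len(c) for c in communities if c)

      def ratio_cut(adj, communities):
          # RC = sum_k L(V_k, V \ V_k) / |V_k|   (external connectivity)
          nodes = set(adj)
          return sum(link_weight(adj, c, nodes - c) / len(c) for c in communities if c)

      # Toy usage: two triangles joined by one edge, split into their natural communities.
      adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
      parts = [{0, 1, 2}, {3, 4, 5}]
      print(negative_ratio_association(adj, parts), ratio_cut(adj, parts))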

  20. Developments of global greenhouse gas retrieval algorithm using Aerosol information from GOSAT-CAI

    NASA Astrophysics Data System (ADS)

    Kim, Woogyung; kim, Jhoon; Jung, Yeonjin; lee, Hanlim; Boesch, Hartmut

    2014-05-01

    Human activities have increased the atmospheric CO2 concentration since the beginning of the Industrial Revolution, with CO2 exceeding 400 ppm at the Mauna Loa observatory for the first time (IPCC, 2007). However, our current knowledge of the carbon cycle is still insufficient due to a lack of observations. Satellite measurement is one of the most effective approaches to improve the accuracy of carbon source and sink estimates by monitoring the global CO2 distribution with high spatio-temporal resolution (Rayner and O'Brien, 2001; Houweling et al., 2004). GOSAT has provided valuable information for observing the global CO2 trend, extending our understanding of CO2 and supporting the preparation of future satellite missions. However, due to its physical limitations, GOSAT CO2 retrievals have low spatial resolution and cannot cover a wide area. Another obstacle to GOSAT CO2 retrieval is low data availability, mainly due to contamination by clouds and aerosols. In East Asia in particular, one of the most important aerosol source regions, it is hard to obtain successful retrievals due to high aerosol concentrations. The main purpose of this study is to improve the data availability of GOSAT CO2 retrievals. In this study, the current state of CO2 retrieval algorithm development is introduced and preliminary results are shown. The algorithm is based on the optimal estimation method and uses VLIDORT, a vector discrete ordinate radiative transfer model. This prototype algorithm, developed from various combinations of state vectors to find accurate CO2 concentrations, shows reasonable results. In particular, the aerosol retrieval algorithm using GOSAT-CAI measurements, which provide aerosol information for the same area as the GOSAT-FTS measurements, is utilized to supply input data for the CO2 retrieval. Other CO2 retrieval algorithms use chemical transport model results or climatologically expected values as aerosol information, which is the main reason for low data availability. With
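
    The optimal estimation method referred to here is usually implemented as a Gauss-Newton iteration in the Rodgers formulation; the snippet below is a minimal, hedged sketch of a single update step for a user-supplied forward model F and Jacobian K (all names and values are illustrative, not the authors' code).

      import numpy as np

      def oe_gauss_newton_step(x_i, x_a, y, F, K, S_a, S_e):
          """One Gauss-Newton step of optimal estimation (assumed Rodgers-style form):
          x_{i+1} = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 [y - F(x_i) + K (x_i - x_a)]
          """
          K_i = K(x_i)                       # Jacobian of the forward model at x_i
          Se_inv = np.linalg.inv(S_e)        # measurement error covariance, inverted
          Sa_inv = np.linalg.inv(S_a)        # a priori covariance, inverted
          A = K_i.T @ Se_inv @ K_i + Sa_inv  # curvature (inverse posterior covariance)
          b = K_i.T @ Se_inv @ (y - F(x_i) + K_i @ (x_i - x_a))
          return x_a + np.linalg.solve(A, b)

      # Toy usage with a linear forward model (so a single step converges).
      K_true = np.array([[1.0, 0.5], [0.2, 1.5]])
      F = lambda x: K_true @ x
      K = lambda x: K_true
      x_a = np.zeros(2); S_a = np.eye(2); S_e = 0.01 * np.eye(2)
      y = F(np.array([0.3, -0.2]))
      print(oe_gauss_newton_step(x_a, x_a, y, F, K, S_a, S_e))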

  1. A Patched-Grid Algorithm for Complex Configurations Directed Towards the F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Thomas, James L.; Walters, Robert W.; Reu, Taekyu; Ghaffari, Farhad; Weston, Robert P.; Luckring, James M.

    1989-01-01

    A patched-grid algorithm for the analysis of complex configurations with an implicit, upwind-biased Navier-Stokes solver is presented. Results from both a spatial-flux and a time-flux conservation approach to patching across zonal boundaries are presented. A generalized coordinate transformation with a biquadratic geometric element is used at the zonal interface in order to treat highly stretched viscous grids and arbitrarily-shaped zonal boundaries. Applications are made to the F-18 forebody-strake configuration at subsonic, high-alpha conditions. Computed surface flow patterns compare well with ground-based and flight-test results; the large effect of Reynolds number on the forebody flow-field is shown.

  2. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

    SciTech Connect

    Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

    2003-12-15

    We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

  3. Analysis of the initial values in split-complex backpropagation algorithm.

    PubMed

    Yang, Sheng-Sung; Siu, Sammy; Ho, Chia-Lu

    2008-09-01

    When a multilayer perceptron (MLP) is trained with the split-complex backpropagation (SCBP) algorithm, one observes a relatively strong dependence of the performance on the initial values. For the effective adjustments of the weights and biases in SCBP, we propose that the range of the initial values should be greater than that of the adjustment quantities. This criterion can reduce the misadjustment of the weights and biases. Based on this criterion, the suitable range of the initial values can be estimated. The results show that the suitable range of the initial values depends on the properties of the communication channel used and the structure of the MLP (the number of layers and the number of nodes in each layer). The results are studied using equalizer scenarios. The simulation results show that the estimated range of the initial values gives significantly improved performance. PMID:18779088
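
    For context, a split-complex network applies a real-valued activation separately to the real and imaginary parts of each neuron's net input. The sketch below is only an illustration of that structure and of the initialization range under discussion, under stated assumptions (tanh activation, a single complex neuron, uniform initialization in a configurable range); it does not reproduce the authors' analysis.

      import numpy as np

      rng = np.random.default_rng(0)

      def init_complex_weights(n_inputs, init_range):
          # Real and imaginary parts drawn independently from [-init_range, +init_range].
          re = rng.uniform(-init_range, init_range, n_inputs)
          im = rng.uniform(-init_range, init_range, n_inputs)
          return re + 1j * im

      def split_complex_neuron(w, x):
          # Split-complex activation: tanh applied separately to Re and Im of the net input.
          net = np.dot(w, x)
          return np.tanh(net.real) + 1j * np.tanh(net.imag)

      w = init_complex_weights(4, init_range=0.5)   # init_range is the quantity under study
      x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
      print(split_complex_neuron(w, x))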

  4. Information processing using a single dynamical node as complex system

    PubMed Central

    Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.

    2011-01-01

    Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
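
    A common way to realize the single-node-with-delay architecture in software is time multiplexing: the delay line is divided into N "virtual nodes" that are updated sequentially by one nonlinear map. The sketch below is a minimal numerical illustration under that assumption (tanh nonlinearity, random input mask); it is not the electronic implementation described in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def delay_reservoir(inputs, n_virtual=50, feedback=0.8, scale=0.5):
          """Single nonlinear node with delayed feedback, time-multiplexed into virtual nodes."""
          mask = rng.uniform(-1, 1, n_virtual)          # fixed input mask over one delay period
          state = np.zeros(n_virtual)                   # the delay line (previous period)
          states = []
          for u in inputs:                              # one scalar input per delay period
              new_state = np.empty(n_virtual)
              for k in range(n_virtual):                # sequential update of the virtual nodes
                  drive = feedback * state[k] + scale * mask[k] * u
                  new_state[k] = np.tanh(drive)         # the single nonlinear node
              state = new_state
              states.append(state.copy())
          return np.array(states)                       # (time, n_virtual) reservoir states

      X = delay_reservoir(np.sin(np.linspace(0, 8 * np.pi, 200)))
      print(X.shape)   # a readout is then a simple linear regression on these states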

  5. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...

  6. A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.

    ERIC Educational Resources Information Center

    Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind

    1999-01-01

    Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)

  7. Comparison of different information content models by using two strategies: development of the best information algorithm for Iliad.

    PubMed Central

    Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.

    1992-01-01

    Iliad is a diagnostic expert system for internal medicine. Iliad's "best information" mode is used to determine the most cost-effective findings to pursue next at any stage of a work-up. The "best information" algorithm combines an information content calculation together with a cost factor. The calculations then provide a rank-ordering of the alternative patient findings according to cost-effectiveness. The authors evaluated five information content models under two different strategies. The first, the single-frame strategy, considers findings only within the context of each individual disease frame. The second, the across-frame strategy, considers the information that a single finding could provide across several diseases. The study found that (1) a version of Shannon's information model performed the best under both strategies---this finding confirms the result of a previous independent study, (2) the across-frame strategy was preferred over the single-frame strategy. PMID:1482918
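
    As a hedged, simplified sketch of the idea of ranking candidate findings by expected information per unit cost (the exact Iliad calculation is not reproduced here), one can score each binary finding by the expected reduction in Shannon entropy of a disease probability divided by the finding's cost:

      import math

      def entropy(p):
          # Binary Shannon entropy of a disease probability p.
          return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

      def expected_information_per_cost(p_disease, p_pos_given_d, p_pos_given_not_d, cost):
          """Expected entropy reduction from observing a binary finding, divided by its cost.
          All probabilities and costs here are illustrative inputs, not Iliad's knowledge base."""
          p_pos = p_disease * p_pos_given_d + (1 - p_disease) * p_pos_given_not_d
          p_d_pos = p_disease * p_pos_given_d / p_pos
          p_d_neg = p_disease * (1 - p_pos_given_d) / (1 - p_pos)
          expected_posterior_entropy = p_pos * entropy(p_d_pos) + (1 - p_pos) * entropy(p_d_neg)
          info_gain = entropy(p_disease) - expected_posterior_entropy
          return info_gain / cost

      # Rank two hypothetical findings for a disease with prior probability 0.3.
      print(expected_information_per_cost(0.3, 0.9, 0.2, cost=1.0),
            expected_information_per_cost(0.3, 0.7, 0.4, cost=5.0))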

  8. Link Prediction in Complex Networks: A Mutual Information Perspective

    PubMed Central

    Tan, Fei; Xia, Yongxiang; Zhu, Boyao

    2014-01-01

    Topological properties of networks have recently been widely applied to study the link-prediction problem. Common Neighbors, for example, is a natural yet efficient framework. Many variants of Common Neighbors have thus been proposed to further boost the discriminative resolution of candidate links. In this paper, we reexamine the role of network topology in predicting missing links from the perspective of information theory, and present a practical approach based on the mutual information of network structures. It not only improves the prediction accuracy substantially, but also has reasonable computational complexity. PMID:25207920
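
    To make the general idea concrete, the hedged sketch below scores a candidate link by summing, over common neighbors, a weight that grows when the shared neighbor is rare (its self-information, the negative log of its relative degree). This is only an information-flavored variant of Common Neighbors used for illustration, not the specific mutual-information estimator derived in the paper.

      import math

      def self_information_weighted_cn(adj, u, v):
          """Score a candidate link (u, v): sum over common neighbors z of -log2(deg(z)/2m)."""
          two_m = sum(len(nbrs) for nbrs in adj.values())        # 2 * number of edges
          common = adj[u] & adj[v]
          return sum(-math.log2(len(adj[z]) / two_m) for z in common)

      # Toy undirected network as adjacency sets.
      adj = {
          "a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b", "e"},
          "d": {"a", "e"},      "e": {"c", "d"},
      }
      # Rank two non-edges by score (higher = more likely to appear).
      for pair in [("b", "d"), ("c", "d")]:
          print(pair, round(self_information_weighted_cn(adj, *pair), 3))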

  9. Information processing in neural networks with the complex dynamic thresholds

    NASA Astrophysics Data System (ADS)

    Kirillov, S. Yu.; Nekorkin, V. I.

    2016-06-01

    A control mechanism for information processing in neural networks is investigated, based on a complex dynamic threshold of neural excitation. The threshold properties are controlled by the slowly varying synaptic current. The dynamic threshold shows high sensitivity to the rate of synaptic current variation. This allows both flexible selective tuning of the network elements and nontrivial regimes of neural coding.

  10. Combining spatial and spectral information to improve crop/weed discrimination algorithms

    NASA Astrophysics Data System (ADS)

    Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.

    2012-01-01

    Reducing herbicide spraying is an important key to environmentally and economically improved weed management. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information is able to detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms. The classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images generated with the SimAField model has been used and combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method in this paper and shows an important improvement, from 86% weed detection to more than 95%.

  11. A new bio-optical algorithm for the remote sensing of algal blooms in complex ocean waters

    NASA Astrophysics Data System (ADS)

    Shanmugam, Palanisamy

    2011-04-01

    A new bio-optical algorithm has been developed to provide accurate assessments of chlorophyll a (Chl a) concentration for detection and mapping of algal blooms from satellite data in optically complex waters, where the presence of suspended sediments and dissolved substances can interfere with phytoplankton signal and thus confound conventional band ratio algorithms. A global data set of concurrent measurements of pigment concentration and radiometric reflectance was compiled and used to develop this algorithm that uses the normalized water-leaving radiance ratios along with an algal bloom index (ABI) between three visible bands to determine Chl a concentrations. The algorithm is derived using Sea-viewing Wide Field-of-view Sensor bands, and it is subsequently tuned to be applicable to Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua data. When compared with large in situ data sets and satellite matchups in a variety of coastal and ocean waters the present algorithm makes good retrievals of the Chl a concentration and shows statistically significant improvement over current global algorithms (e.g., OC3 and OC4v4). An examination of the performance of these algorithms on several MODIS/Aqua images in complex waters of the Arabian Sea and west Florida shelf shows that the new algorithm provides a better means for detecting and differentiating algal blooms from other turbid features, whereas the OC3 algorithm has significant errors although yielding relatively consistent results in clear waters. These findings imply that, provided that an accurate atmospheric correction scheme is available to deal with complex waters, the current MODIS/Aqua, MERIS and OCM data could be extensively used for quantitative and operational monitoring of algal blooms in various regional and global waters.

  12. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) display a tendency toward premature convergence when dealing with scheduling problems. To adjust the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its ability to solve complex task scheduling optimization problems.
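
    One widely used way to make crossover and mutation probabilities self-adaptive is to scale them by how close an individual's fitness is to the current best, in the spirit of the Srinivas-Patnaik adaptive GA. The sketch below illustrates that generic scheme under stated assumptions; it is not the specific adaptation rule of this paper.

      def adaptive_rates(fitness, f_max, f_avg, pc_max=0.9, pm_max=0.1):
          """Return (crossover prob, mutation prob) for one individual.
          Above-average individuals get lower rates (to preserve them); below-average
          individuals keep the maximum rates (to keep exploring)."""
          if fitness < f_avg or f_max == f_avg:
              return pc_max, pm_max
          scale = (f_max - fitness) / (f_max - f_avg)
          return pc_max * scale, pm_max * scale

      population_fitness = [3.0, 5.0, 8.0, 9.5, 10.0]
      f_max, f_avg = max(population_fitness), sum(population_fitness) / len(population_fitness)
      for f in population_fitness:
          print(f, adaptive_rates(f, f_max, f_avg))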

  13. The impact of reconstruction algorithms and time of flight information on PET/CT image quality

    PubMed Central

    Suljic, Alen; Tomse, Petra; Jensterle, Luka; Skrk, Damijan

    2015-01-01

    Background: The aim of the study was to explore the influence of various time-of-flight (TOF) and non-TOF reconstruction algorithms on positron emission tomography/computed tomography (PET/CT) image quality. Materials and methods: Measurements were performed with a triple line source phantom, consisting of capillaries with an internal diameter of ∼1 mm, and a standard Jaszczak phantom. Each of the data sets was reconstructed using an analytical filtered back projection (FBP) algorithm, an iterative ordered subsets expectation maximization (OSEM) algorithm (4 iterations, 24 subsets) and an iterative True-X algorithm incorporating a specific point spread function (PSF) correction (4 iterations, 21 subsets). Baseline OSEM (2 iterations, 8 subsets) was included for comparison. Procedures were undertaken following the National Electrical Manufacturers Association (NEMA) NU-2-2001 protocol. Results: Spatial resolution in full width at half maximum (FWHM) was 5.2 mm, 4.5 mm and 2.9 mm for FBP, OSEM and True-X, and 5.1 mm, 4.5 mm and 2.9 mm for FBP+TOF, OSEM+TOF and True-X+TOF, respectively. Assessment of reconstructed Jaszczak images at different concentration ratios showed that incorporation of TOF information improves cold contrast, and hot contrast only slightly; the most prominent improvement was seen in background variability, i.e. noise reduction. Conclusions: On the basis of the results of the investigation we concluded that incorporation of TOF information in the reconstruction algorithm mostly reduces background variability (the level of noise in the image), while the improvement of spatial resolution due to TOF information is negligible. Comparison of traditional and modern reconstruction algorithms showed that analytical FBP yields comparable results in some parameter measurements, such as cold contrast and relative count error. Iterative methods show the highest levels of hot contrast when TOF and PSF corrections were applied

  14. The algorithmic complexity of neural spike trains increases during focal seizures.

    PubMed

    Rapp, P E; Zimmerman, I D; Vining, E P; Cohen, N; Albano, A M; Jiménez-Montaño, M A

    1994-08-01

    The interspike interval spike trains of spontaneously active cortical neurons can display nonrandom internal structure. The degree of nonrandom structure can be quantified and was found to decrease during focal epileptic seizures. Greater statistical discrimination between the two physiological conditions (normal vs seizure) was obtained with measurements of context-free grammar complexity than by measures of the distribution of the interspike intervals such as the mean interval, its standard deviation, skewness, or kurtosis. An examination of fixed epoch data sets showed that two factors contribute to the complexity: the firing rate and the internal structure of the spike train. However, calculations with randomly shuffled surrogates of the original data sets showed that the complexity is not completely determined by the firing rate. The sequence-sensitive structure of the spike train is a significant contributor. By combining complexity measurements with statistically related surrogate data sets, it is possible to classify neurons according to the dynamical structure of their spike trains. This classification could not have been made on the basis of conventional distribution-determined measures. Computations with more sophisticated kinds of surrogate data show that the structure observed using complexity measures cannot be attributed to linearly correlated noise or to linearly correlated noise transformed by a static monotonic nonlinearity. The patterns in spike trains appear to reflect genuine nonlinear structure. The limitations of these results are also discussed. The results presented in this article do not, of themselves, establish the presence of a fine-structure encoding of neural information. PMID:8046447
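
    The paper's context-free grammar complexity is not reproduced here; as a hedged stand-in that captures the same workflow (symbolize the interspike intervals, compute a sequence-complexity measure, compare against randomly shuffled surrogates), the sketch below uses a Lempel-Ziv-style phrase count on a median-binarized interval sequence.

      import random
      import statistics

      def lz_phrase_complexity(symbols):
          # Count distinct phrases in a greedy left-to-right incremental parsing (LZ78-style).
          phrases, i = set(), 0
          s = "".join(map(str, symbols))
          while i < len(s):
              j = i + 1
              while s[i:j] in phrases and j <= len(s):
                  j += 1
              phrases.add(s[i:j])
              i = j
          return len(phrases)

      def surrogate_test(intervals, n_surrogates=200, seed=0):
          """Complexity of the original symbol sequence vs. randomly shuffled surrogates."""
          median = statistics.median(intervals)
          symbols = [1 if isi > median else 0 for isi in intervals]
          original = lz_phrase_complexity(symbols)
          rng = random.Random(seed)
          surrogate_values = []
          for _ in range(n_surrogates):
              shuffled = symbols[:]
              rng.shuffle(shuffled)                 # destroys sequence structure, keeps the rate
              surrogate_values.append(lz_phrase_complexity(shuffled))
          return original, statistics.mean(surrogate_values)

      intervals = [12, 13, 12, 40, 41, 12, 13, 40, 41, 12, 13, 40] * 5   # patterned toy data
      print(surrogate_test(intervals))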

  15. Estimating Diffusion Network Structures: Recovery Conditions, Sample Complexity & Soft-thresholding Algorithm

    PubMed Central

    Daneshmand, Hadi; Gomez-Rodriguez, Manuel; Song, Le; Schölkopf, Bernhard

    2015-01-01

    Information spreads across social and technological networks, but often the network structures are hidden from us and we only observe the traces left by the diffusion processes, called cascades. Can we recover the hidden network structures from these observed cascades? What kind of cascades and how many cascades do we need? Are there some network structures which are more difficult than others to recover? Can we design efficient inference algorithms with provable guarantees? Despite the increasing availability of cascade-data and methods for inferring networks from these data, a thorough theoretical understanding of the above questions remains largely unexplored in the literature. In this paper, we investigate the network structure inference problem for a general family of continuous-time diffusion models using an ℓ1-regularized likelihood maximization framework. We show that, as long as the cascade sampling process satisfies a natural incoherence condition, our framework can recover the correct network structure with high probability if we observe O(d³ log N) cascades, where d is the maximum number of parents of a node and N is the total number of nodes. Moreover, we develop a simple and efficient soft-thresholding inference algorithm, which we use to illustrate the consequences of our theoretical results, and show that our framework outperforms other alternatives in practice. PMID:25932466
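
    The soft-thresholding operator at the heart of ℓ1-regularized estimation has a standard closed form; the sketch below shows it together with one proximal-gradient (ISTA-style) step for a generic smooth loss, as a non-authoritative illustration (the paper's specific likelihood and recovery guarantees are not reproduced).

      import numpy as np

      def soft_threshold(x, tau):
          # prox of tau * ||.||_1 : shrink each coordinate toward zero by tau.
          return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

      def proximal_gradient_step(theta, grad_loss, step, lam):
          """One ISTA-style step for: minimize loss(theta) + lam * ||theta||_1."""
          return soft_threshold(theta - step * grad_loss(theta), step * lam)

      # Toy usage: sparse recovery with a quadratic loss 0.5 * ||A theta - b||^2.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 20))
      truth = np.zeros(20); truth[[2, 7]] = [1.5, -2.0]
      b = A @ truth
      grad = lambda th: A.T @ (A @ th - b)
      theta = np.zeros(20)
      step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
      for _ in range(300):
          theta = proximal_gradient_step(theta, grad, step, lam=0.1)
      print(np.round(theta, 2))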

  16. Neural network and genetic algorithm technology in data mining of manufacturing quality information

    NASA Astrophysics Data System (ADS)

    Song, Limei; Qu, Xing-Hua; Ye, Shenghua

    2002-03-01

    Data mining of Manufacturing Quality Information (MQI) is a key technology in quality lead control. Among data mining methods, neural networks and genetic algorithms are widely used for their strong advantages, such as nonlinearity, parallelism, and accuracy. However, used individually, each has limitations, such as slow convergence and blind search. This paper combines their merits and uses a genetic BP algorithm for data mining of MQI. It has been successfully used in the key project of the Natural Science Foundation of China (NSFC) - Quality Control and Zero-defect Engineering (Project No. 59735120).

  17. Information Driven Self-Organization of Complex Robotic Behaviors

    PubMed Central

    Martius, Georg; Der, Ralf; Ay, Nihat

    2013-01-01

    Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predicting information (TiPI) which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems to scale well. PMID:23723979

  18. Information driven self-organization of complex robotic behaviors.

    PubMed

    Martius, Georg; Der, Ralf; Ay, Nihat

    2013-01-01

    Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predicting information (TiPI) which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems to scale well. PMID:23723979

  19. Integrated computational and conceptual solutions for complex environmental information management

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2016-06-01

    This paper presents recent results on the integration of computational and conceptual solutions for the complex case of environmental information management. The major goal, creating and developing long-term multi-disciplinary knowledge resources with conceptual and computational support, was achieved by implementing and integrating key components. The key components are: long-term knowledge resources providing the required structures for universal knowledge creation, documentation, and preservation; universal multi-disciplinary and multi-lingual conceptual knowledge and classification, in particular references to the Universal Decimal Classification (UDC); sustainable workflows for environmental information management; and computational support for dynamical use, processing, and advanced scientific computing with Integrated Information and Computing System (IICS) components and High End Computing (HEC) resources.

  20. Encoding techniques for complex information structures in connectionist systems

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Two general information encoding techniques called relative position encoding and pattern similarity association are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short term information processing of the sort needed in common sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. The particular, simple forms that the relative position encoding and pattern similarity association techniques take in the author's own connectionist system, called Conposit, are discussed in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.

  1. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro

    2010-12-01

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
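
    For orientation, the partition function at the core of this interpretation is a temperature-deformed version of Chaitin's Omega taken over the domain of an optimal prefix-free machine U; in the commonly quoted form (reproduced here from memory and to be checked against Tadaki's cited papers),

      Z(T) = \sum_{p \in \mathrm{dom}\, U} 2^{-|p|/T}, \qquad 0 < T \le 1,

    with Z(1) recovering Chaitin's halting probability, and with F(T), E(T), S(T) and C(T) then defined by applying the usual thermodynamic relations to this Z(T).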

  2. Statistical physics of networks, information and complex systems

    SciTech Connect

    Ecke, Robert E

    2009-01-01

    In this project we explore the mathematical methods and concepts of statistical physics that are finding abundant applications across the scientific and technological spectrum from soft condensed matter systems and bio-informatics to economic and social systems. Our approach exploits the considerable similarity of concepts between statistical physics and computer science, allowing for a powerful multi-disciplinary approach that draws its strength from cross-fertilization and multiple interactions of researchers with different backgrounds. The work on this project takes advantage of the newly appreciated connection between computer science and statistics and addresses important problems in data storage, decoding, optimization, the information processing properties of the brain, the interface between quantum and classical information science, the verification of large software programs, modeling of complex systems including disease epidemiology, resource distribution issues, and the nature of highly fluctuating complex systems. Common themes that the project has been emphasizing are (i) neural computation, (ii) network theory and its applications, and (iii) a statistical physics approach to information theory. The project's efforts focus on the general problem of optimization and variational techniques, algorithm development and information theoretic approaches to quantum systems. These efforts are responsible for fruitful collaborations and the nucleation of science efforts that span multiple divisions such as EES, CCS, 0, T, ISR and P. This project supports the DOE mission in Energy Security and Nuclear Non-Proliferation by developing novel information science tools for communication, sensing, and interacting complex networks such as the internet or energy distribution system. The work also supports programs in Threat Reduction and Homeland Security.

  3. SNP Markers as Additional Information to Resolve Complex Kinship Cases

    PubMed Central

    Pontes, M. Lurdes; Fondevila, Manuel; Laréu, Maria Victoria; Medeiros, Rui

    2015-01-01

    Background: DNA profiling with sets of highly polymorphic autosomal short tandem repeat (STR) markers has been applied in various aspects of human identification in forensic casework for nearly 20 years. However, in some cases of complex kinship investigation, the information provided by the conventionally used STR markers is not enough, often resulting in low likelihood ratio (LR) calculations. In these cases, it becomes necessary to increase the number of loci under analysis to reach adequate LRs. Recently, it has been proposed that single nucleotide polymorphisms (SNPs) could be used as a supportive tool to STR typing, eventually even replacing the methods/markers now employed. Methods: In this work, we describe the results obtained in 7 revised complex paternity cases when applying a battery of STRs, as well as 52 human identification SNPs (SNPforID 52plex identification panel), using a SNaPshot methodology followed by capillary electrophoresis. Results: Our results show that the analysis of SNPs, as a complement to STR typing in forensic casework applications, would increase the total PI values and the corresponding Essen-Möller W value by at least a factor of 4. Conclusions: We demonstrated that SNP genotyping could be a key complement to STR information in challenging casework of disputed paternity, such as close relative individualization or complex pedigrees subject to endogamous relations. PMID:26733770
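
    For reference, the combined paternity index and the Essen-Möller value referred to here follow the standard formulas, stated under the usual assumption of independent loci:

      CPI = \prod_{i=1}^{n} PI_i, \qquad W = \frac{CPI}{CPI + 1},

    so adding informative SNP loci multiplies the CPI and pushes W toward 1.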

  4. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA) to estimate actual evapotranspiration under complex terrain

    NASA Astrophysics Data System (ADS)

    Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.

    2010-07-01

    Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover in concert with some varying kinetic parameters (i.e., roughness and zero-plane displacement) over time. In addition, the dry and wet pixels are recognized automatically and dynamically during image processing, thereby making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was used to produce robust estimates of 24 h solar radiation over time, which leads to smooth simulation of ET over the seasons in northern China, where the regional climate and seasonal vegetation cover complicate the ET calculations. The SEBTA was validated against measured data at the ground level; the consistency index reached 0.92 and the correlation coefficient was 0.87.
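
    Like other SEBAL-type models, the approach ultimately estimates ET from the residual of the surface energy balance; for orientation, the standard form (not the paper's full topographic formulation) is

      \lambda E = R_n - G - H,

    where R_n is the net radiation, G the soil heat flux, H the sensible heat flux, and the latent heat flux \lambda E is converted to an equivalent ET depth; the topographic corrections enter through the slope- and aspect-dependent R_n and the dynamically selected dry and wet pixels used to anchor H.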

  5. I/O efficient algorithms and applications in geographic information systems

    NASA Astrophysics Data System (ADS)

    Danner, Andrew

    Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.

  6. Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.

    PubMed

    Bae, Juhee; Watson, Benjamin

    2014-12-01

    In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver viewers those cues using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator. PMID:26356911

  7. FctClus: A Fast Clustering Algorithm for Heterogeneous Information Networks.

    PubMed

    Yang, Jing; Chen, Limin; Zhang, Jianpei

    2015-01-01

    It is important to cluster heterogeneous information networks. A fast clustering algorithm based on an approximate commute time embedding for heterogeneous information networks with a star network schema is proposed in this paper by utilizing the sparsity of heterogeneous information networks. First, a heterogeneous information network is transformed into multiple compatible bipartite graphs from the compatible point of view. Second, the approximate commute time embedding of each bipartite graph is computed using random mapping and a linear time solver. All of the indicator subsets in each embedding simultaneously determine the target dataset. Finally, a general model is formulated by these indicator subsets, and a fast algorithm is derived by simultaneously clustering all of the indicator subsets using the sum of the weighted distances for all indicators for an identical target object. The proposed fast algorithm, FctClus, is shown to be efficient and generalizable and exhibits high clustering accuracy and fast computation speed based on a theoretic analysis and experimental verification. PMID:26090857

  8. Information Geometry of Complex Hamiltonians and Exceptional Points

    NASA Astrophysics Data System (ADS)

    Brody, Dorje; Graefe, Eva-Maria

    2013-08-01

    Information geometry provides a tool to systematically investigate parameter sensitivity of the state of a system. If a physical system is described by a linear combination of eigenstates of a complex (that is, non-Hermitian) Hamiltonian, then there can be phase transitions where dynamical properties of the system change abruptly. In the vicinities of the transition points, the state of the system becomes highly sensitive to the changes of the parameters in the Hamiltonian. The parameter sensitivity can then be measured in terms of the Fisher-Rao metric and the associated curvature of the parameter-space manifold. A general scheme for the geometric study of parameter-space manifolds of eigenstates of complex Hamiltonians is outlined here, leading to generic expressions for the metric.
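
    For background, in the Hermitian limit the parameter-space metric being generalized is the standard Fubini-Study (Fisher-Rao) metric on normalized pure states |\psi(\lambda)\rangle (quoted here as a reminder; the paper's extension to eigenstates of non-Hermitian Hamiltonians is not reproduced):

      G_{ab} = \operatorname{Re}\!\left[ \langle \partial_a \psi | \partial_b \psi \rangle - \langle \partial_a \psi | \psi \rangle \langle \psi | \partial_b \psi \rangle \right],

    whose growth as eigenvalues coalesce signals the heightened parameter sensitivity near an exceptional point.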

  9. Tactile information processing in the trigeminal complex of the rat

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Tupitsyn, Anatoly N.; Makarov, Valery A.; Panetsos, Fivos; Moreno, Angel; Garcia-Gonzalez, Victor; Sanchez-Jimenez, Abel

    2007-02-01

    We study mechanisms of information processing in the principalis (Pr5), oralis (Sp5o) and interpolaris (Sp5i) nuclei of the trigeminal sensory complex of the rat under whisker stimulation by short air puffs. After the standard electrophysiological description of the neural spiking activity we apply a novel wavelet based method quantifying the structural stability of firing patterns evoked by a periodic whisker stimulation. We show that the response stability depends on the puff duration delivered to the vibrissae and differs among the analyzed nuclei. Pr5 and Sp5i exhibit the maximal stability to an intermediate stimulus duration, whereas Sp5o shows "preference" for short stimuli.

  10. Hierarchical mutual information for the comparison of hierarchical community structures in complex networks.

    PubMed

    Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido

    2015-12-01

    The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks. PMID:26764762
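
    The flat (non-hierarchical) base case that the hierarchical mutual information generalizes is the usual normalized mutual information between two partitions; a minimal sketch of that base case is given below (the hierarchical generalization itself is not reproduced here).

      import math
      from collections import Counter

      def normalized_mutual_information(labels_a, labels_b):
          """NMI between two flat partitions, normalized by the mean of the two entropies."""
          n = len(labels_a)
          pa, pb = Counter(labels_a), Counter(labels_b)
          joint = Counter(zip(labels_a, labels_b))
          h_a = -sum(c / n * math.log(c / n) for c in pa.values())
          h_b = -sum(c / n * math.log(c / n) for c in pb.values())
          mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
                   for (a, b), c in joint.items())
          return 0.0 if h_a == 0 or h_b == 0 else mi / ((h_a + h_b) / 2)

      # Toy usage: two slightly different community assignments of eight nodes.
      community_1 = [0, 0, 0, 1, 1, 1, 2, 2]
      community_2 = [0, 0, 1, 1, 1, 1, 2, 2]
      print(round(normalized_mutual_information(community_1, community_2), 3))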

  11. Hierarchical mutual information for the comparison of hierarchical community structures in complex networks

    NASA Astrophysics Data System (ADS)

    Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido

    2015-12-01

    The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks.

  12. Some elements of mathematical information theory and total inversion algorithm applied to travel time inversion

    NASA Astrophysics Data System (ADS)

    Martínez, M. D.; Lana, X.

    1991-03-01

    The total inversion algorithm and some elements of Mathematical Information Theory are used in the treatment of travel-time data belonging to a seismic refraction experiment from the southern segment (Sardinia Channel) of the European Geotraverse Project. The inversion algorithm allows us to improve a preliminary propagating model obtained by means of the usual trial-and-error procedure and to quantify the resolution degree of the parameters defining the crust and upper mantle of such a model. Concepts related to Mathematical Information Theory detect the seismic profiles of the refraction experiment which give the most homogeneous coverage of the model in terms of the number of trajectories crossing it. Finally, the efficiency of the inversion procedure is quantified and the uncertainties regarding knowledge of different parts of the model are also evaluated.

  13. Specificity, promiscuity, and the structure of complex information processing networks

    NASA Astrophysics Data System (ADS)

    Myers, Christopher

    2006-03-01

    Both the top-down designs of engineered systems and the bottom-up serendipities of biological evolution must negotiate tradeoffs between specificity and control: overly specific interactions between components can make systems brittle and unevolvable, while more generic interactions can require elaborate control in order to aggregate specificity from distributed pieces. Complex information processing systems reveal network organizations that navigate this landscape of constraints: regulatory and signaling networks in cells involve the coordination of molecular interactions that are surprisingly promiscuous, and object-oriented design in software systems emphasizes the polymorphic composition of objects of minimal necessary specificity [C.R. Myers, Phys Rev E 68, 046116 (2003)]. Models of information processing arising both in systems biology and engineered computation are explored to better understand how particular network organizations can coordinate the activity of promiscuous components to achieve robust and evolvable function.

  14. Management of complex immunogenetics information using an enhanced relational model.

    PubMed

    Barsalou, T; Sujansky, W; Herzenberg, L A; Wiederhold, G

    1991-10-01

    Flow cytometry has become a technique of paramount importance in the armamentarium of the scientist in such domains as immunogenetics. In the PENGUIN project, we are currently developing the architecture for an expert database system to facilitate the design of flow-cytometry experiments. This paper describes the core of this architecture--a methodology for managing complex biomedical information in an extended relational framework. More specifically, we exploit a semantic data model to enhance relational databases with structuring and manipulation tools that take more domain information into account and provide the user with an appropriate level of abstraction. We present specific applications of the structural model to database schema management, data retrieval and browsing, and integrity maintenance. PMID:1743006

  15. Constant Modulus Algorithm with Reduced Complexity Employing DFT Domain Fast Filtering

    NASA Astrophysics Data System (ADS)

    Yang, Yoon Gi; Lee, Chang Su; Yang, Soo Mi

    In this paper, a novel CMA (constant modulus algorithm) employing fast convolution in the DFT (discrete Fourier transform) domain is proposed. We propose a non-linear adaptation algorithm that minimizes the CMA cost function in the DFT domain. The proposed algorithm is a completely new one compared to the recently introduced similar DFT-domain CMA algorithm, in that the original CMA cost function has not been changed to develop the DFT-domain algorithm, resulting in improved convergence properties. Using the proposed approach, we can reduce the number of multiplications to O(N log₂ N), whereas the conventional CMA has a computational complexity of O(N²). Simulation results show that the proposed algorithm provides performance comparable to the conventional CMA.
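
    For readers unfamiliar with the baseline, the sketch below is the conventional time-domain CMA update that such DFT-domain schemes accelerate (the block/DFT formulation of the paper is not reproduced); R2 is the usual constant-modulus dispersion constant, and all parameter values are illustrative.

      import numpy as np

      def cma_equalize(received, n_taps=11, mu=1e-3, R2=1.0):
          """Conventional time-domain constant modulus algorithm for blind equalization.
          Cost: J = E[(|y|^2 - R2)^2]; stochastic-gradient update on the complex taps."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                      # centre-spike initialization
          out = np.empty(len(received) - n_taps, dtype=complex)
          for n in range(len(out)):
              x = received[n:n + n_taps][::-1]      # regressor (most recent sample first)
              y = np.dot(w, x)
              e = y * (np.abs(y) ** 2 - R2)         # CMA error term
              w = w - mu * e * np.conj(x)           # gradient descent on J
              out[n] = y
          return w, out

      # Toy usage: QPSK symbols through a mild complex channel.
      rng = np.random.default_rng(0)
      symbols = (rng.choice([1, -1], 4000) + 1j * rng.choice([1, -1], 4000)) / np.sqrt(2)
      channel = np.array([1.0, 0.3 + 0.2j, 0.1])
      received = np.convolve(symbols, channel)
      w, y = cma_equalize(received)
      print(np.mean((np.abs(y[-500:]) ** 2 - 1.0) ** 2))   # output dispersion after adaptation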

  16. Teaching Problem Solving; the Effect of Algorithmic and Heuristic Problem Solving Training in Relation to Task Complexity and Relevant Aptitudes.

    ERIC Educational Resources Information Center

    de Leeuw, L.

    Sixty-four fifth- and sixth-grade pupils were taught number series extrapolation by either an algorithmic, fully prescribed problem-solving method or a heuristic, less prescribed method. The trained problems fell within categories of two degrees of complexity. There were 16 subjects in each cell of the 2 by 2 design used. Aptitude Treatment…

  17. Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Tein, Lim Huai; Ramli, Razamin

    2014-12-01

    Over the years, nurse scheduling has been a prominent problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. The current undesirable work schedules are partly responsible for that working condition. Fundamentally, there is a lack of complementary requirements between the head nurse's responsibilities and the nurses' needs. In particular, given the weight of nurse preferences, the sophisticated challenge in nurse scheduling is the failure to foster tolerance between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve when trying to satisfy diverse nurse requests while upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in a nurse scheduling problem (NSP). The restrictions of the EA are discussed and, thus, enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which can be handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation as well as to flexibility in the search, which corresponds to the employment of exploration and exploitation principles.
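
    A common way to let an EA handle the three constraint classes is a weighted penalty fitness, with hard violations weighted far more heavily than semi-hard and soft ones. The sketch below illustrates only that generic idea; the weights, constraint checks and roster encoding are placeholders, not the paper's.

      def roster_fitness(roster, hard_checks, semi_hard_checks, soft_checks,
                         w_hard=1000.0, w_semi=10.0, w_soft=1.0):
          """Lower is better: weighted count of violated constraints of each class."""
          penalty = 0.0
          penalty += w_hard * sum(1 for check in hard_checks if not check(roster))
          penalty += w_semi * sum(1 for check in semi_hard_checks if not check(roster))
          penalty += w_soft * sum(1 for check in soft_checks if not check(roster))
          return penalty

      # Toy roster: one week, nurse -> list of shifts ('M', 'E', 'N' or '-' for off).
      roster = {"ann": ["M", "M", "E", "-", "N", "N", "-"],
                "bob": ["E", "-", "M", "M", "-", "E", "N"]}

      min_cover = lambda r: all(any(days[d] != "-" for days in r.values()) for d in range(7))
      max_five_shifts = lambda r: all(sum(s != "-" for s in days) <= 5 for days in r.values())
      weekend_off_pref = lambda r: any(days[5] == days[6] == "-" for days in r.values())

      print(roster_fitness(roster, [min_cover], [max_five_shifts], [weekend_off_pref]))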

  18. Information theoretic bounds of ATR algorithm performance for sidescan sonar target classification

    NASA Astrophysics Data System (ADS)

    Myers, Vincent L.; Pinto, Marc A.

    2005-05-01

    With research on autonomous underwater vehicles for minehunting beginning to focus on cooperative and adaptive behaviours, some effort is being spent on developing automatic target recognition (ATR) algorithms that are able to operate with high reliability under a wide range of scenarios, particularly in areas of high clutter density, and without human supervision. Because of the great diversity of pattern recognition methods and continuously improving sensor technology, there is an acute requirement for objective performance measures that are independent of any particular sensor, algorithm or target definitions. This paper approaches the ATR problem from the point of view of information theory in an attempt to place bounds on the performance of target classification algorithms that are based on the acoustic shadow of proud targets. Performance is bounded by analysing the simplest of shape classification tasks, that of differentiating between a circular and square shadow, thus allowing us to isolate system design criteria and assess their effect on the overall probability of classification. The information that can be used for target recognition in sidescan sonar imagery is examined and common information theory relationships are used to derive properties of the ATR problem. Some common bounds with analytical solutions are also derived.

  19. Information theoretic discrepancy-based iterative reconstruction (IDIR) algorithm for limited angle tomography

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Lee, Jongha; Lee, Kangui; Sung, Younghun; Lee, SeungDeok

    2012-03-01

    X-ray tomosynthesis, which measures several low-dose projections over a limited angular range, has been investigated as an alternative to X-ray mammography for breast cancer screening. Extending the scan coverage increases the vertical resolution by mitigating interplane blurring. The implementation of wide-angle tomosynthesis equipment, however, may not be straightforward, mainly due to image deterioration from statistical noise in the exterior projections. In this paper, we adopt a voltage modulation scheme to enlarge the coverage of the tomosynthesis scan. Higher tube voltages are used at the outer angles, which provides sufficient penetrating power for the outlying frames in which the path of the X-ray photons is elongated. To reconstruct 3D information from voltage-modulated projections, we propose a novel algorithm, named the information theoretic discrepancy based iterative reconstruction (IDIR) algorithm, which accounts for the polychromatic acquisition model. The generalized information theoretic discrepancy (GID) is newly employed as the objective function. Using particular features of the GID, the cost function is derived in terms of imaginary variables with energy dependency, which leads to a tractable optimization problem without resorting to the monochromatic approximation. In preliminary experiments using simulated and experimental equipment, the proposed imaging architecture and IDIR algorithm showed superior performance to conventional approaches.

  20. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    SciTech Connect

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (approximately 3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
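
    At the core of the first contribution is the conversion of consecutive elevation samples into an instantaneous slope and a grade term in the energy-per-distance estimate. The sketch below illustrates that idea only; the vehicle mass, drivetrain efficiency, and the simple gravity correction are generic physics assumptions, not the thesis' algorithm.

```python
import math

G = 9.81               # m/s^2
MASS_KG = 1700.0       # assumed PHEV mass
DRIVETRAIN_EFF = 0.85  # assumed efficiency for the illustrative correction

def road_slope(elev1_m, elev2_m, dist_m):
    """Instantaneous slope (rise over run) between two consecutive samples."""
    return (elev2_m - elev1_m) / dist_m if dist_m > 0 else 0.0

def grade_energy_wh_per_mile(slope):
    """Extra (or recovered) energy per mile attributable to grade alone."""
    joules_per_meter = MASS_KG * G * slope / DRIVETRAIN_EFF
    return joules_per_meter * 1609.34 / 3600.0

# Example: a sustained 2% climb costs roughly 175 Wh/mile under these assumptions.
print(round(grade_energy_wh_per_mile(road_slope(100.0, 102.0, 100.0)), 1))
```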

  1. An Effective Tri-Clustering Algorithm Combining Expression Data with Gene Regulation Information

    PubMed Central

    Li, Ao; Tuck, David

    2009-01-01

    Motivation Bi-clustering algorithms aim to identify sets of genes sharing similar expression patterns across a subset of conditions. However direct interpretation or prediction of gene regulatory mechanisms may be difficult as only gene expression data is used. Information about gene regulators may also be available, most commonly about which transcription factors may bind to the promoter region and thus control the expression level of a gene. Thus a method to integrate gene expression and gene regulation information is desirable for clustering and analyzing. Methods By incorporating gene regulatory information with gene expression data, we define regulated expression values (REV) as indicators of how a gene is regulated by a specific factor. Existing bi-clustering methods are extended to a three dimensional data space by developing a heuristic TRI-Clustering algorithm. An additional approach named Automatic Boundary Searching algorithm (ABS) is introduced to automatically determine the boundary threshold. Results Results based on incorporating ChIP-chip data representing transcription factor-gene interactions show that the algorithms are efficient and robust for detecting tri-clusters. Detailed analysis of the tri-cluster extracted from yeast sporulation REV data shows genes in this cluster exhibited significant differences during the middle and late stages. The implicated regulatory network was then reconstructed for further study of defined regulatory mechanisms. Topological and statistical analysis of this network demonstrated evidence of significant changes of TF activities during the different stages of yeast sporulation, and suggests this approach might be a general way to study regulatory networks undergoing transformations. PMID:19838334

  2. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to computer tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.

  3. BRAIN 2.0: Time and Memory Complexity Improvements in the Algorithm for Calculating the Isotope Distribution

    NASA Astrophysics Data System (ADS)

    Dittwald, Piotr; Valkenborg, Dirk

    2014-04-01

    Recently, an elegant iterative algorithm called BRAIN (Baffling Recursive Algorithm for Isotopic distributioN calculations) was presented. The algorithm is based on the classic polynomial method for calculating aggregated isotope distributions, and it introduces algebraic identities using the Newton-Girard and Viète formulae to solve the problem of polynomial expansion. Due to the iterative nature of the BRAIN method, the calculations must start from the lightest isotope variant. As such, the complexity of BRAIN scales quadratically with the mass of the putative molecule, since it depends on the number of aggregated peaks that need to be calculated. In this manuscript, we suggest two improvements to the algorithm that decrease both the time and memory complexity of obtaining the aggregated isotope distribution. We also illustrate a concept for representing the element isotope distributions in a generic manner. This representation allows the root calculation of the element polynomial, required in the original BRAIN method, to be omitted. A generic formulation for the roots is of special interest for higher-order element polynomials, so that root-finding algorithms and their inaccuracies can be avoided.
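
    BRAIN builds on the classic polynomial method mentioned above; a minimal sketch of that baseline (not of the BRAIN recursion or of the improvements proposed in this record) expands element polynomials by repeated convolution to obtain an aggregated isotope distribution. The isotopic abundances are rounded textbook values used purely for illustration.

```python
import numpy as np

# Rounded isotopic abundances, indexed by number of extra neutrons (0, 1, 2, ...).
ISOTOPES = {
    "C": [0.9893, 0.0107],
    "H": [0.999885, 0.000115],
    "N": [0.99636, 0.00364],
    "O": [0.99757, 0.00038, 0.00205],
}

def aggregated_distribution(formula):
    """Polynomial method: the distribution is the product of element polynomials,
    computed here as repeated convolutions of abundance vectors."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ISOTOPES[element])
    return dist / dist.sum()

# Glycine, C2H5NO2: probabilities of the monoisotopic peak and the next few peaks.
print(aggregated_distribution({"C": 2, "H": 5, "N": 1, "O": 2})[:4])
```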

  4. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    DOE PAGESBeta

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

    This paper describes an original approach, incorporating the use of ontologies, to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM). Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios for testing SNM detection algorithms. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented, along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  5. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    SciTech Connect

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

    This paper describes an original approach, incorporating the use of ontologies, to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM). Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios for testing SNM detection algorithms. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented, along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  6. Parallel training and testing methods for complex image processing algorithms on distributed, heterogeneous, unreliable, and non-dedicated resources

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio; Sainz, Ignacio; Bulnes, Francisco G.

    2011-01-01

    Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training is prohibitive, making it unviable even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated, heterogeneous and unreliable, and the proposed methods have been designed to deal with all these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing which can be used for training or testing. These methods are capable of harnessing the available computational resources, giving more work to more powerful machines, while taking their unreliable nature into account. Both methods have been tested using real applications.

  7. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V., II; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  8. The LeFE algorithm: embracing the complexity of gene expression in the interpretation of microarray data.

    PubMed

    Eichler, Gabriel S; Reimers, Mark; Kane, David; Weinstein, John N

    2007-01-01

    Interpretation of microarray data remains a challenge, and most methods fail to consider the complex, nonlinear regulation of gene expression. To address that limitation, we introduce Learner of Functional Enrichment (LeFE), a statistical/machine learning algorithm based on Random Forest, and demonstrate it on several diverse datasets: smoker/never smoker, breast cancer classification, and cancer drug sensitivity. We also compare it with previously published algorithms, including Gene Set Enrichment Analysis. LeFE regularly identifies statistically significant functional themes consistent with known biology. PMID:17845722

  9. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial

  10. Bearing fault component identification using information gain and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Vinay, Vakharia; Kumar, Gupta Vijay; Kumar, Kankar Pavan

    2015-04-01

    In the present study, an attempt has been made to identify various bearing faults using machine learning algorithms. Vibration signals obtained from faults in the inner race, outer race, rolling elements, and combined faults are considered. Raw vibration signals cannot be used directly, since they are masked by noise. To overcome this difficulty, a combined time-frequency domain method, the wavelet transform, is used. A wavelet selection criterion based on minimum permutation entropy is then employed to select the most appropriate base wavelet. Statistical features are calculated from the selected wavelet coefficients to form a feature vector. To reduce the size of the feature vector, the information gain attribute selection method is employed. The reduced feature set is fed into machine learning algorithms, such as random forest and self-organizing map, to maximize fault identification efficiency. The results reveal that the attribute selection method improves the fault identification accuracy for bearing components.
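
    A compact sketch of the selection-plus-classification stage described above (an information-gain-style criterion followed by a random forest) is given below using scikit-learn; the wavelet feature extraction is assumed to have already produced the feature matrix, which is replaced here by synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for the statistical features computed from wavelet coefficients:
# 4 classes (inner race, outer race, rolling element, combined faults).
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           n_classes=4, random_state=0)

pipeline = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=10),  # information-gain-style selection
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```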

  11. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information

    PubMed Central

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  12. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information.

    PubMed

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  13. Efficiency of informational transfer in regular and complex networks

    NASA Astrophysics Data System (ADS)

    Vragović, I.; Louis, E.; Díaz-Guilera, A.

    2005-03-01

    We analyze the process of informational exchange through complex networks by measuring network efficiencies. Aiming to study nonclustered systems, we propose a modification of this measure on the local level. We apply this method to an extension of the class of small worlds that includes declustered networks and show that they are locally quite efficient, although their clustering coefficient is practically zero. Unweighted systems with small-world and scale-free topologies are shown to be both globally and locally efficient. Our method is also applied to characterize weighted networks. In particular we examine the properties of underground transportation systems of Madrid and Barcelona and reinterpret the results obtained for the Boston subway network.
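
    The global and local efficiency measures discussed in this record have standard (unweighted) implementations in NetworkX; the small comparison below, between a ring lattice and a small-world rewiring with arbitrary parameters, uses those standard definitions rather than the modified local measure proposed by the authors.

```python
import networkx as nx

# A regular ring lattice versus a Watts-Strogatz small world of the same size.
ring = nx.watts_strogatz_graph(n=200, k=6, p=0.0, seed=1)
small_world = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

for name, graph in [("ring lattice", ring), ("small world", small_world)]:
    print(name,
          "global efficiency:", round(nx.global_efficiency(graph), 3),
          "local efficiency:", round(nx.local_efficiency(graph), 3))
```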

  14. A tool for filtering information in complex systems

    NASA Astrophysics Data System (ADS)

    Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.

    2005-07-01

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.

  15. A tool for filtering information in complex systems.

    PubMed

    Tumminello, M; Aste, T; Di Matteo, T; Mantegna, R N

    2005-07-26

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. PMID:16027373
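
    The PMFG construction itself has no off-the-shelf routine in common libraries, but the minimum-spanning-tree backbone that the filtered graph is said to preserve is easy to reproduce. The hedged sketch below applies the usual correlation-to-distance mapping d = sqrt(2(1 - rho)) to synthetic return series; the data and parameters are illustrative only.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 20))          # synthetic "daily returns" for 20 assets
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))            # standard correlation-based distance

G = nx.Graph()
n = corr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)             # hierarchical backbone preserved by the PMFG
print("MST edges:", mst.number_of_edges())    # n - 1 = 19 representative links
```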

  16. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  17. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  18. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  19. Enabling complex queries to drug information sources through functional composition.

    PubMed

    Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier

    2013-01-01

    Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (API) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules were developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs. PMID:23920645

  20. Enabling Complex Queries to Drug Information Sources through Functional Composition

    PubMed Central

    Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier

    2015-01-01

    Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (API) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules were developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs. PMID:23920645

  1. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
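
    To make the preprocessing idea concrete, the sketch below implements one simple variant (not the authors' algorithm) for a convex polygon in E2: during preprocessing the plane is split into angular sectors around an interior point, and a query maps the point's angle to a sector in constant expected time and performs a single orientation test against that sector's edge. The bin count and bucket layout are illustrative assumptions.

```python
import math

TWO_PI = 2.0 * math.pi

def preprocess(polygon, bins_per_vertex=4):
    """polygon: list of (x, y) vertices of a convex polygon in counter-clockwise order."""
    n = len(polygon)
    cx = sum(p[0] for p in polygon) / n          # any interior point works; centroid of vertices
    cy = sum(p[1] for p in polygon) / n
    k = bins_per_vertex * n
    step = TWO_PI / k
    angles = [math.atan2(y - cy, x - cx) % TWO_PI for x, y in polygon]
    buckets = [[] for _ in range(k)]
    for i in range(n):                           # register edge i in every angular bin it touches
        span = (angles[(i + 1) % n] - angles[i]) % TWO_PI
        start = int(angles[i] / step)
        for off in range(int(span / step) + 2):
            buckets[(start + off) % k].append(i)
    return polygon, (cx, cy), angles, step, buckets

def inside(point, model):
    polygon, (cx, cy), angles, step, buckets = model
    n = len(polygon)
    x, y = point
    theta = math.atan2(y - cy, x - cx) % TWO_PI
    for i in buckets[int(theta / step) % len(buckets)]:   # usually one or two candidate edges
        if (theta - angles[i]) % TWO_PI <= (angles[(i + 1) % n] - angles[i]) % TWO_PI:
            ax, ay = polygon[i]
            bx, by = polygon[(i + 1) % n]
            # inside iff the point is on the left of (or on) the directed edge
            return (bx - ax) * (y - ay) - (by - ay) * (x - ax) >= 0.0
    return False

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
model = preprocess(square)
print(inside((1.0, 1.0), model), inside((3.0, 1.0), model))   # True False
```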

  2. An algorithmic and information-theoretic approach to multimetric index construction

    USGS Publications Warehouse

    Schoolmaster, Donald R., Jr.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.

    2013-01-01

    The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types, the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select the MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to show the predictive capacity of the final MMIs and with real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified

  3. School Mathematics Study Group, Unit Number Two. Chapter 3 - Informal Algorithms and Flow Charts. Chapter 4 - Applications and Mathematics Models.

    ERIC Educational Resources Information Center

    Stanford Univ., CA. School Mathematics Study Group.

    This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…

  4. A new FOD recognition algorithm based on multi-source information fusion and experiment analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Xiao, Gang

    2011-08-01

    Foreign Object Debris (FOD) is any substance, debris or article alien to an aircraft or system that could potentially cause serious damage when it appears on an airport runway. Given the airport's complex environment, quick and precise detection of FOD targets on the runway is an important protection for aircraft safety. A multi-sensor system combining millimeter-wave radar and infrared image sensors is introduced, and a new FOD detection and recognition algorithm based on the inherent features of FOD is proposed in this paper. Firstly, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar; then, according to those coordinates, the IR camera takes target images and background images. Secondly, the runway's edges, which are straight lines, are extracted from the IR image using the Hough transform, and the potential target region, that is, the runway region, is segmented from the whole image. Thirdly, background subtraction is used to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used for target classification. Experimental results show that this algorithm effectively reduces the computational complexity, satisfies the real-time requirement, and achieves high detection and recognition probability.

  5. Multiple expression of molecular information: enforced generation of different supramolecular inorganic architectures by processing of the same ligand information through specific coordination algorithms

    PubMed

    Funeriu; Lehn; Fromm; Fenske

    2000-06-16

    The multisubunit ligand 2 combines two complexation substructures known to undergo, with specific metal ions, distinct self-assembly processes to form a double-helical and a grid-type structure, respectively. The binding information contained in this molecular strand may be expected to generate, in a strictly predetermined and univocal fashion, two different, well-defined output inorganic architectures depending on the set of metal ions, that is, on the coordination algorithm used. Indeed, as predicted, the self-assembly of 2 with eight CuII and four CuI yields the intertwined structure D1. It results from a crossover of the two assembly subprograms and has been fully characterized by crystal structure determination. On the other hand, when the instructions of strand 2 are read out with a set of eight CuI and four MII (M = Fe, Co, Ni, Cu) ions, the architectures C1-C4, resulting from a linear combination of the two subprograms, are obtained, as indicated by the available physico-chemical and spectral data. Redox interconversion of D1 and C4 has been achieved. These results indicate that the same molecular information may yield different output structures depending on how it is processed, that is, depending on the interactional (coordination) algorithm used to read it. They have wide implications for the design and implementation of programmed chemical systems, pointing towards multiprocessing capacity, in a one code/ several outputs scheme, of potential significance for molecular computation processes and possibly even with respect to information processing in biology. PMID:10926214

  6. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Baier, W.G.

    1997-01-01

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: the systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: it appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  7. Understanding how replication processes can maintain systems away from equilibrium using Algorithmic Information Theory.

    PubMed

    Devine, Sean D

    2016-02-01

    Replication can be envisaged as a computational process that is able to generate and maintain order far from equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact a system. The capability of replicated structures to access high quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far from equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside intervention. Both diversity in replicated structures and the coupling of different replicated systems increase the ability of the system (or systems) to self-regulate in a changing environment, as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. PMID:26723233
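
    The quantitative link invoked in this abstract, Landauer's principle applied to a change of ΔH bits of algorithmic entropy, can be written as a one-line bound; the numerical value below is simply the standard evaluation at room temperature and is not taken from the paper.

```latex
% Minimum free-energy cost of countering (or erasing) a change of \Delta H bits,
% evaluated at T = 300\,\mathrm{K} with k_B = 1.38\times 10^{-23}\,\mathrm{J/K}:
E_{\min} \;=\; k_B T \ln 2 \cdot \Delta H
        \;\approx\; 2.87\times 10^{-21}\,\mathrm{J} \times \Delta H .
```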

  8. Automatically Extracting Information Needs from Complex Clinical Questions

    PubMed Central

    Cao, Yong-gang; Cimino, James J; Ely, John; Yu, Hong

    2010-01-01

    Objective Clinicians pose complex clinical questions when seeing patients, and identifying the answers to those questions in a timely manner helps improve the quality of patient care. We report here on two natural language processing models, namely, automatic topic assignment and keyword identification, that together automatically and effectively extract information needs from ad hoc clinical questions. Our study is motivated in the context of developing the larger clinical question answering system AskHERMES (Help clinicians to Extract and aRrticulate Multimedia information for answering clinical quEstionS). Design and Measurements We developed supervised machine-learning systems to automatically assign predefined general categories (e.g., etiology, procedure, and diagnosis) to a question. We also explored both supervised and unsupervised systems to automatically identify keywords that capture the main content of the question. Results We evaluated our systems on 4,654 annotated clinical questions that were collected in practice. We achieved an F1 score of 76.0% for the task of general topic classification and 58.0% for keyword extraction. Our systems have been implemented into the larger question answering system AskHERMES. Our error analyses suggested that inconsistent annotation in our training data have hurt both question analysis tasks. Conclusion Our systems, available at http://www.askhermes.org, can automatically extract information needs from both short (the number of word tokens <20) and long questions (the number of word tokens >20), and from both well-structured and ill-formed questions. We speculate that the performance of general topic classification and keyword extraction can be further improved if consistently annotated data are made available. PMID:20670693
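
    A minimal sketch of the supervised topic-assignment stage, not the AskHERMES implementation (whose features, labels, and training data are not given here), could be a bag-of-words classifier over a few of the predefined general categories; the toy questions and topics below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for annotated clinical questions and their general topics.
questions = [
    "what is the best antibiotic for community acquired pneumonia",
    "what causes sudden onset chest pain in a young adult",
    "how do I perform a lumbar puncture safely",
    "which drug interactions occur with warfarin",
]
topics = ["treatment", "etiology", "procedure", "treatment"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(questions, topics)
print(classifier.predict(["what is the recommended therapy for otitis media"]))
```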

  9. Ant colony optimization image registration algorithm based on wavelet transform and mutual information

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Sun, Yanfeng; Zhai, Bing; Wang, Yiding

    2013-07-01

    This paper studies the registration of medical images. The wavelet transform is adopted to decompose the medical images, because their resolution is high and the computational cost of registration is large. Firstly, the low-frequency sub-images are matched; then the source images are matched. Image registration is accomplished by using an ant colony optimization algorithm to search for the extremum of the mutual information. The experimental results demonstrate that the proposed approach can not only reduce the amount of calculation, but also escape local extrema during the optimization process and find the optimal value.
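
    The similarity measure at the heart of this scheme, mutual information estimated from a joint grey-level histogram, is straightforward to reproduce. The sketch below computes it for two arrays; this is the quantity an ant colony (or any other) optimizer would maximize over transform parameters. The synthetic images and the bin count are assumptions for illustration.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) of two equally sized images via their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
shifted = np.roll(reference, shift=3, axis=1)       # a crude mis-registration
print(mutual_information(reference, reference) > mutual_information(reference, shifted))
```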

  10. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    PubMed

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989

  11. A Genetic Algorithm Tool (splicer) for Complex Scheduling Problems and the Space Station Freedom Resupply Problem

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Valenzuela-Rendon, Manuel

    1993-01-01

    The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.

  12. An information theory analysis of visual complexity and dissimilarity.

    PubMed

    Donderi, Don C

    2006-01-01

    The subjective complexity of a computer-generated bitmap image can be measured by magnitude estimation scaling, and its objective complexity can be measured by its compressed file size. There is a high correlation between these measures of subjective and objective complexity over a large set of marine electronic chart and radar images. The subjective dissimilarity of a pair of bitmap images can be predicted from subjective and objective measures of the complexity of each image, and from the subjective and objective complexity of the image produced by overlaying the two simple images. In addition, the subjective complexity of the image produced by overlaying two simple images can be predicted from the subjective complexity of the simple images and the subjective dissimilarity of the image pair. The results of the experiments that generated these complexity and dissimilarity judgments are consistent with a theory, outlined here, that treats objective and subjective measures of image complexity and dissimilarity as vectors in Euclidean space. PMID:16836047
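
    The objective-complexity measure used in this study, the compressed file size of a bitmap, can be approximated in a few lines; the hedged sketch below compresses raw pixel buffers with zlib, whereas the study used actual image files, so absolute numbers would differ.

```python
import zlib
import numpy as np

def objective_complexity(pixels: np.ndarray) -> int:
    """Approximate objective complexity as the zlib-compressed size of the raw pixels."""
    return len(zlib.compress(pixels.tobytes(), level=9))

rng = np.random.default_rng(0)
flat = np.zeros((256, 256), dtype=np.uint8)                     # visually simple image
noisy = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # visually complex image
print(objective_complexity(flat), "<", objective_complexity(noisy))
```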

  13. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    PubMed

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm by combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adapted to seek potential intrinsic low-dimensional structures of hyperspectral high-dimensional data and reduce original data into a low-dimensional space. Secondly, spatial preprocessing is used by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by solving the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperformed the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR method (SPPNFINDR). PMID:24369664
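
    For readers unfamiliar with the baseline being extended here, a compact sketch of the core N-FINDR step, growing a simplex of maximal volume in already dimension-reduced data, is given below; the manifold-learning and spatial-preprocessing stages proposed in this record are not reproduced, and the synthetic data are illustrative.

```python
import numpy as np

def simplex_volume(endmembers):
    """Proportional to the volume of the simplex spanned by p points in R^(p-1)."""
    p = endmembers.shape[0]
    mat = np.vstack([np.ones(p), endmembers.T])      # (p x p) augmented matrix
    return abs(np.linalg.det(mat))

def n_findr(pixels, n_endmembers, seed=0):
    """pixels: (n_samples, n_endmembers - 1) array after dimensionality reduction."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), size=n_endmembers, replace=False)
    endmembers = pixels[idx].copy()
    changed = True
    while changed:
        changed = False
        for i in range(n_endmembers):                # try to replace each endmember in turn
            best = simplex_volume(endmembers)
            for candidate in pixels:
                trial = endmembers.copy()
                trial[i] = candidate
                vol = simplex_volume(trial)
                if vol > best:
                    endmembers[i], best, changed = candidate, vol, True
    return endmembers

# Synthetic 2-D reduced data mixed from three corner-like endmembers.
rng = np.random.default_rng(1)
abundances = rng.dirichlet(np.ones(3), size=500)
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(n_findr(abundances @ corners, 3))
```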

  14. Network algorithmics and the emergence of information integration in cortical models

    NASA Astrophysics Data System (ADS)

    Nathan, Andre; Barbosa, Valmir C.

    2011-07-01

    An information-theoretic framework known as integrated information theory (IIT) has been introduced recently for the study of the emergence of consciousness in the brain [D. Balduzzi and G. Tononi, PLoS Comput. Biol. 4, e1000091 (2008)]. IIT purports that this phenomenon is to be equated with the generation of information by the brain surpassing the information that the brain's constituents already generate independently of one another. IIT is not fully plausible in its modeling assumptions nor is it testable due to severe combinatorial growth embedded in its key definitions. Here we introduce an alternative to IIT which, while inspired in similar information-theoretic principles, seeks to address some of IIT's shortcomings to some extent. Our alternative framework uses the same network-algorithmic cortical model we introduced earlier [A. Nathan and V. C. Barbosa, Phys. Rev. E 81, 021916 (2010)] and, to allow for somewhat improved testability relative to IIT, adopts the well-known notions of information gain and total correlation applied to a set of variables representing the reachability of neurons by messages in the model's dynamics. We argue that these two quantities relate to each other in such a way that can be used to quantify the system's efficiency in generating information beyond that which does not depend on integration. We give computational results on our cortical model and on variants thereof that are either structurally random in the sense of an Erdős-Rényi random directed graph or structurally deterministic. We have found that our cortical model stands out with respect to the others in the sense that many of its instances are capable of integrating information more efficiently than most of those others' instances.
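
    The two quantities the framework combines, information gain (an entropy reduction) and total correlation, can be estimated directly from discrete samples; the sketch below illustrates the definitions only, on synthetic binary variables, and does not reproduce the cortical model or its reachability dynamics.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable outcomes."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def total_correlation(matrix):
    """Sum of marginal entropies minus the joint entropy, for rows = samples."""
    marginal_sum = sum(entropy(matrix[:, j]) for j in range(matrix.shape[1]))
    joint = entropy(map(tuple, matrix))
    return marginal_sum - joint

rng = np.random.default_rng(0)
independent = rng.integers(0, 2, size=(2000, 4))
shared = rng.integers(0, 2, size=(2000, 1))
correlated = np.hstack([shared, shared, rng.integers(0, 2, size=(2000, 2))])
print(total_correlation(independent), "<", total_correlation(correlated))
```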

  15. High-order algorithms for compressible reacting flow with complex chemistry

    NASA Astrophysics Data System (ADS)

    Emmett, Matthew; Zhang, Weiqun; Bell, John B.

    2014-05-01

    In this paper we describe a numerical algorithm for integrating the multicomponent, reacting, compressible Navier-Stokes equations, targeted for direct numerical simulation of combustion phenomena. The algorithm addresses two shortcomings of previous methods. First, it incorporates an eighth-order narrow stencil approximation of diffusive terms that reduces the communication compared to existing methods and removes the need to use a filtering algorithm to remove Nyquist frequency oscillations that are not damped with traditional approaches. The methodology also incorporates a multirate temporal integration strategy that provides an efficient mechanism for treating chemical mechanisms that are stiff relative to fluid dynamical time-scales. The overall methodology is eighth order in space with options for fourth order to eighth order in time. The implementation uses a hybrid programming model designed for effective utilisation of many-core architectures. We present numerical results demonstrating the convergence properties of the algorithm with realistic chemical kinetics and illustrating its performance characteristics. We also present a validation example showing that the algorithm matches detailed results obtained with an established low Mach number solver.
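
    As an illustration of the flavour of high-order centered differencing referred to above (though not of the paper's specific narrow stencil for diffusive terms), the standard eighth-order central-difference first derivative can be checked against an analytic function:

```python
import numpy as np

# Standard eighth-order central-difference coefficients for d/dx (offsets -4..+4).
COEFFS = np.array([1/280, -4/105, 1/5, -4/5, 0.0, 4/5, -1/5, 4/105, -1/280])

def derivative_8th(u, dx):
    """Eighth-order first derivative on a periodic grid."""
    result = np.zeros_like(u)
    for offset, c in zip(range(-4, 5), COEFFS):
        result += c * np.roll(u, -offset)     # rolled[i] = u[i + offset]
    return result / dx

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)
error = np.max(np.abs(derivative_8th(u, x[1] - x[0]) - np.cos(x)))
print(f"max error on {n} points: {error:.2e}")   # far smaller than low-order schemes give
```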

  16. Phase Retrieval from Modulus Using Homeomorphic Signal Processing and the Complex Cepstrum: An Algorithm for Lightning Protection Systems

    SciTech Connect

    Clark, G A

    2004-06-08

    In general, the Phase Retrieval from Modulus problem is very difficult. In this report, we solve the difficult, but somewhat more tractable case in which we constrain the solution to a minimum phase reconstruction. We exploit the real- and imaginary-part sufficiency properties of the Fourier and Hilbert Transforms of causal sequences to develop an algorithm for reconstructing spectral phase given only spectral modulus. The algorithm uses homeomorphic signal processing methods with the complex cepstrum. The formal problem of interest is: Given measurements of only the modulus |H(k)| (no phase) of the Discrete Fourier Transform (DFT) of a real, finite-length, stable, causal time domain signal h(n), compute a minimum phase reconstruction ĥ(n) of the signal. Then compute the phase of ĥ(n) using a DFT, and exploit the result as an estimate of the phase of h(n). The development of the algorithm is quite involved, but the final algorithm and its implementation are very simple. This work was motivated by a Phase Retrieval from Modulus Problem that arose in LLNL Defense Sciences Engineering Division (DSED) projects in lightning protection for buildings. The measurements are limited to modulus-only spectra from a spectrum analyzer. However, it is desired to perform system identification on the building to compute impulse responses and transfer functions that describe the amount of lightning energy that will be transferred from the outside of the building to the inside. This calculation requires knowledge of the entire signals (both modulus and phase). The algorithm and software described in this report are proposed as an approach to phase retrieval that can be used for programmatic needs. This report presents a brief tutorial description of the mathematical problem and the derivation of the phase retrieval algorithm. The efficacy of the theory is demonstrated using simulated signals that meet the assumptions of the algorithm. We see that for
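
    The minimum-phase reconstruction described above follows a well-known homomorphic recipe: log-magnitude, real cepstrum, causal folding, exponentiation. The sketch below implements that standard recipe, not the report's specific code, and checks it against a known minimum-phase sequence.

```python
import numpy as np

def minimum_phase_from_modulus(modulus):
    """Reconstruct a minimum-phase spectrum from |H(k)| via the cepstrum."""
    n = len(modulus)
    cepstrum = np.fft.ifft(np.log(modulus)).real     # real cepstrum of the log-magnitude
    window = np.zeros(n)
    window[0] = 1.0
    window[1:(n + 1) // 2] = 2.0                     # fold the anti-causal part onto the causal part
    if n % 2 == 0:
        window[n // 2] = 1.0
    return np.exp(np.fft.fft(window * cepstrum))     # minimum-phase H(k); modulus preserved

# Known minimum-phase sequence (zeros at 0.2 and 0.3, inside the unit circle).
h = np.array([1.0, -0.5, 0.06])
H = np.fft.fft(h, 256)
H_rec = minimum_phase_from_modulus(np.abs(H))
print(np.allclose(np.abs(H_rec), np.abs(H)))          # the modulus is preserved
print(np.max(np.abs(np.angle(H_rec) - np.angle(H))))  # the phase is recovered (tiny error)
```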

  17. A moments-based algorithm for optimizing the information mined in post-processing spray images

    NASA Astrophysics Data System (ADS)

    Tan, Z. P.; Zinn, B. T.; Lubarsky, E.; Bibik, O.; Shcherbik, D.; Shen, L.

    2016-02-01

    The Moments-algorithm was developed to post-process images of sprays with the aim of characterizing the sprays' complex features (e.g., trajectory, dispersions and dynamics) in terms of simple curves, which can be used for developing correlation models and design tools. To achieve this objective, the algorithm calculates the first moments of pixel intensity values in instantaneous images of the spray to determine its center-of-gravity ( CG) trajectory (i.e., the spray density-weighted centerline trajectory). Thereafter, the second moments (i.e., standard-deviations, σ) of intensities are calculated to describe the dispersion of spray materials around the CG. After the instantaneous CG's and σ's for the instantaneous images have been obtained, they are arithmetically averaged to produce the average spray trajectories and dispersions. Additionally, the second moments of instantaneous CG's are used to characterize the spray's fluctuation magnitude. The Moments-algorithm has three main advantages over threshold-based edge-tracking and other conventional post-processing approaches: (1) It simultaneously describes the spray's instantaneous and average trajectories, dispersions and fluctuations, instead of just the outer/inner-edges, (2) the use of moments to define these spray characteristics is more physically meaningful because they reflect the statistical distribution of droplets within the spray plume instead of relying on an artificially interpreted "edge", and (3) the use of moments decreases the uncertainties of the post-processed results because moments are mathematically defined and do not depend upon user-adjustments/interpretations.
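
    The first and second intensity moments described above reduce to a few lines of array arithmetic per image column. The sketch below computes a center-of-gravity trajectory and a dispersion profile for a synthetic plume; the axis convention (rows transverse, columns downstream) and the synthetic image are assumptions for illustration.

```python
import numpy as np

def spray_moments(image):
    """Per-column intensity-weighted centre of gravity and standard deviation.

    image: 2-D array with rows = transverse position, columns = downstream position."""
    rows = np.arange(image.shape[0], dtype=float)[:, None]
    weight = image.sum(axis=0)
    cg = (rows * image).sum(axis=0) / weight                          # first moment per column
    sigma = np.sqrt(((rows - cg) ** 2 * image).sum(axis=0) / weight)  # second moment per column
    return cg, sigma

# Synthetic plume: a Gaussian whose centre drifts and whose width grows downstream.
rows, cols = np.mgrid[0:200, 0:300].astype(float)
centre, width = 100.0 + 0.1 * cols, 5.0 + 0.05 * cols
image = np.exp(-((rows - centre) ** 2) / (2.0 * width ** 2))

cg, sigma = spray_moments(image)
print(cg[0], cg[-1])        # trajectory drifts from about 100 to about 130
print(sigma[0], sigma[-1])  # dispersion grows downstream
```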

  18. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Astrophysics Data System (ADS)

    Stoughton, John W.; Mielke, Roland R.

    1988-02-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  19. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatial distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  20. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  1. Efficient algorithms for multidimensional global optimization in genetic mapping of complex traits

    PubMed Central

    Ljungberg, Kajsa; Mishchenko, Kateryna; Holmgren, Sverker

    2010-01-01

    We present a two-phase strategy for optimizing a multidimensional, nonconvex function arising during genetic mapping of quantitative traits. Such traits are believed to be affected by multiple so-called quantitative trait loci (QTL), and searching for d QTL results in a d-dimensional optimization problem with a large number of local optima. We combine the global algorithm DIRECT with a number of local optimization methods that accelerate the final convergence, and adapt the algorithms to problem-specific features. We also improve the evaluation of the QTL mapping objective function to enable exploitation of the smoothness properties of the optimization landscape. Our best two-phase method is demonstrated to be accurate in at least six dimensions and up to ten times faster than currently used QTL mapping algorithms. PMID:21918629
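
    A minimal sketch of the two-phase idea, assuming a recent SciPy that exposes the DIRECT global optimizer as scipy.optimize.direct: run DIRECT on box bounds with a coarse budget, then refine the incumbent with a local method. The toy objective below stands in for the (far more expensive) QTL-mapping objective and is not from the paper.

```python
import numpy as np
from scipy.optimize import direct, minimize

# Stand-in for the QTL-mapping objective: a smooth multimodal surface.
def objective(x):
    return np.sum(x**2) + 2.0 * np.sin(5.0 * x).sum()

bounds = [(-3.0, 3.0)] * 3          # one interval per putative QTL position

# Phase 1: global exploration with DIRECT (coarse budget).
res_global = direct(objective, bounds, maxfun=2000)

# Phase 2: local refinement started from the DIRECT incumbent.
res_local = minimize(objective, res_global.x, method="Nelder-Mead",
                     options={"xatol": 1e-6, "fatol": 1e-8})

print("DIRECT estimate :", res_global.x, res_global.fun)
print("refined estimate:", res_local.x, res_local.fun)
```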

  2. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations in grid resolution. The proposed grid generation algorithm presents a strategy for definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. Adapted grid on the fracture geometry gave identical results with that of a fine grid. The adapted grid employed 88.2% less CPU time when compared to the solutions obtained by the fine grid.

  3. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  4. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-10-01

    The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, making it difficult to obtain the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further accentuates this anisotropy, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. In total, 41 impacts in three typical categories are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy.
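
    The core of any MUSIC-type scheme is the pseudo-spectrum built from the noise subspace of a narrowband sample covariance. The sketch below does this for a generic uniform linear array at a single extracted frequency component; the sensor layout, the velocity re-estimation step, and the composite-specific steering vectors of the SFCBR-MUSIC method are not reproduced, so treat it as an illustration of the underlying scanning idea only.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    X : (n_sensors, n_snapshots) complex snapshots of one frequency component.
    d : sensor spacing in wavelengths at that frequency.
    """
    M, T = X.shape
    R = X @ X.conj().T / T                        # sample covariance
    _, eigvec = np.linalg.eigh(R)                 # eigenvalues ascending
    En = eigvec[:, : M - n_sources]               # noise subspace
    k = np.arange(M)[:, None]
    theta = np.deg2rad(angles)[None, :]
    A = np.exp(-2j * np.pi * d * k * np.sin(theta))   # steering matrix
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    return angles, P

# Toy example: one narrowband source at +20 degrees seen by 8 sensors.
rng = np.random.default_rng(0)
M, T, theta0 = 8, 200, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta0))
s = np.exp(2j * np.pi * rng.random(T))
X = np.outer(a, s) + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
angles, P = music_spectrum(X, n_sources=1)
print("estimated direction:", angles[np.argmax(P)], "degrees")
```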

  5. Is increasing complexity of algorithms the price for higher accuracy? virtual comparison of three algorithms for tertiary level management of chronic cough in people living with HIV in a low-income country

    PubMed Central

    2012-01-01

    Background The algorithmic approach to guidelines has been introduced and promoted on a large scale since the 1970s. This study aims at comparing the performance of three algorithms for the management of chronic cough in patients with HIV infection, and at reassessing the current position of algorithmic guidelines in clinical decision making through an analysis of accuracy, harm and complexity. Methods Data were collected at the University Hospital of Kigali (CHUK) in a total of 201 HIV-positive hospitalised patients with chronic cough. We simulated management of each patient following the three algorithms. The first was locally tailored by clinicians from CHUK, the second and third were drawn from publications by Médecins sans Frontières (MSF) and the World Health Organisation (WHO). Semantic analysis techniques known as Clinical Algorithm Nosology were used to compare them in terms of complexity and similarity. For each of them, we assessed the sensitivity, delay to diagnosis and hypothetical harm of false positives and false negatives. Results The principal diagnoses were tuberculosis (21%) and pneumocystosis (19%). Sensitivity, representing the proportion of correct diagnoses made by each algorithm, was 95.7%, 88% and 70% for CHUK, MSF and WHO, respectively. Mean time to appropriate management was 1.86 days for CHUK and 3.46 for the MSF algorithm. The CHUK algorithm was the most complex, followed by MSF and WHO. Total harm was by far the highest for the WHO algorithm, followed by MSF and CHUK. Conclusions This study confirms our hypothesis that sensitivity and patient safety (i.e. less expected harm) are proportional to the complexity of algorithms, though increased complexity may make them difficult to use in practice. PMID:22260242

  6. Prediction of Antimicrobial Peptides Based on Sequence Alignment and Support Vector Machine-Pairwise Algorithm Utilizing LZ-Complexity

    PubMed Central

    Shahrudin, Shahriza

    2015-01-01

    This study concerns an attempt to establish a new method for predicting antimicrobial peptides (AMPs), which are important to the immune system. Recently, researchers have become interested in designing alternative drugs based on AMPs because they have found that a large number of bacterial strains have become resistant to available antibiotics. However, researchers have encountered obstacles in the AMP design process, as experiments to extract AMPs from protein sequences are costly and require a long set-up time. Therefore, a computational tool for AMP prediction is needed to resolve this problem. In this study, a new integrated algorithm is introduced to predict AMPs by combining sequence alignment with a support vector machine (SVM)-LZ complexity pairwise algorithm. It was observed that, when all sequences in the training set are used, the sensitivity of the proposed algorithm is 95.28% in the jackknife test and 87.59% in the independent test, while the sensitivities obtained for the jackknife and independent tests are 88.74% and 78.70%, respectively, when only the sequences that have less than 70% similarity are used. Applying the proposed algorithm may allow researchers to effectively predict AMPs from unknown protein peptide sequences with higher sensitivity. PMID:25802839
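
    For readers unfamiliar with the LZ-complexity ingredient, the snippet below counts the components of the Lempel-Ziv (1976) exhaustive history of a sequence and uses the count in one of the normalised LZ distance measures proposed in the literature for sequence comparison. It is a generic illustration, not the SVM-pairwise pipeline of the paper.

```python
def lz76_complexity(s):
    """Number of components in the Lempel-Ziv (1976) exhaustive history of s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Extend the current component while it can still be copied
        # (overlap allowed) from the sequence seen so far.
        while i + l <= n and s[i:i + l] in s[0:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

print(lz76_complexity("0001101001000101"))   # 6, the classic worked example

def lz_distance(a, b):
    """A crude normalised LZ-based distance between two sequences (illustrative)."""
    cab, cba = lz76_complexity(a + b), lz76_complexity(b + a)
    ca, cb = lz76_complexity(a), lz76_complexity(b)
    return max(cab - ca, cba - cb) / max(cab, cba)

print(lz_distance("GIGKFLHSAKKFGKAFVGEIMNS", "KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK"))
```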

  7. The H0 function, a new index for detecting structural/topological complexity information in undirected graphs

    NASA Astrophysics Data System (ADS)

    Buscema, Massimo; Asadi-Zeydabadi, Masoud; Lodwick, Weldon; Breda, Marco

    2016-04-01

    Significant applications, such as differentiating Alzheimer's disease from dementia, mining social media data, or extracting information about the structural composition of drug cartels, are often modeled as graphs. The structural or topological complexity of a graph, or the lack of it, is quite often useful in understanding and, more importantly, resolving the problem. We are proposing a new index, which we call the H0 function, to measure the structural/topological complexity of a graph. To do this, we introduce the concept of graph pruning and its associated algorithm that is used in the development of our measure. We illustrate the behavior of our measure, the H0 function, through different examples found in the appendix. These examples indicate that the H0 function captures useful and important information about the characteristics of a graph. Here, we restrict ourselves to undirected graphs.

  8. Novel classification method for remote sensing images based on information entropy discretization algorithm and vector space model

    NASA Astrophysics Data System (ADS)

    Xie, Li; Li, Guangyao; Xiao, Mang; Peng, Lei

    2016-04-01

    Various kinds of remote sensing image classification algorithms have been developed to adapt to the rapid growth of remote sensing data. Conventional methods typically have restrictions in either classification accuracy or computational efficiency. Aiming to overcome these difficulties, a new solution for remote sensing image classification is presented in this study. A discretization algorithm based on information entropy is applied to extract features from the data set, and a vector space model (VSM) method is employed as the feature representation algorithm. Because of the simple structure of the feature space, the training rate is accelerated. The performance of the proposed method is compared with two other algorithms: the back propagation neural network (BPNN) method and the ant colony optimization (ACO) method. Experimental results confirm that the proposed method is superior to the other algorithms in terms of classification accuracy and computational efficiency.
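
    The abstract does not spell out the discretization criterion, so the sketch below shows the basic building block such entropy-based methods share: choose the cut point on a band that maximises information gain (entropy reduction) with respect to the class labels; applying it recursively to each interval yields a discretized feature. Names and the toy data are illustrative assumptions.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(feature, labels):
    """Single binary cut on one band that maximises information gain."""
    order = np.argsort(feature)
    x, y = feature[order], labels[order]
    base = entropy(y)
    best_gain, best_t = -1.0, None
    # Candidate thresholds: midpoints between consecutive sorted values.
    for t in (x[:-1] + x[1:]) / 2:
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        cond = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if base - cond > best_gain:
            best_gain, best_t = base - cond, t
    return best_t, best_gain

# Toy example: one spectral band, two land-cover classes.
rng = np.random.default_rng(1)
band = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.6, 0.05, 200)])
cls = np.concatenate([np.zeros(200, int), np.ones(200, int)])
print(best_cut(band, cls))   # threshold near 0.4, gain near 1 bit
```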

  9. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems

    PubMed Central

    2012-01-01

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms. PMID:22413926

  10. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  11. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

    SciTech Connect

    Hughett, P W

    1995-12-01

    This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
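
    The estimator class described above is, in its simplest form, the Bayesian linear minimum-mean-square-error estimate for y = A x + n with a prior mean and covariance on the sources. The sketch below implements that textbook form on a toy source grid; OCLIM's efficient algorithms and time-series handling are not reproduced, and all names and sizes are illustrative.

```python
import numpy as np

def mmse_linear_estimate(A, y, mu_x, C_x, C_n):
    """MMSE estimate of x for y = A x + n, given prior mean/covariance
    (mu_x, C_x) and noise covariance C_n (second-order statistics only)."""
    S = A @ C_x @ A.T + C_n                     # covariance of the data
    return mu_x + C_x @ A.T @ np.linalg.solve(S, y - A @ mu_x)

# Toy neuromagnetic-style setup: 20 sensors, 50 candidate source locations.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))               # forward (lead-field) matrix
x_true = np.zeros(50)
x_true[[5, 17, 31]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.1 * rng.standard_normal(20)
x_hat = mmse_linear_estimate(A, y, np.zeros(50), 0.5 * np.eye(50), 0.01 * np.eye(20))
print(np.argsort(np.abs(x_hat))[-3:])           # indices of the strongest recovered sources
```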

  12. A tomographic algorithm to determine tip-tilt information from laser guide stars

    NASA Astrophysics Data System (ADS)

    Reeves, A. P.; Morris, T. J.; Myers, R. M.; Bharmal, N. A.; Osborn, J.

    2016-06-01

    Laser Guide Stars (LGS) have greatly increased the sky-coverage of Adaptive Optics (AO) systems. Due to the up-link turbulence experienced by LGSs, a Natural Guide Star (NGS) is still required, preventing full sky-coverage. We present a method of obtaining partial tip-tilt information from LGSs alone in multi-LGS tomographic LGS AO systems. The method of LGS up-link tip-tilt determination is derived using a geometric approach, then an alteration to the Learn and Apply algorithm for tomographic AO is made to accommodate up-link tip-tilt. Simulation results are presented, verifying that the technique shows good performance in correcting high altitude tip-tilt, but not that from low altitudes. We suggest that the method is combined with multiple far off-axis tip-tilt NGSs to provide gains in performance and sky-coverage over current tomographic AO systems.

  13. A NEW FRAMEWORK FOR URBAN SUSTAINABILITY ASSESSMENTS: LINKING COMPLEXITY, INFORMATION AND POLICY

    EPA Science Inventory

    Urban systems emerge as distinct entities from the complex interactions among social, economic and cultural attributes, and information, energy and material stocks and flows that operate on different temporal and spatial scales. Such complexity poses a challenge to identify the...

  14. Study of high speed complex number algorithms. [for determining antenna for field radiation patterns

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1981-01-01

    A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.

  15. Theoretical study on the structures of ethanolamine and its water complexes using the Hamiltonian algorithm

    NASA Astrophysics Data System (ADS)

    Teramae, Hiroyuki; Maruo, Yasuko Y.

    2015-12-01

    We try to optimize the structures of monoethanolamine (MEA), MEA dimer, MEA + two water molecules, and MEA dimer + four water molecules as the model of MEA in aqueous solutions using the Hamiltonian algorithm. We found the most stable MEA backbones are all gauche structures. The MEA in aqueous solution seems to exist as dimer or larger aggregates. As the base, the water molecule would be more important than another MEA because of the hydrogen bond networks.

  16. Detection processing of complex beam-former output data: a new dispersion-based reconditioning algorithm

    NASA Astrophysics Data System (ADS)

    McDonald, Robert J.; Wilbur, JoEllen

    1996-05-01

    Detection processing of the Toroidal Volume Search Sonar beamformer output prior to image formation is used to increase the signal-to-reverberation ratio. The energy detector and sliding matched filter perform adequately at close range but degrade considerably when the reverberation begins to dominate. The skewness matched filter offers some improvement. A dispersion-based reconditioning algorithm, introduced in this paper, is shown to provide considerable improvement in the signal-to-reverberation ratio at far range.

  17. Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2008-01-01

    We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field-of-view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, which includes realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV coupled with position sensitivity down to a few hundred eV should be achievable for a fully optimized device.

  18. Application of Fisher Information to Complex Dynamic Systems (Tucson)

    EPA Science Inventory

    Fisher information was developed by the statistician Ronald Fisher as a measure of the information obtainable from data being used to fit a related parameter. Starting from the work of Ronald Fisher [1] and B. Roy Frieden [2], we have developed Fisher information as a measure of order ...

  19. Application of Fisher Information to Complex Dynamic Systems

    EPA Science Inventory

    Fisher information was developed by the statistician Ronald Fisher as a measure of the information obtainable from data being used to fit a related parameter. Starting from the work of Ronald Fisher [1] and B. Roy Frieden [2], we have developed Fisher information as a measure of order ...

  20. SIPPI: A Matlab toolbox for sampling the solution to inverse problems with complex prior information. Part 1—Methodology

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas; Skou Cordua, Knud; Caroline Looms, Majken; Mosegaard, Klaus

    2013-03-01

    From a probabilistic point-of-view, the solution to an inverse problem can be seen as a combination of independent states of information quantified by probability density functions. Typically, these states of information are provided by a set of observed data and some a priori information on the solution. The combined state of information (i.e. the solution to the inverse problem) is a probability density function typically referred to as the a posteriori probability density function. We present a generic toolbox for Matlab and GNU Octave called SIPPI that implements a number of methods for solving such probabilistically formulated inverse problems by sampling the a posteriori probability density function. In order to describe the a priori probability density function, we consider both simple Gaussian models and more complex (and realistic) a priori models based on higher order statistics. These a priori models can be used with both linear and non-linear inverse problems. For linear inverse Gaussian problems we make use of least-squares and kriging-based methods to describe the a posteriori probability density function directly. For general non-linear (i.e. non-Gaussian) inverse problems, we make use of the extended Metropolis algorithm to sample the a posteriori probability density function. Together with the extended Metropolis algorithm, we use sequential Gibbs sampling, which allows computationally efficient sampling of complex a priori models. The toolbox can be applied to any inverse problem as long as a way of solving the forward problem is provided. Here we demonstrate the methods and algorithms available in SIPPI. An application of SIPPI to a tomographic cross-borehole inverse problem is presented in the second part of this paper.
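
    A minimal sketch of posterior sampling in the spirit described above: a plain random-walk Metropolis chain on an explicit Gaussian prior plus Gaussian likelihood. Note that SIPPI's extended Metropolis instead draws prior-preserving proposals via sequential Gibbs resimulation, so only the likelihood ratio enters its acceptance test; the toy below is the simpler textbook variant with illustrative names and sizes.

```python
import numpy as np

def metropolis_sample(log_post, m0, step, n_iter=20000, rng=None):
    """Random-walk Metropolis sampler of a log posterior density."""
    rng = np.random.default_rng() if rng is None else rng
    m, lp = np.array(m0, float), log_post(m0)
    out = np.empty((n_iter, len(m0)))
    for i in range(n_iter):
        m_new = m + step * rng.standard_normal(len(m))
        lp_new = log_post(m_new)
        if np.log(rng.random()) < lp_new - lp:     # accept/reject
            m, lp = m_new, lp_new
        out[i] = m
    return out

# Toy linear "tomography": d = G m + noise, standard normal prior on m.
rng = np.random.default_rng(2)
G = rng.standard_normal((30, 10))
m_true = rng.standard_normal(10)
d_obs = G @ m_true + 0.05 * rng.standard_normal(30)
log_post = lambda m: (-0.5 * np.sum(((G @ m - d_obs) / 0.05) ** 2)   # likelihood
                      - 0.5 * np.sum(m ** 2))                        # N(0, 1) prior
chain = metropolis_sample(log_post, np.zeros(10), step=0.05, n_iter=20000, rng=rng)
print(chain[5000:].mean(axis=0))    # posterior mean estimate after burn-in
```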

  1. New Algorithm for Extracting Motion Information from PROPELLER Data and Head Motion Correction in T1-Weighted MRI.

    PubMed

    Feng, Yanqiu; Chen, Wufan

    2005-01-01

    PROPELLER (Periodically Rotated Overlapping ParallEl Lines with Enhanced Reconstruction) MRI, proposed by J. G. Pipe [1], offers a novel and effective means for compensating motion. For the reconstruction of PROPELLER data, algorithms that reliably and accurately extract inter-strip motion from data in the central overlapped area are crucial to motion artifact suppression. When implemented on T1-weighted MR data, the reconstruction algorithm, with motion estimated by registration based on maximizing correlation energy in the frequency domain (CF), produces images of low quality due to inaccurate estimation of motion. In this paper, a new algorithm is proposed for motion estimation based on registration by maximizing mutual information in the spatial domain (MIS). Furthermore, the optimization process is initialized by the CF algorithm, so the algorithm is abbreviated as the CF-MIS algorithm in this paper. With phantom and in vivo MR imaging, the CF-MIS algorithm was shown to be of higher accuracy in rotation estimation than the CF algorithm. Consequently, the head motion in T1-weighted PROPELLER MRI was better corrected. PMID:17282454
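
    The second stage of the proposed scheme rests on mutual information between image intensities. As a generic illustration (not the k-space strip handling of PROPELLER reconstruction), the sketch below estimates MI from a joint histogram and does an exhaustive search for the in-plane rotation that maximises it; the test image and search grid are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (nats) of two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

def estimate_rotation(ref, mov, angles=np.arange(-10.0, 10.25, 0.25)):
    """Exhaustive 1-D search for the rotation of `mov` that best matches `ref`."""
    scores = [mutual_information(ref, rotate(mov, a, reshape=False, order=1))
              for a in angles]
    return angles[int(np.argmax(scores))]

# Toy check: a smooth test image rotated by -3 degrees is recovered at +3 degrees.
rng = np.random.default_rng(3)
ref = gaussian_filter(rng.random((128, 128)), sigma=2)
mov = rotate(ref, -3.0, reshape=False, order=1)
print(estimate_rotation(ref, mov))
```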

  2. [Adequacy of clinical interventions in patients with advanced and complex disease. Proposal of a decision making algorithm].

    PubMed

    Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A

    2015-01-01

    Decision making in the patient with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add any more damage to that of the disease itself. The adequacy of the clinical interventions consists of only offering those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient and to perform only those allowed by the patient or representative. In this article, the use of an algorithm is proposed that should serve to help health professionals in this decision making process. PMID:25666087

  3. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

    While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems, and have built several of them. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also insights into the structure of the information space itself. In these systems, we have combined ideas of instance-based browsing, structuring retrieval around the critiquing of previously retrieved examples, and retrieval strategies, i.e., knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.

  4. Problems in processing multizonal video information at specialized complexes

    NASA Technical Reports Server (NTRS)

    Shamis, V. A.

    1979-01-01

    Architectural requirements of a minicomputer-based specialized complex for automated digital analysis of multizonal video data are examined. The logic structure of multizonal video data and the complex mathematical provision required for the analysis of such data are described. The composition of the specialized complex, its operating system, and the required set of peripheral devices are discussed. It is noted that although much of the analysis can be automated, the operator-computer dialog mode is essential for certain stages of the analysis.

  5. Developing Ocean Color Remote Sensing Algorithms for Retrieving Optical Properties and Biogeochemical Parameters in the Optically Complex Waters of Long Island Sound

    NASA Astrophysics Data System (ADS)

    Aurin, Dirk Alexander

    2011-12-01

    The optical properties of the sea determine how light penetrates to depth, interacts with water-borne constituents, and re-emerges as scattered rays. By inversion, quantifying change in the spectral light field as it reflects from the sea unlocks information about the water's optical properties, which can then be used to quantify the suspended and dissolved biogeochemical constituents in the water. Retrieving bio-optical properties is relatively straightforward for the open ocean where phytoplankton-derived materials dominate ocean color. In contrast, the presence of land-derived material contributes significantly to the optical signature of nearshore waters, making the development of ocean color algorithms considerably more challenging. A hypothesis of this research is that characterization of the spectral nature of bio-optical properties in these optically complex waters facilitates optimization of semi-analytical algorithms for retrieving these properties. The main goal of this research is to develop an ocean color remote sensing algorithm for the highly turbid, estuarine waters of Long Island Sound (LIS). Bio-optical data collected in LIS showed it to be strongly influenced by the surrounding watershed and characterized by exceptionally high absorption associated with phytoplankton, non-algal particulate material, and chromophoric dissolved material compared to other coastal environments world-wide. Variability in the magnitudes of inherent optical properties, IOPs (e.g. absorption, scattering and attenuation coefficients), is explained by local influences such as major river outflows, as well as seasonal changes. Nevertheless, ocean color parameters describing the spectral shape of IOPs---parameters to which algorithm optimization is sensitive---are fairly constant across the region, possibly a result of the homogenizing influence of vigorous tidal and subtidal mixing or relative regional homogeneity in the biogeochemical nature of terrigenous material. Field

  6. CETIS: COMPLEX EFFLUENTS TOXICITY INFORMATION SYSTEM. DATA ENCODING GUIDELINES AND PROCEDURES

    EPA Science Inventory

    The computerized Complex Effluent Toxicity Information System (CETIS) data base includes data extracted from aquatic bioassay reprints as well as facility and receiving water information. Data references are obtained from both published papers and from unpublished results of test...

  7. Algorithmic information content, Church-Turing thesis, physical entropy, and Maxwell's demon

    SciTech Connect

    Zurek, W.H.

    1990-01-01

    Measurements convert the alternative possibilities of potential outcomes into the definiteness of the "record," the data describing the actual outcome. The resulting decrease of statistical entropy has been, since the inception of Maxwell's demon, regarded as a threat to the second law of thermodynamics. For, when the statistical entropy is employed as the measure of the useful work which can be extracted from the system, its decrease by the information-gathering actions of the observer would lead one to believe that, at least from the observer's viewpoint, the second law can be violated. I show that the decrease of ignorance does not necessarily lead to the lowering of disorder of the measured physical system. Measurements can only convert uncertainty (quantified by the statistical entropy) into randomness of the outcome (given by the algorithmic information content of the data). The ability to extract useful work is measured by physical entropy, which is equal to the sum of these two measures of disorder. Physical entropy so defined is, on average, constant in the course of measurements carried out by the observer on an equilibrium system. 27 refs., 6 figs.
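
    The balance invoked in the last two sentences can be written compactly; the notation here is assumed for illustration (d is the measurement record, H_d the statistical entropy of the system conditioned on d, and K(d) the algorithmic information content of the record):

```latex
% Physical entropy: remaining statistical ignorance plus randomness of the record.
S_{\mathrm{phys}}(d) = H_d + K(d),
\qquad
\big\langle \Delta S_{\mathrm{phys}} \big\rangle \approx 0
\quad \text{for measurements on an equilibrium system.}
```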

  8. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry.

    PubMed

    Beyer, Hans-Georg

    2014-01-01

    The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective functions leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered. PMID:24922548
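
    To make the ranking-based utility idea concrete, below is a compact separable-NES sketch: sample around a Gaussian search distribution, replace raw objective values by rank-based utilities, and follow the natural gradient of the mean and the per-coordinate step sizes. Learning rates follow common literature defaults rather than values from the paper, and the ellipsoid function matches the model class analysed above.

```python
import numpy as np

def snes_minimize(f, x0, sigma0=1.0, n_iter=300, popsize=20, rng=None):
    """Separable natural evolution strategy with rank-based utilities (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    dim = len(x0)
    mu = np.array(x0, float)
    sigma = np.full(dim, sigma0, float)
    eta_mu, eta_sigma = 1.0, (3 + np.log(dim)) / (5 * np.sqrt(dim))
    # Rank-based utility weights: the best sample gets the largest weight.
    ranks = np.arange(1, popsize + 1)
    u = np.maximum(0.0, np.log(popsize / 2 + 1) - np.log(ranks))
    u = u / u.sum() - 1.0 / popsize
    for _ in range(n_iter):
        z = rng.standard_normal((popsize, dim))
        x = mu + sigma * z
        order = np.argsort([f(xi) for xi in x])       # ascending: best first
        zs = z[order]
        grad_mu = u @ zs                              # natural gradient, mean
        grad_sigma = u @ (zs ** 2 - 1.0)              # natural gradient, step sizes
        mu = mu + eta_mu * sigma * grad_mu
        sigma = sigma * np.exp(0.5 * eta_sigma * grad_sigma)
    return mu

# Ellipsoid test function, the model class analysed in the paper.
ellipsoid = lambda x: np.sum((np.arange(1, len(x) + 1) ** 2) * x ** 2)
print(snes_minimize(ellipsoid, x0=3.0 * np.ones(10)))   # approaches the origin
```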

  9. A genetic algorithm encoded with the structural information of amino acids and dipeptides for efficient conformational searches of oligopeptides.

    PubMed

    Ru, Xiao; Song, Ce; Lin, Zijing

    2016-05-15

    The genetic algorithm (GA) is an intelligent approach for finding minima in a high-dimensional parameter space. However, the success of GA searches for low-energy conformations of biomolecules has been rather limited so far. Herein an improved GA scheme is proposed for the conformational search of oligopeptides. A systematic analysis of the backbone dihedral angles of conformations of amino acids (AAs) and dipeptides is performed. The structural information is used to design a new encoding scheme to improve the efficiency of the GA search. Local geometry optimizations based on energy calculations by density functional theory are employed to safeguard the quality and reliability of the GA structures. The GA scheme is applied to the conformational searches of Lys, Arg, Met-Gly, Lys-Gly, and Phe-Gly-Gly, representative of AAs, dipeptides, and tripeptides with complicated side chains. Comparison with the best literature results shows that the new GA method is both highly efficient and reliable, providing the most complete set of low-energy conformations. Moreover, the computational cost of the GA method increases only moderately with the complexity of the molecule. The GA scheme is valuable for the study of the conformations and properties of oligopeptides. © 2016 Wiley Periodicals, Inc. PMID:26833761

  10. A multilevel ant colony optimization algorithm for classical and isothermic DNA sequencing by hybridization with multiplicity information available.

    PubMed

    Kwarciak, Kamil; Radom, Marcin; Formanowicz, Piotr

    2016-04-01

    The classical sequencing by hybridization takes into account binary information about sequence composition: a given element from an oligonucleotide library is or is not a part of the target sequence. However, DNA chip technology has developed to the point where it is possible to receive partial information about the multiplicity of each oligonucleotide the analyzed sequence consists of. Currently, it is not possible to obtain exact data of this type, but even partial information should be very useful. Two realistic multiplicity information models are taken into consideration in this paper. The first one, called "one and many", assumes that it is possible to obtain information on whether a given oligonucleotide occurs in a reconstructed sequence once or more than once. According to the second model, called "one, two and many", one is able to learn from the biochemical experiment whether a given oligonucleotide is present in an analyzed sequence once, twice or at least three times. An ant colony optimization algorithm has been implemented to verify the above models and to compare them with existing algorithms for sequencing by hybridization which utilize the additional information. The proposed algorithm solves the problem with any kind of hybridization errors. Computational experiment results confirm that using even the partial information about multiplicity leads to increased quality of reconstructed sequences. Moreover, they also show that the more precise model enables one to obtain better solutions and that the ant colony optimization algorithm outperforms the existing ones. Test data sets and the proposed ant colony optimization algorithm are available on: http://bioserver.cs.put.poznan.pl/download/ACO4mSBH.zip. PMID:26878124

  11. MOEPGA: A novel method to detect protein complexes in yeast protein-protein interaction networks based on MultiObjective Evolutionary Programming Genetic Algorithm.

    PubMed

    Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan

    2015-10-01

    The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed by using a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA) which integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem in terms of identifying protein complexes from PPI networks, and then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset; finally, we describe our algorithm, which mainly consists of three steps: population initialization, subgraph mutation and subgraph selection operation. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of F-score. Moreover, our approach can cover a certain number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions. PMID:26298638

  12. GENETIC ALGORITHMS FOR DECIPHERING THE COMPLEX CHEMOSENSORY CODE OF SOCIAL INSECTS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chemical communication among social insects is often studied with chromatographic methods. The data generated in such studies may be complex and require pattern recognition techniques for interpretation. We present the analysis of gas chromatographic profiles of hydrocarbon extracts obtained from t...

  13. Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches.

    ERIC Educational Resources Information Center

    Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P.

    1997-01-01

    Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…

  14. Complex document information processing: prototype, test collection, and evaluation

    NASA Astrophysics Data System (ADS)

    Agam, G.; Argamon, S.; Frieder, O.; Grossman, D.; Lewis, D.

    2006-01-01

    Analysis of large collections of complex documents is an increasingly important need for numerous applications. Complex documents are documents that typically start out on paper and are then electronically scanned. These documents have rich internal structure and might only be available in image form. Additionally, they may have been produced by a combination of printing technologies (or by handwriting); and include diagrams, graphics, tables and other non-textual elements. The state of the art today for a large document collection is essentially text search of OCR'd documents with no meaningful use of data found in images, signatures, logos, etc. Our prototype automatically generates rich metadata about a complex document and then applies query tools to integrate the metadata with text search. To ensure a thorough evaluation of the effectiveness of our prototype, we are also developing a roughly 42,000,000 page complex document test collection. The collection will include relevance judgments for queries at a variety of levels of detail and depending on a variety of content and structural characteristics of documents, as well as "known item" queries looking for particular documents.

  15. Solving hard computational problems efficiently: asymptotic parametric complexity 3-coloring algorithm.

    PubMed

    Martín H, José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about a certain property of an object that can be present if and only if it adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), although parametric. The only requirement is sufficient computational power, which is controlled by the parameter α ∈ N. Nevertheless, here it is proved that the probability of requiring a value of α > k to obtain a solution for a random graph decreases exponentially: P(α > k) ≤ 2^(-(k+1)), making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretically expected results. PMID:23349711

  16. Scale effects on information content and complexity of streamflows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Understanding temporal and spatial variations of streamflows is important for flood forecasting, water resources management, and revealing interactions between hydrologic processes (e.g., precipitation, evapotranspiration, and soil water and groundwater flows.) The information theory has been used i...

  17. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, the light scattering properties of particles with absorption differ from those of particles without absorption. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence the computational resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  18. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048

  19. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS

    PubMed Central

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048

  20. [The High Precision Analysis Research of Multichannel BOTDR Scattering Spectral Information Based on the TTDF and CNS Algorithm].

    PubMed

    Zhang, Yan-jun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong

    2015-07-01

    Traditional BOTDR optical fiber sensing systems use a single sensing fiber channel to measure the information features. Uncontrolled factors such as cross-sensitivity can lower the fitting precision of the scattering spectrum and increase the deviation of the information analysis. Therefore, a BOTDR system that detects multichannel sensor information simultaneously is proposed. It also provides a scattering spectrum analysis method for a multichannel Brillouin optical time-domain reflection (BOTDR) sensing system in order to extract high-precision spectral features. This method combines three-times data fusion (TTDF) and the cuckoo Newton search (CNS) algorithm. First, according to the Dixon and Grubbs criteria, the method uses the data fusion ability of the TTDF algorithm to eliminate the influence of abnormal values and reduce the error signal. Second, it uses the cuckoo Newton search algorithm to improve the spectrum fitting and enhance the accuracy of the Brillouin scattering spectrum analysis. The global optimal solution is obtained by the cuckoo search; using it as the initial value of the Newton algorithm for local optimization ensures the spectrum fitting precision. Information extraction at different linewidths is analyzed for the temperature-information scattering spectrum under a linear weight ratio of 1:9. The variance of the multichannel data fusion is about 0.0030, the center frequency of the scattering spectrum is 11.213 GHz, and the temperature error is less than 0.15 K. Theoretical analysis and simulation results show that the algorithm can be used in multichannel distributed optical fiber sensing systems based on Brillouin optical time-domain reflection. It can effectively improve the accuracy of multichannel sensing signals and the precision of the Brillouin scattering spectrum analysis. PMID:26717729
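
    The abstract's two ingredients can be illustrated separately: an outlier screen of the kind the Dixon/Grubbs criteria provide, and a least-squares fit of a Lorentzian Brillouin gain spectrum to locate the roughly 11.213 GHz shift. The sketch below uses a plain Grubbs test and scipy.optimize.curve_fit as stand-ins; the TTDF fusion rules and the cuckoo-Newton optimizer themselves are not reproduced, and the synthetic data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as student_t

def grubbs_reject(x, alpha=0.05):
    """Remove one value at a time while Grubbs' test flags an outlier."""
    x = np.asarray(x, float).copy()
    while len(x) > 3:
        n = len(x)
        idx = np.argmax(np.abs(x - x.mean()))
        g = np.abs(x[idx] - x.mean()) / x.std(ddof=1)
        tcrit = student_t.ppf(1 - alpha / (2 * n), n - 2)
        gcrit = (n - 1) / np.sqrt(n) * np.sqrt(tcrit**2 / (n - 2 + tcrit**2))
        if g > gcrit:
            x = np.delete(x, idx)
        else:
            break
    return x

def lorentzian(f, f_b, width, amp, offset):
    """Brillouin gain spectrum model: Lorentzian centred at f_b (GHz)."""
    return amp * (width / 2) ** 2 / ((f - f_b) ** 2 + (width / 2) ** 2) + offset

# Synthetic noisy spectrum around 11.213 GHz with a 40 MHz linewidth.
rng = np.random.default_rng(4)
f = np.linspace(11.0, 11.4, 401)
g = lorentzian(f, 11.213, 0.040, 1.0, 0.02) + 0.02 * rng.standard_normal(f.size)
p0 = [f[np.argmax(g)], 0.05, g.max(), 0.0]          # data-driven initial guess
popt, _ = curve_fit(lorentzian, f, g, p0=p0)
print("fitted Brillouin frequency shift: %.4f GHz" % popt[0])

# Fuse repeated centre-frequency estimates after Grubbs screening of outliers.
shifts = np.array([11.2131, 11.2128, 11.2135, 11.2502, 11.2129])   # one bad channel
print("fused estimate: %.4f GHz" % grubbs_reject(shifts).mean())
```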

  1. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction.

    PubMed

    Chun, Se Young

    2016-03-01

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855

  2. A Novel Square-Root Cubature Information Weighted Consensus Filter Algorithm for Multi-Target Tracking in Distributed Camera Networks

    PubMed Central

    Chen, Yanming; Zhao, Qingjie

    2015-01-01

    This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms. PMID:25951338

  3. A novel square-root cubature information weighted consensus filter algorithm for multi-target tracking in distributed camera networks.

    PubMed

    Chen, Yanming; Zhao, Qingjie

    2015-01-01

    This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms. PMID:25951338

  4. Genes, information and sense: complexity and knowledge retrieval.

    PubMed

    Sadovsky, Michael G; Putintseva, Julia A; Shchepanovsky, Alexander S

    2008-06-01

    Information capacity of nucleotide sequences measures the unexpectedness of a continuation of a given string of nucleotides, thus having a sound relation to a variety of biological issues. A continuation is defined in a way that maximizes the entropy of the ensemble of such continuations. The capacity is defined as the mutual entropy of the real frequency dictionary of a sequence with respect to the one bearing the most expected continuations; it does not depend on the length of the strings contained in a dictionary. Various genomes exhibit a multi-minima pattern of the dependence of information capacity on the string length, thus reflecting an order within a sequence. The strings with significant deviation of the expected frequency from the real one are the words of increased information value. Such words exhibit a non-random distribution along a sequence, thus making it possible to retrieve the correlation between structure and function encoded within a sequence. PMID:18443840
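
    A small sketch of the dictionary machinery described above: build q-mer frequency dictionaries and compare the real q-mer frequencies with the most expected ones reconstructed from shorter strings. The maximum-entropy reconstruction used below, expected(w) = f(prefix) * f(suffix) / f(core), is a common choice in this line of work but should be treated as an assumption; names and the toy sequence are illustrative.

```python
import math
from collections import Counter

def kmer_freqs(seq, q):
    """Frequency dictionary of all overlapping q-mers of a sequence."""
    counts = Counter(seq[i:i + q] for i in range(len(seq) - q + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def information_capacity(seq, q):
    """Mutual entropy of the real q-mer dictionary against the dictionary of
    most expected continuations reconstructed from (q-1)-mers (assumed form)."""
    fq, fq1 = kmer_freqs(seq, q), kmer_freqs(seq, q - 1)
    fq2 = kmer_freqs(seq, q - 2) if q > 2 else None
    s = 0.0
    for w, f in fq.items():
        expected = fq1[w[:-1]] * fq1[w[1:]] / (fq2[w[1:-1]] if fq2 else 1.0)
        s += f * math.log(f / expected)
    return s

# Words whose real frequency deviates most from expectation are the candidates
# for "words of increased information value".
seq = "ATGCGATATATCGCGATATATGCGC" * 20
print(round(information_capacity(seq, 3), 4))
```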

  5. Thermodynamic aspects of information transfer in complex dynamical systems.

    PubMed

    Cafaro, Carlo; Ali, Sean Alan; Giffin, Adom

    2016-02-01

    From the Horowitz-Esposito stochastic thermodynamical description of information flows in dynamical systems [J. M. Horowitz and M. Esposito, Phys. Rev. X 4, 031015 (2014)], it is known that while the second law of thermodynamics is satisfied by a joint system, the entropic balance for the subsystems is adjusted by a term related to the mutual information exchange rate between the two subsystems. In this article, we present a quantitative discussion of the conceptual link between the Horowitz-Esposito analysis and the Liang-Kleeman work on information transfer between dynamical system components [X. S. Liang and R. Kleeman, Phys. Rev. Lett. 95, 244101 (2005)]. In particular, the entropic balance arguments employed in the two approaches are compared. Notwithstanding all differences between the two formalisms, our work strengthens the Liang-Kleeman heuristic balance reasoning by showing its formal analogy with the recent Horowitz-Esposito thermodynamic balance arguments. PMID:26986295
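    As a schematic reminder of the kind of balance being compared (our notation, written for orientation rather than as the authors' exact equations), in the bipartite stochastic-thermodynamic setting each subsystem obeys a second-law-like inequality in which an information-flow term corrects the usual entropy balance:

        \dot{\sigma}^{X} \;=\; \frac{dS^{X}}{dt} + \dot{S}^{X}_{\mathrm{r}} - \dot{I}^{X} \;\ge\; 0,
        \qquad
        \dot{I}^{X} + \dot{I}^{Y} \;=\; \frac{d}{dt}\, I(X;Y),

    where \dot{S}^{X}_{\mathrm{r}} is the entropy flow to the reservoir attached to X and \dot{I}^{X} is the part of the mutual-information change attributable to X's dynamics.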

  6. Making sense in a complex landscape: how the Cynefin Framework from Complex Adaptive Systems Theory can inform health promotion practice.

    PubMed

    Van Beurden, Eric K; Kia, Annie M; Zask, Avigdor; Dietrich, Uta; Rose, Lauren

    2013-03-01

    Health promotion addresses issues from the simple (with well-known cause/effect links) to the highly complex (webs and loops of cause/effect with unpredictable, emergent properties). Yet there is no conceptual framework within its theory base to help identify approaches appropriate to the level of complexity. The default approach favours reductionism--the assumption that reducing a system to its parts will inform whole system behaviour. Such an approach can yield useful knowledge, yet is inadequate where issues have multiple interacting causes, such as social determinants of health. To address complex issues, there is a need for a conceptual framework that helps choose action that is appropriate to context. This paper presents the Cynefin Framework, informed by complexity science--the study of Complex Adaptive Systems (CAS). It introduces key CAS concepts and reviews the emergence and implications of 'complex' approaches within health promotion. It explains the framework and its use with examples from contemporary practice, and sets it within the context of related bodies of health promotion theory. The Cynefin Framework, especially when used as a sense-making tool, can help practitioners understand the complexity of issues, identify appropriate strategies and avoid the pitfalls of applying reductionist approaches to complex situations. The urgency to address critical issues such as climate change and the social determinants of health calls for us to engage with complexity science. The Cynefin Framework helps practitioners make the shift, and enables those already engaged in complex approaches to communicate the value and meaning of their work in a system that privileges reductionist approaches. PMID:22128193

  7. Evaluation of a Change Detection Methodology by Means of Binary Thresholding Algorithms and Informational Fusion Processes

    PubMed Central

    Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier

    2012-01-01

    Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database with the changes that have occurred allows better monitoring of the Earth’s resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a change detection multisource fusion process, which allows generating a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proved efficient for identifying the change detection index with the higher contribution. PMID:22737023
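    As a small illustration of the thresholding step, the sketch below applies Otsu's between-class variance criterion to a toy change index (an absolute image difference); Otsu is used here only as a representative thresholding algorithm, and the data, names, and parameters are assumptions, not the paper's actual indices or fusion process.

        import numpy as np

        def otsu_threshold(values, nbins=256):
            """Otsu's threshold: maximize the between-class variance of a 1-D set of index values."""
            hist, edges = np.histogram(values, bins=nbins)
            p = hist.astype(float) / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)                    # class-0 weight up to each bin
            mu = np.cumsum(p * centers)          # cumulative first moment
            mu_t = mu[-1]
            w1 = 1.0 - w0
            valid = (w0 > 0) & (w1 > 0)
            sigma_b = np.zeros_like(w0)
            sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
            return centers[np.argmax(sigma_b)]

        # toy change index: absolute difference of two co-registered acquisitions
        rng = np.random.default_rng(0)
        t1 = rng.normal(100.0, 5.0, (64, 64))
        t2 = t1.copy()
        t2[20:40, 20:40] += 30.0                 # simulated change patch
        index = np.abs(t2 - t1)
        thr = otsu_threshold(index.ravel())
        change_mask = index > thr
        print(round(float(thr), 2), int(change_mask.sum()))   # threshold and changed-pixel count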

  8. Learning Syntactic Rules and Tags with Genetic Algorithms for Information Retrieval and Filtering: An Empirical Basis for Grammatical Rules.

    ERIC Educational Resources Information Center

    Losee, Robert M.

    1996-01-01

    The grammars of natural languages may be learned by using genetic algorithm systems such as LUST (Linguistics Using Sexual Techniques) that reproduce and mutate grammatical rules and parts-of-speech tags. In document retrieval or filtering systems, applying tags to the list of terms representing a document provides additional information about…

  9. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for an elastic optical transmission system. Parts of the received codeword and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  10. Automatic algorithm for generating complex polyhedral scaffold structures for tissue engineering.

    PubMed

    Cheah, Chi-Mun; Chua, Chee-Kai; Leong, Kah-Fai; Cheong, Chee-How; Naing, May-Win

    2004-01-01

    In this article, an approach for tissue-engineering (TE) scaffold fabrication by way of integrating computer-based medical imaging, computer graphics, data manipulation techniques, computer-aided design (CAD), and rapid prototyping (RP) technologies is introduced. The aim is to provide a generic solution for the production of scaffolds that can potentially meet the diverse requirements of TE applications. In the work presented, a novel parametric library of open polyhedral unit cells is developed to assist the user in designing the microarchitecture of the scaffold according to the requirements of its final TE application. Once an open polyhedral unit cell design is selected and sized, a specially developed algorithm is employed to assemble the microarchitecture of the scaffold while adhering to the external geometry of the patient's anatomy generated from medical imaging data. RP fabrication techniques are then employed to build the scaffolds according to the CAD-generated designs. The combined application of such technologies promises unprecedented scaffold qualities with spatially and anatomically accurate three-dimensional forms as well as highly consistent and reproducible microarchitectures. The integrated system also has great potential in providing new cost-effective and rapid solutions to customized made-to-order TE scaffold production. PMID:15165476

  11. Comparison of CPU and GPU based coding on low-complexity algorithms for display signals

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Simon, Sven

    2013-09-01

    Graphics Processing Units (GPUs) are freely programmable, massively parallel general-purpose processing units and thus offer the opportunity to off-load heavy computations from the CPU to the GPU. One application for GPU programming is image compression, where the massively parallel nature of GPUs promises high speed benefits. This article analyzes the predicaments of data-parallel image coding using the example of two high-throughput coding algorithms. The codecs discussed here were designed to answer a call from the Video Electronics Standards Association (VESA), and require only minimal buffering at the encoder and decoder side while avoiding any pixel-based feedback loops that would limit the operating frequency of hardware implementations. A comparison of CPU and GPU implementations of the codecs shows that GPU-based codecs are usually not considerably faster, or perform only with less-than-ideal rate-distortion performance. Analyzing the details of this result provides theoretical evidence that, for any coding engine, either parts of the entropy coding and bit-stream build-up must remain serial, or rate-distortion penalties must be paid when offloading all computations onto the GPU.

  12. How Information Visualization Systems Change Users' Understandings of Complex Data

    ERIC Educational Resources Information Center

    Allendoerfer, Kenneth Robert

    2009-01-01

    User-centered evaluations of information systems often focus on the usability of the system rather than its usefulness. This study examined how using an interactive knowledge-domain visualization (KDV) system affected users' understanding of a domain. Interactive KDVs allow users to create graphical representations of domains that depict important…

  13. Considering Complex Objectives and Scarce Resources in Information Systems' Analysis.

    ERIC Educational Resources Information Center

    Crowther, Warren

    The low efficacy of many of the library and large-scale information systems that have been implemented in the developing countries has been disappointing, and their appropriateness is often questioned in the governmental and educational institutions of more industrialized countries beset by budget-crunching and a very dynamic transformation of…

  14. Seeking Information Online: The Influence of Menu Type, Navigation Path Complexity and Spatial Ability on Information Gathering Tasks

    ERIC Educational Resources Information Center

    Puerta Melguizo, Mari Carmen; Vidya, Uti; van Oostendorp, Herre

    2012-01-01

    We studied the effects of menu type, navigation path complexity and spatial ability on information retrieval performance and web disorientation or lostness. Two innovative aspects were included: (a) navigation path relevance and (b) information gathering tasks. As expected we found that, when measuring aspects directly related to navigation…

  15. An evaluation of the AMS/EPA Regulatory Model (AERMOD) complex terrain algorithms

    SciTech Connect

    Garrison, M.; Sherwell, J.

    1997-12-31

    A draft version of the AMS/EPA Regulatory Model (AERMOD) was made available to the public at the Sixth Conference on Air Quality Modeling, in August 1995. The model was also made available to beta testers as part of AMS and EPA's ongoing efforts to thoroughly evaluate the model prior to delivering a completed model to EPA for regulatory use. Since that time, AERMOD has undergone extensive diagnostic evaluation and some changes, with the goal of finalizing the model and subjecting it to performance evaluations with independent databases prior to releasing the model for general use. The present study documented in this paper was initiated in the beta-testing program and has been continued under the sponsorship of the Maryland Department of Natural Resources Power Plant Research Program (PPRP). The study consists of an in-depth comparison of the complex terrain component of AERMOD, with a focus on neutral and stable-case impacts. Hourly concentration comparisons are made between AERMOD predictions and predictions made by other available complex terrain models, for a wide spectrum of synthesized meteorological conditions and for a broad range of stack characteristics representative of Maryland power plant stacks. The other models included the screening models RTDM and COMPLEX-I, and EPA's refined CTDM. Predictions are also made and comparisons compiled based on alternative model options within AERMOD. The paper addresses model component-specific impacts in the case of CTDM and AERMOD, i.e. the LIFT and WRAP components representing flow above and around terrain, respectively, and uses graphical representations extensively to illustrate model predictions. The paper describes the study approach, provides tabular and graphical summaries of the model and component-specific results, and offers some interpretations of model performance based on these intercomparisons.

  16. ISPTM: an iterative search algorithm for systematic identification of post-translational modifications from complex proteome mixtures.

    PubMed

    Huang, Xin; Huang, Lin; Peng, Hong; Guru, Ashu; Xue, Weihua; Hong, Sang Yong; Liu, Miao; Sharma, Seema; Fu, Kai; Caprez, Adam P; Swanson, David R; Zhang, Zhixin; Ding, Shi-Jian

    2013-09-01

    Identifying protein post-translational modifications (PTMs) from tandem mass spectrometry data of complex proteome mixtures is a highly challenging task. Here we present a new strategy, named iterative search for identifying PTMs (ISPTM), for tackling this challenge. The ISPTM approach consists of a basic search with no variable modification, followed by iterative searches of many PTMs using a small number of them (usually two) in each search. The performance of the ISPTM approach was evaluated on mixtures of 70 synthetic peptides with known modifications, on an 18-protein standard mixture with unknown modifications and on real, complex biological samples of mouse nuclear matrix proteins with unknown modifications. ISPTM revealed that many chemical PTMs were introduced by urea and iodoacetamide during sample preparation and many biological PTMs, including dimethylation of arginine and lysine, were significantly activated by Adriamycin treatment in nuclear matrix associated proteins. ISPTM increased the MS/MS spectral identification rate substantially, displayed significantly better sensitivity for systematic PTM identification compared with that of the conventional all-in-one search approach, and offered PTM identification results that were complementary to InsPecT and MODa, both of which are established PTM identification algorithms. In summary, ISPTM is a new and powerful tool for unbiased identification of many different PTMs with high confidence from complex proteome mixtures. PMID:23919725

  17. Spectral Dark Subtraction: A MODTRAN-Based Algorithm for Estimating Ground Reflectance without Atmospheric Information

    NASA Technical Reports Server (NTRS)

    Freedman, Ellis; Ryan, Robert; Pagnutti, Mary; Holekamp, Kara; Gasser, Gerald; Carver, David; Greer, Randy

    2007-01-01

    Spectral Dark Subtraction (SDS) provides good ground reflectance estimates across a variety of atmospheric conditions with no knowledge of those conditions. The algorithm may be sensitive to errors from stray light, calibration, and excessive haze/water vapor. SDS seems to provide better estimates than traditional algorithms using on-site atmospheric measurements much of the time.

  18. Thermodynamic aspects of information transfer in complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo; Ali, Sean Alan; Giffin, Adom

    2016-02-01

    From the Horowitz-Esposito stochastic thermodynamical description of information flows in dynamical systems [J. M. Horowitz and M. Esposito, Phys. Rev. X 4, 031015 (2014), 10.1103/PhysRevX.4.031015], it is known that while the second law of thermodynamics is satisfied by a joint system, the entropic balance for the subsystems is adjusted by a term related to the mutual information exchange rate between the two subsystems. In this article, we present a quantitative discussion of the conceptual link between the Horowitz-Esposito analysis and the Liang-Kleeman work on information transfer between dynamical system components [X. S. Liang and R. Kleeman, Phys. Rev. Lett. 95, 244101 (2005), 10.1103/PhysRevLett.95.244101]. In particular, the entropic balance arguments employed in the two approaches are compared. Notwithstanding all differences between the two formalisms, our work strengthens the Liang-Kleeman heuristic balance reasoning by showing its formal analogy with the recent Horowitz-Esposito thermodynamic balance arguments.

  19. Reciprocal Grids: A Hierarchical Algorithm for Computing Solution X-ray Scattering Curves from Supramolecular Complexes at High Resolution.

    PubMed

    Ginsburg, Avi; Ben-Nun, Tal; Asor, Roi; Shemesh, Asaf; Ringel, Israel; Raviv, Uri

    2016-08-22

    In many biochemical processes large biomolecular assemblies play important roles. X-ray scattering is a label-free bulk method that can probe the structure of large self-assembled complexes in solution. As we demonstrate in this paper, solution X-ray scattering can measure complex supramolecular assemblies at high sensitivity and resolution. At high resolution, however, data analysis of larger complexes is computationally demanding. We present an efficient method to compute the scattering curves from complex structures over a wide range of scattering angles. In our computational method, structures are defined as hierarchical trees in which repeating subunits are docked into their assembly symmetries, describing the manner subunits repeat in the structure (in other words, the locations and orientations of the repeating subunits). The amplitude of the assembly is calculated by computing the amplitudes of the basic subunits on 3D reciprocal-space grids, moving up in the hierarchy, calculating the grids of larger structures, and repeating this process for all the leaves and nodes of the tree. For very large structures, we developed a hybrid method that sums grids of smaller subunits in order to avoid numerical artifacts. We developed protocols for obtaining high-resolution solution X-ray scattering data from taxol-free microtubules at a wide range of scattering angles. We then validated our method by adequately modeling these high-resolution data. The higher speed and accuracy of our method, over existing methods, is demonstrated for smaller structures: short microtubule and tobacco mosaic virus. Our algorithm may be integrated into various structure prediction computational tools, simulations, and theoretical models, and provide means for testing their predicted structural model, by calculating the expected X-ray scattering curve and comparing with experimental data. PMID:27410762
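    The core idea of summing rotated and translated subunit amplitudes can be sketched in a few lines; the point-scatterer subunit, the placements, and the absence of solvent or form-factor terms are simplifying assumptions for illustration, not the authors' reciprocal-grid implementation.

        import numpy as np

        def subunit_amplitude(q_vecs, atoms):
            """Scattering amplitude of one subunit of unit point scatterers: F(q) = sum_k exp(i q . r_k)."""
            return np.exp(1j * (q_vecs @ atoms.T)).sum(axis=1)

        def assembly_amplitude(q_vecs, atoms, placements):
            """Sum rotated/translated copies of the subunit amplitude:
               F_tot(q) = sum_j exp(i q . t_j) * F_sub(R_j^T q)."""
            total = np.zeros(len(q_vecs), dtype=complex)
            for R, t in placements:
                total += np.exp(1j * (q_vecs @ t)) * subunit_amplitude(q_vecs @ R, atoms)
            return total

        # toy example: a "dimer" built from two translated copies of a 3-atom subunit
        atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
        placements = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([0.0, 0.0, 5.0]))]
        q_vecs = np.random.default_rng(1).normal(size=(100, 3)) * 0.5
        intensity = np.abs(assembly_amplitude(q_vecs, atoms, placements)) ** 2
        print(intensity[:3])    # orientationally unaveraged I(q) at the first few q-vectors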

  20. Transparency and blur as selective cues for complex visual information

    NASA Astrophysics Data System (ADS)

    Colby, Grace; Scholl, Laura

    1991-08-01

    Image processing techniques are applied that enable the viewer to control both the gradients of focus and transparency within an image. In order to demonstrate this concept, the authors use a geographical map whose features are organized as layers of information. This allows a user to select layers related to a particular area of interest. For example, someone interested in air transportation may choose to view airports, airport labels, and airspace in full focus. Relevant layers such as the roads and waterways are also visible but appear somewhat blurry and transparent. The user's attention is drawn to information that is clearly in focus and opaque; blurry transparent features are perceived to be in the background. Focus and transparency produce effective perceptual cues because of the human eye's ability to perceive contrast and depth. The control of focus and transparency is made accessible through a graphic interface based on a scale of importance. Rather than specifying individual focus and transparency settings, the user specifies the importance of the individual feature layers according to their needs for the task at hand. The importance settings are then translated into an appropriate combination of transparency and focus gradients for the layers within the image.

  1. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
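    A minimal sketch of the maximum-likelihood hidden-variable idea for a degree-constrained model, assuming the standard form p_ij = x_i x_j / (1 + x_i x_j) and a simple fixed-point solver; the solver details and the toy degree sequence are illustrative assumptions.

        import numpy as np

        def fit_hidden_variables(degrees, iters=5000, tol=1e-10):
            """Solve sum_{j!=i} x_i*x_j/(1+x_i*x_j) = k_i for the hidden variables x_i
            via the fixed point x_i <- k_i / sum_{j!=i} x_j/(1+x_i*x_j)."""
            k = np.asarray(degrees, dtype=float)
            x = k / max(np.sqrt(k.sum()), 1.0)      # rough initial guess
            for _ in range(iters):
                denom = 1.0 + np.outer(x, x)
                s = x[None, :] / denom
                np.fill_diagonal(s, 0.0)
                x_new = k / s.sum(axis=1)
                if np.max(np.abs(x_new - x)) < tol:
                    x = x_new
                    break
                x = x_new
            return x

        degrees = [1, 2, 2, 1]
        x = fit_hidden_variables(degrees)
        p = np.outer(x, x) / (1.0 + np.outer(x, x))
        np.fill_diagonal(p, 0.0)
        print(p.sum(axis=1))    # expected degrees; should be close to [1, 2, 2, 1]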

  2. The Influence of Information Acquisition on the Complex Dynamics of Market Competition

    NASA Astrophysics Data System (ADS)

    Guo, Zhanbing; Ma, Junhai

    In this paper, we build a dynamical game model with three bounded rational players (firms) to study the influence of information on the complex dynamics of market competition, where the useful information concerns the rival's real decision. In this dynamical game model, one information-sharing team is composed of two firms; they acquire and share information about their common competitor but make their own decisions separately, and the amount of information acquired by this team determines the estimation accuracy of the rival's real decision. Based on this dynamical game model and some creative 3D diagrams, the influence of the amount of information on the complex dynamics of market competition, such as local dynamics, global dynamics and profits, is studied. These results have significant theoretical and practical value for understanding the influence of information.

  3. A real-time algorithm for integrating differential satellite and inertial navigation information during helicopter approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hoang, TY

    1994-01-01

    A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal-area approach of an instrumented helicopter. The navigation data collected include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were post-flight processed, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state contains position, velocity, and velocity-bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity-bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded a small dynamic maneuver in the lateral plane, while the second recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations, in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
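    A minimal per-axis sketch of this kind of DGPS/INS blend, with a two-component state (position and INS velocity bias), prediction driven by the INS velocity, and updates from DGPS position; the dimensions, noise levels, and update rates are illustrative assumptions, not the thesis' nine-state filter.

        import numpy as np

        def dgps_ins_blend(dgps_pos, ins_vel, dt=1.0 / 64, q=1e-4, r=1.0):
            """Per-axis Kalman filter with state [position, INS velocity bias]:
            prediction integrates the INS velocity (minus the bias estimate),
            the update uses the DGPS position measurement."""
            F = np.array([[1.0, -dt], [0.0, 1.0]])   # bias is subtracted from the integrated velocity
            B = np.array([dt, 0.0])                  # control input: INS velocity
            H = np.array([[1.0, 0.0]])               # DGPS measures position only
            Q, R = q * np.eye(2), np.array([[r]])
            x, P = np.array([dgps_pos[0], 0.0]), np.eye(2)
            estimates = []
            for z, v in zip(dgps_pos, ins_vel):
                x = F @ x + B * v                    # predict with the INS velocity
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + R                  # update with the DGPS position
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (np.atleast_1d(z) - H @ x)
                P = (np.eye(2) - K @ H) @ P
                estimates.append(x[0])
            return np.array(estimates)

        # toy run: 1 m/s motion, INS velocity biased by +0.2 m/s, noisy DGPS positions
        rng = np.random.default_rng(2)
        t = np.arange(0.0, 10.0, 1.0 / 64)
        truth = 1.0 * t
        dgps = truth + rng.normal(0.0, 1.0, t.size)
        ins_v = np.full(t.size, 1.2)
        est = dgps_ins_blend(dgps, ins_v)
        print(round(float(np.abs(est[-200:] - truth[-200:]).mean()), 3))   # residual error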

  4. A critical evaluation of numerical algorithms and flow physics in complex supersonic flows

    NASA Astrophysics Data System (ADS)

    Aradag, Selin

    In this research, two complex supersonic flows are selected for Navier-Stokes CFD simulations. The first test case is "Supersonic Flow over an Open Rectangular Cavity". Open cavity flow fields are remarkably complicated, with internal and external regions that are coupled via self-sustained shear layer oscillations. Supersonic flow past a cavity has numerous applications in store carriage and release. Internal carriage of stores, which can be modeled using a cavity configuration, is used for supersonic aircraft in order to reduce radar cross section, aerodynamic drag and aerodynamic heating. Supersonic, turbulent, three-dimensional unsteady flow past an open rectangular cavity is simulated to understand the physics and three-dimensional nature of the cavity flow oscillations. The influences of numerical parameters such as the numerical flux scheme, computation time and flux limiter on the computed flow are determined. Two-dimensional simulations are also performed for comparison purposes. The next test case is "The Computational Design of the Boeing/AFOSR Mach 6 Wind Tunnel". Due to huge differences between geometrical scales, this problem is both challenging and computationally intensive. It is believed that most of the experimental data obtained from conventional ground testing facilities are not reliable due to high levels of noise associated with the acoustic fluctuations from the turbulent boundary layers on the wind tunnel walls. Therefore, it is very important to have quiet testing facilities for hypersonic flow research. The Boeing/AFOSR Mach 6 Wind Tunnel at Purdue University has been designed as a quiet tunnel for which the noise level is an order of magnitude lower than that in conventional wind tunnels. However, quiet flow is achieved in the Purdue Mach 6 tunnel only for low Reynolds numbers. Early transition of the nozzle wall boundary layer has been identified as the cause of the test section noise. Separation bubbles on the bleed lip and associated

  5. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    PubMed

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-01

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ . PMID:23163785
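    The binomial ingredient can be illustrated with a tiny scoring function that converts the number of matched fragment peaks into a -log10 chance probability; the random-match probability and the absence of intensity weighting are simplifying assumptions, so this is not ProVerB's actual scoring function.

        import math
        from scipy.stats import binom

        def binomial_match_score(n_theoretical, n_matched, p_random):
            """-log10 of the probability of matching at least n_matched of n_theoretical
            fragment peaks by chance, each matching randomly with probability p_random."""
            # binom.sf(k, n, p) = P(X > k), so P(X >= n_matched) = sf(n_matched - 1)
            return -math.log10(binom.sf(n_matched - 1, n_theoretical, p_random))

        # e.g. 12 of 30 theoretical fragments matched, random match probability ~0.05
        print(round(binomial_match_score(30, 12, 0.05), 1))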

  6. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model.

    PubMed

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site design and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean-square deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins, and four of the scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic

  7. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model

    PubMed Central

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site design and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean-square deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins, and four of the scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic

  8. A Correlational Study Assessing the Relationships among Information Technology Project Complexity, Project Complication, and Project Success

    ERIC Educational Resources Information Center

    Williamson, David J.

    2011-01-01

    The specific problem addressed in this study was the low success rate of information technology (IT) projects in the U.S. Due to the abstract nature and inherent complexity of software development, IT projects are among the most complex projects encountered. Most existing schools of project management theory are based on the rational systems…

  9. Using measures of information content and complexity of time series as hydrologic metrics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Information theory has previously been used to develop metrics that allow characterization of temporal patterns in soil moisture dynamics, and evaluation and comparison of the performance of soil water flow models. The objective of this study was to apply information and complexity measures to characte...

  10. Multicriteria Analysis: Managing Complexity in Selecting a Student-Information System.

    ERIC Educational Resources Information Center

    Blanchard, William; And Others

    1989-01-01

    The complexity of Seattle University's decision to replace three separate computerized student information systems with one integrated system was managed with a multicriteria method for evaluating alternatives. The method both managed a large amount of information and reduced people's resistance to change. (MSE)

  11. Combining complexity measures of EEG data: multiplying measures reveal previously hidden information

    PubMed Central

    Burns, Thomas; Rajan, Ramesh

    2015-01-01

    Many studies have noted significant differences among human electroencephalograph (EEG) results when participants or patients are exposed to different stimuli, undertaking different tasks, or being affected by conditions such as epilepsy or Alzheimer's disease. Such studies often use only one or two measures of complexity and do not regularly justify their choice of measure beyond the fact that it has been used in previous studies. If more measures were added to such studies, however, more complete information might be found about these reported differences. Such information might be useful in confirming the existence or extent of such differences, or in understanding their physiological bases. In this study we analysed publicly available EEG data using a range of complexity measures to determine how well the measures correlated with one another. The complexity measures did not all significantly correlate, suggesting that different measures were measuring unique features of the EEG signals and thus revealing information which other measures were unable to detect. Therefore, the results from this analysis suggest that combinations of complexity measures reveal unique information which is in addition to the information captured by other measures of complexity in EEG data. For this reason, researchers using individual complexity measures for EEG data should consider using combinations of measures to more completely account for any differences they observe and to ensure the robustness of any relationships identified. PMID:26594331
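    A toy version of the analysis, assuming two simple hand-rolled measures (histogram Shannon entropy and an LZ78-style phrase count of the median-binarized signal) and simulated trials in place of the EEG recordings; the measures and data are illustrative stand-ins for the study's larger battery.

        import numpy as np

        def shannon_entropy(signal, nbins=32):
            """Shannon entropy (bits) of the amplitude histogram of a signal."""
            hist, _ = np.histogram(signal, bins=nbins)
            p = hist[hist > 0] / hist.sum()
            return float(-(p * np.log2(p)).sum())

        def lz78_complexity(signal):
            """LZ78-style phrase count of the signal binarized about its median (a crude complexity proxy)."""
            med = np.median(signal)
            phrases, current = set(), ""
            for x in signal:
                current += '1' if x > med else '0'
                if current not in phrases:
                    phrases.add(current)
                    current = ""
            return len(phrases) + (1 if current else 0)

        # correlate the two measures across simulated "trials"
        rng = np.random.default_rng(3)
        trials = [rng.normal(size=512) * (1.0 + 0.1 * k) + k * np.sin(np.linspace(0, 8 * np.pi, 512))
                  for k in range(20)]
        entropies = np.array([shannon_entropy(tr) for tr in trials])
        lz_counts = np.array([lz78_complexity(tr) for tr in trials], dtype=float)
        print(round(float(np.corrcoef(entropies, lz_counts)[0, 1]), 2))   # Pearson r between the measures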

  12. Combining complexity measures of EEG data: multiplying measures reveal previously hidden information.

    PubMed

    Burns, Thomas; Rajan, Ramesh

    2015-01-01

    Many studies have noted significant differences among human electroencephalograph (EEG) results when participants or patients are exposed to different stimuli, undertaking different tasks, or being affected by conditions such as epilepsy or Alzheimer's disease. Such studies often use only one or two measures of complexity and do not regularly justify their choice of measure beyond the fact that it has been used in previous studies. If more measures were added to such studies, however, more complete information might be found about these reported differences. Such information might be useful in confirming the existence or extent of such differences, or in understanding their physiological bases. In this study we analysed publicly available EEG data using a range of complexity measures to determine how well the measures correlated with one another. The complexity measures did not all significantly correlate, suggesting that different measures were measuring unique features of the EEG signals and thus revealing information which other measures were unable to detect. Therefore, the results from this analysis suggest that combinations of complexity measures reveal unique information which is in addition to the information captured by other measures of complexity in EEG data. For this reason, researchers using individual complexity measures for EEG data should consider using combinations of measures to more completely account for any differences they observe and to ensure the robustness of any relationships identified. PMID:26594331

  13. Communication: A reduced-space algorithm for the solution of the complex linear response equations used in coupled cluster damped response theory

    NASA Astrophysics Data System (ADS)

    Kauczor, Joanna; Norman, Patrick; Christiansen, Ove; Coriani, Sonia

    2013-12-01

    We present a reduced-space algorithm for solving the complex (damped) linear response equations required to compute the complex linear response function for the hierarchy of methods: coupled cluster singles, coupled cluster singles and iterative approximate doubles, and coupled cluster singles and doubles. The solver is the keystone element for the development of damped coupled cluster response methods for linear and nonlinear effects in resonant frequency regions.

  14. Non-Algorithmic Access to Calendar Information in a Calendar Calculator with Autism

    ERIC Educational Resources Information Center

    Mottron, L.; Lemmens, K.; Gagnon, L.; Seron, X.

    2006-01-01

    The possible use of a calendar algorithm was assessed in DBC, an autistic "savant" of normal measured intelligence. Testing of all the dates in a year revealed a random distribution of errors. Re-testing DBC on the same dates one year later shows that his errors were not stable across time. Finally, DBC was able to answer "reversed" questions that…

  15. Information entropy to measure the spatial and temporal complexity of solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Li, Weiyao; Huang, Guanhua; Xiong, Yunwu

    2016-04-01

    The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and the porous medium make solute transport in the medium even more complicated. An appropriate method to describe this complexity is essential when studying solute transport and transformation in porous media. Because information entropy can measure uncertainty and disorder, we used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the link between information entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increased, which resulted in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line source was higher than that of the point source, and the entropy obtained from continuous input was higher than that from instantaneous input. As the average lithofacies length increased, the continuity of the medium increased, and the flow and
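    A minimal sketch of turning a concentration field into the two summary quantities discussed here, a spatial Shannon entropy and a second spatial moment; the Gaussian toy plume and function names are illustrative assumptions.

        import numpy as np

        def concentration_entropy(conc):
            """Shannon entropy of a concentration field treated as a spatial probability mass."""
            p = conc / conc.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        def second_spatial_moment(conc, dx=1.0, dy=1.0):
            """Second central spatial moment (plume spread) of a 2-D concentration field."""
            ny, nx = conc.shape
            x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy)
            p = conc / conc.sum()
            xc, yc = (p * x).sum(), (p * y).sum()
            return float((p * ((x - xc) ** 2 + (y - yc) ** 2)).sum())

        # toy plume: a Gaussian concentration field spreading with "time" t
        xx, yy = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
        for t in (1.0, 4.0, 9.0):
            conc = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * t))
            print(t, round(concentration_entropy(conc), 3), round(second_spatial_moment(conc), 3))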

  16. Triangle network motifs predict complexes by complementing high-error interactomes with structural information

    PubMed Central

    Andreopoulos, Bill; Winter, Christof; Labudde, Dirk; Schroeder, Michael

    2009-01-01

    Background A lot of high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than using second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that a SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that relatively little structural
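    Enumerating such triangles needs only set intersections: for every SDDI pair, collect the common PPI neighbours. The sketch below is a plain-Python illustration with made-up protein names, not the authors' pipeline.

        def ppi_sddi_ppi_triangles(ppi_edges, sddi_edges):
            """Triangles (a, b, c) in which a-b is an SDDI and both a-c and b-c are PPIs."""
            neighbours = {}
            for u, v in ppi_edges:
                neighbours.setdefault(u, set()).add(v)
                neighbours.setdefault(v, set()).add(u)
            triangles = []
            for a, b in sddi_edges:
                for c in (neighbours.get(a, set()) & neighbours.get(b, set())) - {a, b}:
                    triangles.append((a, b, c))
            return triangles

        ppi = [("P1", "P3"), ("P2", "P3"), ("P2", "P4"), ("P1", "P4")]
        sddi = [("P1", "P2")]
        print(ppi_sddi_ppi_triangles(ppi, sddi))   # P1 and P2 share PPI partners P3 and P4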

  17. Balance between noise and information flow maximizes set complexity of network dynamics.

    PubMed

    Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti

    2013-01-01

    Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with a use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near to the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
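    The basic simulation object can be sketched directly: a random Boolean network updated synchronously, with each node's output flipped independently with a small probability. Network size, connectivity, and the noise level below are illustrative assumptions.

        import numpy as np

        def random_boolean_network(n_nodes, k_inputs, rng):
            """Random Boolean network: each node gets k random inputs and a random truth table."""
            inputs = np.array([rng.choice(n_nodes, size=k_inputs, replace=False)
                               for _ in range(n_nodes)])
            tables = rng.integers(0, 2, size=(n_nodes, 2 ** k_inputs))
            return inputs, tables

        def noisy_step(state, inputs, tables, noise, rng):
            """One synchronous deterministic update, then independent bit flips with prob `noise`."""
            idx = (state[inputs] * 2 ** np.arange(inputs.shape[1])).sum(axis=1)
            new_state = tables[np.arange(len(state)), idx]
            flips = rng.random(len(state)) < noise
            return np.where(flips, 1 - new_state, new_state)

        rng = np.random.default_rng(4)
        inputs, tables = random_boolean_network(n_nodes=50, k_inputs=2, rng=rng)
        state = rng.integers(0, 2, size=50)
        activity = []
        for _ in range(200):
            state = noisy_step(state, inputs, tables, noise=0.01, rng=rng)
            activity.append(state.mean())
        print(np.mean(activity))    # average activity of the noisy trajectory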

  18. A complex network peer-to-peer system for geographic information services discovery

    NASA Astrophysics Data System (ADS)

    Shen, Shengyu; Wu, Huayi

    2008-12-01

    With the rapid development and application of Internet technology, Geographic Information Systems have entered a new age, taking the form of Geographic Information Services. Although many Geographic Information Services are now available on the Internet, their rate of application remains very low. To facilitate discovery, some proposals for Geographic Information Services infrastructures focus on a centralized service registry (UDDI, Universal Description, Discovery and Integration) for cataloguing their geospatial functions and characteristics. Centralized systems introduce single points of failure and hotspots in the network, and expose vulnerability to malicious attacks. To solve these problems, this paper proposes a complex network peer-to-peer approach for geospatial Web services discovery. Based on complex network theory, a peer-to-peer network is established that handles each peer's communication and management, and an EBRIM registry centre is embedded in each peer for the registration and query of Geographic Information Services.

  19. Reconstruction of hyperspectral reflectance for optically complex turbid inland lakes: test of a new scheme and implications for inversion algorithms.

    PubMed

    Sun, Deyong; Hu, Chuanmin; Qiu, Zhongfeng; Wang, Shengqiang

    2015-06-01

    A new scheme has been proposed by Lee et al. (2014) to reconstruct hyperspectral (400 - 700 nm, 5 nm resolution) remote sensing reflectance (Rrs(λ), sr-1) of representative global waters using measurements at 15 spectral bands. This study tested its applicability to optically complex turbid inland waters in China, where Rrs(λ) are typically much higher than those used in Lee et al. (2014). Strong interdependence of Rrs(λ) between neighboring bands (≤ 10 nm interval) was confirmed, with Pearson correlation coefficient (PCC) mostly above 0.98. The scheme of Lee et al. (2014) for Rrs(λ) reconstruction with its original global parameterization worked well with this data set, while the new parameterization showed improvement in reducing uncertainties in the reconstructed Rrs(λ). The mean absolute error, MAE(Rrs(λi)), in the reconstructed Rrs(λ) was mostly < 0.0002 sr-1 between 400 and 700 nm, and the mean relative error, MRE(Rrs(λi)), was < 1% when the comparison was made between reconstructed and measured Rrs(λ) spectra. When Rrs(λ) at the MODIS bands were used to reconstruct the hyperspectral Rrs(λ), MAE(Rrs(λi)) was < 0.001 sr-1 and MRE(Rrs(λi)) was < 3%. When Rrs(λ) at the MERIS bands were used, MAE(Rrs(λi)) in the reconstructed hyperspectral Rrs(λ) was < 0.0004 sr-1 and MRE(Rrs(λi)) was < 1%. These results have significant implications for inversion algorithms to retrieve concentrations of phytoplankton pigments (e.g., chlorophyll-a or Chla, and phycocyanin or PC) and total suspended materials (TSM) as well as absorption coefficient of colored dissolved organic matter (CDOM), as some of the algorithms were developed from in situ Rrs(λ) data using spectral bands that

  20. Automating "Word of Mouth" to Recommend Classes to Students: An Application of Social Information Filtering Algorithms

    ERIC Educational Resources Information Center

    Booker, Queen Esther

    2009-01-01

    An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…
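    A minimal sketch of the word-of-mouth idea, assuming cosine similarity between users and similarity-weighted averaging of ratings; the toy rating matrix and function names are illustrative, not the system described in the article.

        import numpy as np

        def recommend(ratings, user, top_n=2):
            """User-based 'word of mouth': score the target user's unrated items with the
            similarity-weighted ratings of other users (0 means not rated)."""
            R = np.asarray(ratings, dtype=float)
            norms = np.linalg.norm(R, axis=1)
            sims = (R @ R[user]) / np.clip(norms * norms[user], 1e-12, None)  # cosine to target user
            sims[user] = 0.0                                                  # exclude self
            rated = (R > 0).astype(float)
            scores = (sims @ R) / np.clip(sims @ rated, 1e-12, None)          # weighted mean rating
            scores[R[user] > 0] = -np.inf                                     # skip already-taken items
            return np.argsort(scores)[::-1][:top_n]

        # rows = students, columns = classes, entries = ratings 1-5 (0 = not taken)
        ratings = [[5, 4, 0, 0, 1],
                   [4, 5, 3, 0, 0],
                   [0, 3, 4, 5, 0],
                   [1, 0, 5, 4, 0]]
        print(recommend(ratings, user=0))   # indices of classes that similar students liked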

  1. Entropy measures for networks: toward an information theory of complex topologies.

    PubMed

    Anand, Kartik; Bianconi, Ginestra

    2009-10-01

    The quantification of the complexity of networks is, today, a fundamental problem in the physics of complex systems. A possible roadmap to solve the problem is via extending key concepts of information theory to networks. In this Rapid Communication we propose how to define the Shannon entropy of a network ensemble and how it relates to the Gibbs and von Neumann entropies of network ensembles. The quantities we introduce here will play a crucial role for the formulation of null models of networks through maximum-entropy arguments and will contribute to inference problems emerging in the field of complex networks. PMID:19905379
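    For orientation, when every possible link ij is drawn independently with probability p_ij, the Shannon entropy of the ensemble reduces to a sum of Bernoulli entropies over node pairs (our notation; a generic expression rather than a claim about the paper's exact definitions):

        S \;=\; -\sum_{i<j}\Bigl[\, p_{ij}\ln p_{ij} + \bigl(1-p_{ij}\bigr)\ln\bigl(1-p_{ij}\bigr) \Bigr].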

  2. The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an

    NASA Technical Reports Server (NTRS)

    1999-01-01

    At the grand opening of the newly expanded KSC Visitor Complex, Center Director Roy Bridges addresses guests and the media. The $13 million addition to the Visitor Complex includes an International Space Station-themed ticket plaza, featuring a structure of overhanging solar panels and astronauts performing assembly tasks, a new information center, films, and exhibits. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.

  3. Mining biological information from 3D short time-series gene expression data: the OPTricluster algorithm

    PubMed Central

    2012-01-01

    Background Nowadays, it is possible to collect expression levels of a set of genes from a set of biological samples during a series of time points. Such data have three dimensions: gene-sample-time (GST). Thus they are called 3D microarray gene expression data. To take advantage of the 3D data collected, and to fully understand the biological knowledge hidden in the GST data, novel subspace clustering algorithms have to be developed to effectively address the biological problem in the corresponding space. Results We developed a subspace clustering algorithm called Order Preserving Triclustering (OPTricluster), for 3D short time-series data mining. OPTricluster is able to identify 3D clusters with coherent evolution from a given 3D dataset using a combinatorial approach on the sample dimension, and the order preserving (OP) concept on the time dimension. The fusion of the two methodologies allows one to study similarities and differences between samples in terms of their temporal expression profile. OPTricluster has been successfully applied to four case studies: immune response in mice infected by malaria (Plasmodium chabaudi), systemic acquired resistance in Arabidopsis thaliana, similarities and differences between inner and outer cotyledon in Brassica napus during seed development, and to Brassica napus whole seed development. These studies showed that OPTricluster is robust to noise and is able to detect the similarities and differences between biological samples. Conclusions Our analysis showed that OPTricluster generally outperforms other well known clustering algorithms such as the TRICLUSTER, gTRICLUSTER and K-means; it is robust to noise and can effectively mine the biological knowledge hidden in the 3D short time-series gene expression data. PMID:22475802
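    The order-preserving ingredient can be sketched by grouping genes that share the same temporal rank pattern in every sample; the exhaustive key over all samples and the toy data below are simplifying assumptions, not the OPTricluster combinatorial search.

        import numpy as np
        from collections import defaultdict

        def op_clusters(data):
            """Group genes whose expression follows the same temporal rank order in every sample.
            `data` has shape (genes, samples, time_points)."""
            groups = defaultdict(list)
            for g, gene in enumerate(data):
                key = tuple(tuple(np.argsort(series)) for series in gene)   # one rank pattern per sample
                groups[key].append(g)
            return {k: v for k, v in groups.items() if len(v) > 1}

        # toy GST data: 6 genes x 2 samples x 4 time points sharing one temporal shape
        rng = np.random.default_rng(7)
        shape = np.array([0.0, 1.0, 3.0, 2.0])
        data = np.stack([np.stack([shape * (g % 3 + 1) + rng.normal(0.0, 0.05, 4) for _ in range(2)])
                         for g in range(6)])
        print(op_clusters(data))   # all six genes fall into one order-preserving group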

  4. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  5. Systematic Study of Information Measures, Statistical Complexity and Atomic Structure Properties

    NASA Astrophysics Data System (ADS)

    Chatzisavvas, K. Ch.; Tserkis, S. T.; Panos, C. P.; Moustakidis, Ch. C.

    2015-05-01

    We present a comparative study of several information and statistical complexity measures in order to examine a possible correlation with certain experimental properties of atomic structure. Comparisons are also carried out quantitatively using the Pearson correlation coefficient. In particular, it is shown that Fisher information in momentum space is very sensitive to shell effects. It is also seen that three measures expressed in momentum space, namely Fisher information, the Fisher-Shannon plane, and LMC complexity, are associated with atomic radius, ionization energy, electronegativity, and atomic dipole polarizability. Our results indicate that a momentum-space treatment of atomic periodicity is superior to a position-space one. Finally, we present a relation that emerges between Fisher information and the second moment of the probability distribution in momentum space, i.e., an energy functional of interest in (e,2e) experiments.

  6. Quantifying information transfer and mediation along causal pathways in complex systems

    NASA Astrophysics Data System (ADS)

    Runge, Jakob

    2015-12-01

    Measures of information transfer have become a popular approach to analyze interactions in complex systems such as the Earth or the human brain from measured time series. Recent work has focused on causal definitions of information transfer aimed at decompositions of predictive information about a target variable, while excluding effects of common drivers and indirect influences. While common drivers clearly constitute a spurious causality, the aim of the present article is to develop measures quantifying different notions of the strength of information transfer along indirect causal paths, based on first reconstructing the multivariate causal network. Another class of novel measures quantifies to what extent different intermediate processes on causal paths contribute to an interaction mechanism to determine pathways of causal information transfer. The proposed framework complements predictive decomposition schemes by focusing more on the interaction mechanism between multiple processes. A rigorous mathematical framework allows for a clear information-theoretic interpretation that can also be related to the underlying dynamics as proven for certain classes of processes. Generally, however, estimates of information transfer remain hard to interpret for nonlinearly intertwined complex systems. But if experiments or mathematical models are not available, then measuring pathways of information transfer within the causal dependency structure allows at least for an abstraction of the dynamics. The measures are illustrated on a climatological example to disentangle pathways of atmospheric flow over Europe.

  7. Quantifying information transfer and mediation along causal pathways in complex systems.

    PubMed

    Runge, Jakob

    2015-12-01

    Measures of information transfer have become a popular approach to analyze interactions in complex systems such as the Earth or the human brain from measured time series. Recent work has focused on causal definitions of information transfer aimed at decompositions of predictive information about a target variable, while excluding effects of common drivers and indirect influences. While common drivers clearly constitute a spurious causality, the aim of the present article is to develop measures quantifying different notions of the strength of information transfer along indirect causal paths, based on first reconstructing the multivariate causal network. Another class of novel measures quantifies to what extent different intermediate processes on causal paths contribute to an interaction mechanism to determine pathways of causal information transfer. The proposed framework complements predictive decomposition schemes by focusing more on the interaction mechanism between multiple processes. A rigorous mathematical framework allows for a clear information-theoretic interpretation that can also be related to the underlying dynamics as proven for certain classes of processes. Generally, however, estimates of information transfer remain hard to interpret for nonlinearly intertwined complex systems. But if experiments or mathematical models are not available, then measuring pathways of information transfer within the causal dependency structure allows at least for an abstraction of the dynamics. The measures are illustrated on a climatological example to disentangle pathways of atmospheric flow over Europe. PMID:26764766

  8. Research on the influence of scan path of image on the performance of information hiding algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Su; Xie, Chengjun; Huang, Ruirui; Xu, Xiaolong

    2015-12-01

    This paper studies information hiding performance using histogram shifting combined with a hybrid transform. The scan path of the image is examined as the approach to data selection: ten paths were designed and tested on international standard test images. Experimental results indicate that the scan path has a great influence on the performance of lossless image information hiding. For the selected test image, the peak for the optimized path increased by up to 9.84% while that for the worst path dropped by 24.2%; that is to say, for different test images, the scan path greatly affects information hiding performance by influencing image redundancy and the sparse matrix.
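
    As a point of reference for the histogram-shift component mentioned above, the sketch below implements the classic peak/zero histogram-shift embedding on a 1-D pixel sequence produced by a chosen scan path (row-major here). The scheme, pixel range, and payload are illustrative assumptions; the paper's hybrid-transform variant and its ten scan paths are not reproduced.

```python
import numpy as np

def embed_histogram_shift(pixels, bits):
    """Classic histogram-shift embedding on a 1-D pixel sequence (values assumed < 255)."""
    seq = pixels.copy()
    hist = np.bincount(seq, minlength=256)
    peak = int(np.argmax(hist))                         # most frequent value: carries the payload
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # an empty (or rarest) bin above the peak
    gap = (seq > peak) & (seq < zero)
    seq[gap] += 1                                       # shift to open a slot next to the peak
    bit_iter = iter(bits)
    for i in np.flatnonzero(seq == peak):
        b = next(bit_iter, None)
        if b is None:
            break
        seq[i] += b                                     # peak -> peak+1 encodes a '1' bit
    return seq, peak, zero

# The scan path fixes the 1-D order in which pixels are visited (row-major here).
img = np.random.default_rng(0).integers(0, 200, size=(64, 64))
scanned = img.reshape(-1)
stego, peak, zero = embed_histogram_shift(scanned, bits=[1, 0, 1, 1, 0, 1])
print(peak, zero, int(np.count_nonzero(stego != scanned)))
```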

  9. The Use of Complexity Theory and Strange Attractors to Understand and Explain Information System Development

    ERIC Educational Resources Information Center

    Tomasino, Arthur P.

    2013-01-01

    In spite of the best efforts of researchers and practitioners, Information Systems (IS) developers are having problems "getting it right". IS developments are challenged by the emergence of unanticipated IS characteristics undermining managers' ability to predict and manage IS change. Because IS are complex, development formulas, best…

  10. Further Understanding of Complex Information Processing in Verbal Adolescents and Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Williams, Diane L.; Minshew, Nancy J.; Goldstein, Gerald

    2015-01-01

    More than 20 years ago, Minshew and colleagues proposed the Complex Information Processing model of autism in which the impairment is characterized as a generalized deficit involving multiple modalities and cognitive domains that depend on distributed cortical systems responsible for higher order abilities. Subsequent behavioral work revealed a…

  11. The Readability and Complexity of District-Provided School-Choice Information

    ERIC Educational Resources Information Center

    Stein, Marc L.; Nagro, Sarah

    2015-01-01

    Public school choice has become a common feature in American school districts. Any potential benefits that could be derived from these policies depend heavily on the ability of parents and students to make informed and educated decisions about their school options. We examined the readability and complexity of school-choice guides across a sample…

  12. Linguistic Complexity and Information Structure in Korean: Evidence from Eye-Tracking during Reading

    ERIC Educational Resources Information Center

    Lee, Yoonhyoung; Lee, Hanjung; Gordon, Peter C.

    2007-01-01

    The nature of the memory processes that support language comprehension and the manner in which information packaging influences online sentence processing were investigated in three experiments that used eye-tracking during reading to measure the ease of understanding complex sentences in Korean. All three experiments examined reading of embedded…

  13. Multicriteria Analysis: Managing Complexity in Selecting a Student-Information System. AIR 1988 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Blanchard, William; And Others

    Seattle University recently decided to replace three separate, computerized student-information systems with a single, integrated system. The complexity of this decision was managed with a multicriteria method that was used to evaluate alternative systems. The method took into account the many and sometimes conflicting concerns of the people who…

  14. KID - an algorithm for fast and efficient text mining used to automatically generate a database containing kinetic information of enzymes

    PubMed Central

    2010-01-01

    Background The amount of available biological information is rapidly increasing and the focus of biological research has moved from single components to networks and even larger projects aiming at the analysis, modelling and simulation of biological networks as well as large scale comparison of cellular properties. It is therefore essential that biological knowledge is easily accessible. However, most information is contained in the written literature in an unstructured way, so that methods for the systematic extraction of knowledge directly from the primary literature have to be deployed. Description Here we present a text mining algorithm for the extraction of kinetic information such as KM, Ki, kcat etc. as well as associated information such as enzyme names, EC numbers, ligands, organisms, localisations, pH and temperatures. Using this rule- and dictionary-based approach, it was possible to extract 514,394 kinetic parameters of 13 categories (KM, Ki, kcat, kcat/KM, Vmax, IC50, S0.5, Kd, Ka, t1/2, pI, nH, specific activity, Vmax/KM) from about 17 million PubMed abstracts and combine them with other data in the abstract. A manual verification of approx. 1,000 randomly chosen results yielded a recall between 51% and 84% and a precision ranging from 55% to 96%, depending on the category searched. The results were stored in a database and are available as "KID the KInetic Database" via the internet. Conclusions The presented algorithm delivers a considerable amount of information and therefore may help to accelerate the research and the automated analysis required for today's systems biology approaches. The database obtained by analysing PubMed abstracts may be a valuable help in the field of chemical and biological kinetics. It is completely based upon text mining and therefore complements manually curated databases. The database is available at http://kid.tu-bs.de. The source code of the algorithm is provided under the GNU General Public Licence and available on
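
    A greatly simplified illustration of rule-based extraction of kinetic parameters from free text is sketched below. The regular expression, parameter keywords, and units are assumptions chosen for the example and are far cruder than the published KID rules and dictionaries.

```python
import re

# Illustrative pattern: a kinetic parameter name, an optional connector, a number, and a unit.
KINETIC_RE = re.compile(
    r"\b(Km|Ki|kcat|Kd|IC50|Vmax)\b[^0-9=<>~]{0,20}"   # parameter keyword, short non-numeric gap
    r"(=|of|was|is|~|<|>)?\s*"
    r"([0-9]+(?:\.[0-9]+)?)\s*"
    r"(mM|uM|nM|s-1|min-1|U/mg)",
    re.IGNORECASE,
)

def extract_kinetics(text):
    """Return (parameter, value, unit) tuples found in free text."""
    return [(m.group(1), float(m.group(3)), m.group(4)) for m in KINETIC_RE.finditer(text)]

abstract = ("The purified enzyme showed a Km of 0.45 mM for glucose "
            "and a kcat of 120 s-1 at pH 7.5.")
print(extract_kinetics(abstract))
```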

  15. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence—with at most a linear convergence rate—because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
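
    For context, the classical Polak-Ribiere-Polyak update that the new method builds on is given below (standard background material, not the paper's modified formula):

```latex
% Classical PRP conjugate gradient iteration: d_0 = -g_0 and, for k >= 0,
\[
  \beta_k^{\mathrm{PRP}} = \frac{g_{k+1}^{\top}\,(g_{k+1}-g_{k})}{\lVert g_{k}\rVert^{2}},
  \qquad
  d_{k+1} = -\,g_{k+1} + \beta_k^{\mathrm{PRP}}\, d_{k},
  \qquad
  x_{k+1} = x_{k} + \alpha_{k}\, d_{k},
\]
% where g_k = \nabla f(x_k) and the step length \alpha_k is chosen by an Armijo or Wolfe line search.
```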

  16. The organization of intrinsic computation: Complexity-entropy diagrams and the diversity of natural information processing

    NASA Astrophysics Data System (ADS)

    Feldman, David P.; McTague, Carl S.; Crutchfield, James P.

    2008-12-01

    Intrinsic computation refers to how dynamical systems store, structure, and transform historical and spatial information. By graphing a measure of structural complexity against a measure of randomness, complexity-entropy diagrams display the different kinds of intrinsic computation across an entire class of systems. Here, we use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions, Markov chains, and probabilistic minimal finite-state machines. Since complexity-entropy diagrams are a function only of observed configurations, they can be used to compare systems without reference to system coordinates or parameters. It has been known for some time that in special cases complexity-entropy diagrams reveal that high degrees of information processing are associated with phase transitions in the underlying process space, the so-called "edge of chaos." Generally, though, complexity-entropy diagrams differ substantially in character, demonstrating a genuine diversity of distinct kinds of intrinsic computation.
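
    A rough feel for a complexity-entropy diagram can be had from block-entropy estimates alone. The sketch below places binary sequences on such a plane using the finite-length approximations h ~ H(L) - H(L-1) for the entropy rate and E ~ H(L) - L*h for an excess-entropy-like complexity; these are crude stand-ins, not the statistical-complexity measures used in the article.

```python
import numpy as np
from collections import Counter

def block_entropy(seq, L):
    """Shannon entropy (bits) of length-L blocks of a symbolic sequence."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    p = np.array(list(Counter(blocks).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def complexity_entropy(seq, L=6):
    """Finite-L estimates of the entropy rate h and an excess-entropy-like complexity E."""
    HL, HLm1 = block_entropy(seq, L), block_entropy(seq, L - 1)
    h = HL - HLm1          # entropy-rate estimate (bits per symbol)
    E = HL - L * h         # crude excess-entropy estimate
    return h, E

rng = np.random.default_rng(0)
coin = rng.integers(0, 2, 20000)           # fully random: high h, low E
period2 = np.tile([0, 1], 10000)           # perfectly ordered: h near 0, E near 1 bit
print(complexity_entropy(coin), complexity_entropy(period2))
```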

  17. The organization of intrinsic computation: complexity-entropy diagrams and the diversity of natural information processing.

    PubMed

    Feldman, David P; McTague, Carl S; Crutchfield, James P

    2008-12-01

    Intrinsic computation refers to how dynamical systems store, structure, and transform historical and spatial information. By graphing a measure of structural complexity against a measure of randomness, complexity-entropy diagrams display the different kinds of intrinsic computation across an entire class of systems. Here, we use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions, Markov chains, and probabilistic minimal finite-state machines. Since complexity-entropy diagrams are a function only of observed configurations, they can be used to compare systems without reference to system coordinates or parameters. It has been known for some time that in special cases complexity-entropy diagrams reveal that high degrees of information processing are associated with phase transitions in the underlying process space, the so-called "edge of chaos." Generally, though, complexity-entropy diagrams differ substantially in character, demonstrating a genuine diversity of distinct kinds of intrinsic computation. PMID:19123616

  18. Communication: Exciton-phonon information flow in the energy transfer process of photosynthetic complexes

    SciTech Connect

    Rebentrost, P.; Aspuru-Guzik, Alan

    2011-03-14

    Non-Markovian and nonequilibrium phonon effects are believed to be key ingredients in the energy transfer in photosynthetic complexes, especially in complexes which exhibit a regime of intermediate exciton–phonon coupling. In this work, we utilize a recently developed measure for non-Markovianity to elucidate the exciton–phonon dynamics in terms of the information flow between electronic and vibrational degrees of freedom. We study the measure in the hierarchical equation of motion approach which captures strong coupling effects and nonequilibrium molecular reorganization. We propose an additional trace distance measure for the information flow that could be extended to other master equations. We find that for a model dimer system and for the Fenna–Matthews–Olson complex the non-Markovianity is significant under physiological conditions.

  19. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well-separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and at the same time simplest models both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods of decomposing the Earth's climate system into well-separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows nonlinear dynamic modes to be constructed, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for the linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and increases with the mode time scale. In this report we combine these two methods in such a way that the developed algorithm allows nonlinear spatio-temporal modes to be constructed. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and at the same time simplest ("optimal") models of climate systems.

  20. A novel seizure detection algorithm informed by hidden Markov model event states

    NASA Astrophysics Data System (ADS)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset (UEO)). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce false positive rate relative to current industry standards.
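
    The state-assignment step described above reduces, at prediction time, to picking the learned Gaussian event state with the highest likelihood for each feature window and flagging windows that fall in a designated seizure-onset state. The sketch below illustrates only that step; the features, state parameters, and thresholds are placeholders, not values learned by the authors' Bayesian nonparametric model.

```python
import numpy as np

def log_gauss(x, mean, cov):
    """Log density of a multivariate Gaussian."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def assign_state(window_features, states):
    """Index of the event state with maximal Gaussian log-likelihood for one window."""
    return int(np.argmax([log_gauss(window_features, m, c) for m, c in states]))

# Hypothetical 2-D features (e.g., line length and band power) for three learned states;
# state 2 plays the role of the seizure-onset state in this sketch.
states = [(np.array([1.0, 0.5]), np.eye(2) * 0.2),
          (np.array([2.0, 1.5]), np.eye(2) * 0.3),
          (np.array([6.0, 5.0]), np.eye(2) * 0.5)]
SEIZURE_STATE = 2

window = np.array([5.8, 4.9])
if assign_state(window, states) == SEIZURE_STATE:
    print("flag seizure onset / trigger stimulation")
```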

  1. Patterns of patient safety culture: a complexity and arts-informed project of knowledge translation.

    PubMed

    Mitchell, Gail J; Tregunno, Deborah; Gray, Julia; Ginsberg, Liane

    2011-01-01

    The purpose of this paper is to describe patterns of patient safety culture that emerged from an innovative collaboration among health services researchers and fine arts colleagues. The group engaged in an arts-informed knowledge translation project to produce a dramatic expression of patient safety culture research for inclusion in a symposium. Scholars have called for a deeper understanding of the complex interrelationships among structure, process and outcomes relating to patient safety. Four patterns of patient safety culture--blinding familiarity, unyielding determination, illusion of control and dismissive urgency--are described with respect to how they informed creation of an arts-informed project for knowledge translation. PMID:22273559

  2. An instance-based algorithm with auxiliary similarity information for the estimation of gait kinematics from wearable sensors.

    PubMed

    Goulermas, John Y; Findlow, Andrew H; Nester, Christopher J; Liatsis, Panos; Zeng, Xiao-Jun; Kenney, Laurence P J; Tresadern, Phil; Thies, Sibylle B; Howard, David

    2008-09-01

    Wearable human movement measurement systems are increasingly popular as a means of capturing human movement data in real-world situations. Previous work has attempted to estimate segment kinematics during walking from foot acceleration and angular velocity data. In this paper, we propose a novel neural network [GRNN with Auxiliary Similarity Information (GASI)] that estimates joint kinematics by taking account of proximity and gait trajectory slope information through adaptive weighting. Furthermore, multiple kernel bandwidth parameters are used that can adapt to the local data density. To demonstrate the value of the GASI algorithm, hip, knee, and ankle joint motions are estimated from acceleration and angular velocity data for the foot and shank, collected using commercially available wearable sensors. Reference hip, knee, and ankle kinematic data were obtained using externally mounted reflective markers and infrared cameras for subjects while they walked at different speeds. The results provide further evidence that a neural net approach to the estimation of joint kinematics is feasible and shows promise, but other practical issues must be addressed before this approach is mature enough for clinical implementation. Furthermore, they demonstrate the utility of the new GASI algorithm for making estimates from continuous periodic data that include noise and a significant level of variability. PMID:18779089
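
    The GRNN backbone that GASI extends is Nadaraya-Watson kernel regression. The sketch below shows that backbone with an optional auxiliary weight vector standing in for the proximity and gait-trajectory-slope information; the data, bandwidth, and weighting are illustrative assumptions, not the published GASI formulation.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=1.0, aux_weights=None):
    """Nadaraya-Watson / GRNN estimate of y at x_query; aux_weights mimics extra similarity info."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    if aux_weights is not None:              # e.g., gait-phase proximity (illustrative)
        w = w * aux_weights
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

# Toy example: predict a joint angle from two accelerometer-derived features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(grnn_predict(X, y, np.array([0.3, -0.1]), sigma=0.5))
```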

  3. The Use of a Parallel Data Processing and Error Analysis System (DPEAS) for the Observational Exploration of Complex Multi-Satellite Non-Gaussian Data Assimilation Algorithms

    NASA Astrophysics Data System (ADS)

    Jones, A. S.; Fletcher, S. J.; Kidder, S. Q.; Forsythe, J. M.

    2012-12-01

    The CSU/NOAA Data Processing and Error Analysis System (DPEAS) was created to merge, or blend, multiple satellite and model data sets within a single consistent framework. DPEAS is designed to be used at both research and operational facilities to facilitate Research-to-Operations technology transfers. The system supports massive parallelization via grid computing technologies, and hosts data fusion techniques for transference to 24/7 operations in a low cost computational environment. In this work, we highlight the data assimilation and data fusion methodologies of the DPEAS framework that facilitates new and complex multi-satellite non-Gaussian data assimilation algorithm developments. DPEAS is in current operational use at NOAA/NESDIS Office of Satellite and Product Operations (OSPO) and performs multi-product data fusion of global "blended" Total Precipitable Water (bTPW) and blended Rainfall Rate (bRR). In this work we highlight: 1) the current dynamic inter-satellite calibration processing performed within the DPEAS data fusion and error analysis, 2) as well as our DPEAS development plans for future blended products (AMSR-2 and Megha-Tropiques), and 3) layered TPW products using the NASA AIRS data for National Weather Service forecaster use via the NASA SPoRT facility at Huntsville, AL. We also discuss new system additions for cloud verification and prediction activities in collaboration with the National Center for Atmospheric Research (NCAR), and planned use with the USAF Air Force Weather Agency's (AFWA) global Cloud Depiction and Forecast System (CDFS) facilities. Scientifically, we focus on the data fusion of atmospheric and land surface product information, including global cloud and water vapor data sets, soil moisture data, and specialized land surface products. The data fusion methods include the use of 1DVAR data assimilation for satellite sounding data sets, and numerous real-time statistical analysis methods. Our new development activities to

  4. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  5. Complexity of line-seru conversion for different scheduling rules and two improved exact algorithms for the multi-objective optimization.

    PubMed

    Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei

    2016-01-01

    Productivity can be greatly improved by converting the traditional assembly line to a seru system, especially in a business environment with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule in seru load. We select ten scheduling rules usually used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for ten different scheduling rules from the theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity respectively. Compared with the enumeration based on non-dominated sorting to solve the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms. PMID:27390649

  6. Spatial and Social Diffusion of Information and Influence: Models and Algorithms

    ERIC Educational Resources Information Center

    Doo, Myungcheol

    2012-01-01

    In this dissertation research, we argue that spatial alarms and activity-based social networks are two fundamentally new types of information and influence diffusion channels. Such new channels have the potential of enriching our professional experiences and our personal life quality in many unprecedented ways. First, we develop an activity driven…

  7. Can Research Inform Classroom Practice?: The Particular Case of Buggy Algorithms and Subtraction Errors.

    ERIC Educational Resources Information Center

    McNamara, David; Pettitt, Deirdre

    1991-01-01

    Reviews a body of psychological research which investigated children's errors in subtraction computations to assess whether the literature offers valuable, relevant information for those teaching subtraction and remedying student errors. It concludes that the research offers teachers little, so they must fall back on their knowledge and…

  8. On Using Genetic Algorithms for Multimodal Relevance Optimization in Information Retrieval.

    ERIC Educational Resources Information Center

    Boughanem, M.; Christment, C.; Tamine, L.

    2002-01-01

    Presents a genetic relevance optimization process performed in an information retrieval system that uses genetic techniques for solving multimodal problems (niching) and query reformulation techniques. Explains that the niching technique allows the process to reach different relevance regions of the document space, and that query reformulations…

  9. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism. PMID:15958508

  10. Advanced information processing system: Hosting of advanced guidance, navigation and control algorithms on AIPS using ASTER

    NASA Technical Reports Server (NTRS)

    Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John

    1994-01-01

    This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.

  11. Molecular dynamics of protein kinase-inhibitor complexes: a valid structural information.

    PubMed

    Caballero, Julio; Alzate-Morales, Jans H

    2012-01-01

    Protein kinases (PKs) are key components of protein phosphorylation based signaling networks in eukaryotic cells. They have been identified as being implicated in many diseases. High-resolution X-ray crystallographic data exist for many PKs and, in many cases, these structures are co-complexed with inhibitors. Although this valuable information confirms the precise structure of PKs and their complexes, it ignores the dynamic movements of the structures, which are relevant for explaining the affinities and selectivity of the ligands, characterizing the thermodynamics of the solvated complexes, and deriving predictive models. Atomistic molecular dynamics (MD) simulations present a convenient way to study PK-inhibitor complexes and have been increasingly used in recent years in structure-based drug design. MD is a very useful computational method and a great counterpart for experimentalists, helping them to derive important additional molecular information and enabling them to follow and understand the structure and dynamics of protein-ligand systems in extreme molecular detail, on scales where the motion of individual atoms can be tracked. MD can be used to sample dynamic molecular processes, and can be complemented with more advanced computational methods (e.g., free energy calculations, structure-activity relationship analysis). This review focuses on the most common applications of MD simulations to the study of PK-inhibitor complexes. Our aim is that researchers working in the design of PK inhibitors be aware of the benefits of this powerful tool in the design of potent and selective PK inhibitors. PMID:22571663

  12. Characteristics analysis of acupuncture electroencephalograph based on mutual information Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Luo, Xi-Liu; Wang, Jiang; Han, Chun-Xiao; Deng, Bin; Wei, Xi-Le; Bian, Hong-Rui

    2012-02-01

    As a convenient approach to the characterization of cerebral cortex electrical information, electroencephalograph (EEG) has potential clinical application in monitoring the acupuncture effects. In this paper, a method composed of the mutual information method and Lempel-Ziv complexity method (MILZC) is proposed to investigate the effects of acupuncture on the complexity of information exchanges between different brain regions based on EEGs. In the experiments, eight subjects are manually acupunctured at ‘Zusanli’ acupuncture point (ST-36) with different frequencies (i.e., 50, 100, 150, and 200 times/min) and the EEGs are recorded simultaneously. First, MILZC values are compared in general. Then average brain connections are used to quantify the effectiveness of acupuncture under the above four frequencies. Finally, significance index P values are used to study the spatiality of the acupuncture effect on the brain. Three main findings are obtained: (i) MILZC values increase during the acupuncture; (ii) manual acupunctures (MAs) with 100 times/min and 150 times/min are more effective than with 50 times/min and 200 times/min; (iii) contralateral hemisphere activation is more prominent than ipsilateral hemisphere's. All these findings suggest that acupuncture contributes to the increase of brain information exchange complexity and the MILZC method can successfully describe these changes.
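
    One ingredient of the MILZC measure is the Lempel-Ziv complexity of a symbolized signal. The sketch below implements the standard LZ76 phrase-counting procedure on a median-binarized series; the mutual-information coupling between channels used in the paper is not reproduced.

```python
import numpy as np

def lz76_complexity(s):
    """Number of distinct phrases in the Lempel-Ziv (1976) parsing of a symbol string."""
    i, k, l = 0, 1, 1
    c, k_max, n = 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:          # reached the end while copying: count the last phrase
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:             # no earlier copy found: start a new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
med = np.median(noise)
binary = ''.join('1' if v > med else '0' for v in noise)   # median binarization of a signal
print(lz76_complexity(binary), lz76_complexity('01' * 1000))
```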

  13. Analysis of information gain and Kolmogorov complexity for structural evaluation of cellular automata configurations

    NASA Astrophysics Data System (ADS)

    Javaheri Javid, Mohammad Ali; Blackwell, Tim; Zimmer, Robert; Majid al-Rifaie, Mohammad

    2016-04-01

    Shannon entropy fails to discriminate structurally different patterns in two-dimensional images. We have adapted information gain measure and Kolmogorov complexity to overcome the shortcomings of entropy as a measure of image structure. The measures are customised to robustly quantify the complexity of images resulting from multi-state cellular automata (CA). Experiments with a two-dimensional multi-state cellular automaton demonstrate that these measures are able to predict some of the structural characteristics, symmetry and orientation of CA generated patterns.
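
    Kolmogorov complexity itself is uncomputable, so it is commonly approximated by the size of a losslessly compressed encoding. The sketch below applies that generic proxy to multi-state CA configurations; it is an illustration of the idea, not the specific estimator or information-gain measure developed in the paper.

```python
import numpy as np
import zlib

def compressed_size(grid):
    """Length in bytes of a zlib-compressed cellular-automaton configuration."""
    return len(zlib.compress(np.asarray(grid, dtype=np.uint8).tobytes(), 9))

rng = np.random.default_rng(0)
random_grid = rng.integers(0, 4, size=(128, 128))                 # structureless multi-state grid
striped_grid = np.tile(np.arange(4, dtype=np.uint8), (128, 32))   # highly regular pattern
print(compressed_size(random_grid), compressed_size(striped_grid))
```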

  14. Novel Algorithms for the Identification of Biologically Informative Chemical Diversity Metrics

    PubMed Central

    Theertham, Bhargav; Wang, Jenna. L.; Fang, Jianwen; Lushington, Gerald H.

    2009-01-01

    Despite great advances in the efficiency of analytical and synthetic chemistry, time and available starting material still limit the number of unique compounds that can be practically synthesized and evaluated as prospective therapeutics. Chemical diversity analysis (the capacity to identify finite diverse subsets that reliably represent greater manifolds of drug-like chemicals) thus remains an important resource in drug discovery. Despite an unproven track record, chemical diversity has also been used to posit, from preliminary screen hits, new compounds with similar or better activity. Identifying diversity metrics that demonstrably encode bioactivity trends is thus of substantial potential value for intelligent assembly of targeted screens. This paper reports novel algorithms designed to simultaneously reflect chemical similarity or diversity trends and apparent bioactivity in compound collections. An extensive set of descriptors are evaluated within large NCI screening data sets according to bioactivity differentiation capacities, quantified as the ability to co-localize known active species into bioactive-rich K-means clusters. One method tested for descriptor selection orders features according to relative variance across a set of training compounds, and samples increasingly finer subset meshes for descriptors whose exclusion from the model induces drastic drops in relative bioactive colocalization. This yields metrics with reasonable bioactive enrichment (greater than 50% of all bioactive compounds collected into clusters or cells with significantly enriched active/inactive rates) for each of the four data sets examined herein. A second method replaces variance by an active/inactive divergence score, achieving comparable enrichment via a much more efficient search process. Combinations of the above metrics are tested in 2D rectilinear diversity models, achieving similarly successful colocalization statistics, with metrics derived from the active

  15. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  16. Computer/information security design approaches for Complex 21/Reconfiguration facilities

    SciTech Connect

    Hunteman, W.J.; Zack, N.R.; Jaeger, C.D.

    1993-08-01

    Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues addressed in the integrated design effort for the uranium/lithium, plutonium, and plutonium high explosive/assembly facilities.

  17. Optimization of IMRT using multi-objective evolutionary algorithms with regularization: A study of complexity vs. deliverability

    NASA Astrophysics Data System (ADS)

    Tom, Brian C.

    Intensity Modulated Radiation Therapy (IMRT) has enjoyed success in the clinic by achieving dose escalation to the target while sparing nearby critical structures. For DMLC plans, regularization is introduced in order to smooth the fluence maps. In this dissertation, regularization is used to smooth the fluence profiles. Since SMLC plans have a limited number of intensity levels, smoothing is not a problem. However, in many treatment planning systems, the plans are optimized with beam weights that are continuous. Only after the optimization is complete are the fluence maps quantized. This dissertation will study the effects, if any, of quantizing the beam weights. In order to study both smoothing DMLC plans and the quantization of SMLC plans, a multi-objective evolutionary algorithm is employed as the optimization method. The main advantage of using these stochastic algorithms is that the beam weights can be represented either as binary or as real strings. Clearly, a binary representation is suited for SMLC delivery (discrete intensity levels), while a real representation is more suited for DMLC. Further, in the case of real beam weights, multi-objective evolutionary algorithms can handle conflicting objective functions very well. In fact, regularization can be thought of as having two competing functions: maintaining fidelity to the data and smoothing the data. The main disadvantage of regularization is the need to specify the regularization parameter, which controls how important the two objectives are relative to one another. Multi-objective evolutionary algorithms do not need such a parameter. In addition, such algorithms yield a set of solutions, each solution representing differing importance factors of the two (or more) objective functions. Multi-objective evolutionary algorithms can thus be used to study the effects of quantizing the beam weights for SMLC delivery systems as well as studying how regularization can reduce the difference between the

  18. An algorithm to correct 2D near-infrared fluorescence signals using 3D intravascular ultrasound architectural information

    NASA Astrophysics Data System (ADS)

    Mallas, Georgios; Brooks, Dana H.; Rosenthal, Amir; Vinegoni, Claudio; Calfon, Marcella A.; Razansky, R. Nika; Jaffer, Farouc A.; Ntziachristos, Vasilis

    2011-03-01

    Intravascular Near-Infrared Fluorescence (NIRF) imaging is a promising imaging modality to image vessel biology and high-risk plaques in vivo. We have developed a NIRF fiber optic catheter and have presented the ability to image atherosclerotic plaques in vivo, using appropriate NIR fluorescent probes. Our catheter consists of a 100/140 μm core/clad diameter housed in polyethylene tubing, emitting NIR laser light at a 90 degree angle compared to the fiber's axis. The system utilizes a rotational and a translational motor for true 2D imaging and operates in conjunction with a coaxial intravascular ultrasound (IVUS) device. IVUS datasets provide 3D images of the internal structure of arteries and are used in our system for anatomical mapping. Using the IVUS images, we are building an accurate hybrid fluorescence-IVUS data inversion scheme that takes into account photon propagation through the blood filled lumen. This hybrid imaging approach can then correct for the non-linear dependence of light intensity on the distance of the fluorescence region from the fiber tip, leading to quantitative imaging. The experimental and algorithmic developments will be presented and the effectiveness of the algorithm showcased with experimental results in both saline and blood-like preparations. The combined structural and molecular information obtained from these two imaging modalities are positioned to enable the accurate diagnosis of biologically high-risk atherosclerotic plaques in the coronary arteries that are responsible for heart attacks.

  19. Enhancing radar estimates of precipitation over complex terrain using information derived from an orographic precipitation model

    NASA Astrophysics Data System (ADS)

    Crochet, Philippe

    2009-10-01

    The objective of this paper is to present a radar-based quantitative precipitation estimation algorithm and assess its quality over the complex terrain of western Iceland. The proposed scheme deals with the treatment of beam blockage, anomalous propagation, vertical profile of reflectivity and includes a radar adjustment technique compensating for range, orographic effects and variations in the Z-R relationship. The quality of the estimated precipitation is remarkably enhanced after post-processing and in reasonably good agreement with what is known about the spatial distribution of precipitation in the studied area from both rain gauge observations and a gridded dataset derived from an orographic precipitation model. The results suggest that this methodology offers a credible solution to obtain an estimate of the distribution of precipitation in mountainous terrain and appears to be of practical value to meteorologists and hydrologists.
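
    For readers unfamiliar with the Z-R relationship mentioned above, the sketch below inverts the power law Z = a*R^b to convert reflectivity to rain rate, using the classic Marshall-Palmer parameters (a = 200, b = 1.6) as a common default; these are not the calibrated values used in the study.

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b with Z in mm^6/m^3 (dBZ = 10*log10(Z)); returns R in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

print(round(rain_rate_from_dbz(30.0), 2))   # about 2.7 mm/h at 30 dBZ with Marshall-Palmer values
```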

  20. Beyond information access: Support for complex cognitive activities in public health informatics tools.

    PubMed

    Sedig, Kamran; Parsons, Paul; Dittmer, Mark; Ola, Oluwakemi

    2012-01-01

    Public health professionals work with a variety of information sources to carry out their everyday activities. In recent years, interactive computational tools have become deeply embedded in such activities. Unlike the early days of computational tool use, the potential of tools nowadays is not limited to simply providing access to information; rather, they can act as powerful mediators of human-information discourse, enabling rich interaction with public health information. If public health informatics tools are designed and used properly, they can facilitate, enhance, and support the performance of complex cognitive activities that are essential to public health informatics, such as problem solving, forecasting, sense-making, and planning. However, the effective design and evaluation of public health informatics tools requires an understanding of the cognitive and perceptual issues pertaining to how humans work and think with information to perform such activities. This paper draws on research that has examined some of the relevant issues, including interaction design, complex cognition, and visual representations, to offer some human-centered design and evaluation considerations for public health informatics tools. PMID:23569645

  1. Beyond information access: Support for complex cognitive activities in public health informatics tools

    PubMed Central

    Sedig, Kamran; Parsons, Paul; Dittmer, Mark; Ola, Oluwakemi

    2012-01-01

    Public health professionals work with a variety of information sources to carry out their everyday activities. In recent years, interactive computational tools have become deeply embedded in such activities. Unlike the early days of computational tool use, the potential of tools nowadays is not limited to simply providing access to information; rather, they can act as powerful mediators of human-information discourse, enabling rich interaction with public health information. If public health informatics tools are designed and used properly, they can facilitate, enhance, and support the performance of complex cognitive activities that are essential to public health informatics, such as problem solving, forecasting, sense-making, and planning. However, the effective design and evaluation of public health informatics tools requires an understanding of the cognitive and perceptual issues pertaining to how humans work and think with information to perform such activities. This paper draws on research that has examined some of the relevant issues, including interaction design, complex cognition, and visual representations, to offer some human-centered design and evaluation considerations for public health informatics tools. PMID:23569645

  2. A novel Dual Probe Complex Trial Protocol for detection of concealed information.

    PubMed

    Labkovsky, Elena; Rosenfeld, J Peter

    2014-11-01

    In simply guilty (SG), countermeasure-using guilty (CM), and innocent (IN) subjects, a new concealed information test, the P300-based Dual Probe Complex Trial Protocol, was tested in a mock crime scenario. It combines an oddball protocol with two stimuli (probe, irrelevant) and another with three stimuli (probe, irrelevant, target) into one trial, doubling detected mock crime information per unit time, compared to previous protocols. Probe-irrelevant amplitude differences were significant in SG and CM, but not IN subjects. On a measure combining the two-stimulus and three-stimulus parts of the Dual Probe Complex Trial Protocol trial, accuracy was 94.7% (based on a .9 bootstrap criterion). The criterion-independent area (AUC) under the receiver operating characteristic (from signal detection theory) measuring SG and CM versus IN discriminability averaged .92 (in a range of 0.5-1.0). Countermeasures enhanced irrelevant (not probe) P300s in CM groups. PMID:24981064

  3. Biological Data Analysis as an Information Theory Problem: Multivariable Dependence Measures and the Shadows Algorithm

    PubMed Central

    Sakhanenko, Nikita A.

    2015-01-01

    Information theory is valuable in multiple-variable analysis for being model-free and nonparametric, and for the modest sensitivity to undersampling. We previously introduced a general approach to finding multiple dependencies that provides accurate measures of levels of dependency for subsets of variables in a data set, which is significantly nonzero only if the subset of variables is collectively dependent. This is useful, however, only if we can avoid a combinatorial explosion of calculations for increasing numbers of variables. The proposed dependence measure for a subset of variables, τ, differential interaction information, Δ(τ), has the property that for subsets of τ some of the factors of Δ(τ) are significantly nonzero, when the full dependence includes more variables. We use this property to suppress the combinatorial explosion by following the “shadows” of multivariable dependency on smaller subsets. Rather than calculating the marginal entropies of all subsets at each degree level, we need to consider only calculations for subsets of variables with appropriate “shadows.” The number of calculations for n variables at a degree level of d grows, therefore, at a much smaller rate than the binomial coefficient (n, d), but depends on the parameters of the “shadows” calculation. This approach, avoiding a combinatorial explosion, enables the use of our multivariable measures on very large data sets. We demonstrate this method on simulated data sets, and characterize the effects of noise and sample numbers. In addition, we analyze a data set of a few thousand mutant yeast strains interacting with a few thousand chemical compounds. PMID:26335709
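
    The collective-dependence quantity underlying this approach can be illustrated with the plug-in interaction information of a variable triplet, computed from joint entropies. The sketch below uses an XOR triplet, which is pairwise independent but collectively dependent; the paper's differential interaction information and the "shadows" pruning strategy are not reproduced.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def entropy(*cols):
    """Plug-in Shannon entropy (nats) of the joint distribution of the given columns."""
    n = len(cols[0])
    p = np.array(list(Counter(zip(*cols)).values()), dtype=float) / n
    return -np.sum(p * np.log(p))

def interaction_information(x, y, z):
    """Three-way interaction information in one common sign convention."""
    single = entropy(x) + entropy(y) + entropy(z)
    pairs = sum(entropy(*pair) for pair in combinations((x, y, z), 2))
    return single - pairs + entropy(x, y, z)

# XOR triplet: pairwise independent but collectively dependent, so the measure is non-zero.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = rng.integers(0, 2, 10000)
z = x ^ y
print(interaction_information(x, y, z))
```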

  4. Biological data analysis as an information theory problem: multivariable dependence measures and the shadows algorithm.

    PubMed

    Sakhanenko, Nikita A; Galas, David J

    2015-11-01

    Information theory is valuable in multiple-variable analysis for being model-free and nonparametric, and for the modest sensitivity to undersampling. We previously introduced a general approach to finding multiple dependencies that provides accurate measures of levels of dependency for subsets of variables in a data set, which is significantly nonzero only if the subset of variables is collectively dependent. This is useful, however, only if we can avoid a combinatorial explosion of calculations for increasing numbers of variables.  The proposed dependence measure for a subset of variables, τ, differential interaction information, Δ(τ), has the property that for subsets of τ some of the factors of Δ(τ) are significantly nonzero, when the full dependence includes more variables. We use this property to suppress the combinatorial explosion by following the "shadows" of multivariable dependency on smaller subsets. Rather than calculating the marginal entropies of all subsets at each degree level, we need to consider only calculations for subsets of variables with appropriate "shadows." The number of calculations for n variables at a degree level of d grows therefore, at a much smaller rate than the binomial coefficient (n, d), but depends on the parameters of the "shadows" calculation. This approach, avoiding a combinatorial explosion, enables the use of our multivariable measures on very large data sets. We demonstrate this method on simulated data sets, and characterize the effects of noise and sample numbers. In addition, we analyze a data set of a few thousand mutant yeast strains interacting with a few thousand chemical compounds. PMID:26335709

  5. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    SciTech Connect

    Omitaomu, Olufemi A; Ganguly, Auroop R; Jiao, Yu

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class; and mutual information as a measure of information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment and used aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, and fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (proinsurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on a test of three leadership theories - legitimacy, coercion, and representative, and two social mobilization theories - social influence and repression. The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.

  6. Musical beauty and information compression: Complex to the ear but simple to the mind?

    PubMed Central

    2011-01-01

    Background The biological origin of music, its universal appeal across human cultures and the cause of its beauty remain mysteries. For example, why is Ludwig Van Beethoven considered a musical genius but Kylie Minogue is not? Possible answers to these questions will be framed in the context of Information Theory. Presentation of the Hypothesis The entire life-long sensory data stream of a human is enormous. The adaptive solution to this problem of scale is information compression, thought to have evolved to better handle, interpret and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some point in evolution, the practice of successful information compression became linked to the physiological reward system. I hypothesise that the establishment of this "compression and pleasure" connection paved the way for musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio compression had become intrinsically pleasurable in its own right. Testing the Hypothesis For a range of compositions, empirically determine the relationship between the listener's pleasure and "lossless" audio compression. I hypothesise that enduring musical masterpieces will possess an interesting objective property: despite apparent complexity, they will also exhibit high compressibility. Implications of the Hypothesis Artistic masterpieces and deep Scientific insights share the common process of data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. Musical geniuses are skilled in composing music that appears highly complex to

  7. Simplifying Causal Complexity: How Interactions between Modes of Causal Induction and Information Availability Lead to Heuristic-Driven Reasoning

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Tutwiler, M. Shane

    2014-01-01

    This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…

  8. A Comparison of Prose and Algorithms for Presenting Complex Instructions. Document Design Project, Technical Report No. 17.

    ERIC Educational Resources Information Center

    Holland, V. Melissa; Rose, Andrew

    Complex conditional instructions ("if X, then do Y") are prevalent in public documents, where they typically appear in prose form. Results of two previous studies have shown that conditional instructions become very difficult to process as the structure becomes more complex. A study was designed to investigate whether this difficulty can be…

  9. ePhenotyping for Abdominal Aortic Aneurysm in the Electronic Medical Records and Genomics (eMERGE) Network: Algorithm Development and Konstanz Information Miner Workflow

    PubMed Central

    Borthwick, Kenneth M; Smelser, Diane T; Bock, Jonathan A; Elmore, James R; Ryer, Evan J; Ye, Zi; Pacheco, Jennifer A.; Carrell, David S.; Michalkiewicz, Michael; Thompson, William K; Pathak, Jyotishman; Bielinski, Suzette J; Denny, Joshua C; Linneman, James G; Peissig, Peggy L; Kho, Abel N; Gottesman, Omri; Parmar, Harpreet; Kullo, Iftikhar J; McCarty, Catherine A; Böttinger, Erwin P; Larson, Eric B; Jarvik, Gail P; Harley, John B; Bajwa, Tanvir; Franklin, David P; Carey, David J; Kuivaniemi, Helena; Tromp, Gerard

    2015-01-01

    Background and objective We designed an algorithm to identify abdominal aortic aneurysm cases and controls from electronic health records to be shared and executed within the “electronic Medical Records and Genomics” (eMERGE) Network. Materials and methods Structured Query Language was used to script the algorithm, utilizing “Current Procedural Terminology” and “International Classification of Diseases” codes along with demographic and encounter data to classify individuals as case, control, or excluded. The algorithm was validated using blinded manual chart review at three eMERGE Network sites and one non-eMERGE Network site. Validation comprised evaluation of an equal number of predicted cases and controls selected at random from the algorithm predictions. After validation at the three eMERGE Network sites, the remaining eMERGE Network sites performed verification only. Finally, the algorithm was implemented as a workflow in the Konstanz Information Miner, which represented the logic graphically while retaining intermediate data for inspection at each node. The algorithm was configured to be independent of specific access to data and was exportable (without data) to other sites. Results The algorithm demonstrated positive predictive values (PPV) of 92.8% (CI: 86.8-96.7) and 100% (CI: 97.0-100) for cases and controls, respectively. It also performed well outside the eMERGE Network. Implementation of the transportable executable algorithm as a Konstanz Information Miner workflow required much less effort than implementation from pseudocode, and ensured that the logic was as intended. Discussion and conclusion This ePhenotyping algorithm identifies abdominal aortic aneurysm cases and controls from the electronic health record with the high case and control PPV necessary for research purposes, can be disseminated easily, and can be applied to high-throughput genetic and other studies. PMID:27054044
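
    The record does not spell out the rule logic, but the flavor of a code-based case/control classification can be sketched as follows; the code sets, thresholds, and field names are hypothetical placeholders and are not the eMERGE algorithm's actual criteria.

```python
# Hypothetical phenotyping rule: code sets and thresholds are placeholders only.
AAA_DIAGNOSIS_CODES = {"DIAG:AAA-1", "DIAG:AAA-2"}      # placeholder diagnosis codes
AAA_PROCEDURE_CODES = {"PROC:REPAIR-1"}                 # placeholder procedure codes
EXCLUSION_CODES = {"DIAG:EXCLUDE-1"}                    # placeholder exclusions

def classify(patient):
    """Return 'case', 'control', or 'excluded' for one patient record.

    `patient` is a dict such as {"codes": {...}, "encounters": 12, "age": 67}.
    """
    codes = patient["codes"]
    if codes & EXCLUSION_CODES:
        return "excluded"
    if codes & (AAA_DIAGNOSIS_CODES | AAA_PROCEDURE_CODES):
        return "case"
    # Placeholder control rule: enough encounters to trust the absence of codes.
    if patient["encounters"] >= 2 and patient["age"] >= 50:
        return "control"
    return "excluded"

print(classify({"codes": {"DIAG:AAA-1"}, "encounters": 5, "age": 70}))   # case
print(classify({"codes": set(), "encounters": 3, "age": 66}))            # control
```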

  10. Neuropsychological Study of FASD in a Sample of American Indian Children: Processing Simple Versus Complex Information

    PubMed Central

    Aragón, Alfredo S.; Kalberg, Wendy O.; Buckley, David; Barela-Scott, Lindsey M.; Tabachnick, Barbara G.; May, Philip A.

    2010-01-01

    Background While a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similarly to controls on relatively simple tests. Methods Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine motor skills. Results Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined “a priori” based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks and the Grooved Pegboard Test (GPT). The children in the FASD group, when compared to controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. On the more complex

  11. Robust synchronization of complex networks with uncertain couplings and incomplete information

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Wang, Zidong; Alsaadi, Fuad E.

    2016-07-01

    The mean square exponential (MSE) synchronization problem is investigated in this paper for complex networks with simultaneous presence of uncertain couplings and incomplete information, which comprise both the randomly occurring delay and the randomly occurring non-linearities. The network considered is uncertain with time-varying stochastic couplings. The randomly occurring delay and non-linearities are modelled by two Bernoulli-distributed white sequences with known probabilities to better describe realistic complex networks. By utilizing the coordinate transformation, the addressed complex network can be exponentially synchronized in the mean square if the MSE stability of a transformed subsystem can be assured. The stability problem is studied firstly for the transformed subsystem based on the Lyapunov functional method. Then, an easy-to-verify sufficient criterion is established by further decomposing the transformed system, which embodies the joint impacts of the single-node dynamics, the network topology and the statistical quantities of the uncertainties on the synchronization of the complex network. Numerical examples are exploited to illustrate the effectiveness of the proposed methods.

  12. The utility of accurate mass and LC elution time information in the analysis of complex proteomes

    SciTech Connect

    Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.

    2005-08-01

    Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity due to the lack of specific modifications, and overestimates the expected complexity since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time are representative of a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high-resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
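
    The uniqueness evaluation described above can be sketched directly: a peptide's (mass, NET) pair is unique if no other peptide in the set falls within both the mass tolerance (in ppm) and the NET tolerance. The simulated peptide list below is an assumption; only the tolerance logic follows the description.

```python
# Fraction of peptides whose (accurate mass, NET) fingerprint is unique within
# the given mass (ppm) and NET tolerances.
import numpy as np

def unique_fraction(masses, nets, ppm, net_tol):
    masses = np.asarray(masses, dtype=float)
    nets = np.asarray(nets, dtype=float)
    unique = 0
    for i in range(len(masses)):
        dm_ppm = np.abs(masses - masses[i]) / masses[i] * 1e6
        dn = np.abs(nets - nets[i])
        conflicts = np.sum((dm_ppm <= ppm) & (dn <= net_tol)) - 1   # exclude self
        unique += conflicts == 0
    return unique / len(masses)

rng = np.random.default_rng(42)
masses = rng.uniform(800.0, 3500.0, 5000)   # simulated monoisotopic masses (Da)
nets = rng.uniform(0.0, 1.0, 5000)          # simulated normalized elution times

for ppm, tol in [(5.0, 0.05), (1.0, 0.01)]:
    print(f"+/- {ppm} ppm, +/- {tol} NET -> unique fraction "
          f"{unique_fraction(masses, nets, ppm, tol):.3f}")
```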

  13. 'Selfish herds' of guppies follow complex movement rules, but not when information is limited.

    PubMed

    Kimbell, Helen S; Morrell, Lesley J

    2015-10-01

    Under the threat of predation, animals can decrease their level of risk by moving towards other individuals to form compact groups. A significant body of theoretical work has proposed multiple movement rules, varying in complexity, which might underlie this process of aggregation. However, if and how animals use these rules to form compact groups is still not well understood, and how environmental factors affect the use of these rules even less so. Here, we evaluate the success of different movement rules, by comparing their predictions with the movement seen when shoals of guppies (Poecilia reticulata) form under the threat of predation. We repeated the experiment in a turbid environment to assess how the use of the movement rules changed when visual information is reduced. During a simulated predator attack, guppies in clear water used complex rules that took multiple neighbours into account, forming compact groups. In turbid water, the difference between all rule predictions and fish movement paths increased, particularly for complex rules, and the resulting shoals were more fragmented than in clear water. We conclude that guppies are able to use complex rules to form dense aggregations, but that environmental factors can limit their ability to do so. PMID:26400742

  14. Online Community Detection for Large Complex Networks

    PubMed Central

    Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian

    2014-01-01

    Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is lower than that of the commonly used Louvain algorithm, while it gives competitive performance. PMID:25061683
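
    For intuition only, a toy edge-streaming heuristic is sketched below: each edge is seen once and each endpoint joins the neighbouring community to which it has the most links so far. This is not the paper's expected-modularity algorithm, just an illustration of processing a network edge by edge with incremental updates.

```python
# Toy streaming community assignment (illustrative, not the paper's method).
from collections import defaultdict

community = {}                                          # node -> community id
links_to = defaultdict(lambda: defaultdict(int))        # node -> community -> link count
next_id = 0

def add_edge(u, v):
    global next_id
    for a, b in ((u, v), (v, u)):                       # assign new nodes a community
        if a not in community:
            community[a] = community.get(b, next_id)
            if community[a] == next_id:
                next_id += 1
    for a, b in ((u, v), (v, u)):                       # incremental best-community update
        links_to[a][community[b]] += 1
        community[a] = max(links_to[a], key=links_to[a].get)

for u, v in [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]:
    add_edge(u, v)
print(community)   # expected grouping: {1, 2, 3} vs. {4, 5, 6}
```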

  15. A Hybrid Approach to Finding Relevant Social Media Content for Complex Domain Specific Information Needs

    PubMed Central

    Cameron, Delroy; Sheth, Amit P.; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A.

    2015-01-01

    While contemporary semantic search systems offer to improve classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and “intelligible constructs” not typically modeled in ontologies. These intelligible constructs convey essential information that include notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving

  16. Adaptation and information in ontogenesis and phylogenesis. Increase of complexity and efficiency.

    PubMed

    Azzone, G F

    1997-01-01

    Adaptations during phylogenesis or ontogenesis can occur either by maintaining constant or by increasing the informational content of the organism. In the former case, increasing adaptation to external perturbation is achieved by increasing the rate of genome replication; the increased amount of DNA reflects an increase of total, but not of new, informational content. In the latter case, the adaptations are achieved by either an instructionist or an evolutionary mechanism, or a combination of both. Evolutionary adaptations occur during ontogenesis mainly in the brain-mind, immunological and receptor systems and involve a repertoire of receptors that are clonally distributed, genome-conditioned and amplified by somatic mutation. Specificity and intensity of responses are achieved a posteriori as a result of natural selection of the clones. The major adaptations during phylogenesis are accompanied by increased complexity. They have been attributed to shifts, short in time and space, against the entropic drive and thus occur notwithstanding the entropic drive and the second law of thermodynamics. The alternative view is that the generation of complexity is due to the second law of thermodynamics in its extended formulation, which includes Prigogine's theorem of minimum entropy production. This view requires, however, that natural selection provide the biological system with structures that bring the reactions within Onsager's range. The hierarchical organization of the natural world thus reflects a stratified thermodynamic stability. As the evolutionary adaptations generate new information, they may be assimilated to Maxwell-demon-type processes. PMID:9646724

  17. Impact of communication and information on a complex heterogeneous closed water catchment environment

    NASA Astrophysics Data System (ADS)

    Tisdell, John G.; Ward, John R.; Capon, Tim

    2004-09-01

    This paper uses an experimental design that combines the use of an environmental levy with community involvement in the formation of group agreements and strategies to explore the impact of information and communication on water use in a complex heterogeneous environment. Participants in the experiments acted as farmers faced with monthly water demands, uncertain rainfall, possible crop loss, and the possibility of trading in water entitlements. The treatments included (1) no information on environmental consequences of extraction, (2) the provision of monthly aggregate environmental information, (3) the provision of monthly aggregate extraction information and a forum for discussion, and (4) the public provision of individual extraction information and a forum for discussion giving rise to potential verbal peer sanctions. To account for the impact of trade, the treatments were blocked into three market types: (1) no trade, (2) open call auctions, and (3) closed call auctions. The cost to the community of altering the natural flow regime to meet extractive demand was socialized through the imposition of an environmental levy equally imposed on all players.

  18. A Comparison of Geographic Information Systems, Complex Networks, and Other Models for Analyzing Transportation Network Topologies

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher

    2005-01-01

    This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.

  19. Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers

    NASA Astrophysics Data System (ADS)

    Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo

    2009-03-01

    We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of Jensen-Shannon’s divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
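
    In the spirit of the quantifiers described above, a small sketch: the normalized Shannon entropy of the word-frequency distribution and a Jensen-Shannon-based statistical complexity (entropy times the normalized divergence from the uniform distribution). The toy texts and the particular normalization are assumptions, not the authors' exact estimator.

```python
# Normalized word-frequency entropy H and a Jensen-Shannon statistical
# complexity C = H * JSD(P, uniform) / JSD_max (natural logs throughout).
import numpy as np
from collections import Counter

def quantifiers(text):
    counts = np.array(list(Counter(text.lower().split()).values()), dtype=float)
    p = counts / counts.sum()
    n = len(p)
    h = -np.sum(p * np.log(p)) / np.log(n)                    # normalized entropy
    u = np.full(n, 1.0 / n)
    m = 0.5 * (p + u)
    jsd = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(u * np.log(u / m))
    jsd_max = -0.5 * ((n + 1) / n * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
    return h, h * jsd / jsd_max                               # (entropy, complexity)

print(quantifiers("to be or not to be that is the question"))
print(quantifiers("words words words words words and yet more words"))
```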

  20. Markov and non-Markov processes in complex systems by the dynamical information entropy

    NASA Astrophysics Data System (ADS)

    Yulmetyev, R. M.; Gafarov, F. M.

    1999-12-01

    We consider the Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of two mutually dependent channels of entropy - alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation) - are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium, psychology (short-time numeral and pattern human memory and the effect of stress on the dynamical tapping test), random dynamics of RR-intervals in human ECG (the problem of diagnosis of various diseases of the human cardiovascular system), and chaotic dynamics of the parameters of financial markets and ecological systems.

  1. SHARING AND DEPLOYING INNOVATIVE INFORMATION TECHNOLOGY SOLUTIONS TO MANAGE WASTE ACROSS THE DOE COMPLEX

    SciTech Connect

    Crolley, R.; Thompson, M.

    2011-01-31

    There has been a need for a faster and cheaper deployment model for information technology (IT) solutions to address waste management needs at US Department of Energy (DOE) complex sites for years. Budget constraints, challenges in deploying new technologies, frequent travel, and increased job demands for existing employees have prevented IT organizations from staying abreast of new technologies or deploying them quickly. Despite such challenges, IT organizations have added significant value to waste management through better worker safety, tracking, characterization, and disposition at DOE complex sites. Systems developed for site-specific missions have broad applicability to waste management challenges and in many cases have been expanded to meet other waste missions. Radio frequency identification (RFID) and global positioning satellite (GPS)-enabled solutions have reduced radiation exposure and safety risks. New web-based and mobile applications have enabled precision characterization and control of nuclear materials. These solutions have also improved operational efficiencies, shortened schedules, reduced cost, and improved regulatory compliance. Collaboration between DOE complex sites is improving time to delivery and cost efficiencies for waste management missions with new information technologies such as wireless computing, GPS, and RFID. Integrated solutions developed at separate DOE complex sites by new technology Centers of Excellence (CoE) have increased material control and accountability, worker safety, and environmental sustainability. CoEs offer other DOE sister sites significant cost and time savings by leveraging their technology expertise in project scoping, implementation, and ongoing operations.

  2. Describing the Complexity of Systems: Multivariable “Set Complexity” and the Information Basis of Systems Biology

    PubMed Central

    Sakhanenko, Nikita A.; Skupin, Alexander; Ignac, Tomasz

    2014-01-01

    Abstract Context dependence is central to the description of complexity. Keying on the pairwise definition of “set complexity,” we use an information theory approach to formulate general measures of systems complexity. We examine the properties of multivariable dependency starting with the concept of interaction information. We then present a new measure for unbiased detection of multivariable dependency, “differential interaction information.” This quantity for two variables reduces to the pairwise “set complexity” previously proposed as a context-dependent measure of information in biological systems. We generalize it here to an arbitrary number of variables. Critical limiting properties of the “differential interaction information” are key to the generalization. This measure extends previous ideas about biological information and provides a more sophisticated basis for the study of complexity. The properties of “differential interaction information” also suggest new approaches to data analysis. Given a data set of system measurements, differential interaction information can provide a measure of collective dependence, which can be represented in hypergraphs describing complex system interaction patterns. We investigate this kind of analysis using simulated data sets. The conjoining of a generalized set complexity measure, multivariable dependency analysis, and hypergraphs is our central result. While our focus is on complex biological systems, our results are applicable to any complex system. PMID:24377753
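
    A concrete starting point for multivariable dependency is the three-variable interaction information, which the differential measure described above generalizes; the sketch below estimates it from discrete samples, with an XOR-style synthetic data set as an assumed example of purely collective (synergistic) dependence.

```python
# Interaction information I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z)
# + H(X,Y,Z) (bits). In this sign convention, negative values indicate synergy
# and positive values indicate redundancy.
import numpy as np
from collections import Counter

def joint_entropy(*cols):
    rows = list(zip(*cols))
    n = len(rows)
    return -sum((c / n) * np.log2(c / n) for c in Counter(rows).values())

def interaction_information(x, y, z):
    return (joint_entropy(x) + joint_entropy(y) + joint_entropy(z)
            - joint_entropy(x, y) - joint_entropy(x, z) - joint_entropy(y, z)
            + joint_entropy(x, y, z))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = rng.integers(0, 2, 10000)
z = x ^ y                                # determined only jointly by x and y
print(round(interaction_information(x, y, z), 3))   # close to -1.0 bit
```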

  3. Enhanced Community Structure Detection in Complex Networks with Partial Background Information

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong-Yuan; Sun, Kai-Di; Wang, Si-Qi

    2013-11-01

    Community structure detection in complex networks is important since it can help better understand the network topology and how the network works. However, there is still not a clear and widely-accepted definition of community structure, and in practice, different models may give very different results of communities, making it hard to explain the results. In this paper, different from the traditional methodologies, we design an enhanced semi-supervised learning framework for community detection, which can effectively incorporate the available prior information to guide the detection process and can make the results more explainable. By logical inference, the prior information is more fully utilized. The experiments on both the synthetic and the real-world networks confirm the effectiveness of the framework.

  4. An efficient approach to the deployment of complex open source information systems

    PubMed Central

    Cong, Truong Van Chi; Groeneveld, Eildert

    2011-01-01

    Complex open source information systems are usually implemented as component-based software to inherit the available functionality of existing software packages developed by third parties. Consequently, the deployment of these systems not only requires the installation of operating system, application framework and the configuration of services but also needs to resolve the dependencies among components. The problem becomes more challenging when the application must be installed and used on different platforms such as Linux and Windows. To address this, an efficient approach using the virtualization technology is suggested and discussed in this paper. The approach has been applied in our project to deploy a web-based integrated information system in molecular genetics labs. It is a low-cost solution to benefit both software developers and end-users. PMID:22102770

  5. The information-expert system for complex diagnostics and researches of technological plasma

    SciTech Connect

    Kresnin, Yu.A.; Stervoedov, S.N.

    1996-12-31

    The information-expert system for complex diagnostics and research of technological plasma includes closely connected hardware and software parts. The hardware consists of a set of intelligent sensors with optical isolation on the information channels, and functional modules incorporated in a CAMAC crate. The crate is connected by a serial interface to an IBM-compatible computer. The intelligent sensors are realized on the basis of the Intel MCS51 microcontroller. They are used for multisensor and spectroscopic measurements of plasma parameters, laser measurement of the thickness of plasma-etched surfaces, and measurements of the parameters of generators and power supplies of plasma sources. The information from the sensors is sent to the functional modules for preliminary processing and compression, and then, through the crate controller, to the computer. The software handles the exchange of information between the computer and the crate, reconstructs the amplitude-frequency and time characteristics of signals, compares them with chosen models of the technological process, produces recommendations on changes of operating modes, optimizes the technological process as a whole, and documents the research.

  6. A Multi-Hop Energy Neutral Clustering Algorithm for Maximizing Network Information Gathering in Energy Harvesting Wireless Sensor Networks.

    PubMed

    Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X

    2015-01-01

    Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while network information gathering is maximized. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput are also achieved compared to the well-known traditional clustering protocol LEACH and to recent energy-harvesting-aware clustering protocols. PMID:26712764
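
    The energy-neutrality idea can be illustrated with a back-of-the-envelope check for a cluster head using a first-order radio model; the energy constants, packet size, and topology parameters below are common textbook assumptions, not the paper's calibrated values.

```python
# Energy-neutrality check for a cluster head over one data-gathering cycle.
E_ELEC = 50e-9         # J/bit, transceiver electronics (assumed)
EPS_AMP = 100e-12      # J/bit/m^2, free-space amplifier (assumed)
BITS_PER_PACKET = 4000

def tx_energy(bits, d):
    return bits * (E_ELEC + EPS_AMP * d ** 2)

def rx_energy(bits):
    return bits * E_ELEC

def is_energy_neutral(harvested_per_cycle, n_members, d_next_hop):
    """True if harvesting covers receiving the members' packets and forwarding them, plus the CH's own packet, one hop."""
    consumed = (n_members * rx_energy(BITS_PER_PACKET)
                + (n_members + 1) * tx_energy(BITS_PER_PACKET, d_next_hop))
    return harvested_per_cycle >= consumed, consumed

ok, used = is_energy_neutral(harvested_per_cycle=20e-3, n_members=8, d_next_hop=60.0)
print(f"consumed per cycle: {used * 1e3:.2f} mJ, energy neutral: {ok}")
```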

  7. Applications of the BIOPHYS Algorithm for Physically-Based Retrieval of Biophysical, Structural and Forest Disturbance Information

    NASA Technical Reports Server (NTRS)

    Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.

    2011-01-01

    Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Applications output included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provide essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.

  8. A Multi-Hop Energy Neutral Clustering Algorithm for Maximizing Network Information Gathering in Energy Harvesting Wireless Sensor Networks

    PubMed Central

    Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X.

    2015-01-01

    Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while network information gathering is maximized. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput are also achieved compared to the well-known traditional clustering protocol LEACH and to recent energy-harvesting-aware clustering protocols. PMID:26712764

  9. Suppression of epidemic spreading in complex networks by local information based behavioral responses

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Feng; Xie, Jia-Rong; Tang, Ming; Lai, Ying-Cheng

    2014-12-01

    The interplay between individual behaviors and epidemic dynamics in complex networks is a topic of recent interest. In particular, individuals can obtain different types of information about the disease and respond by altering their behaviors, and this can affect the spreading dynamics, possibly in a significant way. We propose a model where individuals' behavioral response is based on a generic type of local information, i.e., the number of neighbors that has been infected with the disease. Mathematically, the response can be characterized by a reduction in the transmission rate by a factor that depends on the number of infected neighbors. Utilizing the standard susceptible-infected-susceptible and susceptible-infected-recovery dynamical models for epidemic spreading, we derive a theoretical formula for the epidemic threshold and provide numerical verification. Our analysis lays on a solid quantitative footing the intuition that individual behavioral response can in general suppress epidemic spreading. Furthermore, we find that the hub nodes play the role of "double-edged sword" in that they can either suppress or promote outbreak, depending on their responses to the epidemic, providing additional support for the idea that these nodes are key to controlling epidemic spreading in complex networks.
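
    A minimal simulation of this mechanism is sketched below: in an SIS process on a toy network, each susceptible node scales its transmission rate down by a factor that decays with the number of infected neighbours. The network, the exponential response form, and all parameter values are assumptions for illustration, not the paper's model specification.

```python
# SIS dynamics with a local-information behavioural response:
# effective transmission rate beta_eff = beta * exp(-alpha * n_infected_neighbours).
import math
import random

def sis_step(adj, infected, beta=0.2, mu=0.1, alpha=0.5):
    nxt = set()
    for node, neigh in adj.items():
        if node in infected:
            if random.random() > mu:                      # remains infected
                nxt.add(node)
        else:
            n_inf = sum(1 for v in neigh if v in infected)
            beta_eff = beta * math.exp(-alpha * n_inf)    # behavioural reduction
            if random.random() < 1 - (1 - beta_eff) ** n_inf:
                nxt.add(node)
    return nxt

random.seed(1)
N = 200
adj = {i: {(i - 1) % N, (i + 1) % N, (i + 7) % N, (i - 7) % N} for i in range(N)}
infected = set(random.sample(range(N), 5))
for _ in range(50):
    infected = sis_step(adj, infected)
print("prevalence after 50 steps:", len(infected) / N)
```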

  10. A lipoprotein/β-barrel complex monitors lipopolysaccharide integrity transducing information across the outer membrane

    PubMed Central

    Konovalova, Anna; Mitchell, Angela M; Silhavy, Thomas J

    2016-01-01

    Lipoprotein RcsF is the OM component of the Rcs envelope stress response. RcsF exists in complexes with β-barrel proteins (OMPs) allowing it to adopt a transmembrane orientation with a lipidated N-terminal domain on the cell surface and a periplasmic C-terminal domain. Here we report that mutations that remove BamE or alter a residue in the RcsF trans-lumen domain specifically prevent assembly of the interlocked complexes without inactivating either RcsF or the OMP. Using these mutations we demonstrate that these RcsF/OMP complexes are required for sensing OM outer leaflet stress. Using mutations that alter the positively charged surface-exposed domain, we show that RcsF monitors lateral interactions between lipopolysaccharide (LPS) molecules. When these interactions are disrupted by cationic antimicrobial peptides, or by the loss of negatively charged phosphate groups on the LPS molecule, this information is transduced to the RcsF C-terminal signaling domain located in the periplasm to activate the stress response. DOI: http://dx.doi.org/10.7554/eLife.15276.001 PMID:27282389

  11. A lipoprotein/β-barrel complex monitors lipopolysaccharide integrity transducing information across the outer membrane.

    PubMed

    Konovalova, Anna; Mitchell, Angela M; Silhavy, Thomas J

    2016-01-01

    Lipoprotein RcsF is the OM component of the Rcs envelope stress response. RcsF exists in complexes with β-barrel proteins (OMPs) allowing it to adopt a transmembrane orientation with a lipidated N-terminal domain on the cell surface and a periplasmic C-terminal domain. Here we report that mutations that remove BamE or alter a residue in the RcsF trans-lumen domain specifically prevent assembly of the interlocked complexes without inactivating either RcsF or the OMP. Using these mutations we demonstrate that these RcsF/OMP complexes are required for sensing OM outer leaflet stress. Using mutations that alter the positively charged surface-exposed domain, we show that RcsF monitors lateral interactions between lipopolysaccharide (LPS) molecules. When these interactions are disrupted by cationic antimicrobial peptides, or by the loss of negatively charged phosphate groups on the LPS molecule, this information is transduced to the RcsF C-terminal signaling domain located in the periplasm to activate the stress response. PMID:27282389

  12. An algorithm of geophysical data inversion based on non-probabilistic presentation of a priori information and definition of Pareto-optimality

    NASA Astrophysics Data System (ADS)

    Kozlovskaya, Elena

    2000-06-01

    This paper presents an inversion algorithm that can be used to solve a wide range of geophysical nonlinear inverse problems. The algorithm is based upon the principle of a direct search for the optimal solution in the parameter space. The main difference of the algorithm from existing techniques such as genetic algorithms and simulated annealing is that the optimum search is performed under the control of a priori information formulated as a fuzzy set in the parameter space. In such a formulation the inverse problem becomes a multiobjective optimization problem with two objective functions: one of them is the membership function of the fuzzy set of feasible solutions, the other is the conditional probability density function of the observed data. The solution to such a problem is a set of Pareto-optimal solutions that is constructed in the parameter space by a three-stage search procedure. The advantage of the proposed technique is that it provides the possibility of involving a wide range of non-probabilistic a priori information in the inversion procedure and can be applied to the solution of strongly nonlinear problems. It allows one to decrease the number of forward-problem calculations due to selective sampling of trial points from the parameter space. The properties of the algorithm are illustrated with an application to a local earthquake hypocentre location problem with synthetic and real data.
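
    The two-objective structure can be sketched on a toy one-parameter problem: one objective is the fuzzy membership of a trial model in the a-priori feasible set (to be maximized), the other is the data misfit (to be minimized, standing in here for the likelihood term). The forward model, fuzzy set, data, and random direct search below are assumptions; only the Pareto-extraction step is generic.

```python
# Direct search in parameter space followed by extraction of the Pareto set
# for (maximize fuzzy membership, minimize data misfit).
import numpy as np

rng = np.random.default_rng(3)
d_obs = np.array([2.0, 4.1, 6.2])

def forward(m):                              # toy forward problem: d = m * [1, 2, 3]
    return m * np.array([1.0, 2.0, 3.0])

def misfit(m):
    return float(np.sum((forward(m) - d_obs) ** 2))

def membership(m):                           # triangular fuzzy prior: "m is about 1.8"
    return max(0.0, 1.0 - abs(m - 1.8) / 0.5)

trials = rng.uniform(0.5, 3.5, 500)
points = [(membership(m), misfit(m), m) for m in trials]

def pareto_front(points):
    front = []
    for mu, phi, m in points:
        dominated = any(mu2 >= mu and phi2 <= phi and (mu2 > mu or phi2 < phi)
                        for mu2, phi2, _ in points)
        if not dominated:
            front.append((mu, phi, m))
    return sorted(front, reverse=True)

for mu, phi, m in pareto_front(points)[:5]:
    print(f"m = {m:.3f}  membership = {mu:.2f}  misfit = {phi:.3f}")
```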

  13. Mitigation of Bias in Inversion of Complex Earthquake without Prior Information of Detailed Fault Geometry

    NASA Astrophysics Data System (ADS)

    Kasahara, A.; Yagi, Y.

    2014-12-01

    The rupture process of an earthquake derived from geophysical observations is important information for understanding the nature of earthquakes and assessing seismic hazard. Finite fault inversion is a commonly applied method to construct a seismic source model. In conventional inversion, the fault is approximated by a simple fault surface even though the rupture of a real earthquake may propagate along a non-planar, complex fault. In the conventional inversion, complex rupture kinematics is approximated by a limited set of model parameters that only represent slip on a simple fault surface. This oversimplification may cause a biased and hence misleading solution. The MW 7.7 left-lateral strike-slip earthquake that occurred in southwestern Pakistan on 2013-09-24 may be an exemplary event for demonstrating this bias. For this earthquake, northeastward rupture propagation was suggested by a finite fault inversion of teleseismic body and long-period surface waves with a single planar fault (USGS). However, the surface displacement field measured from cross-correlation of optical satellite images and back-projection imaging revealed that the rupture propagated unilaterally toward the southwest on a non-planar fault (Avouac et al., 2014). To mitigate the bias, a more flexible source parameterization should be employed. We extended the multi-time-window finite fault method to represent rupture kinematics on a complex fault. Each spatio-temporal knot has five degrees of freedom and is able to represent arbitrary strike, dip, rake, moment release rate and CLVD component. Detailed fault geometry for a source fault is not required in our method. The method considers a data covariance matrix with uncertainty of the Green's function (Yagi and Fukahata, 2011) to obtain a stable solution. Preliminary results show southwestward rupture propagation and a focal mechanism change that is consistent with the fault trace. The result suggests the usefulness of flexible source parameterization for the inversion of complex events.

  14. Efficient Physical Embedding of Topologically Complex Information Processing Networks in Brains and Computer Circuits

    PubMed Central

    Meyer-Lindenberg, Andreas; Weinberger, Daniel R.; Moore, Simon W.; Bullmore, Edward T.

    2010-01-01

    Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks. PMID:20421990
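
    Rent's rule can be probed numerically by partitioning an embedded network into boxes, counting the nodes G and boundary-crossing connections T per box, and fitting the exponent p in T ~ G^p. The lattice-with-shortcuts graph below is a toy stand-in for brain or VLSI connection data, so the fitted value only illustrates the procedure.

```python
# Estimate a Rent exponent from box partitions of a 2-D embedded network.
import numpy as np

L = 32
edges = set()
for x in range(L):
    for y in range(L):                                   # nearest-neighbour lattice links
        if x + 1 < L: edges.add(((x, y), (x + 1, y)))
        if y + 1 < L: edges.add(((x, y), (x, y + 1)))
rng = np.random.default_rng(0)
for _ in range(300):                                     # a few long-range "shortcuts"
    (ax, ay), (bx, by) = rng.integers(0, L, (2, 2))
    if (ax, ay) != (bx, by):
        edges.add(((int(ax), int(ay)), (int(bx), int(by))))

G_vals, T_vals = [], []
for box in (2, 4, 8, 16):
    for x0 in range(0, L, box):
        for y0 in range(0, L, box):
            inside = lambda n: x0 <= n[0] < x0 + box and y0 <= n[1] < y0 + box
            T = sum(1 for u, v in edges if inside(u) != inside(v))   # boundary crossings
            G_vals.append(box * box)
            T_vals.append(T)

p, _ = np.polyfit(np.log(G_vals), np.log(T_vals), 1)
print(f"estimated Rent exponent p = {p:.2f}")
```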

  15. Power-law ansatz in complex systems: Excessive loss of information.

    PubMed

    Tsai, Sun-Ting; Chang, Chin-De; Chang, Ching-Hao; Tsai, Meng-Xue; Hsu, Nan-Jung; Hong, Tzay-Ming

    2015-12-01

    The ubiquity of power-law relations in empirical data displays physicists' love of simple laws and uncovering common causes among seemingly unrelated phenomena. However, many reported power laws lack statistical support and mechanistic backings, not to mention discrepancies with real data are often explained away as corrections due to finite size or other variables. We propose a simple experiment and rigorous statistical procedures to look into these issues. Making use of the fact that the occurrence rate and pulse intensity of crumple sound obey a power law with an exponent that varies with material, we simulate a complex system with two driving mechanisms by crumpling two different sheets together. The probability function of the crumple sound is found to transit from two power-law terms to a bona fide power law as compaction increases. In addition to showing the vicinity of these two distributions in the phase space, this observation nicely demonstrates the effect of interactions to bring about a subtle change in macroscopic behavior and more information may be retrieved if the data are subject to sorting. Our analyses are based on the Akaike information criterion that is a direct measurement of information loss and emphasizes the need to strike a balance between model simplicity and goodness of fit. As a show of force, the Akaike information criterion also found the Gutenberg-Richter law for earthquakes and the scale-free model for a brain functional network, a two-dimensional sandpile, and solar flare intensity to suffer an excessive loss of information. They resemble more the crumpled-together ball at low compactions in that there appear to be two driving mechanisms that take turns occurring. PMID:26764792
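
    The AIC bookkeeping behind such comparisons is simple to reproduce: fit each candidate model to the same data by maximum likelihood and compare AIC = 2k - 2 ln L. The sketch below compares a single power law against an exponential on synthetic Pareto data; the paper's own comparison (one power law versus a two-term power law) uses the same machinery but different candidate densities.

```python
# AIC comparison of two candidate tail models fitted by maximum likelihood.
import numpy as np

rng = np.random.default_rng(7)
xmin = 1.0
data = xmin * (1.0 - rng.random(5000)) ** (-1.0 / 1.5)     # Pareto sample, alpha = 2.5

def aic_power_law(x, xmin):
    n = len(x)
    alpha = 1 + n / np.sum(np.log(x / xmin))               # MLE exponent
    loglik = n * np.log((alpha - 1) / xmin) - alpha * np.sum(np.log(x / xmin))
    return 2 * 1 - 2 * loglik                              # one free parameter

def aic_shifted_exponential(x, xmin):
    n = len(x)
    lam = 1.0 / np.mean(x - xmin)                          # MLE rate
    loglik = n * np.log(lam) - lam * np.sum(x - xmin)
    return 2 * 1 - 2 * loglik

print("AIC, power law  :", round(aic_power_law(data, xmin), 1))
print("AIC, exponential:", round(aic_shifted_exponential(data, xmin), 1))
# The lower AIC wins; the AIC gap quantifies the extra information lost by
# forcing the poorer functional form onto the data.
```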

  16. Power-law ansatz in complex systems: Excessive loss of information

    NASA Astrophysics Data System (ADS)

    Tsai, Sun-Ting; Chang, Chin-De; Chang, Ching-Hao; Tsai, Meng-Xue; Hsu, Nan-Jung; Hong, Tzay-Ming

    2015-12-01

    The ubiquity of power-law relations in empirical data displays physicists' love of simple laws and uncovering common causes among seemingly unrelated phenomena. However, many reported power laws lack statistical support and mechanistic backings, not to mention discrepancies with real data are often explained away as corrections due to finite size or other variables. We propose a simple experiment and rigorous statistical procedures to look into these issues. Making use of the fact that the occurrence rate and pulse intensity of crumple sound obey a power law with an exponent that varies with material, we simulate a complex system with two driving mechanisms by crumpling two different sheets together. The probability function of the crumple sound is found to transit from two power-law terms to a bona fide power law as compaction increases. In addition to showing the vicinity of these two distributions in the phase space, this observation nicely demonstrates the effect of interactions to bring about a subtle change in macroscopic behavior and more information may be retrieved if the data are subject to sorting. Our analyses are based on the Akaike information criterion that is a direct measurement of information loss and emphasizes the need to strike a balance between model simplicity and goodness of fit. As a show of force, the Akaike information criterion also found the Gutenberg-Richter law for earthquakes and the scale-free model for a brain functional network, a two-dimensional sandpile, and solar flare intensity to suffer an excessive loss of information. They resemble more the crumpled-together ball at low compactions in that there appear to be two driving mechanisms that take turns occurring.

  17. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
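
    Since the tool targets Dijkstra's algorithm specifically, a compact reference implementation of the algorithm itself may help fix what the visualization is meant to convey; the example graph is an arbitrary illustration.

```python
# Dijkstra's shortest-path algorithm with a binary heap priority queue.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 5)], "B": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))   # {'A': 0, 'C': 1, 'B': 3, 'D': 4}
```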

  18. Use of multitemporal information to improve classification performance of TM scenes in complex terrain

    NASA Astrophysics Data System (ADS)

    Conese, Claudio; Maselli, Fabio

    The discrimination of land cover types by means of satellite remotely sensed data is a very challenging task in extremely complex and heterogeneous environments where the surfaces are hardly spectrally identifiable. In these cases the use of multitemporal acquisitions can be expected to substantially enhance classification performance with respect to single scenes, when inserted in procedures which exploit all the information available. The present work discusses this hypothesis and employs three TM scenes of gently undulating terrain in Tuscany (central Italy) from different seasons of one year (February, May and August). The three phenological stages of the vegetated surfaces provided additional statistical information with respect to single scenes. Classification was tested with Gaussian maximum likelihood classifiers, both separately on each of the three TM passes and, suitably adapted, on the whole multitemporal set. An iterative process using probabilities estimated from the error matrices of previous single-image classifications was also tested. Results of the tests show that multitemporal information greatly improves classification performance, particularly when using the statistical procedure described.
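
    The core of the multitemporal procedure, stacking the dates into one feature vector and applying a Gaussian maximum-likelihood rule, can be sketched as follows; the class statistics and pixels are synthetic assumptions, and the iterative prior-updating step described above is omitted for brevity.

```python
# Gaussian maximum-likelihood classification on stacked multitemporal bands.
import numpy as np

def train(samples_by_class):
    """Per-class mean vector and covariance matrix from training pixels."""
    return {c: (X.mean(axis=0), np.cov(X, rowvar=False)) for c, X in samples_by_class.items()}

def classify(pixels, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mu, cov in stats.values():
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = pixels - mu
        scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", diff, inv, diff)))
    labels = list(stats)
    return [labels[i] for i in np.argmax(np.vstack(scores), axis=0)]

rng = np.random.default_rng(0)
n_bands = 6 * 3                                   # six TM bands x three acquisition dates
training = {"forest": rng.normal(40, 5, (100, n_bands)),
            "crops": rng.normal(70, 8, (100, n_bands))}
stats = train(training)
test = np.vstack([rng.normal(40, 5, (5, n_bands)), rng.normal(70, 8, (5, n_bands))])
print(classify(test, stats))                      # expect 5 'forest' then 5 'crops'
```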

  19. Linguistic complexity and information structure in Korean: Evidence from eye-tracking during reading

    PubMed Central

    Lee, Yoonhyoung; Lee, Hanjung; Gordon, Peter C.

    2006-01-01

    The nature of the memory processes that support language comprehension and the manner in which information packaging influences online sentence processing were investigated in three experiments that used eye-tracking during reading to measure the ease of understanding complex sentences in Korean. All three experiments examined reading of embedded complement sentences; the third experiment additionally examined reading of sentences with object-modifying, object-extracted relative clauses. In Korean, both of these structures place two NPs with nominative case marking early in the sentence, with the embedded and matrix verbs following later. The type (pronoun, name or description) of these two critical NPs was varied in the experiments. When the initial NPs were of the same type, comprehension was slowed after participants had read the sentence-final verbs, a finding that supports the view that working memory in language comprehension is constrained by similarity-based interference during the retrieval of information necessary to determine the syntactic or semantic relations between noun phrases and verb phrases. Ease of comprehension was also influenced by the association between type of NP and syntactic position, with the best performance being observed when more definite NPs (pronouns and names) were in a prominent syntactic position (e.g., matrix subject) and less definite NPs (descriptions) were in a non-prominent syntactic position (embedded subject). This pattern provides evidence that the interpretation of sentences is facilitated by consistent packaging of information in different linguistic elements. PMID:16970936

  20. Exploring the velocity distribution of debris flows: An iteration algorithm based approach for complex cross-sections

    NASA Astrophysics Data System (ADS)

    Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong

    2015-07-01

    The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation to impact force, run up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution, and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. The presented approach uses an iteration algorithm based on the Riemann integral method to search an approximate solution to the unknown flow surface. The established laws for vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels typically with irregular beds and superelevations can be taken into account, and the resulting approximation by the approach well replicates the direct integral solution. The approach is programmed in MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the solutions of the flow surface and the mean velocity well reproduce the investigated results. Discussion regarding the model sensitivity and the source of errors concludes the paper.
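
    The Riemann-sum idea can be sketched on a toy irregular cross-section: given bed elevations, a (possibly superelevated) flow surface, and a vertical velocity profile, strip-wise sums give wetted area, discharge, and mean velocity. The geometry and profile constants below are illustrative assumptions, not the paper's calibrated values or its iterative surface search.

```python
# Riemann-sum integration of a power-law velocity profile over an irregular
# cross-section with a tilted (superelevated) flow surface.
import numpy as np

y = np.linspace(0.0, 20.0, 201)                       # horizontal stations (m)
bed = 2.0 - 1.8 * np.exp(-((y - 9.0) / 5.0) ** 2)     # irregular channel bed (m)
surface = 1.2 + 0.02 * y                              # tilted flow surface (m)

def depth_averaged_velocity(h, a=2.0, m=1.5):
    """Depth average of u(z) = a * z**(1/m), which equals a * h**(1/m) * m / (m + 1)."""
    return a * h ** (1.0 / m) * m / (m + 1.0)

h = np.clip(surface - bed, 0.0, None)                 # local depth, zero where dry
dy = y[1] - y[0]
u = depth_averaged_velocity(h)
area = np.sum(h) * dy                                 # Riemann sum of depth strips
discharge = np.sum(u * h) * dy                        # Riemann sum of u*h strips
print(f"wetted area {area:.1f} m^2, discharge {discharge:.1f} m^3/s, "
      f"mean velocity {discharge / area:.2f} m/s")
```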

  1. Algorithms for GPU-based molecular dynamics simulations of complex fluids: Applications to water, mixtures, and liquid crystals.

    PubMed

    Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M

    2015-09-15

    A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup. PMID:26174435
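
    As a CPU-side counterpart to the GPU kernels described, the sketch below evaluates Lennard-Jones energy with a simple cutoff-plus-skin neighbour list; the configuration, reduced units, and lack of periodic boundaries are simplifying assumptions, not the paper's implementation.

```python
# Lennard-Jones energy with a naive O(N^2) neighbour list (reduced units).
import numpy as np

def build_neighbor_list(pos, r_list):
    """All pairs (i, j), i < j, closer than the list radius r_list."""
    pairs = []
    for i in range(len(pos)):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        pairs.extend((i, i + 1 + int(j)) for j in np.nonzero(d < r_list)[0])
    return pairs

def lj_energy(pos, pairs, eps=1.0, sigma=1.0, r_cut=2.5):
    e = 0.0
    for i, j in pairs:
        r = np.linalg.norm(pos[i] - pos[j])
        if r < r_cut:
            sr6 = (sigma / r) ** 6
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e

rng = np.random.default_rng(0)
grid = np.arange(6) * 1.5
pos = np.array([(x, y, z) for x in grid for y in grid for z in grid], dtype=float)
pos += rng.normal(0.0, 0.05, pos.shape)                  # small thermal jitter
pairs = build_neighbor_list(pos, r_list=3.0)             # skin beyond the 2.5 cutoff
print(f"{len(pairs)} neighbour pairs, U = {lj_energy(pos, pairs):.2f}")
```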

  2. Using complex networks towards information retrieval and diagnostics in multidimensional imaging

    PubMed Central

    Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen

    2015-01-01

    We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted, exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations, act as effective discriminators and diagnostic markers. PMID:26626047
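
    One standard way to map a time series onto a network, in the spirit of the approach above though not necessarily the authors' exact construction, is the natural visibility graph: samples become nodes and two samples are linked if the straight line between them passes above every intermediate sample.

```python
# Natural visibility graph of a time series (nodes = samples, links = mutual visibility).
import numpy as np

def visibility_graph(series):
    x = np.asarray(series, dtype=float)
    n = len(x)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            # visible if every intermediate sample lies below the i-j sight line
            if np.all(x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(0)
t = np.arange(300)
series = np.sin(0.2 * t) + 0.3 * rng.normal(size=t.size)     # toy fluctuation signal
edges = visibility_graph(series)
degree = np.bincount(np.array(edges).ravel(), minlength=t.size)
print("edges:", len(edges), " mean degree:", round(float(degree.mean()), 2))
```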

  3. Using complex networks towards information retrieval and diagnostics in multidimensional imaging

    NASA Astrophysics Data System (ADS)

    Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen

    2015-12-01

    We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks to multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations act as effective discriminators and diagnostic markers.

  4. Examining age differences in performance of a complex information search and retrieval task.

    PubMed

    Czaja, S J; Sharit, J; Ownby, R; Roth, D L; Nair, S

    2001-12-01

    This study examined age differences in performance of a complex information search and retrieval task by using a simulated real-world task typical of those performed by customer service representatives. The study also investigated the influence of task experience and the relationships between cognitive abilities and task performance. One hundred seventeen participants from 3 age groups, younger (20-39 years), middle-aged (40-59 years), and older (60-75 years), performed the task for 3 days. Significant age differences were found for all measures of task performance with the exception of navigational efficiency and number of problems correctly navigated per attempt. There were also effects of task experience. The findings also indicated significant direct and indirect relations between component cognitive abilities and task performance. PMID:11766912

  5. Synchronization, TIGoRS, and Information Flow in Complex Systems: Dispositional Cellular Automata.

    PubMed

    Sulis, William H

    2016-04-01

    Synchronization has a long history in physics where it refers to the phase matching of two identical oscillators. This notion has been extensively studied in physics as well as in biology, where it has been applied to such widely varying phenomena as the flashing of fireflies and firing of neurons in the brain. Human behavior, however, may be recurrent but it is not oscillatory even though many physiological systems do exhibit oscillatory tendencies. Moreover, much of human behaviour is collaborative and cooperative, where the individual behaviours may be distinct yet contemporaneous (if not simultaneous) and taken collectively express some functionality. In the context of behaviour, the important aspect is the repeated co-occurrence in time of behaviours that facilitate the propagation of information or of functionality, regardless of whether or not these behaviours are similar or identical. An example of this weaker notion of synchronization is transient induced global response synchronization (TIGoRS). Previous work has shown that TIGoRS is a ubiquitous phenomenon among complex systems, enabling them to stably parse environmental transients into salient units to which they stably respond. This leads to the notion of Sulis machines, which emergently generate a primitive linguistic structure through their dynamics. This article reviews the notion of TIGoRS and its expression in several complex systems models including tempered neural networks, driven cellular automata and cocktail party automata. The emergent linguistics of Sulis machines are discussed. A new class of complex systems model, the dispositional cellular automaton is introduced. A new metric for TIGoRS, the excess synchronization, is introduced and applied to the study of TIGoRS in dispositional cellular automata. It is shown that these automata exhibit a nonlinear synchronization response to certain perturbing transients. PMID:27033136

  6. A New Socio-technical Model for Studying Health Information Technology in Complex Adaptive Healthcare Systems

    PubMed Central

    Sittig, Dean F.; Singh, Hardeep

    2011-01-01

    Conceptual models have been developed to address challenges inherent in studying health information technology (HIT). This manuscript introduces an 8-dimensional model specifically designed to address the socio-technical challenges involved in design, development, implementation, use, and evaluation of HIT within complex adaptive healthcare systems. The 8 dimensions are not independent, sequential, or hierarchical, but rather are interdependent and interrelated concepts similar to compositions of other complex adaptive systems. Hardware and software computing infrastructure refers to equipment and software used to power, support, and operate clinical applications and devices. Clinical content refers to textual or numeric data and images that constitute the “language” of clinical applications. The human computer interface includes all aspects of the computer that users can see, touch, or hear as they interact with it. People refers to everyone who interacts in some way with the system, from developer to end-user, including potential patient-users. Workflow and communication are the processes or steps involved in assuring that patient care tasks are carried out effectively. Two additional dimensions of the model are internal organizational features (e.g., policies, procedures, and culture) and external rules and regulations, both of which may facilitate or constrain many aspects of the preceding dimensions. The final dimension is measurement and monitoring, which refers to the process of measuring and evaluating both intended and unintended consequences of HIT implementation and use. We illustrate how our model has been successfully applied in real-world complex adaptive settings to understand and improve HIT applications at various stages of development and implementation. PMID:20959322

  7. Electron transfer dissociation provides higher-order structural information of native and partially unfolded protein complexes.

    PubMed

    Lermyte, Frederik; Sobott, Frank

    2015-08-01

    Top-down sequencing approaches are becoming ever more popular for protein characterization, due to the ability to distinguish and characterize different protein isoforms. Under non-denaturing conditions, electron transfer dissociation (ETD) can furthermore provide important information on the exposed surface of proteins or complexes, thereby contributing to the characterization of their higher-order structure. Here, we investigate this approach using top-down ETD of tetrameric hemoglobin, concanavalin A, and alcohol dehydrogenase combined with ion mobility (IM) on a commercially available quadrupole/ion mobility/time-of-flight instrument (Waters Synapt G2). By applying supplemental activation in the transfer cell (post-IM), we release ETD fragments and attain good sequence coverage in the exposed terminal regions of the protein. We investigate the correlation between observed sites of fragmentation with regions of solvent accessibility, as derived from the crystal structure. Ion acceleration prior to ETD is also used to cause collision-induced unfolding (CIU) of the complexes without monomer ejection, as evidenced by the IM profiles. These partially unfolded tetramers show efficient fragmentation in some regions which are not sequenced under more gentle MS conditions. We show that by increasing CIU in small increments and monitoring the changes in the fragmentation pattern, it is possible to follow the initial steps of gas-phase protein unfolding. Fragments from partially unfolded protein complexes are released immediately after electron transfer, prior to IM (they do not share the drift time of their precursor), and observed without the need for supplemental activation. This is further evidence that the higher-order structure in these protein regions has been disrupted. PMID:26081219

  8. Community detection based on modularity and an improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao

    2013-03-01

    Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which can simplify the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function, which can simplify the algorithm, and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA uses simulated annealing as the local search method, which can improve the local search ability by adjusting the parameters. Compared with state-of-the-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.
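
    A minimal sketch of the objective MIGA maximizes, assuming the standard Newman-Girvan definition of modularity Q for a hard partition; the genetic operators and the simulated-annealing local search are omitted, and the two-clique test graph is illustrative.

```python
import numpy as np

def modularity(adj, communities):
    """Newman-Girvan modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    adj = np.asarray(adj, float)
    k = adj.sum(axis=1)                         # node degrees
    two_m = k.sum()                             # 2m = sum of degrees
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]             # delta(c_i, c_j)
    return float((adj - np.outer(k, k) / two_m)[same].sum() / two_m)

# two 4-node cliques joined by a single edge; the "true" split should score well
adj = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1
adj[3, 4] = adj[4, 3] = 1
print("good split Q =", round(modularity(adj, [0, 0, 0, 0, 1, 1, 1, 1]), 3))
print("bad split Q  =", round(modularity(adj, [0, 1, 0, 1, 0, 1, 0, 1]), 3))
```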

  9. Selection of pairings reaching evenly across the data (SPREAD): A simple algorithm to design maximally informative fully crossed mating experiments.

    PubMed

    Zimmerman, K; Levitis, D; Addicott, E; Pringle, A

    2016-02-01

    We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets. PMID:26419337
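
    A small sketch of the scoring idea: candidate crossing-sets are scored by the mean nearest-neighbour distance of their pairwise crosses in a two-dimensional (genetic distance, geographic distance) trait space, and the best of many random candidates is kept. The random search and the synthetic distance matrices are stand-ins, not the authors' exact procedure.

```python
import itertools
import numpy as np

def mean_nearest_neighbor(points):
    """Mean distance from each point to its nearest neighbour (higher = more even spread)."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(d.min(axis=1).mean())

def spread_select(genetic, geographic, k, n_candidates=2000, seed=0):
    """Pick k individuals whose pairwise crosses spread evenly over
    (genetic distance, geographic distance) space."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        idx = rng.choice(genetic.shape[0], size=k, replace=False)
        crosses = [(genetic[i, j], geographic[i, j])
                   for i, j in itertools.combinations(idx, 2)]
        score = mean_nearest_neighbor(crosses)
        if score > best_score:
            best, best_score = idx, score
    return best, best_score

# toy symmetric distance matrices for 20 strains
rng = np.random.default_rng(42)
n = 20
gen = rng.random((n, n)); gen = (gen + gen.T) / 2; np.fill_diagonal(gen, 0)
geo = rng.random((n, n)); geo = (geo + geo.T) / 2; np.fill_diagonal(geo, 0)
chosen, score = spread_select(gen, geo, k=6)
print("crossing-set:", sorted(int(i) for i in chosen), "| mean NN distance:", round(score, 3))
```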

  10. Development of a generalized algorithm of satellite remote sensing using multi-wavelength and multi-pixel information (MWP method) for aerosol properties by satellite-borne imager

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.

    2014-12-01

    We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (MWP method). In this algorithm, the inversion method is a combination of the maximum a posteriori (MAP) method (Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, with the progress of computing techniques, this method has been combined with direct radiative transfer calculation, numerically solved at each iteration step of the non-linear inverse problem, without using a LUT (Look Up Table) with several constraints. Retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine and coarse mode particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The results showed that the AOTs of fine and coarse modes, the soot fraction, and the ground surface albedo are successfully retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied the algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several urban sites indicated that AOTs retrieved by our method agree with surface-observed AOTs within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
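
    A linearized sketch of the MAP plus Phillips-Twomey combination, assuming a Rodgers-style cost function with an added second-difference smoothness term; the forward kernel, covariances, and regularization weight are illustrative, and the actual algorithm repeats such a step with on-line radiative transfer instead of a fixed linear K.

```python
import numpy as np

def map_phillips_twomey_step(K, y, x_a, S_e, S_a, gamma):
    """One linear(ized) retrieval step minimizing
        (y - Kx)^T Se^-1 (y - Kx) + (x - xa)^T Sa^-1 (x - xa) + gamma * x^T H x,
    where H = D^T D and D is the second-difference (Phillips-Twomey) operator."""
    n = K.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
    H = D.T @ D
    Se_inv, Sa_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
    A = K.T @ Se_inv @ K + Sa_inv + gamma * H
    b = K.T @ Se_inv @ y + Sa_inv @ x_a
    return np.linalg.solve(A, b)

# toy problem: a smooth state observed through a smoothing kernel with noise
rng = np.random.default_rng(0)
n = 40
x_true = np.exp(-0.5 * ((np.arange(n) - 20) / 5.0) ** 2)
K = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
y = K @ x_true + 0.01 * rng.standard_normal(n)
x_hat = map_phillips_twomey_step(K, y, np.full(n, 0.5), 1e-4 * np.eye(n), 4.0 * np.eye(n), gamma=1.0)
print("retrieval RMS error:", round(float(np.sqrt(np.mean((x_hat - x_true) ** 2))), 4))
```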

  11. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
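
    A toy sketch of the fusion step, not the paper's exact filter: a scalar random-walk Kalman filter combines per-frame F0 estimates from several trackers, with each tracker's measurement variance adapted from its recent disagreement with the fused track as a stand-in for the quality and performance measures; the process noise, initial variances, and synthetic vowel data are assumptions.

```python
import numpy as np

def fuse_f0(estimates, q=4.0, eps=1e-6):
    """Fuse per-frame F0 estimates (Hz) from several algorithms with a scalar
    Kalman filter using a random-walk state model and sequential scalar updates."""
    estimates = np.asarray(estimates, float)        # shape: (n_algorithms, n_frames)
    n_alg, n_frames = estimates.shape
    x, p = estimates[:, 0].mean(), 100.0            # state mean and variance
    r = np.full(n_alg, 25.0)                        # per-algorithm measurement variance
    fused = np.empty(n_frames)
    for t in range(n_frames):
        p += q                                      # predict: random-walk process noise
        for a in range(n_alg):
            innov = estimates[a, t] - x
            r[a] = 0.95 * r[a] + 0.05 * innov ** 2  # adapt noise to recent disagreement
            k = p / (p + r[a] + eps)                # Kalman gain
            x += k * innov
            p *= 1.0 - k
        fused[t] = x
    return fused

# toy data: true F0 glides 120 -> 140 Hz; three trackers with different noise and bias
rng = np.random.default_rng(3)
true_f0 = np.linspace(120, 140, 200)
ests = np.vstack([true_f0 + rng.normal(0, 1.0, 200),
                  true_f0 + rng.normal(0, 3.0, 200),
                  true_f0 + 5.0 + rng.normal(0, 8.0, 200)])
fused = fuse_f0(ests)
rmse = lambda e: round(float(np.sqrt(np.mean((e - true_f0) ** 2))), 2)
print("per-algorithm RMSE:", [rmse(e) for e in ests], "| fused RMSE:", rmse(fused))
```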

  12. Robust fundamental frequency estimation in sustained vowels: detailed algorithmic comparisons and information fusion with adaptive Kalman filtering.

    PubMed

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A; Fox, Cynthia; Ramig, Lorraine O; Clifford, Gari D

    2014-05-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F(0)) of speech signals. This study examines ten F(0) estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F(0) in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F(0) estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F(0) estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F(0) estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F(0) estimation is required. PMID:24815269

  13. Further understanding of complex information processing in verbal adolescents and adults with autism spectrum disorders.

    PubMed

    Williams, Diane L; Minshew, Nancy J; Goldstein, Gerald

    2015-10-01

    More than 20 years ago, Minshew and colleagues proposed the Complex Information Processing model of autism in which the impairment is characterized as a generalized deficit involving multiple modalities and cognitive domains that depend on distributed cortical systems responsible for higher order abilities. Subsequent behavioral work revealed a related dissociation between concept formation and concept identification in autism suggesting the lack of an underlying organizational structure to manage increases in processing loads. The results of a recent study supported the impact of this relative weakness in conceptual reasoning on adaptive functioning in children and adults with autism. In this study, we provide further evidence of the difficulty relatively able older adolescents and adults with autism have with conceptual reasoning and provide evidence that this characterizes their difference from age- and ability-matched controls with typical development better than their differences in language. For verbal adults with autism, language may serve as a bootstrap or compensatory mechanism for learning but cannot overcome an inherent weakness in concept formation that makes information processing challenging as task demands increase. PMID:26019307

  14. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  15. Shear banding and complex non-linear dynamics in microscopically `informed' models for wormlike micelles (and other complex fluids)

    NASA Astrophysics Data System (ADS)

    Olmsted, Peter

    2004-03-01

    "Shear banding", i.e. flow-induced macroscopic "phase coexistence" or apparent "phase transitions", has been observed in many complex fluids, including wormlike micelles, lamellar systems, associating polymers, and liquid crystals. In this talk I will review this behavior, and discuss a general phenomenology for understanding shear banding and flow-induced phase separation in complex fluids, at a "thermodynamic" level (as opposed to a "statistical mechanics" level). An accurate theory must include the relevant microstructural order parameters, and construct the fully coupled spatially-dependent hydrodynamic equations of motion. Although this has been successfully done for very few model fluids, we can nonetheless obtain general rules for the "phase behavior". Perhaps surprisingly, the interface between coexisting phases plays a crucial role in determining the steady state behavior, and is much more important than its equilibrium counterpart. I will discuss recent work addressed at the kinetics and morphology of wormlike micellar solutions, and touch on models for more complex oscillatory and possibly chaotic systems.

  16. Network complexity as a measure of information processing across resting-state networks: evidence from the Human Connectome Project

    PubMed Central

    McDonough, Ian M.; Nashiro, Kaoru

    2014-01-01

    An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent to which neural complexity in the BOLD signal, as measured by multiscale entropy, (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity—a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited different patterns of complexity from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, and left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
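
    A compact sketch of multiscale entropy as commonly defined: coarse-grain the signal at successive scales and compute sample entropy at each scale. The embedding dimension, tolerance, and the white/red-noise surrogates are illustrative, not the study's analysis pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points (within tolerance r, Chebyshev distance) also match for m+1."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def match_count(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(t)) / 2           # self-matches removed
    b, a = match_count(m), match_count(m + 1)
    return np.inf if a == 0 or b == 0 else float(-np.log(a / b))

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain the series at scales 1..max_scale and compute SampEn at each."""
    out = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n], float).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(0)
white = rng.standard_normal(1000)
red = np.cumsum(white)                                 # crude 1/f^2 surrogate for contrast
print("white noise MSE curve:", [round(v, 2) for v in multiscale_entropy(white)])
print("red noise MSE curve:  ", [round(v, 2) for v in multiscale_entropy(red)])
```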

  17. When complex is easy on the mind: Internal repetition of visual information in complex objects is a source of perceptual fluency.

    PubMed

    Joye, Yannick; Steg, Linda; Ünal, Ayça Berfu; Pals, Roos

    2016-01-01

    Across 3 studies, we investigated whether visual complexity deriving from internally repeating visual information over many scale levels is a source of perceptual fluency. Such continuous repetition of visual information is formalized in fractal geometry and is a key-property of natural structures. In the first 2 studies, we exposed participants to 3-dimensional high-fractal versus low-fractal stimuli, respectively characterized by a relatively high versus low degree of internal repetition of visual information. Participants evaluated high-fractal stimuli as more complex and fascinating than their low-fractal counterparts. We assessed ease of processing by asking participants to solve effortful puzzles during and after exposure to high-fractal versus low-fractal stimuli. Across both studies, we found that puzzles presented during and after seeing high-fractal stimuli were perceived as the easiest ones to solve and were solved more accurately and faster than puzzles associated with the low-fractal stimuli. In Study 3, we ran the Dot Probe Procedure to rule out that the findings from Study 1 and Study 2 reflected differences in attentional bias between the high-fractal and low-fractal stimuli, rather than perceptual fluency. Overall, our findings confirm that complexity deriving from internal repetition of visual information can be easy on the mind. PMID:26322692

  18. Efficient algorithms for the simulation of non-adiabatic electron transfer in complex molecular systems: application to DNA.

    PubMed

    Kubař, Tomáš; Elstner, Marcus

    2013-04-28

    In this work, a fragment-orbital density functional theory-based method is combined with two different non-adiabatic schemes for the propagation of the electronic degrees of freedom. This allows us to perform unbiased simulations of electron transfer processes in complex media, and the computational scheme is applied to the transfer of a hole in solvated DNA. It turns out that the mean-field approach, where the wave function of the hole is driven into a superposition of adiabatic states, leads to over-delocalization of the hole charge. This problem is avoided using a surface hopping scheme, resulting in a smaller rate of hole transfer. The method is highly efficient due to the on-the-fly computation of the coarse-grained DFT Hamiltonian for the nucleobases, which is coupled to the environment using a QM/MM approach. The computational efficiency and partial parallel character of the methodology make it possible to simulate electron transfer in systems of relevant biochemical size on a nanosecond time scale. Since standard non-polarizable force fields are applied in the molecular-mechanics part of the calculation, a simple scaling scheme was introduced into the electrostatic potential in order to simulate the effect of electronic polarization. It is shown that electronic polarization has an important effect on the features of charge transfer. The methodology is applied to two kinds of DNA sequences, illustrating the features of transfer along a flat energy landscape as well as over an energy barrier. The performance and relative merit of the mean-field scheme and the surface hopping for this application are discussed. PMID:23493847

  19. AHIMSA - Ad hoc histogram information measure sensing algorithm for feature selection in the context of histogram inspired clustering techniques

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
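
    The sketch below is a crude stand-in for the hills-and-valleys idea, not the AHIMSA measure itself: each feature's one-dimensional histogram is scored by the depth of the valley between its two tallest hills, and the highest-scoring (most multimodal) features are retained. The bin count, scoring rule, and toy data are assumptions.

```python
import numpy as np

def hill_valley_score(values, bins=20):
    """Score a feature by the depth of the valley between the two tallest local
    maxima of its histogram; unimodal features score near zero."""
    counts, _ = np.histogram(values, bins=bins)
    hills = [i for i in range(1, bins - 1)
             if counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]]
    if len(hills) < 2:
        return 0.0
    i, j = sorted(sorted(hills, key=lambda h: counts[h])[-2:])   # two tallest hills
    valley = counts[i:j + 1].min()
    return float(min(counts[i], counts[j]) - valley) / counts.sum()

def select_features(X, keep=1):
    """Rank features by hill/valley score and keep the most 'structured' ones."""
    scores = [hill_valley_score(X[:, col]) for col in range(X.shape[1])]
    return np.argsort(scores)[::-1][:keep], scores

# toy data: feature 0 is bimodal (informative for clustering), features 1-2 are unimodal noise
rng = np.random.default_rng(7)
f0 = np.concatenate([rng.normal(-3, 0.5, 300), rng.normal(3, 0.5, 300)])
X = np.column_stack([f0, rng.normal(0, 1, 600), rng.normal(0, 1, 600)])
kept, scores = select_features(X)
print("scores:", [round(s, 3) for s in scores], "-> keep feature(s)", [int(k) for k in kept])
```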

  20. A qualitative analysis of information sharing for children with medical complexity within and across health care organizations

    PubMed Central

    2014-01-01

    Background: Children with medical complexity (CMC) are characterized by substantial family-identified service needs, chronic and severe conditions, functional limitations, and high health care use. Information exchange is critically important in high quality care of complex patients at high risk for poor care coordination. Written care plans for CMC are an excellent test case for how well information sharing is currently occurring. The purpose of this study was to identify the barriers to and facilitators of information sharing for CMC across providers, care settings, and families. Methods: A qualitative study design with data analysis informed by a grounded theory approach was utilized. Two independent coders conducted secondary analysis of interviews with parents of CMC and health care professionals involved in the care of CMC, collected from two studies of healthcare service delivery for this population. Additional interviews were conducted with privacy officers of associated organizations to supplement these data. Emerging themes related to barriers and facilitators to information sharing were identified by the two coders and the research team, and a theory of facilitators and barriers to information exchange evolved. Results: Barriers to information sharing were related to one of three major themes: 1) the lack of an integrated, accessible, secure platform on which summative health care information is stored, 2) fragmentation of the current health system, and 3) the lack of consistent policies, standards, and organizational priorities across organizations for information sharing. Facilitators of information sharing were related to improving accessibility to a common document, expanding the use of technology, and improving upon a structured communication plan. Conclusions: Findings informed a model of how various barriers to information sharing interact to prevent optimal information sharing both within and across organizations and how the use of technology to…

  1. Automatic remote sensing detection of the convective boundary layer structure over flat and complex terrain using the novel PathfinderTURB algorithm

    NASA Astrophysics Data System (ADS)

    Poltera, Yann; Martucci, Giovanni; Hervo, Maxime; Haefele, Alexander; Emmenegger, Lukas; Brunner, Dominik; Henne, Stephan

    2016-04-01

    We have developed, applied and validated a novel algorithm called PathfinderTURB for the automatic and real-time detection of the vertical structure of the planetary boundary layer. The algorithm has been applied to a year of data measured by the automatic LIDAR CHM15K at two sites in Switzerland: the rural site of Payerne (MeteoSwiss station, 491 m asl) and the alpine site of Kleine Scheidegg (KSE, 2061 m asl). PathfinderTURB is a gradient-based layer detection algorithm, which in addition makes use of the atmospheric variability to detect the turbulent transition zone that separates two low-turbulence regions: one characterized by homogeneous mixing (the convective layer) and one above characterized by free-tropospheric conditions. The PathfinderTURB retrieval of the vertical structure of the Local (5-10 km horizontal scale) Convective Boundary Layer (LCBL) has been validated at Payerne using two established reference methods. The first reference consists of four independent human-expert manual detections of the LCBL height over the year 2014. The second reference consists of the LCBL heights calculated using the bulk Richardson number method from co-located radio sounding data for the same year 2014. Based on the excellent agreement with the two reference methods at Payerne, we decided to apply PathfinderTURB to the complex-terrain conditions at KSE during 2014. The LCBL height retrievals are obtained by tilting the CHM15K at an angle of 19 degrees with respect to the horizontal, aiming directly at the Sphinx Observatory (3580 m asl) on the Jungfraujoch. This setup of the CHM15K, together with the PathfinderTURB processing of the data, makes it possible to disentangle the long-range transport from the local origin of gases and particles measured by the in-situ instrumentation at the Sphinx Observatory. The KSE measurements showed that the relation among the LCBL height, the aerosol layers above the LCBL top and the gas and particle concentrations is all but…
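
    A simplified sketch of gradient-based layer detection on ceilometer backscatter profiles, assuming the convective-boundary-layer top sits at the strongest negative vertical gradient of the time-averaged log-profile, with the temporal variance kept as a turbulence diagnostic. This is only the gradient-plus-variability idea, not PathfinderTURB itself.

```python
import numpy as np

def cbl_top(altitude, profiles, min_alt=200.0):
    """Estimate the convective-boundary-layer top from attenuated-backscatter
    profiles: strongest negative gradient of the mean log-profile above min_alt."""
    mean_prof = np.log(np.clip(profiles.mean(axis=0), 1e-12, None))
    grad = np.gradient(mean_prof, altitude)
    variance = profiles.var(axis=0)                 # temporal variability per range gate
    valid = altitude >= min_alt
    idx = int(np.argmin(np.where(valid, grad, np.inf)))
    return altitude[idx], variance[idx]

# synthetic profiles: well-mixed aerosol layer up to ~1500 m, clean air above
alt = np.arange(0.0, 4000.0, 30.0)
rng = np.random.default_rng(2)
profiles = np.array([
    1.0 / (1.0 + np.exp((alt - 1500 + rng.normal(0, 60)) / 80.0)) + 0.02
    + 0.01 * rng.standard_normal(alt.size)
    for _ in range(60)                              # 60 time steps
])
top, var_at_top = cbl_top(alt, profiles)
print(f"estimated CBL top ~ {top:.0f} m (temporal variance there: {var_at_top:.4f})")
```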

  2. Acquisition of Instructional Material Information as a Function of Manual Design and Material Complexity.

    ERIC Educational Resources Information Center

    Altman, Reuben; And Others

    The study, with 52 preservice special education teachers, focused on effects of two types of teacher manual design and two levels of material complexity on comprehension of instructional materials utilization. Two materials were selected from an instructional materials collection for less complex material and for more complex material,…

  3. Transfer of information in noise induced transport

    NASA Astrophysics Data System (ADS)

    Sanchez, J. R.; Arizmendi, C. M.

    1999-09-01

    Time correlated fluctuations interacting with a spatially asymmetric potential are sufficient conditions to give rise to transport of Brownian particles. The transfer of information coming from the nonequilibrium bath, viewed as a source of negentropy, gives rise to the correlated noise. The algorithmic complexity of an object provides a means of quantifying its information content. The Kolmogorov information entropy, or algorithmic complexity, is investigated in order to quantify the transfer of information that occurs in computational models showing noise induced transport. The complexity is measured in terms of the average number of bits per time unit necessary to specify the sequence generated by the system.
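
    One computable way to approximate the bits-per-time-unit measure described above is to upper-bound it by the compressed size of the generated symbol sequence; the sketch below compares a periodic drive, a biased sequence, and fair coin flips, with zlib standing in for an ideal universal code.

```python
import zlib
import numpy as np

def bits_per_symbol(sequence):
    """Upper-bound estimate of the average number of bits per time step needed to
    specify a symbol sequence, via lossless compression (a computable proxy for
    algorithmic complexity)."""
    data = bytes(bytearray(int(s) for s in sequence))
    return 8.0 * len(zlib.compress(data, 9)) / len(sequence)

rng = np.random.default_rng(0)
n = 20000
periodic = [i % 2 for i in range(n)]              # fully ordered sequence
biased = (rng.random(n) < 0.9).astype(int)        # mostly-forward jumps
random_seq = rng.integers(0, 2, n)                # unbiased coin flips
for name, seq in [("periodic", periodic), ("biased (p=0.9)", biased), ("random", random_seq)]:
    print(f"{name:>14}: ~{bits_per_symbol(seq):.3f} bits/step")
```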

  4. A generalized deconvolution algorithm for image reconstruction in positron emission tomography with time-of-flight information (TOFPET)

    SciTech Connect

    Chen, C.T.; Metz, C.E.

    1984-01-01

    Positron emission tomographic systems capable of time-of-flight measurements open new avenues for image reconstruction. Three algorithms have been proposed previously: the most-likely position method (MLP), the confidence weighting method (CW) and the estimated posterior-density weighting method (EPDW). While MLP suffers from poorer noise properties, both CW and EPDW require substantially more computer processing time. Mathematically, the TOFPET image data at any projection angle represents a 2D image blurred by different TOF and detector spatial resolutions in two perpendicular directions. The integration of TOFPET images over all angles produces a preprocessed 2D image which is the convolution of the true image and a rotationally symmetric point spread function (PSF). Hence the tomographic reconstruction problem for TOFPET can be viewed as nothing more than a 2D image processing task to compensate for a known PSF. A new algorithm based on a generalized iterative deconvolution method and its equivalent filters ("Metz filters") developed earlier for conventional nuclear medicine image processing is proposed for this purpose. The algorithm can be carried out in a single step by an equivalent filter in the frequency domain; therefore, much of the computation time necessary for CW and EPDW is avoided. Results from computer simulation studies show that this new approach provides excellent resolution enhancement at low frequencies, good noise suppression at high frequencies, a reduction of Gibbs' phenomenon due to sharp filter cutoff, and better quantitative measurements than other methods.
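
    A two-dimensional illustration of the frequency-domain step, assuming the common Metz-filter form M(f) = [1 - (1 - MTF(f)^2)^n] / H(f); the Gaussian PSF, phantom, noise level, and filter order are illustrative, not the TOFPET geometry or the authors' exact filter.

```python
import numpy as np

def metz_deconvolve(image, psf, power=8, eps=1e-8):
    """Single-step restoration with a Metz-type filter: close to inverse filtering
    where the MTF is high, rolling off to zero where the MTF is small."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    mtf2 = np.clip(np.abs(H) ** 2, 0.0, 1.0)
    M = (1.0 - (1.0 - mtf2) ** power) / (H + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * M))

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

# toy phantom blurred by a rotationally symmetric PSF (the integrated-TOFPET situation)
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[20:40, 25:35] = 1.0; phantom[45:50, 10:50] = 0.5
psf = gaussian_psf((64, 64), sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(phantom) * np.fft.fft2(np.fft.ifftshift(psf))))
blurred += 0.01 * rng.standard_normal(blurred.shape)
restored = metz_deconvolve(blurred, psf, power=8)
rmse = lambda a: round(float(np.sqrt(np.mean((a - phantom) ** 2))), 4)
print("RMSE blurred:", rmse(blurred), "| RMSE restored:", rmse(restored))
```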

  5. An explanatory model of peer education within a complex medicines information exchange setting.

    PubMed

    Klein, Linda A; Ritchie, Jan E; Nathan, Sally; Wutzke, Sonia

    2014-06-01

    Studies of the effectiveness and value of peer education abound, yet there is little theoretical understanding of what lay educators actually do to help their peers. Although different theories have been proposed to explain components of peer education, a more complete explanatory model has not been established empirically that encompasses the many aspects of peer education and how these may operate together. The Australian Seniors Quality Use of Medicines Peer Education Program was developed, in conjunction with community partners, to improve understanding and management of medicines among older people - an Australian and international priority. This research investigated how peer educators facilitated learning about quality use of medicines among older Australians. Participatory action research was undertaken with volunteer peer educators, using a multi-site case study design within eight geographically-defined locations. Qualitative data from 27 participatory meetings with peer educators included transcribed audio recordings and detailed observational and interpretive notes, which were analysed using a grounded theory approach. An explanatory model arising from the data grouped facilitation of peer learning into four broad mechanisms: using educator skills; offering a safe place to learn; pushing for change; and reflecting on self. Peer educators' life experience as older people who have taken medicines was identified as an overarching contributor to peer learning. As lay persons, peer educators understood the potential disempowerment felt when seeking medicines information from health professionals and so were able to provide unique learning experiences that encouraged others to be 'active partners' in their own medicines management. These timely findings are linked to existing education and behaviour change theories, but move beyond these by demonstrating how the different elements of what peer educators do fit together. In-depth examination of peer educators

  6. Case Study: Hidden Complexity of Medicines Use: Information Provided by a Person with Intellectual Disability and Diabetes to a Pharmacist

    ERIC Educational Resources Information Center

    Flood, Bernadette; Henman, Martin C.

    2015-01-01

    People with intellectual disabilities may be "invisible" to pharmacists. They are a complex group of patients many of whom have diabetes. Pharmacists may have little experience of the challenges faced by this high risk group of patients who may be prescribed high risk medications. This case report details information supplied by Pat, a…

  7. A Measure of Systems Engineering Effectiveness in Government Acquisition of Complex Information Systems: A Bayesian Belief Network-Based Approach

    ERIC Educational Resources Information Center

    Doskey, Steven Craig

    2014-01-01

    This research presents an innovative means of gauging Systems Engineering effectiveness through a Systems Engineering Relative Effectiveness Index (SE REI) model. The SE REI model uses a Bayesian Belief Network to map causal relationships in government acquisitions of Complex Information Systems (CIS), enabling practitioners to identify and…

  8. Inferring generalized time-dependent complex Ginzburg-Landau equations from modulus and gauge-field information

    SciTech Connect

    Yu, Rotha P.; Paganin, David M.; Morgan, Michael J.

    2008-04-01

    We develop a means to 'measure' the generalized 2+1-dimensional time-dependent complex Ginzburg-Landau equation, given both the wave-function modulus and gauge-field information over a series of five planes that are closely spaced in time. The methodology is tested using simulated data for a thin-film high-temperature superconductor in the Meissner state.

  9. Applying Observations from Technological Transformations in Complex Adaptive Systems to Inform Health Policy on Technology Adoption

    PubMed Central

    Phillips, Andrew B.; Merrill, Jacqueline

    2012-01-01

    Many complex markets such as banking and manufacturing have benefited significantly from technology adoption. Each of these complex markets experienced increased efficiency, quality, security, and customer involvement as a result of technology transformation in their industry. Healthcare has not benefited to the same extent. We provide initial findings from a policy analysis of complex markets and the features of these transformations that can influence health technology adoption and acceptance. PMID:24199112

  10. Applying observations from technological transformations in complex adaptive systems to inform health policy on technology adoption.

    PubMed

    Phillips, Andrew B; Merrill, Jacqueline

    2012-01-01

    Many complex markets such as banking and manufacturing have benefited significantly from technology adoption. Each of these complex markets experienced increased efficiency, quality, security, and customer involvement as a result of technology transformation in their industry. Healthcare has not benefited to the same extent. We provide initial findings from a policy analysis of complex markets and the features of these transformations that can influence health technology adoption and acceptance. PMID:24199112

  11. Cognitive Complexity and Theatrical Information Processing: Audience Responses to "The Homecoming" and "Private Lives."

    ERIC Educational Resources Information Center

    Gourd, William

    Confined to the interaction of complexity/simplicity of the stimulus play, this paper both focuses on the differing patterns of response between cognitively complex and cognitively simple persons to the characters in "The Homecoming" and "Private Lives" and attempts to determine the responses to specific characters or groups of characters. The…

  12. A Theory of Complex Adaptive Inquiring Organizations: Application to Continuous Assurance of Corporate Financial Information

    ERIC Educational Resources Information Center

    Kuhn, John R., Jr.

    2009-01-01

    Drawing upon the theories of complexity and complex adaptive systems and the Singerian Inquiring System from C. West Churchman's seminal work "The Design of Inquiring Systems" the dissertation herein develops a systems design theory for continuous auditing systems. The dissertation consists of discussion of the two foundational theories,…

  13. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
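
    A minimal sketch of the Strahler rule on a tree-shaped network (the GIS algorithm above additionally handles braided channels and multiple outlets): headwater segments get order 1, and a downstream segment takes the maximum incoming order, incremented by one when that maximum arrives from two or more tributaries.

```python
from collections import defaultdict

def strahler_orders(downstream):
    """Compute Strahler order for each segment of a tree-like stream network.
    `downstream` maps segment -> the segment it flows into (None at the outlet)."""
    upstream = defaultdict(list)
    for seg, dn in downstream.items():
        if dn is not None:
            upstream[dn].append(seg)

    order = {}
    def resolve(seg):
        if seg in order:
            return order[seg]
        ups = [resolve(u) for u in upstream[seg]]
        if not ups:
            order[seg] = 1                              # headwater segment
        else:
            top = max(ups)
            # two or more tributaries of the highest order raise the order by one
            order[seg] = top + 1 if ups.count(top) >= 2 else top
        return order[seg]

    for seg in downstream:
        resolve(seg)
    return order

# A,B,C,D are headwaters; E = A+B, F = C+D, G = E+F flows to the outlet
net = {"A": "E", "B": "E", "C": "F", "D": "F", "E": "G", "F": "G", "G": None}
print(strahler_orders(net))   # expect order 1 for A-D, 2 for E and F, 3 for G
```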

  14. International Students Using Online Information Resources to Learn: Complex Experience and Learning Needs

    ERIC Educational Resources Information Center

    Hughes, Hilary

    2013-01-01

    This paper reports the findings of a qualitative study that investigated 25 international students' use of online information resources for study purposes at two Australian universities. Using an expanded critical incident approach, the study viewed international students through an information literacy lens, as information-using learners.…

  15. The Applications of Genetic Algorithms in Medicine.

    PubMed

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-11-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, these algorithms are not well known to the physicians who may well benefit from applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  16. The Applications of Genetic Algorithms in Medicine

    PubMed Central

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-01-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, these algorithms are not well known to the physicians who may well benefit from applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  17. Toward an Improved Haptic Zooming Algorithm for Graphical Information Accessed by Individuals Who Are Blind and Visually Impaired

    ERIC Educational Resources Information Center

    Rastogi, Ravi; Pawluk, Dianne T. V.

    2013-01-01

    An increasing amount of information content used in school, work, and everyday living is presented in graphical form. Unfortunately, it is difficult for people who are blind or visually impaired to access this information, especially when many diagrams are needed. One problem is that details, even in relatively simple visual diagrams, can be very…

  18. [Fast segmentation algorithm of high resolution remote sensing image based on multiscale mean shift].

    PubMed

    Wang, Lei-Guang; Zheng, Chen; Lin, Li-Yu; Chen, Rong-Yuan; Mei, Tian-Can

    2011-01-01

    The Mean Shift algorithm is a robust approach to feature space analysis and has been used widely for natural scene image and medical image segmentation. However, the high computational complexity of the algorithm has constrained its application to remote sensing images, which carry massive amounts of information. A fast image segmentation algorithm is presented by extending the traditional mean shift method to the wavelet domain. In order to evaluate the effectiveness of the proposed algorithm, a multispectral remote sensing image and a synthetic image are utilized. The results show that the proposed algorithm can improve the speed by a factor of 5-7 compared with the traditional MS method while preserving segmentation quality. PMID:21428083
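
    A rough coarse-to-fine sketch of the speedup idea: mean-shift mode seeking is run on a downsampled image (a crude stand-in for the wavelet approximation band), and every full-resolution pixel is then labeled by its nearest intensity mode. The one-dimensional intensity feature space, bandwidth, and toy image are assumptions, not the paper's multiscale scheme.

```python
import numpy as np

def mean_shift_modes(values, bandwidth=0.08, iters=50, tol=1e-4):
    """Plain mean-shift mode seeking on 1-D feature values with a Gaussian kernel."""
    pts = values.copy()
    for _ in range(iters):
        shifted = np.empty_like(pts)
        for i, p in enumerate(pts):
            w = np.exp(-0.5 * ((values - p) / bandwidth) ** 2)
            shifted[i] = np.sum(w * values) / np.sum(w)
        done = np.max(np.abs(shifted - pts)) < tol
        pts = shifted
        if done:
            break
    modes = []
    for p in np.sort(pts):                         # merge converged points into modes
        if not modes or abs(p - modes[-1]) > bandwidth:
            modes.append(p)
    return np.array(modes)

def coarse_to_fine_segmentation(image, factor=4, bandwidth=0.08):
    """Find intensity modes on a downsampled image, then label all pixels."""
    h, w = image.shape
    coarse = image[:h - h % factor, :w - w % factor]
    coarse = coarse.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    modes = mean_shift_modes(coarse.ravel(), bandwidth)
    labels = np.argmin(np.abs(image[..., None] - modes[None, None, :]), axis=-1)
    return labels, modes

# toy "remote sensing" image with three intensity classes plus noise
rng = np.random.default_rng(5)
img = np.full((64, 64), 0.2)
img[:, 20:44] = 0.5
img[40:, :] = 0.8
img += 0.03 * rng.standard_normal(img.shape)
labels, modes = coarse_to_fine_segmentation(img)
print("modes found:", np.round(modes, 2), "| label counts:", np.bincount(labels.ravel()))
```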

  19. Accurate refinement of docked protein complexes using evolutionary information and deep learning.

    PubMed

    Akbal-Delibas, Bahar; Farhoodi, Roshanak; Pomplun, Marc; Haspel, Nurit

    2016-06-01

    One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein-protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex, with lower root-mean-square-deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes produced also by RosettaDock for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose the refinement candidates with lower RMSD values compared to the coarsely docked input structures. PMID:26846813
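
    A schematic of the regression setup, assuming a five-layer fully connected network trained to map scoring-function terms to RMSD; the feature count, layer widths, and synthetic training data are placeholders for the RosettaDock-derived features and complexes used in the paper (requires PyTorch).

```python
import torch
from torch import nn

# five fully connected layers: scoring-function terms -> predicted RMSD (Angstrom)
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

# synthetic stand-in for "scoring terms vs RMSD of docked poses"
torch.manual_seed(0)
X = torch.randn(4000, 20)
true_w = torch.randn(20, 1)
y = (X @ true_w).tanh() * 5.0 + 5.0 + 0.3 * torch.randn(4000, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):                       # full-batch training for brevity
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    mae = (model(X) - y).abs().mean().item()
print(f"training MAE ~ {mae:.2f} (the paper reports ~1.40 A average error on real complexes)")
```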

  20. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor, both for the quality (error) and for the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
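
    A minimal real-coded GA for unconstrained minimization, with the parameters a preprocessor would decide (population size, search-space bounds, crossover and mutation probabilities, and the fitness criterion) exposed as arguments; the operators and the Rosenbrock test function are illustrative choices, not the authors' specific GA.

```python
import numpy as np

def genetic_algorithm(fitness, bounds, pop_size=60, p_cross=0.9, p_mut=0.1,
                      generations=200, seed=0):
    """Minimal real-coded GA for unconstrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(generations):
        a, b = rng.integers(pop_size, size=(2, pop_size))          # tournament selection
        parents = pop[np.where(fit[a] < fit[b], a, b)]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):                        # arithmetic crossover
            if rng.random() < p_cross:
                w = rng.random()
                children[i] = w * parents[i] + (1 - w) * parents[i + 1]
                children[i + 1] = w * parents[i + 1] + (1 - w) * parents[i]
        mask = rng.random(children.shape) < p_mut                  # Gaussian mutation
        children = np.clip(children + mask * rng.normal(0, 0.1 * (hi - lo), children.shape), lo, hi)
        child_fit = np.apply_along_axis(fitness, 1, children)
        best, worst = np.argmin(fit), np.argmax(child_fit)         # elitist replacement
        children[worst], child_fit[worst] = pop[best], fit[best]
        pop, fit = children, child_fit
    best = np.argmin(fit)
    return pop[best], fit[best]

# Rosenbrock function in 2-D as a standard unconstrained test problem
rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
x_best, f_best = genetic_algorithm(rosenbrock, bounds=[(-2, 2), (-2, 2)])
print("best point:", np.round(x_best, 3), "| fitness:", round(float(f_best), 5))
```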