Methods of information theory and algorithmic complexity for network biology.
Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper
2016-03-01
We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdős-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
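The survey's two headline quantities can be sketched for small graphs in a few lines. Below is a minimal illustration (function names are ours, not the survey's): Shannon entropy of the degree distribution as a simple statistical measure, and the zlib-compressed size of the adjacency matrix as a crude upper-bound proxy for Kolmogorov complexity, which is itself uncomputable.

```python
import math
import zlib

def degree_entropy(adj):
    """Shannon entropy (bits) of a graph's degree distribution."""
    degrees = [sum(row) for row in adj]
    n = len(degrees)
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_size(adj):
    """Bytes of the zlib-compressed adjacency bitstring: a computable
    upper-bound stand-in for Kolmogorov complexity."""
    bits = "".join(str(b) for row in adj for b in row)
    return len(zlib.compress(bits.encode(), 9))

# A 4-cycle (regular: every degree is 2, so degree entropy is 0)
cycle = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
# A star (one hub of degree 3, three leaves of degree 1)
star = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]

print(degree_entropy(cycle))  # 0.0
print(degree_entropy(star))   # ~0.811 (entropy of the {1/4, 3/4} split)
```

Both measures are approximations; the survey's point is precisely that such computable quantities capture different, complementary aspects of graph structure.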
Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
Algorithms, complexity, and the sciences
Papadimitriou, Christos
2014-01-01
Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
Algorithmic Complexity. Volume II.
1982-06-01
[Extraction fragment: discusses electrical network analysis (citing Knuth on its applicability) and the efficiency of divide-and-conquer polynomial multiplication, in which products of two-coefficient polynomials are computed by repeated application of a three-multiplication scheme using only scalar multiplications; the analysis introduces M(n), the multiplication count.]
A novel complex valued cuckoo search algorithm.
Zhou, Yongquan; Zheng, Hongqing
2013-01-01
To expand the information carried by nest individuals, the idea of complex-valued encoding is used in cuckoo search (PCS): the gene of each individual is denoted by a complex number, so a diploid swarm is structured as a sequence of complex values. The value of each independent variable of the objective function is determined by the modulus of the gene, and its sign is determined by the angle. The position of a nest is thus divided into two parts, namely a real-part gene and an imaginary-part gene. The updating relation of the complex-valued swarm is presented. Six typical functions are tested and the results are compared with cuckoo search based on real-valued encoding; the usefulness of the proposed algorithm is verified.
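The core of the encoding can be sketched in a few lines: each gene is a complex number whose modulus fixes the magnitude of the decision variable and whose argument fixes its sign. This is a simplified reading of the scheme; the paper's exact scaling into the variable's range may differ.

```python
import math

def decode(gene):
    """Decode one complex-valued gene into a real decision variable:
    the modulus gives the magnitude, the sign of the angle gives the sign."""
    magnitude = abs(gene)                     # modulus of real + imaginary parts
    angle = math.atan2(gene.imag, gene.real)  # argument in (-pi, pi]
    return magnitude if angle >= 0 else -magnitude

print(decode(complex(3, 4)))   # 5.0  (positive angle -> positive value)
print(decode(complex(3, -4)))  # -5.0 (negative angle -> negative value)
```

The point of the diploid (real-part, imaginary-part) representation is that two stored numbers jointly determine one decision variable, enlarging the information each nest carries.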
Space complexity of estimation of distribution algorithms.
Gao, Yong; Culberson, Joseph
2005-01-01
In this paper, we investigate the space complexity of the Estimation of Distribution Algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs, the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of the factorized distribution algorithm and Bayesian network-based algorithms is exponential in the problem size even if the optimization problem has a very sparse interaction structure.
2003-04-01
[Extraction fragment: reference-list and table-of-contents residue, including Giancoli, Douglas C., General Physics, Prentice Hall, Englewood Cliffs, NJ; the surviving text concerns vulnerability metrics with physical analogs and the use of theoretical physics for information-flow analysis on networks and for extraction of patterns of typical network behavior.]
The Complex Information Process
NASA Astrophysics Data System (ADS)
Taborsky, Edwina
2000-09-01
This paper examines the semiosic development of energy to information within a dyadic reality that operates within the contradictions of both classical and quantum physics. These two realities are examined within the three Peircean modal categories of Firstness, Secondness and Thirdness. The paper concludes that our world cannot operate within either of the two physical realities but instead filiates the two to permit a semiosis or information-generation of complex systems.
Sequence comparisons via algorithmic mutual information
Milosavljevic, A.
1994-12-31
One of the main problems in DNA and protein sequence comparisons is to decide whether observed similarity of two sequences should be explained by their relatedness or by mere presence of some shared internal structure, e.g., shared internal tandem repeats. The standard methods that are based on statistics or classical information theory can be used to discover either internal structure or mutual sequence similarity, but cannot take into account both. Consequently, currently used methods for sequence comparison employ “masking” techniques that simply eliminate sequences that exhibit internal repetitive structure prior to sequence comparisons. The “masking” approach precludes discovery of homologous sequences of moderate or low complexity, which abound at both DNA and protein levels. As a solution to this problem, we propose a general method that is based on algorithmic information theory and minimal length encoding. We show that algorithmic mutual information factors out the sequence similarity that is due to shared internal structure and thus enables discovery of truly related sequences. We extend the recently developed algorithmic significance method to show that significance depends exponentially on algorithmic mutual information.
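Algorithmic mutual information is uncomputable, but a standard compression-based stand-in (not the authors' exact minimal-length-encoding method) conveys the idea: I(x:y) ≈ C(x) + C(y) − C(xy), with C approximated by compressed size. Two related sequences then score higher than two unrelated ones, even when each has internal repetitive structure:

```python
import random
import zlib

def C(s):
    """Compressed size in bytes: a computable upper bound on
    Kolmogorov complexity."""
    return len(zlib.compress(s.encode(), 9))

def mutual_information(x, y):
    """Compression-based approximation of algorithmic mutual information."""
    return C(x) + C(y) - C(x + y)

motif = "ACGTTGCA" * 40                       # sequence with internal tandem repeats
related = motif[:160] + "GGGG" + motif[160:]  # shares long stretches with motif
rng = random.Random(0)
unrelated = "".join(rng.choice("ACGT") for _ in range(len(motif)))

# The related pair shares far more information than the unrelated pair,
# even though 'motif' is itself highly compressible.
print(mutual_information(motif, related) > mutual_information(motif, unrelated))
```

This mirrors the paper's claim: shared internal structure is "factored out" because it is already paid for in C(x) and C(y) individually, so only genuine cross-sequence similarity contributes to the mutual term.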
Information communication on complex networks
NASA Astrophysics Data System (ADS)
Igarashi, Akito; Kawamoto, Hiroki; Maruyama, Takahiro; Morioka, Atsushi; Naganuma, Yuki
2013-02-01
Since communication networks such as the Internet, which is regarded as a complex network, have recently grown to a huge scale with a great deal of data passing through them, the improvement of packet routing strategies for transport is one of the most significant themes in the study of computer networks. It is especially important to find routing strategies which can bear as much traffic as possible without congestion in complex networks. First, using neural networks, we introduce a strategy for packet routing on complex networks, where path lengths and queue lengths in nodes are taken into account within a framework of statistical physics. Secondly, instead of using shortest paths, we propose efficient paths which avoid hubs, nodes with very high degree, on scale-free networks with a weight for each node. We improve the heuristic algorithm proposed by Danila et al., which step by step optimizes routing properties under congestion by using the information of betweenness, the probability of paths passing through a node in all optimal paths which are defined according to a rule, and mitigates the congestion. We confirm that the new heuristic algorithm balances traffic on networks by achieving minimization of the maximum betweenness in a much smaller number of iteration steps. Finally, we model virus spreading and data transfer on peer-to-peer (P2P) networks. Using mean-field approximation, we obtain an analytical formulation, emulate virus spreading on the network, and compare the results with those of simulation. Moreover, we investigate the mitigation of information traffic congestion in P2P networks.
The Complexity of Parallel Algorithms,
1985-11-01
Much of this work was done in collaboration with my advisor, Ernst Mayr. He was also supported in part by ONR contract N00014-85-C-0731. [Extraction fragment: discusses the algorithm of Helmbold and Mayr [HM2] for computing an optimal two-processor schedule, notes it as one of the promising developments in parallel algorithms, and cites Helmbold and Mayr [HM1] for the result that the problem can be solved by a fast parallel algorithm if the job times are small.]
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Predicting complex mineral structures using genetic algorithms.
Mohn, Chris E; Kob, Walter
2015-10-28
We show that symmetry-adapted genetic algorithms are capable of finding the ground state of a range of complex crystalline phases including layered- and incommensurate super-structures. This opens the way for the atomistic prediction of complex crystal structures of functional materials and mineral phases.
An Image Encryption Algorithm Based on Information Hiding
NASA Astrophysics Data System (ADS)
Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu
Aiming to resolve the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, following the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, which gives the scheme the characteristics of a “one-time pad” and markedly improves the security of the algorithm without a significant increase in its complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids negotiation for its transport and makes the application of the algorithm easier. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
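The "one-time pad" idea (a fresh random parameter per encryption, carried along with the ciphertext) can be sketched generically. This is not the paper's chaotic cipher or its steganographic embedding: here a SHA-256 counter-mode keystream stands in for the chaotic map, and the nonce is simply prepended as a stand-in for the information-hiding step.

```python
import hashlib
import os

def keystream(key, nonce, n):
    """n-byte keystream derived from key and per-message nonce
    (SHA-256 in counter mode; the paper derives it from a chaotic map)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext, nonce=None):
    # Fresh random parameter each call ensures a different keystream per encryption.
    nonce = nonce if nonce is not None else os.urandom(8)
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + cipher  # nonce travels with the ciphertext
                           # (the paper instead hides it inside the image)

def decrypt(key, blob):
    nonce, cipher = blob[:8], blob[8:]
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, nonce, len(cipher))))

msg = b"pixel data"
print(decrypt(b"secret-key", encrypt(b"secret-key", msg)) == msg)  # True
```

Because the receiver recovers the random parameter from the ciphertext itself, no per-message key negotiation is required, which is the transport advantage the abstract describes.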
Pinning impulsive control algorithms for complex network
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-15
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected ones. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
C. elegans locomotion analysis using algorithmic information theory.
Skandari, Roghieh; Le Bihan, Nicolas; Manton, Jonathan H
2015-01-01
This article investigates the use of algorithmic information theory to analyse C. elegans datasets. The ability of complexity measures to detect similarity in animals' behaviours is demonstrated and their strengths are compared to methods such as histograms. Introduced quantities are illustrated on a couple of real two-dimensional C. elegans datasets to investigate the thermotaxis and chemotaxis behaviours.
Advanced Algorithms for Local Routing Strategy on Complex Networks
Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong
2016-01-01
Despite the significant improvement in network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need to acquire global information of the network, which grows and changes rapidly with time. Local routing strategies, by contrast, need much less information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502
Accessing complexity from genome information
NASA Astrophysics Data System (ADS)
Tenreiro Machado, J. A.
2012-06-01
This paper studies the information content of the chromosomes of 24 species. In a first phase, a scheme inspired by dynamical system state space representation is developed. For each chromosome, the state space dynamical evolution is projected into a two-dimensional chart. The plots are then analyzed and characterized from the perspective of fractal dimension. This information is integrated into two measures of each species' complexity, addressing its average and variability. The results are in close accordance with phylogenetics, pointing to quantitative aspects of the species' genomic complexity.
Entropy, complexity, and spatial information
NASA Astrophysics Data System (ADS)
Batty, Michael; Morphet, Robin; Masucci, Paolo; Stanilov, Kiril
2014-10-01
We pose the central problem of defining a measure of complexity, specifically for spatial systems in general, city systems in particular. The measures we adopt are based on Shannon's (in Bell Syst Tech J 27:379-423, 623-656, 1948) definition of information. We introduce this measure and argue that increasing information is equivalent to increasing complexity, and we show that for spatial distributions, this involves a trade-off between the density of the distribution and the number of events that characterize it; as cities get bigger and are characterized by more events—more places or locations, information increases, all other things being equal. But sometimes the distribution changes at a faster rate than the number of events and thus information can decrease even if a city grows. We develop these ideas using various information measures. We first demonstrate their applicability to various distributions of population in London over the last 100 years, then to a wider region of London which is divided into bands of zones at increasing distances from the core, and finally to the evolution of the street system that characterizes the built-up area of London from 1786 to the present day. We conclude by arguing that we need to relate these measures to other measures of complexity, to choose a wider array of examples, and to extend the analysis to two-dimensional spatial systems.
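The basic trade-off the authors describe, information rising with more events but falling as the distribution concentrates, is already visible in the plain Shannon measure over spatial zones. The sketch below is generic, not the authors' specific measures:

```python
import math

def spatial_entropy(population):
    """Shannon information (bits) of a population distribution over zones."""
    total = sum(population)
    return -sum((p / total) * math.log2(p / total) for p in population if p > 0)

even = [25, 25, 25, 25]  # population spread evenly over four zones
peaked = [85, 5, 5, 5]   # same total population, concentrated in one zone

print(spatial_entropy(even))    # 2.0 bits: the maximum for four zones
print(spatial_entropy(peaked))  # ~0.85 bits: concentration lowers information
```

Adding zones (more "events") raises the attainable maximum, log2 of the number of zones, while a density shift toward a few zones lowers the realized value, which is exactly how information can decrease even as a city grows.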
A fast DFT algorithm using complex integer transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
Winograd's algorithm for computing the discrete Fourier transform is extended considerably for certain large transform lengths. This is accomplished by performing the cyclic convolution, required by Winograd's method, by a fast transform over certain complex integer fields. This algorithm requires fewer multiplications than either the standard fast Fourier transform or Winograd's more conventional algorithms.
Algorithms For Segmentation Of Complex-Amplitude SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Chellappa, Ramalingam
1993-01-01
Several algorithms implement an improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions, within each of which the backscattering characteristics are similar or homogeneous from place to place. The method provides an approximate, deterministic solution by two alternative algorithms that almost always converge to local minima: one, the Iterative Conditional Modes (ICM) algorithm, locally maximizes the posterior probability density of region labels; the other, the Maximum Posterior Marginal (MPM) algorithm, maximizes the posterior marginal density of region labels at each pixel location. The ICM algorithm optimizes reconstruction of the underlying scene. The MPM algorithm minimizes the expected number of misclassified pixels, which is possibly better suited to remote sensing of natural scenes.
Complexity of the Quantum Adiabatic Algorithm
NASA Technical Reports Server (NTRS)
Hen, Itay
2013-01-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.
A fast algorithm for functional mapping of complex traits.
Zhao, Wei; Wu, Rongling; Ma, Chang-Xing; Casella, George
2004-01-01
By integrating the underlying developmental mechanisms for the phenotypic formation of traits into a mapping framework, functional mapping has emerged as an important statistical approach for mapping complex traits. In this note, we explore the feasibility of using the simplex algorithm as an alternative to solve the mixture-based likelihood for functional mapping of complex traits. The results from the simplex algorithm are consistent with those from the traditional EM algorithm, but the simplex algorithm has considerably reduced computational times. Moreover, because of its nonderivative nature and easy implementation with current software, the simplex algorithm enjoys an advantage over the EM algorithm in the dynamic modeling and analysis of complex traits. PMID:15342547
An innovative thinking-based intelligent information fusion algorithm.
Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that can realize information fusion with reference to research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of the algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. The influences of each parameter of the algorithm on its performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information.
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
FOCUS: a deconvolution method based on algorithmic complexity
NASA Astrophysics Data System (ADS)
Delgado, C.
2006-07-01
A new method for improving the resolution of images is presented. It is based on Occam's razor principle implemented using algorithmic complexity arguments. The performance of the method is illustrated using artificial and real test data.
Algorithmic complexity of real financial markets
NASA Astrophysics Data System (ADS)
Mansilla, R.
2001-12-01
A new approach to the understanding of complex behavior of financial markets index using tools from thermodynamics and statistical physics is developed. Physical complexity, a quantity rooted in the Kolmogorov-Chaitin theory is applied to binary sequences built up from real time series of financial markets indexes. The study is based on NASDAQ and Mexican IPC data. Different behaviors of this quantity are shown when applied to the intervals of series placed before crashes and to intervals when no financial turbulence is observed. The connection between our results and the efficient market hypothesis is discussed.
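The construction, binarizing an index series and then measuring the randomness of the resulting bit string, can be approximated with a compressor in place of the uncomputable Kolmogorov-Chaitin quantity. Names and the toy data below are illustrative, not the paper's NASDAQ/IPC series:

```python
import random
import zlib

def binary_complexity(series):
    """Binarize a series (1 = up move, 0 = down/flat) and return the
    zlib-compressed size of the bit string, a computable proxy for the
    algorithmic complexity of the interval."""
    bits = "".join("1" if b > a else "0" for a, b in zip(series, series[1:]))
    return len(zlib.compress(bits.encode(), 9))

trend = list(range(200))  # monotone interval: all 1s, highly regular
rng = random.Random(42)
turbulent = [rng.random() for _ in range(200)]  # stand-in for a turbulent interval

# A regular interval compresses far better than an erratic one.
print(binary_complexity(trend) < binary_complexity(turbulent))
```

The paper's finding is of this flavor: intervals preceding crashes show systematically different complexity from calm intervals, a pattern that bears on the efficient market hypothesis.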
Efficient Learning Algorithms with Limited Information
ERIC Educational Resources Information Center
De, Anindya
2013-01-01
The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…
A robust fuzzy local information C-Means clustering algorithm.
Krinidis, Stelios; Chatzis, Vassilios
2010-05-01
This paper presents a variation of the fuzzy c-means (FCM) algorithm that provides image clustering. The proposed algorithm incorporates local spatial information and gray level information in a novel fuzzy way. The new algorithm is called fuzzy local information C-Means (FLICM). FLICM can overcome the disadvantages of the known fuzzy c-means algorithms and at the same time enhance the clustering performance. The major characteristic of FLICM is the use of a fuzzy local (both spatial and gray level) similarity measure, aiming to guarantee noise insensitiveness and image detail preservation. Furthermore, the proposed algorithm is fully free of the empirically adjusted parameters (a, λg, λs, etc.) incorporated into all other fuzzy c-means algorithms proposed in the literature. Experiments performed on synthetic and real-world images show that the FLICM algorithm is effective and efficient, providing robustness to noisy images.
Center for Quantum Algorithms and Complexity
2014-05-12
[Extraction fragment: concerns local Hamiltonians in condensed matter physics, with particular emphasis on 1D Hamiltonians and a new combinatorial approach to proving the area law for 1D systems; cites Dorit Aharonov and Lior Eldar, "On the Complexity of Commuting Local Hamiltonians, and Tight Conditions for Topological Order in Such Systems" (2011). These systems are generally described by a local Hamiltonian that models interactions between neighboring particles.]
Information dynamics algorithm for detecting communities in networks
NASA Astrophysics Data System (ADS)
Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro
2012-11-01
The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. domain-inspired. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster algorithm (MCL) [4] by considering the nodes of a network as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability of identifying communities from an individual point of view, and fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
Information filtering via weighted heat conduction algorithm
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both accuracy and diversity could be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity could reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight are changed to the Poisson form, which may be the reason why HC algorithm performance could be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting recommendation performance.
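The standard (unweighted) HC step the authors modify is easy to state: unit "temperature" on the target user's collected objects diffuses to users and back to objects by averaging. A minimal sketch on a toy bipartite network follows; the paper's contribution, weighting these averages by edge weight, is omitted here:

```python
def heat_conduction(adj, target_user):
    """One HC pass on a user-object bipartite network, adj[user] = set of
    collected objects. Averaging (rather than mass-conserving spreading)
    is what makes HC favor diverse, low-degree objects."""
    objects = set().union(*adj.values())
    # Objects collected by the target user start with unit temperature.
    f = {o: 1.0 if o in adj[target_user] else 0.0 for o in objects}
    # Object -> user step: each user averages over its collected objects.
    h = {u: sum(f[o] for o in items) / len(items) for u, items in adj.items()}
    # User -> object step: each object averages over its users.
    scores = {}
    for o in objects:
        users = [u for u, items in adj.items() if o in items]
        scores[o] = sum(h[u] for u in users) / len(users)
    return scores

adj = {"alice": {"a", "b"}, "bob": {"b", "c"}, "carol": {"c"}}
scores = heat_conduction(adj, "alice")
print(scores["a"], scores["b"], scores["c"])  # 1.0 0.75 0.25
```

Uncollected objects ("c" here) are then ranked by score to form the recommendation list; WHC replaces the uniform averages with edge-weighted ones.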
Distributed learning automata-based algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza
2016-03-01
Community structure is an important and universal topological property of many complex networks such as social and information networks. The detection of the communities of a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and the updating of each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max-cut and coverage.
NASA Astrophysics Data System (ADS)
Zhang, Xian-Kun; Tian, Xue; Li, Ya-Nan; Song, Chen
2014-08-01
The label propagation algorithm (LPA) is a graph-based semi-supervised learning algorithm which can predict the labels of unlabeled nodes from a few labeled nodes. It is a community detection method in the field of complex networks. The algorithm is easy to implement, has low complexity, performs remarkably well, and is widely applied in various fields. However, the randomness of the label propagation leads to poor robustness, and the classification result is unstable. This paper proposes an LPA based on the edge clustering coefficient. A node in the network selects the neighbor whose edge clustering coefficient is highest to update its label, rather than a random neighbor, so that the random spread of labels is effectively restrained. The experimental results show that the LPA based on the edge clustering coefficient improves the stability and accuracy of the algorithm.
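The selection rule can be sketched with a common definition of the edge clustering coefficient (triangles through the edge, plus one, over the maximum possible number); the paper's exact formula may differ. Instead of a random neighbor, a node copies the label of the neighbor maximizing this coefficient:

```python
def edge_clustering(adj, u, v):
    """Edge clustering coefficient of edge (u, v): triangles through the
    edge (plus 1, the usual correction) over min(k_u - 1, k_v - 1)."""
    triangles = len(adj[u] & adj[v])  # common neighbors close triangles
    denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
    return (triangles + 1) / denom if denom > 0 else float("inf")

def best_neighbor(adj, node):
    """Neighbor whose connecting edge has the highest coefficient; the
    LPA variant copies this neighbor's label instead of a random one."""
    return max(adj[node], key=lambda nb: edge_clustering(adj, node, nb))

# Two triangles {1,2,3} and {4,5,6} joined by the bridge edge 3-4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(edge_clustering(adj, 3, 4))       # 0.5: the bridge closes no triangle
print(best_neighbor(adj, 3) in (1, 2))  # True: node 3 copies from its own triangle
```

Because intra-community edges close many triangles and bridges close few, labels preferentially spread inside communities, which is what stabilizes the otherwise random propagation.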
Petri net model for analysis of concurrently processed complex algorithms
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1986-01-01
This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
NASA Astrophysics Data System (ADS)
Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng
2016-02-01
The complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both the intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in massive computational load. To mitigate such a problem, in this work, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm is demonstrated using both flow phantom and live animal experiments.
Rate control algorithm based on frame complexity estimation for MVC
NASA Astrophysics Data System (ADS)
Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang
2010-07-01
Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit-rate among views based on correlation analysis. The proposed algorithm consists of four levels for more accurate rate control, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to coding parameters.
Biclustering Protein Complex Interactions with a Biclique Finding Algorithm
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L1 constraint to an Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
Complexity measurement based on information theory and kolmogorov complexity.
Lui, Leong Ting; Terrazas, Germán; Zenil, Hector; Alexander, Cameron; Krasnogor, Natalio
2015-01-01
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.
Information content of ozone retrieval algorithms
NASA Technical Reports Server (NTRS)
Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.
1989-01-01
The algorithms used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.
Data bank homology search algorithm with linear computation complexity.
Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A
1994-06-01
A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local region homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require tabulation of k-tuple coordinates or their in-memory placement for database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and tested on the PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given.
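The k-tuple matching idea can be illustrated with a short sketch: build the set of k-tuples occurring in the query, mark matching positions along a database sequence, then slide a fixed window in O(1) per step so the whole scan stays linear in the sequence length. The window size and sequences below are illustrative; the published algorithm's indicative-matrix details differ:

```python
def ktuples(seq, k):
    """Set of all k-tuples (k-mers) occurring in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_window(query, target, k=3, window=10):
    """Mark positions of `target` whose k-tuple occurs in `query`, then
    slide a fixed window updated in O(1) per step: the scan is linear."""
    qset = ktuples(query, k)
    hit = [1 if target[i:i + k] in qset else 0
           for i in range(len(target) - k + 1)]
    best, score = 0, -1
    running = sum(hit[:window])
    for start in range(len(hit) - window + 1):
        if start > 0:  # add the entering position, drop the leaving one
            running += hit[start + window - 1] - hit[start - 1]
        if running > score:
            best, score = start, running
    return best, score

query = "MKTAYIAKQR"
target = "GGGGGGMKTAYIAKQRGGGGGG"
start, score = best_window(query, target)
```

The window with the most matching k-tuples flags a candidate local homology region without ever tabulating k-tuple coordinates for the database sequence.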
Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding
Liu, Pengyu; Jia, Kebin
2013-01-01
A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. First, the algorithm employs motion vectors (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Second, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. It then combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in visual perception analysis compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a video standard codec at medium-to-low bit-rates or combined with other algorithms in fast video coding. PMID:24489495
Sparsity-Aware Sphere Decoding: Algorithms and Complexity Analysis
NASA Astrophysics Data System (ADS)
Barik, Somsubhra; Vikalo, Haris
2014-05-01
Integer least-squares problems, concerned with solving a system of equations where the components of the unknown vector are integer-valued, arise in a wide range of applications. In many scenarios the unknown vector is sparse, i.e., a large fraction of its entries are zero. Examples include applications in wireless communications, digital fingerprinting, and array-comparative genomic hybridization systems. Sphere decoding, commonly used for solving integer least-squares problems, can utilize the knowledge about sparsity of the unknown vector to perform a computationally efficient search for the solution. In this paper, we formulate and analyze the sparsity-aware sphere decoding algorithm that imposes an $\ell_0$-norm constraint on the admissible solution. Analytical expressions for the expected complexity of the algorithm for alphabets typical of sparse channel estimation and source allocation applications are derived and validated through extensive simulations. The results demonstrate superior performance and speed of the sparsity-aware sphere decoder compared to the conventional sparsity-unaware sphere decoding algorithm. Moreover, the variance of the complexity of the sparsity-aware sphere decoding algorithm for binary alphabets is derived. The search space of the proposed algorithm can be further reduced by imposing lower bounds on the value of the objective function. The algorithm is modified to allow for such a lower bounding technique and simulations illustrating efficacy of the method are presented. Performance of the algorithm is demonstrated in an application to sparse channel estimation, where it is shown that the sparsity-aware sphere decoder performs close to theoretical lower limits.
Estimation of Information Hiding Algorithms and Parameters
2007-02-21
… growing false positives. Subject terms: information hiding, reverse-engineering, steganography, steganalysis, watermarking. … specialist in breaking a covert communication system given very little information. Since it is likely for steganography to be used on very large … multimedia files, e.g. audio and video, there are substantial issues to be addressed on the implementation end of such a system as well as the theoretical …
Securing Information with Complex Optical Encryption Networks
2015-08-11
… encryption networks, and to provide effective and reliable solutions for information security. Subject terms: optical encryption. … popularization of networking and the internet, much research effort is made in the field of information security. Military communication system makes an … objective is to propose the architectures for a number of complex optical encryption networks so as to provide effective and reliable solutions for …
Recording information on protein complexes in an information management system
Savitsky, Marc; Diprose, Jonathan M.; Morris, Chris; Griffiths, Susanne L.; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S.; Blake, Richard; Stuart, David I.; Esnouf, Robert M.
2011-01-01
The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. Like most LIMS, the underlying PiMS data model originally had no support for protein–protein complexes. To support the SPINE2-Complexes project the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with SPINE2-Complexes Target Tracker application is also described. PMID:21605682
A region growing vessel segmentation algorithm based on spectrum information.
Jiang, Huiyan; He, Baochun; Fang, Di; Ma, Zhiyuan; Yang, Benqiang; Zhang, Libo
2013-01-01
We propose a region growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures to obtain its spectrum information, from which the primary feature direction is extracted. Edge information is then combined with the primary feature direction to compute the vascular structure's center points, which serve as the seed points for region growing segmentation. Finally, an improved region growing method with a branch-based growth strategy is used to segment the vessels. To demonstrate the effectiveness of our algorithm, we conduct experiments on retinal and abdominal liver vascular CT images. The results show that the proposed vessel segmentation algorithm not only extracts a high-quality target vessel region, but also effectively reduces manual intervention.
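The growth step itself (independent of the spectrum-based seed selection described above) can be sketched as a breadth-first flood from a seed with an intensity tolerance; the toy image and tolerance below are illustrative, not the paper's branch-based strategy:

```python
from collections import deque

def region_grow(img, seed, tol=1):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity differs from the seed value by at most `tol`."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

img = [[0, 0, 9, 9],   # bright pixels (9) stand in for a vessel
       [0, 0, 9, 9],
       [0, 0, 0, 9]]
vessel = region_grow(img, seed=(0, 2), tol=1)
```

Starting the flood from well-placed center points is what makes the growth robust; a poorly chosen seed leaks into background, which is why the paper invests in the seed-selection step.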
Integrating a priori information in edge-linking algorithms
NASA Astrophysics Data System (ADS)
Farag, Aly A.; Cao, Yu; Yeap, Yuen-Pin
1992-09-01
This research presents an approach to integrating a priori information into the path metric of the LINK algorithm. The zero-crossing contours of the Laplacian-of-Gaussian (∇²G) are taken as a gross estimate of the boundaries in the image. This estimate of the boundaries is used to define the swath of important information, and to provide a distance measure for edge localization. During the linking process, a priori information plays important roles in (1) dramatically reducing the search space, because the actual path lies within ±2σf of the prototype contours (σf is the standard deviation of the Gaussian kernel used in the edge enhancement step); (2) breaking ties when the search metrics give uncertain information; and (3) selecting the set of goal nodes for the search algorithm. We show that the integration of a priori information in the LINK algorithm provides faster and more accurate edge linking.
A Modified Tactile Brush Algorithm for Complex Touch Gestures
Ragan, Eric
2015-01-01
Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
Dynamic information routing in complex networks
Kirst, Christoph; Timme, Marc; Battaglia, Demian
2016-01-01
Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how such systems may specifically communicate and dynamically route information is not well understood. Here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks. Switching between collective dynamics induces flexible reorganization of information sharing and routing patterns, as quantified by delayed mutual information and transfer entropy measures between activities of a network's units. We demonstrate the power of this mechanism specifically for oscillatory dynamics and analyse how individual unit properties, the network topology and external inputs co-act to systematically organize information routing. For multi-scale, modular architectures, we resolve routing patterns at all levels. Interestingly, local interventions within one sub-network may remotely determine nonlocal network-wide communication. These results help understanding and designing information routing patterns across systems where collective dynamics co-occurs with a communication function. PMID:27067257
Brain Maturation Changes Characterized by Algorithmic Complexity (Lempel and Ziv Complexity)
NASA Astrophysics Data System (ADS)
Fernández, J. G.; Larrondo, H. A.; Figliola, A.; Serrano, E.; Rostas, J. A. P.; Hunter, M.; Rosso, O. A.
2007-05-01
Recent experimental results suggest that basal electroencephalogram (EEG) changes reflect the widespread functional evolution in neuronal circuits occurring in the chicken brain during the "synapse maturation" period, between 3 and 8 weeks posthatch. In the present work a quantitative analysis based on algorithmic complexity (Lempel-Ziv complexity) is performed. It is shown that this complexity presents a peak at week 2 posthatch, and a tendency to stabilize its values after week 5 posthatch.
Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.
ERIC Educational Resources Information Center
Petry, Frederick E.; And Others
1993-01-01
Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…
Presentation Media, Information Complexity, and Learning Outcomes
ERIC Educational Resources Information Center
Andres, Hayward P.; Petersen, Candice
2002-01-01
Cognitive processing limitations restrict the number of complex information items held and processed in human working memory. To overcome such limitations, a verbal working memory channel is used to construct an if-then proposition representation of facts and a visual working memory channel is used to construct a visual imagery of geometric…
On the convergence of the Fitness-Complexity algorithm
NASA Astrophysics Data System (ADS)
Pugliese, Emanuele; Zaccaria, Andrea; Pietronero, Luciano
2016-10-01
We investigate the convergence properties of an algorithm which has been recently proposed to measure the competitiveness of countries and the quality of their exported products. These quantities are called, respectively, Fitness F and Complexity Q. The algorithm was originally based on the adjacency matrix M of the bipartite network connecting countries with the products they export, but can be applied to any bipartite network. The structure of the adjacency matrix turns out to be essential in determining which countries and products converge to nonzero values of F and Q. The speed of convergence to zero also depends on the matrix structure. A major role is played by the shape of the ordered matrix and, in particular, only those matrices whose diagonal does not cross the empty part are guaranteed to have nonzero values as outputs when the algorithm reaches the fixed point. We prove this result analytically for simplified structures of the matrix, and numerically for real cases. Finally, we propose some practical indications for taking our results into account when the algorithm is applied.
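The fixed-point map itself is simple to state: each country's fitness is the sum of the qualities of its products, each product's quality is penalized harmonically by the fitness of the countries exporting it, and both vectors are normalized after every step. A minimal numerical sketch on a perfectly nested (triangular) matrix, where the diagonal condition discussed above is satisfied; the iteration count is illustrative:

```python
import numpy as np

def fitness_complexity(M, iters=50):
    """Iterate the Fitness-Complexity map on a binary country-product
    matrix M (rows: countries, columns: products), normalizing each step."""
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(iters):
        F_new = M @ Q                     # fitness: sum of product qualities
        Q_new = 1.0 / (M.T @ (1.0 / F))   # quality: penalized by weak exporters
        F = F_new / F_new.mean()          # normalize to mean 1 each step
        Q = Q_new / Q_new.mean()
    return F, Q

# Perfectly nested ("triangular") matrix: the fittest country exports all
# products, and the hardest product is exported only by the fittest country.
M = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
```

On this nested matrix the rankings are the expected ones: countries exporting more (and harder) products come out fitter, and products exported only by fit countries come out more complex.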
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
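The original direct method referred to above can be sketched in a few lines: each step draws an exponential waiting time from the total propensity and then picks a channel proportionally to its propensity, which is exactly the linear-in-channels work that the constant-complexity formulation avoids. The reaction network below is an illustrative toy:

```python
import math
import random

def ssa_direct(x, reactions, t_end, seed=0):
    """Gillespie direct method. `reactions` is a list of tuples
    (rate_constant, propensity_fn, state_change_vector); each step costs
    O(number of reaction channels), the scaling the paper improves on."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [k * h(x) for k, h, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                              # no reaction can fire
        t += -math.log(rng.random()) / a0      # exponential waiting time
        r = rng.random() * a0                  # choose a channel ~ propensity
        for p, (_, _, change) in zip(props, reactions):
            r -= p
            if r <= 0.0:
                x = [xi + d for xi, d in zip(x, change)]
                break
    return x

# Toy network: irreversible conversion A -> B with unit rate constant,
# run long enough for all 100 copies of A to be consumed.
x = ssa_direct([100, 0], [(1.0, lambda s: s[0], (-1, +1))], t_end=1e9)
```

The two inner loops over `reactions` are what a table data structure with event-time binning replaces to reach constant cost per step for weakly coupled networks.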
Imaging for dismantlement verification: information management and analysis algorithms
Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.
2010-09-01
The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.
Biological Information as Set-Based Complexity
Galas, David J.; Nykter, Matti; Carter, Gregory W.; Price, Nathan D.; Shmulevich, Ilya
2010-01-01
It is not obvious what fraction of all the potential information residing in the molecules and structures of living systems is significant or meaningful to the system. Sets of random sequences or identically repeated sequences, for example, would be expected to contribute little or no useful information to a cell. This issue of quantitation of information is important since the ebb and flow of biologically significant information is essential to our quantitative understanding of biological function and evolution. Motivated specifically by these problems of biological information, we propose here a class of measures to quantify the contextual nature of the information in sets of objects, based on Kolmogorov's intrinsic complexity. Such measures discount both random and redundant information and are inherent in that they do not require a defined state space to quantify the information. The maximization of this new measure, which can be formulated in terms of the universal information distance, appears to have several useful and interesting properties, some of which we illustrate with examples. PMID:27857450
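A practical way to see how such measures discount both random and redundant information is the compression-based normalized compression distance (NCD), a computable stand-in for the Kolmogorov-based information distance the authors build on; the zlib proxy and the random test strings below are illustrative:

```python
import random
import zlib

def C(s):
    """Compressed length: a rough, computable proxy for Kolmogorov complexity."""
    return len(zlib.compress(s.encode()))

def ncd(a, b):
    """Normalized compression distance: near 0 for redundant pairs,
    near 1 for unrelated incompressible strings."""
    return (C(a + b) - min(C(a), C(b))) / max(C(a), C(b))

rng = random.Random(1)
r1 = ''.join(rng.choice('ACGT') for _ in range(2000))
r2 = ''.join(rng.choice('ACGT') for _ in range(2000))
d_same = ncd(r1, r1)   # a repeated sequence adds almost no information
d_diff = ncd(r1, r2)   # independent random sequences share almost none
```

A set of identical sequences scores near zero pairwise distance (pure redundancy), while a set of independent random sequences scores near one (pure randomness); a set-based measure built on such distances can discount both extremes.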
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Web multimedia information retrieval using improved Bayesian algorithm.
Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong
2003-01-01
The main thrust of this paper is the application of a novel data mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve its accuracy. It can remove clutter and irrelevant text information and help to eliminate mismatches between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining. Experiments showed that the proposed algorithm is efficient.
Crossover Improvement for the Genetic Algorithm in Information Retrieval.
ERIC Educational Resources Information Center
Vrajitoru, Dana
1998-01-01
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
Gait information flow indicates complex motor dysfunction.
Hoyer, Dirk; Kletzin, Ulf; Adler, Daniela; Adler, Steffen; Meissner, Winfried; Blickhan, Reinhard
2005-08-01
Gait-related back movements require coordination of multiple extremities including the flexible trunk. Ageing and chronic back pain influence these adjustments. These complex coordinations can advantageously be quantified by information-theoretic communication measures such as the gait information flow (GIF). Nine back pain patients (aged 61±10 yr) and 12 controls (aged 38±10 yr) were investigated during normal walking across a distance of 300 m. The back movements were measured as distances between characteristic points (cervical spine CS, thoracic spine TS, lumbar spine LS) by the sonoSens Monitor, a system for mobile motion analysis. Gait information flow and regularity indices (RI1: short prediction horizon of 100 ms; RI2: longer prediction horizon of the walking period) were assessed as communication characteristics. All indices were non-parametrically tested for group differences. Sensitivity and specificity were assessed by bivariate logistic regression models. We found the regularity indices systematically dependent on measurement points, information flow horizon and groups. In the patients RI1 was increased, but RI2 was decreased in comparison to the control group. These results quantitatively characterize the altered complex communication in the patients. We conclude that ageing and/or chronic back pain related dysfunctions of gait can advantageously be monitored by gait information flow characteristics of back movements measured as distances between characteristic points on the back surface.
Information, complexity and efficiency: The automobile model
Allenby, B.
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Informational analysis involving application of complex information system
NASA Astrophysics Data System (ADS)
Ciupak, Clébia; Vanti, Adolfo Alberto; Balloni, Antonio José; Espin, Rafael
The aim of the present research is to perform an informational analysis for internal audit involving the application of a complex information system based on fuzzy logic. It has been applied in internal audit involving the integration of the accounting field into the information systems field. Technological advancements can provide improvements to the work performed by the internal audit. Thus we aim to find, in the complex information systems, priorities for the internal audit work of a highly important Private Institution of Higher Education. The applied method is quali-quantitative: from the definition of strategic linguistic variables it was possible to transform them into quantitative variables through matrix intersection. By means of a case study, in which data were collected via an interview with the Administrative Pro-Rector, who takes part in the elaboration of the strategic planning of the institution, it was possible to infer which points must be prioritized in the internal audit work. We emphasize that the priorities were identified when processed in a system (of academic use). From the study we can conclude that, starting from these information systems, audit can identify priorities in its work program. Along with the plans and strategic objectives of the enterprise, the internal auditor can define operational procedures that work in favor of the attainment of the objectives of the organization.
A Motion Detection Algorithm Using Local Phase Information
Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second-order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the change of the local phase. The second building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm to several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes, and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
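The core idea — motion appears as a temporal change of phase — can be sketched in one dimension. This toy (signal length, frequency, and shift are all illustrative assumptions, and global rather than truly local phase is used) is not the paper's FFT/Radon pipeline:

```python
import numpy as np

# Toy sketch: a 1-D "frame" containing a single spatial frequency.
n, f = 64, 5                                     # samples per frame, cycles per frame
x = np.arange(n)
frame1 = np.cos(2 * np.pi * f * x / n)           # pattern at time t
frame2 = np.cos(2 * np.pi * f * (x - 2) / n)     # same pattern shifted by 2 samples

# Phase of the dominant frequency bin in each frame
p1 = np.angle(np.fft.rfft(frame1)[f])
p2 = np.angle(np.fft.rfft(frame2)[f])

# The temporal phase change (here: one finite difference) encodes the motion:
# a shift d changes the phase of bin f by -2*pi*f*d/n
dphi = p2 - p1
d = -dphi * n / (2 * np.pi * f)
print(round(d, 3))   # prints 2.0 -- the displacement recovered from phase alone
```

A nonzero phase derivative flags motion; in the paper this statistic is aggregated over local windows and a Radon transform before thresholding.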
Binarization algorithm for document image with complex background
NASA Astrophysics Data System (ADS)
Miao, Shaojun; Lu, Tongwei; Min, Feng
2015-12-01
The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened by using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average threshold value, and then edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels. The final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, and a local threshold estimation approach is applied to binarize the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex background and varying light.
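The first step, background approximation by polynomial fitting, can be sketched as follows. The image, polynomial degree, and threshold below are illustrative assumptions, and only this one step of the pipeline is shown:

```python
import numpy as np

# Sketch of the background-approximation step: fit a low-order polynomial
# surface to the image and subtract it, so a simple threshold on the
# residual separates dark text from an uneven background.
h, w = 20, 20
yy, xx = np.mgrid[0:h, 0:w].astype(float)
background = 100 + 1.0 * xx + 1.0 * yy           # synthetic uneven illumination
img = background.copy()
text = [(5, 5), (5, 6), (10, 12), (11, 12), (15, 3)]
for r, c in text:
    img[r, c] -= 50                              # dark "text" strokes

# Least-squares fit of a first-order surface a + b*x + c*y to all pixels;
# sparse text pixels barely bias the fit
A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
fit = (A @ coef).reshape(h, w)

residual = img - fit
mask = residual < -25                            # text lies far below the fitted background
print(sorted(zip(*np.nonzero(mask))) == sorted(text))   # True
```

The paper additionally sharpens the text with a bilateral filter and derives the threshold and window size from the estimated stroke width; this sketch covers only the background model.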
Maximizing information exchange between complex networks
NASA Astrophysics Data System (ADS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
The study of complex networks is a field of modern research overarching all of the traditional scientific disciplines. The transportation networks of planes, highways and railroads; the economic networks of global finance and stock markets; the social networks of terrorism, governments, businesses and churches; the physical networks of telephones, the Internet, earthquakes and global warming; and the biological networks of gene regulation, the human body, clusters of neurons and food webs share a number of apparently universal properties as the networks become increasingly complex. Ubiquitous aspects of such complex networks are the appearance of non-stationary and non-ergodic statistical processes and inverse power-law statistical distributions. Herein we review the traditional dynamical and phase-space methods for modeling such networks as their complexity increases and focus on the limitations of these procedures in explaining complex networks. Of course we will not be able to review the entire nascent field of network science, so we limit ourselves to a review of how certain complexity barriers have been surmounted using newly applied theoretical concepts such as aging, renewal, non-ergodic statistics and the fractional calculus. One emphasis of this review is information transport between complex networks, which requires a fundamental change in perception that we express as a transition from the familiar stochastic resonance to the new concept of complexity matching.
Research on Critical Nodes Algorithm in Social Complex Networks
NASA Astrophysics Data System (ADS)
Wang, Xue-Guang
2017-01-01
Discovering critical nodes in social networks has many important applications and has attracted increasing attention from institutions and scholars. Determining the K most influential nodes in a social network is an NP-hard problem. Considering the widespread community structure, this paper presents an algorithm for discovering critical nodes based on two information diffusion models, obtaining each node's marginal contribution by using a Monte-Carlo method. The solution of the critical nodes problem is the K nodes with the highest marginal contributions. The feasibility and effectiveness of our method have been verified on two synthetic datasets and four real datasets.
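The marginal-contribution idea can be sketched as a greedy selection with Monte-Carlo spread estimates. The graph, propagation probability, and run counts below are illustrative assumptions, and the independent cascade model stands in for the paper's two diffusion models:

```python
import random

# Minimal sketch: greedily pick K critical nodes by Monte-Carlo estimates
# of marginal contribution under the independent cascade model.
graph = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0],
         6: [7], 7: [6]}                     # a hub plus an isolated pair
P, RUNS = 0.3, 400                           # propagation probability, MC runs

def cascade(seeds, rng):
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and rng.random() < P:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def spread(seeds, rng):                      # Monte-Carlo expected spread
    return sum(cascade(seeds, rng) for _ in range(RUNS)) / RUNS

def greedy_critical_nodes(k):
    rng = random.Random(42)
    seeds = []
    for _ in range(k):
        base = spread(seeds, rng) if seeds else 0.0
        # marginal contribution of each remaining candidate node
        gains = {v: spread(seeds + [v], rng) - base
                 for v in graph if v not in seeds}
        seeds.append(max(gains, key=gains.get))
    return seeds

print(greedy_critical_nodes(2))   # the hub (node 0) is picked first
```

The greedy rule picks, at each step, the node whose Monte-Carlo marginal contribution to the current seed set is largest, which is exactly the statistic the abstract describes.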
Genetic algorithms applied to nonlinear and complex domains
Barash, Danny
1999-06-01
The dissertation, titled "Genetic Algorithms Applied to Nonlinear and Complex Domains", describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems where many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron is responding to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems. It compares GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation should give a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.
The architecture of the management system of complex steganographic information
NASA Astrophysics Data System (ADS)
Evsutin, O. O.; Meshcheryakov, R. V.; Kozlova, A. S.; Solovyev, T. M.
2017-01-01
The aim of the study is to create a wide area information system that allows one to control processes of generation, embedding, extraction, and detection of steganographic information. In this paper, the following problems are considered: the definition of the system scope and the development of its architecture. For the algorithmic maintenance of the system, classic methods of steganography are used to embed information. Methods of mathematical statistics and computational intelligence are used to identify the embedded information. The main result of the paper is the development of the architecture of the management system of complex steganographic information. The suggested architecture utilizes cloud technology in order to provide service via a web service over the Internet. It is meant to support the processing of multimedia data streams with many sources of different types. The information system, built in accordance with the proposed architecture, will be used in the following areas: hidden transfer of documents protected by medical secrecy in telemedicine systems; copyright protection of online content in public networks; and prevention of information leakage caused by insiders.
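One of the "classic methods of steganography" the abstract alludes to is least-significant-bit (LSB) embedding. The sketch below is a generic LSB example on a fake pixel row, not the described system's actual algorithms:

```python
# Classic LSB steganography sketch: hide message bits in the least
# significant bit of each cover pixel, changing each pixel by at most 1.
def embed(pixels, message):
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover too small"
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b        # overwrite the LSB
    return stego

def extract(pixels, n_bytes):
    out = bytearray()
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[8 * k + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = list(range(17, 217))                  # a fake 8-bit "image" row
stego = embed(cover, b"hidden")
print(extract(stego, 6))                      # b'hidden'
```

Detection methods, such as the statistical tests the system architecture provides for, work by spotting exactly the LSB-distribution anomalies this embedding introduces.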
Binary 3D image interpolation algorithm based global information and adaptive curves fitting
NASA Astrophysics Data System (ADS)
Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng
2013-08-01
Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform resolution. Conventional interpolation methods simply use two slices to obtain the missing slices between them; when a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation can hardly yield a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm. The proposed algorithm takes advantage of global information. It chooses the best curve adaptively from many curves based on the complexity of the surface of the 3D object. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.
Patent information - towards simplicity or complexity?
NASA Astrophysics Data System (ADS)
Shenton, Kathleen; Norton, Peter; Onodera, Natsuo (translator)
Since the advent of online services, the ability to search and find chemical patent information has improved immeasurably. Recently, the integration of a multitude of files (through file merging as well as cross-file/simultaneous searches), 'intelligent' interfaces and optical technology for large amounts of data seem to achieve greater simplicity and convenience in the retrieval of patent information. In spite of this progress, there is a more essential problem which increases complexity: the tendency to expand indefinitely the range of claims for chemical substances through ultra-generic descriptions of structure (overuse of optional substituents, variable divalent groups, repeating groups, etc.) and long listings of prophetic examples. Not only does this tendency worry producers and searchers of patent databases, but it also hinders truly worthy inventions in the future.
The guitar chord-generating algorithm based on complex network
NASA Astrophysics Data System (ADS)
Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais
2016-02-01
This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all the networks are summarized. By analyzing the diverse chord networks, the accompaniment regularities and features are revealed, with which chords can be generated automatically. Secondly, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed respectively with a random walk algorithm. Thirdly, the musical motif is considered in generating chords, with which bad chord progressions can be revised. This method makes the accompaniments sound more melodious. Finally, a popular song is chosen for generating chords, and the newly generated accompaniment sounds better than those done by the composers.
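The random-walk step can be sketched on a weighted chord-transition network. The transition table below is a hypothetical toy, not data extracted from the paper's six singer networks:

```python
import random

# Chord generation by weighted random walk on a chord-transition network.
# Edge weights stand in for transition counts observed in tablature.
transitions = {
    "C":  [("G", 3), ("Am", 2), ("F", 2)],
    "G":  [("C", 3), ("Em", 1), ("Am", 2)],
    "Am": [("F", 3), ("G", 2)],
    "F":  [("C", 3), ("G", 2)],
    "Em": [("Am", 2), ("F", 1)],
}

def generate_chords(start, length, seed=7):
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < length:
        nxt, weights = zip(*transitions[seq[-1]])
        seq.append(rng.choices(nxt, weights=weights)[0])   # weighted step
    return seq

print(generate_chords("C", 8))   # e.g. an 8-chord progression starting on C
```

In the paper this walk runs separately on the verse and chorus tiers, and a motif-based check then revises bad progressions; this sketch shows only the walk itself.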
Hybrid binary GA-EDA algorithms for complex “black-box” optimization problems
NASA Astrophysics Data System (ADS)
Sopov, E.
2017-02-01
Genetic Algorithms (GAs) have proved their efficiency in solving many complex optimization problems. GAs can also be applied to “black-box” problems, because they perform a “blind” search and do not require any specific information about the features of the search space and objectives. It is clear that a GA uses a “trial-and-error” strategy to explore the search space, and collects some statistical information that is stored in the form of genes in the population. Estimation of Distribution Algorithms (EDAs) have a very similar realization to GAs, but use an explicit representation of the search experience in the form of a statistical probability distribution. In this study we discuss some approaches for improving standard GA performance by combining the binary GA with an EDA. Finally, a novel approach for large-scale global optimization is proposed. The experimental results and a comparison with some well-studied techniques are presented and discussed.
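The GA-EDA combination can be sketched on the OneMax toy problem. Everything below — population sizes, rates, the half-and-half offspring split, and the problem itself — is an illustrative assumption, not the paper's actual hybrid:

```python
import random

# Binary GA-EDA hybrid sketch on OneMax (maximize the number of 1-bits).
# Half of each generation comes from GA operators (crossover + mutation);
# half is sampled from an EDA-style marginal probability vector.
N, POP, GENS = 16, 40, 80
rng = random.Random(1)
fitness = sum                                     # OneMax fitness
p = [0.5] * N                                     # EDA distribution estimate

def sample():
    return [1 if rng.random() < pi else 0 for pi in p]

pop = [sample() for _ in range(POP)]
best = max(pop, key=fitness)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]
    # EDA step: re-estimate marginals from the elite, with light smoothing
    p = [min(0.98, max(0.02, sum(ind[i] for ind in elite) / len(elite)))
         for i in range(N)]
    children = []
    for _ in range(POP // 2):                     # GA half
        a, b = rng.sample(elite, 2)
        cut = rng.randrange(1, N)
        child = a[:cut] + b[cut:]                 # one-point crossover
        if rng.random() < 0.2:                    # bit-flip mutation
            j = rng.randrange(N)
            child[j] ^= 1
        children.append(child)
    children += [sample() for _ in range(POP - len(children))]  # EDA half
    pop = children
    best = max(pop + [best], key=fitness)

print(fitness(best))   # reaches the optimum, 16, on this toy problem
```

The point of the hybrid is visible even here: the probability vector makes the population's statistical experience explicit, while the GA operators keep recombining concrete elite genomes.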
Local algorithm for computing complex travel time based on the complex eikonal equation
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing
2016-04-01
The traditional algorithm for computing the complex travel time, e.g., the dynamic ray tracing method, is based on the paraxial ray approximation, which exploits the second-order Taylor expansion. Consequently, the computed results are strongly dependent on the width of the ray tube and, in regions with dramatic velocity variations, it is difficult for the method to account for the velocity variations. When solving the complex eikonal equation, the paraxial ray approximation can be avoided and no second-order Taylor expansion is required. However, this process is time-consuming. In this case, we may replace the global computation of the whole model with local computation by taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray. To satisfy this condition, the central ray should be taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve the curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno method to obtain the imaginary slowness used to compute the complex travel time. The numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm for the complex travel time by comparing the results with those from the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.
Is a Complex-Valued Stepsize Advantageous in Complex-Valued Gradient Learning Algorithms?
Zhang, Huisheng; Mandic, Danilo P
2016-12-01
Complex gradient methods have been widely used in learning theory, and typically aim to optimize real-valued functions of complex variables. The stepsize of complex gradient learning methods (CGLMs) is a positive number, and little is known about how a complex stepsize would affect the learning process. To this end, we undertake a comprehensive analysis of CGLMs with a complex stepsize, including the search space, convergence properties, and the dynamics near critical points. Furthermore, several adaptive stepsizes are derived by extending the Barzilai-Borwein method to the complex domain, in order to show that the complex stepsize is superior to the corresponding real one in approximating the information in the Hessian. A numerical example is presented to support the analysis.
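A scalar toy can show what a complex-valued Barzilai-Borwein stepsize looks like. The cost function and constants below are assumptions for illustration; the paper's analysis covers general CGLMs, not this specific iteration:

```python
# Toy: complex Barzilai-Borwein stepsize mu_k = (s^H y) / (y^H y) applied to
# the conjugate-Wirtinger gradient of the real cost f(z) = |a*z - b|^2.
a, b = 2 + 1j, 1 - 3j
grad = lambda z: a.conjugate() * (a * z - b)   # conjugate-Wirtinger gradient

z_prev, g_prev = 0j, grad(0j)
z = z_prev - 0.1 * g_prev                      # bootstrap with a small real step
for _ in range(20):
    g = grad(z)
    if abs(g) < 1e-12:                         # converged
        break
    s, y = z - z_prev, g - g_prev
    mu = (s.conjugate() * y) / (y.conjugate() * y)   # complex-valued stepsize
    z_prev, g_prev = z, g
    z = z - mu * g

print(abs(z - b / a) < 1e-10)   # True: converged to the minimizer b/a
```

On this quadratic cost the complex BB stepsize reproduces the curvature information exactly, which is the sense in which the abstract says a complex stepsize better approximates the Hessian than a real one.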
Combined mining: discovering informative knowledge in complex data.
Cao, Longbing; Zhang, Huaifeng; Zhao, Yanchang; Luo, Dan; Zhang, Chengqi
2011-06-01
Enterprise data mining applications often involve complex data such as multiple large heterogeneous data sources, user preferences, and business impact. In such situations, a single method or one-step mining is often limited in discovering informative knowledge. It would also be very time and space consuming, if not impossible, to join relevant large data sources for mining patterns consisting of multiple aspects of information. It is crucial to develop effective approaches for mining patterns combining necessary information from multiple relevant business lines, catering for real business settings and decision-making actions rather than just providing a single line of patterns. The recent years have seen increasing efforts on mining more informative patterns, e.g., integrating frequent pattern mining with classifications to generate frequent pattern-based classifiers. Rather than presenting a specific algorithm, this paper builds on our existing works and proposes combined mining as a general approach to mining for informative patterns combining components from either multiple data sets or multiple features or by multiple methods on demand. We summarize general frameworks, paradigms, and basic processes for multifeature combined mining, multisource combined mining, and multimethod combined mining. Novel types of combined patterns, such as incremental cluster patterns, can result from such frameworks, which cannot be directly produced by the existing methods. A set of real-world case studies has been conducted to test the frameworks, with some of them briefed in this paper. They identify combined patterns for informing government debt prevention and improving government service objectives, which show the flexibility and instantiation capability of combined mining in discovering informative knowledge in complex data.
Approach to Complex Upper Extremity Injury: An Algorithm
Ng, Zhi Yang; Askari, Morad; Chim, Harvey
2015-01-01
Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems with the potential for systemic compromise due to the underlying mechanism of injury and resultant sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes with the primary goal of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, intractable pain, and most devastatingly, limb amputation. In this article, the authors present an algorithmic approach toward complex injuries of the upper extremity with due consideration for the various reconstructive modalities and timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098
Algorithmic complexity of growth hormone release in humans
Prank, K.; Wagner, M.; Brabant, G.
1996-12-31
Most hormones are secreted in a pulsatile rather than a constant manner. This temporal pattern of pulsatile hormone release plays an important role in the regulation of cellular function and structure. In healthy humans, growth hormone (GH) secretion is characterized by distinct pulses, whereas patients bearing a GH-producing tumor accompanied by excessive secretion (acromegaly) exhibit a highly irregular pattern of GH release. It has been hypothesized that this highly disorderly pattern of GH release in acromegaly arises from random events in the GH-producing tumor under decreased normal control of GH secretion. Using a context-free grammar complexity measure (algorithmic complexity) in conjunction with random surrogate data sets, we demonstrate that the temporal pattern of GH release in acromegaly is not significantly different from a variety of stochastic processes. In contrast, normal subjects clearly exhibit deterministic structure in their temporal patterns of GH secretion. Our results support the hypothesis that GH release in acromegaly is due to random events in the GH-producing tumorous cells, which might become independent from hypothalamic regulation. 17 refs., 1 fig., 2 tabs.
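The surrogate-data logic can be sketched with Lempel-Ziv (LZ76) complexity standing in for the paper's context-free grammar measure; the binarized pulse series below is a synthetic assumption, not hormone data:

```python
import random

# Surrogate-data test sketch: a regular "pulsatile" series should score far
# lower on a complexity measure than shuffled surrogates that keep the same
# symbol distribution but destroy the temporal structure.
def lz76(s):
    """Number of phrases in the Lempel-Ziv 1976 parsing of string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while it already occurs earlier in the string
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

pulses = "1000" * 25                       # regular secretory pattern, binarized
rng = random.Random(0)
surrogates = []
for _ in range(20):
    chars = list(pulses)
    rng.shuffle(chars)                     # surrogate: same symbols, no structure
    surrogates.append("".join(chars))

c0 = lz76(pulses)
cs = [lz76(s) for s in surrogates]
print(c0, min(cs))   # the regular series scores well below every surrogate
```

A series whose complexity falls inside the surrogate distribution is, in this framework, indistinguishable from a stochastic process — the acromegaly case in the paper; a score far below the surrogates indicates deterministic structure — the healthy case.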
Based on Multi-sensor Information Fusion Algorithm of TPMS Research
NASA Astrophysics Data System (ADS)
Yulan, Zhou; Yanhong, Zang; Yahong, Lin
In this paper, algorithms for TPMS (Tire Pressure Monitoring Systems) based on multi-sensor information fusion are presented. Unified mathematical models of information fusion are constructed, and three algorithms are applied: an algorithm based on Bayesian inference, an algorithm based on relative distance (an improved algorithm of the Bayesian theory of evidence), and an algorithm based on multi-sensor weighted fusion. The calculation results show that the multi-sensor fusion algorithm based on D-S evidence theory performs better than the weighted information fusion method or the Bayesian method.
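The weighted-fusion variant can be sketched with inverse-variance weighting; the readings and sensor variances below are assumed values, not data from the paper:

```python
# Weighted multi-sensor fusion sketch: each tire-pressure estimate is
# weighted by the inverse of its sensor's noise variance, so the more
# reliable sensors dominate the fused value.
readings  = [2.41, 2.38, 2.52]      # pressure estimates (bar) from 3 sensors
variances = [0.01, 0.02, 0.08]      # assumed noise variance of each sensor

weights = [1.0 / v for v in variances]
total = sum(weights)
weights = [w / total for w in weights]           # normalize to sum to 1
fused = sum(w * r for w, r in zip(weights, readings))

print([round(w, 3) for w in weights], round(fused, 3))
# [0.615, 0.308, 0.077] 2.409 -- the fused value leans toward sensor 1
```

This is the "weighted fusion" baseline; the Bayesian and evidence-theory variants in the paper replace these fixed weights with probabilistic updates.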
Optical tomographic memories: algorithms for the efficient information readout
NASA Astrophysics Data System (ADS)
Pantelic, Dejan V.
1990-07-01
Tomographic algorithms are modified in order to reconstruct the information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES Tomographic principles can be used to store and reconstruct the information artificially stored in a bulk of a photosensitive media 1 The information is stored by changing some characteristics of a memory material (e. g. refractive index). Radiation from the two independent light sources (e. g. lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only in the localized point where the light rays intersect. By scanning the material the information can be stored in binary or n-ary format. When the information is stored it can be read by tomographic methods. However the situation is quite different from the classical tomographic problem. Here a lot of a priori information is present regarding the positions of the bits of information, the profile representing a single bit, and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for the information readout together with the simulation results will be presented. Special attention will be given to the noise considerations. Two different
Fast algorithm for computing complex number-theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix FFT algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
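The underlying idea — exact circular convolution via a number-theoretic transform over GF(q²) with q a Mersenne prime — can be illustrated at toy scale. The sketch uses naive O(n²) transforms and the small prime q = 31, not the paper's high-radix algorithm; all sizes are assumptions:

```python
# Circular convolution via a complex number-theoretic transform over
# GF(q^2), q = 31 (Mersenne, 2^5 - 1). Since q % 4 == 3, GF(q^2) can be
# built as "Gaussian integers" a + b*i mod q with i^2 = -1.
Q = 31

def gmul(x, y):                       # multiply (a + b*i)(c + d*i) mod Q
    a, b = x; c, d = y
    return ((a * c - b * d) % Q, (a * d + b * c) % Q)

def gpow(x, e):                       # square-and-multiply exponentiation
    r = (1, 0)
    while e:
        if e & 1:
            r = gmul(r, x)
        x = gmul(x, x)
        e >>= 1
    return r

def root_of_unity(n):                 # find an element of order exactly n
    for a in range(Q):
        for b in range(Q):
            z = (a, b)
            if z != (0, 0) and gpow(z, n) == (1, 0) and gpow(z, n // 2) != (1, 0):
                return z

def ntt(x, z):                        # naive transform: out[k] = sum_j x[j] z^(jk)
    n, out = len(x), []
    for k in range(n):
        acc = (0, 0)
        for j in range(n):
            t = gmul(x[j], gpow(z, j * k))
            acc = ((acc[0] + t[0]) % Q, (acc[1] + t[1]) % Q)
        out.append(acc)
    return out

n = 8
z = root_of_unity(n)
zinv = gpow(z, n - 1)                 # z^-1, since z^n = 1
ninv = pow(n, Q - 2, Q)               # 1/n mod Q by Fermat's little theorem

x = [(v, 0) for v in [1, 2, 0, 1, 0, 0, 0, 0]]
h = [(v, 0) for v in [1, 1, 0, 0, 0, 0, 0, 0]]
prod = [gmul(a, b) for a, b in zip(ntt(x, z), ntt(h, z))]
y = [gmul((ninv, 0), v) for v in ntt(prod, zinv)]
print([a for a, b in y])              # [1, 3, 2, 1, 1, 0, 0, 0] -- circular conv
```

Because all arithmetic is modular, the convolution is exact, with no floating-point rounding; the Mersenne form of q is what makes the modular reductions cheap in the fast version.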
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
Information Technology in Complex Health Services
Southon, Frank Charles Gray; Sauer, Chris; Dampney, Christopher Noel Grant (Kit)
1997-01-01
Objective: To identify impediments to the successful transfer and implementation of packaged information systems through large, divisionalized health services. Design: A case analysis of the failure of an implementation of a critical application in the Public Health System of the State of New South Wales, Australia, was carried out. This application had been proven in the United States environment. Measurements: Interviews involving over 60 staff at all levels of the service were undertaken by a team of three. The interviews were recorded and analyzed for key themes, and the results were shared and compared to enable a continuing critical assessment. Results: Two components of the transfer of the system were considered: the transfer from a different environment, and the diffusion throughout a large, divisionalized organization. The analyses were based on the Scott-Morton organizational fit framework. In relation to the first, it was found that there was a lack of fit in the business environments and strategies, organizational structures and strategy-structure pairing as well as the management process-roles pairing. The diffusion process experienced problems because of the lack of fit in the strategy-structure, strategy-structure-management processes, and strategy-structure-role relationships. Conclusion: The large-scale developments of integrated health services present great challenges to the efficient and reliable implementation of information technology, especially in large, divisionalized organizations. There is a need to take a more sophisticated approach to understanding the complexities of organizational factors than has traditionally been the case. PMID:9067877
Amirfattahi, Rassoul
2013-10-01
Owing to its simplicity, radix-2 is a popular algorithm to implement the fast Fourier transform. Radix-2(p) algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, in this paper we propose a method for exact calculation of the multiplicative complexity of radix-2(p) algorithms. The methodology is described for the radix-2, radix-2(2) and radix-2(3) algorithms. Results show that radix-2(2) and radix-2(3) have significantly lower computational complexity compared with radix-2. Another interesting result is that while the number of complex multiplications in the radix-2(3) algorithm is slightly more than in radix-2(2), the number of real multiplications for radix-2(3) is less than for radix-2(2). This is because of twiddle factors of a form that needs a smaller number of real multiplications and is more frequent in the radix-2(3) algorithm.
An object tracking algorithm with embedded gyro information
NASA Astrophysics Data System (ADS)
Zhang, Yutong; Yan, Ding; Yuan, Yating
2017-01-01
High-speed attitude maneuvers of an Unmanned Aerial Vehicle (UAV) cause large motion between adjacent frames of the video stream produced by a camera fixed on the UAV body, which severely degrades the performance of the image object tracking process. To solve this problem, this paper proposes a method that uses a gyroscope fixed on the camera to measure the camera's angular velocity and then predict the substantial change of the object's position in the video stream. We accomplish the object tracking based on template matching. Experimental results show that the object tracking algorithm's efficiency and robustness are improved with embedded gyroscope information.
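The prediction step can be sketched with a pinhole small-angle model; the axis conventions and parameter values here are assumptions, and a real implementation would also account for roll and lens distortion:

```python
import math

def predict_pixel_shift(omega_xy, dt, focal_px):
    """Predicted (dx, dy) pixel shift of the image between frames caused by
    camera rotation. omega_xy = (wx, wy): angular rates (rad/s) about the
    camera's x and y axes; focal_px: focal length in pixels."""
    wx, wy = omega_xy
    dx = focal_px * math.tan(wy * dt)  # pan (about y) shifts horizontally
    dy = focal_px * math.tan(wx * dt)  # tilt (about x) shifts vertically
    return dx, dy

# centre the template-matching search window on the predicted position
dx, dy = predict_pixel_shift((0.0, 0.5), dt=1 / 30, focal_px=800.0)
```

Shifting the search window by the predicted offset keeps the template search small even under fast pans, which is the efficiency gain the abstract describes.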
Network algorithms for information analysis using the Titan Toolkit.
McLendon, William Clarence, III; Baumes, Jeffrey; Wilson, Andrew T.; Wylie, Brian Neil; Shead, Timothy M.
2010-07-01
The analysis of networked activities is dramatically more challenging than many traditional kinds of analysis. A network is defined by a set of entities (people, organizations, banks, computers, etc.) linked by various types of relationships. These entities and relationships are often uninteresting alone, and only become significant in aggregate. The analysis and visualization of these networks is one of the driving factors behind the creation of the Titan Toolkit. Given the broad set of problem domains and the wide-ranging databases in use by the information analysis community, the Titan Toolkit's flexible, component-based pipeline provides an excellent platform for constructing specific combinations of network algorithms and visualizations.
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Pande, S.
2009-12-01
Pattern analysis deals with the automatic detection of patterns in data, and a variety of algorithms are available for the purpose. These algorithms are commonly called Artificial Intelligence (AI) or data-driven algorithms; they have lately been applied to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question arises of which one is the "best". Some algorithms may be preferred because of their lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam's razor principle that a simpler classifier performs better. Popper has argued, however, that Occam's razor is without operational value because there is no clear measure or criterion for simplicity. Examples of measures that can be used for this purpose are the so-called algorithmic complexity, also known as Kolmogorov complexity or Kolmogorov (algorithmic) entropy; the Bayesian information criterion; and the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should also be noted that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with it. Thus, there is a compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications, with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML), of which support vector machines and relevance vector machines are the most commonly known algorithms; present some applications of such algorithms for distributed hydrologic modeling; and introduce an example of how the complexity measure
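Of the simplicity measures listed, the Bayesian information criterion is the easiest to make concrete; algorithmic complexity is uncomputable and the VC dimension is a property of a hypothesis class rather than a formula. A minimal sketch with illustrative numbers:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion (lower is better): penalizes model
    complexity by k*ln(n) against twice the maximized log-likelihood."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

simple_model = bic(-100.0, 1, 50)   # 1 parameter, slightly worse fit
flexible_model = bic(-99.0, 3, 50)  # 3 parameters, slightly better fit
# the complexity penalty outweighs the small likelihood gain,
# so BIC prefers the simpler model
```

This is exactly the kind of operational simplicity criterion the abstract contrasts with Popper's objection to Occam's razor.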
Complex algorithm of optical flow determination by weighted full search
NASA Astrophysics Data System (ADS)
Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.
2016-11-01
An optical flow determination algorithm is proposed, developed and tested. The algorithm aims to improve the accuracy of displacement determination at the boundaries of scene elements (objects). The results show that the proposed algorithm is rather promising for stereo vision applications. Varying the calculation parameters allowed us to determine their rational values and to reduce the average absolute end-point error (AEE) of displacement determination. A peculiarity of the proposed algorithm is that calculations are performed within local regions, which makes it possible to carry out such calculations simultaneously (i.e., in parallel).
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.
FIPSDock: a new molecular docking technique driven by fully informed swarm optimization algorithm.
Liu, Yu; Zhao, Lei; Li, Wentao; Zhao, Dongyu; Song, Miao; Yang, Yongliang
2013-01-05
The accurate prediction of protein-ligand binding is of great importance for rational drug design. We present herein a novel docking algorithm called FIPSDock, which implements a variant of the Fully Informed Particle Swarm (FIPS) optimization method and adopts the newly developed energy function of the AutoDock 4.20 suite for solving flexible protein-ligand docking problems. The search ability and docking accuracy of FIPSDock were first evaluated by multiple cognate docking experiments. In a benchmark test on 77 protein/ligand complex structures derived from the GOLD benchmark set, FIPSDock obtained a success rate of 93.5% and outperformed several docking programs, including particle swarm optimization (PSO)@AutoDock, SODOCK, AutoDock, DOCK, Glide, GOLD, FlexX, Surflex, and MolDock. More importantly, FIPSDock was evaluated against PSO@AutoDock, SODOCK, and the AutoDock 4.20 suite by cross-docking experiments on 74 protein-ligand complexes among eight protein targets (CDK2, ESR1, F2, MAPK14, MMP8, MMP13, PDE4B, and PDE5A) derived from the Sutherland cross-docking set. Remarkably, FIPSDock was superior to PSO@AutoDock, SODOCK, and AutoDock in seven out of eight cross-docking experiments. The results reveal that the FIPS algorithm might be more suitable than conventional genetic-algorithm-based methods for dealing with highly flexible docking problems.
Multidomain solution algorithm for potential flow computations around complex configurations
NASA Astrophysics Data System (ADS)
Jacquotte, Olivier-Pierre; Godard, Jean-Luc
1994-04-01
A method is presented for the computation of irrotational transonic flows of perfect gas around a wide class of geometries. It is based on the construction of a multidomain structured grid and then on the solution of the full potential equation discretized with finite elements. The novelty of the paper is the combination of three embedded algorithms: a mixed fixed-point/Newton algorithm to treat the non-linearity, a multidomain conjugate gradient algorithm to handle the grid topology and another conjugate gradient algorithm in each of the structured domains. This method has made possible the calculations of flows around geometries that cannot be treated in a structured approach without the multidomain algorithm; an application of this method to the study of the wing-pylon-nacelle interactions is presented.
Low-complexity color demosaicing algorithm based on integrated gradients
NASA Astrophysics Data System (ADS)
Chung, King-Hong; Chan, Yuk-Hee
2010-04-01
Color demosaicing is critical for digital cameras, because it converts a Bayer sensor mosaic output to a full color image, which determines the output image quality of the camera. In this work, an efficient decision-based demosaicing method is presented. This method exploits a new edge-sensing measure called integrated gradient (IG) to effectively extract gradient information in both color intensity and color difference domains simultaneously. This measure is reliable and supports full resolution, which allows one to interpolate the missing samples along an appropriate direction and hence directly improves the demosaicing performance. By sharing it in different demosaicing stages to guide the interpolation of various color planes, it guarantees the consistency of the interpolation direction in different color channels and saves the effort required to repeatedly extract gradient information from intermediate interpolation results at different stages. An IG-based green plane enhancement is also proposed to further improve the method's efficiency. Simulation results confirm that the proposed demosaicing method outperforms up-to-date demosaicing methods in terms of output quality at a complexity of around 80 arithmetic operations per pixel.
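The integrated-gradient measure itself is not reproduced here, but the underlying decision rule (interpolate along the direction with the smaller gradient) can be sketched for the green plane at a red or blue site of a Bayer mosaic:

```python
def interpolate_green(bayer, r, c):
    """Estimate the missing green value at a red/blue site (r, c) of a Bayer
    mosaic by interpolating along the direction of the smaller gradient.
    `bayer` is a 2-D list of numbers; border handling is left to the caller."""
    gh = abs(bayer[r][c - 1] - bayer[r][c + 1])   # horizontal green gradient
    gv = abs(bayer[r - 1][c] - bayer[r + 1][c])   # vertical green gradient
    if gh < gv:
        return (bayer[r][c - 1] + bayer[r][c + 1]) / 2  # edge runs horizontally
    if gv < gh:
        return (bayer[r - 1][c] + bayer[r + 1][c]) / 2  # edge runs vertically
    return (bayer[r][c - 1] + bayer[r][c + 1] +
            bayer[r - 1][c] + bayer[r + 1][c]) / 4      # flat region: average

# a vertical gradient of 20 vs. a horizontal gradient of 0:
# the rule interpolates horizontally, across the uniform direction
g = interpolate_green([[0, 0, 0], [10, 0, 10], [0, 20, 0]], 1, 1)  # 10.0
```

The paper's contribution is to compute this directional decision once, from gradients integrated over both the intensity and the color-difference domains, and reuse it across all interpolation stages.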
Scalability, Complexity and Reliability in Quantum Information Processing
2007-03-01
Information and Quantum Computation, Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, "Quantum algorithm for the hidden shift..."; "Future (and Past) of Quantum Lower Bounds by Polynomials," October 17, 2002; W. van Dam, Workshop on Quantum Information and Quantum Computation, Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, "Quantum algorithms: Fourier transforms and group theory," October 21, 2002; K
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
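A minimal illustration of size-bounded learning during backtrack search: partial assignments whose every extension fails are recorded as nogoods, but only those involving at most a fixed number of variables are kept, bounding the space used. This is a toy sketch of the general idea, not the paper's relevance-bounded scheme:

```python
def coloring_conflict(edges):
    """Constraint check for graph coloring: adjacent nodes must differ."""
    def conflict(asg):
        return any(u in asg and v in asg and asg[u] == asg[v] for u, v in edges)
    return conflict

def solve(domains, conflict, max_nogood=2, assignment=None, nogoods=None):
    """Chronological backtracking with size-bounded learning. A recorded
    nogood (a frozenset of var/value pairs) prunes any later branch that
    contains it as a subset."""
    if assignment is None:
        assignment, nogoods = {}, set()
    if len(assignment) == len(domains):
        return dict(assignment)
    var = [v for v in domains if v not in assignment][0]
    for val in domains[var]:
        assignment[var] = val
        items = frozenset(assignment.items())
        pruned = any(ng <= items for ng in nogoods)
        if not pruned and not conflict(assignment):
            result = solve(domains, conflict, max_nogood, assignment, nogoods)
            if result is not None:
                return result
        del assignment[var]
    # every value failed: the current partial assignment is a nogood
    if 0 < len(assignment) <= max_nogood:
        nogoods.add(frozenset(assignment.items()))
    return None
```

Raising `max_nogood` trades memory for pruning power, which is precisely the space/runtime trade-off the paper analyzes.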
The physics of complex systems in information and biology
NASA Astrophysics Data System (ADS)
Walker, Dylan
Citation networks have re-emerged as a topic of intense interest in the complex networks community with the recent availability of large-scale data sets. Ranking citation networks is a necessary practice as a means to improve information navigability and search. Unlike many information networks, the aging characteristics of citation networks require the development of new ranking methods. To account for the strong aging characteristics of citation networks, we modify the PageRank algorithm by initially distributing random surfers exponentially with age, in favor of more recent publications. The output of this algorithm, which we call CiteRank, is interpreted as approximate traffic to individual publications in a simple model of how researchers find new information. We optimize the parameters of our algorithm to achieve the best performance. The results are compared for two rather different citation networks: all American Physical Society publications between 1893 and 2003, and the set of high-energy physics theory (hep-th) preprints. Despite major differences between these two networks, we find that their optimal parameters for the CiteRank algorithm are remarkably similar. The advantages and performance of CiteRank over more conventional methods of ranking publications are discussed. Collaborative voting systems have emerged as an abundant form of real-world, complex information systems that exist in a variety of online applications. These systems comprise large populations of users that collectively submit and vote on objects. While the specific properties of these systems vary widely, many of them share a core set of features and dynamical behaviors that govern their evolution. We study a subset of these systems that involve material of a time-critical nature, as in the popular example of news items. We consider a general model system in which articles are introduced, voted on by a population of users, and subsequently expire after a prescribed period of time. To
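The age-biased PageRank modification can be sketched in a few lines; the decay time and jump probability below are illustrative values, and the treatment of papers with no references is an assumption of this sketch:

```python
import math

def citerank(links, ages, tau=2.6, alpha=0.5, iters=100):
    """CiteRank-style score: random surfers start at a paper with probability
    decaying exponentially with its age, then repeatedly either follow one of
    its references or restart. links[i] = papers cited by paper i."""
    n = len(links)
    start = [math.exp(-ages[i] / tau) for i in range(n)]
    z = sum(start)
    start = [s / z for s in start]
    rank = start[:]
    for _ in range(iters):
        new = [alpha * s for s in start]          # restarts, biased to recent papers
        for i in range(n):
            if links[i]:
                share = (1 - alpha) * rank[i] / len(links[i])
                for j in links[i]:
                    new[j] += share
            else:                                 # dangling paper: assume a restart
                for j in range(n):
                    new[j] += (1 - alpha) * rank[i] * start[j]
        rank = new
    return rank

links = {0: [1, 2], 1: [2], 2: []}   # paper 0 is newest and cites 1 and 2
ranks = citerank(links, ages=[0.0, 1.0, 2.0])
# traffic flows from recent papers backward through references,
# so the heavily cited oldest paper accumulates the most
```

Surfers are injected at recent papers and flow backward along references, so an old but well-cited paper can still rank highly, which is the behavior the exponential start distribution is designed to produce.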
An information-bearing seed for nucleating algorithmic self-assembly.
Barish, Robert D; Schulman, Rebecca; Rothemund, Paul W K; Winfree, Erik
2009-04-14
Self-assembly creates natural mineral, chemical, and biological structures of great complexity. Often, the same starting materials have the potential to form an infinite variety of distinct structures; information in a seed molecule can determine which form is grown as well as where and when. These phenomena can be exploited to program the growth of complex supramolecular structures, as demonstrated by the algorithmic self-assembly of DNA tiles. However, the lack of effective seeds has limited the reliability and yield of algorithmic crystals. Here, we present a programmable DNA origami seed that can display up to 32 distinct binding sites and demonstrate the use of seeds to nucleate three types of algorithmic crystals. In the simplest case, the starting materials are a set of tiles that can form crystalline ribbons of any width; the seed directs assembly of a chosen width with >90% yield. Increased structural diversity is obtained by using tiles that copy a binary string from layer to layer; the seed specifies the initial string and triggers growth under near-optimal conditions where the bit copying error rate is <0.2%. Increased structural complexity is achieved by using tiles that generate a binary counting pattern; the seed specifies the initial value for the counter. Self-assembly proceeds in a one-pot annealing reaction involving up to 300 DNA strands containing >17 kb of sequence information. In sum, this work demonstrates how DNA origami seeds enable the easy, high-yield, low-error-rate growth of algorithmic crystals as a route toward programmable bottom-up fabrication.
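The binary-counting ribbon can be modeled abstractly: each tile layer copies the previous layer while adding one, and the seed fixes the width and the initial value. This is a toy model of the information flow, not a simulation of tile attachment kinetics or error rates:

```python
def grow_counter_crystal(seed_bits, layers):
    """Abstract model of seeded algorithmic self-assembly: each new layer
    copies the previous one while incrementing it, so the ribbon records a
    binary count. The seed determines the width and the starting value."""
    rows = [list(seed_bits)]
    for _ in range(layers - 1):
        row = rows[-1][:]
        carry = 1
        for i in range(len(row) - 1, -1, -1):   # ripple-carry, LSB on the right
            row[i], carry = (row[i] + carry) % 2, (row[i] + carry) // 2
            if carry == 0:
                break
        rows.append(row)
    return rows

rows = grow_counter_crystal([0, 0, 1, 1], 3)   # seed encodes the value 3
# successive layers encode 3, 4, 5
```

In the physical system each row transition is implemented by local tile-attachment rules rather than a global carry pass, but the layer-to-layer function computed is the same.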
Dynamic information routing in complex networks
NASA Astrophysics Data System (ADS)
Kirst, Christoph; Timme, Marc; Battaglia, Demian
2015-03-01
Flexible information routing fundamentally underlies the function of many biological and artificial networks. Yet, how information may be specifically communicated and dynamically routed in these systems is not well understood. Here we demonstrate that collective dynamical states systematically control patterns of information sharing and transfer in networks, as measured by delayed mutual information and transfer entropies between activities of a network's units. For oscillatory networks we analyze how individual unit properties, the connectivity structure and external inputs all provide means to flexibly control information routing. For multi-scale, modular architectures, we resolve communication patterns at all levels and show how local interventions within one sub-network may remotely control the non-local network-wide routing of information. This theory helps understanding information routing patterns across systems where collective dynamics co-occurs with a communication function.
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-02-01
One of the key problems in social network analysis is influence maximization, which has great significance in both theory and practical applications. Given a complex network and a positive integer k, the problem asks for the k nodes that trigger the largest expected number of the remaining nodes. Most mature algorithms are divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence spread process, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. In contrast, topology-based algorithms rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm based on a local index rank (LIR). The influence spread of our algorithm is close to that of propagation-based algorithms and sometimes exceeds it, while its running time is millions of times shorter. Our experimental results show that our algorithm delivers good and stable performance under both the IC and LT models.
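The exact LIR definition is in the paper; as a hedged sketch of the general idea (prefer nodes that are local degree maxima in their neighborhood), one might write:

```python
def lir_top_k(adj, k):
    """Topology-based seed selection in the spirit of LIR: for each node,
    count neighbours with strictly larger degree (the local index) and
    prefer local maxima (index 0), breaking ties by degree. A sketch of
    the general idea, not the paper's exact definition."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    li = {v: sum(1 for u in adj[v] if deg[u] > deg[v]) for v in adj}
    ranked = sorted(adj, key=lambda v: (li[v], -deg[v]))
    return ranked[:k]

adj = {
    'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b'],
    'd': ['a', 'e'], 'e': ['d'],
}
top = lir_top_k(adj, 1)   # 'a' is the only local degree maximum
```

Because it needs only degrees and one pass over the edges, the selection runs in time linear in the network size, which is the source of the enormous speedup over propagation-based methods.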
Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.
Khennak, Ilyes; Drias, Habiba
2017-02-01
With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of many backgrounds now use Web search engines to acquire medical information, including information about a specific disease, a medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, and their search queries end up imprecise due to the use of unclear keywords. The use of such ambiguous and vague queries to describe patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the online medical information database, show that the proposed approach is more effective and efficient compared to the baseline.
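The Bat Algorithm's core update rules (frequency, velocity, position, plus a loudness-gated local walk) can be sketched on a generic continuous objective; applying them to score expanded-query candidates, as the paper does, changes only the objective and the encoding. The constants and the fixed loudness/pulse schedule are simplifying assumptions:

```python
import random

def bat_minimize(obj, dim, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                 loudness=0.9, pulse=0.5, seed=1):
    """Minimal bat algorithm for continuous minimization. Each bat carries a
    position and velocity; a random frequency scales its pull toward the
    global best, and with probability (1 - pulse) it instead takes a small
    random walk around the best solution."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    best = min(X, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()            # frequency
            V[i] = [v + (x - b) * f for v, x, b in zip(V[i], X[i], best)]
            cand = [x + v for x, v in zip(X[i], V[i])]
            if rng.random() > pulse:                           # local random walk
                cand = [b + 0.01 * rng.gauss(0, 1) for b in best]
            if obj(cand) < obj(X[i]) and rng.random() < loudness:
                X[i] = cand
            if obj(X[i]) < obj(best):
                best = X[i][:]
    return best

best = bat_minimize(lambda x: sum(t * t for t in x), dim=2)
# converges toward the origin for the sphere function
```

For query expansion one would replace the sphere objective with a retrieval-quality score over candidate term sets; the dynamics above stay the same.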
On Distribution Reduction and Algorithm Implementation in Inconsistent Ordered Information Systems
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research and for applications of ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems. PMID:25258721
Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network.
Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun
2017-03-08
Data acquisition is the foundation of soft sensors and data fusion. Distributed data acquisition and its synchronization are important technologies for ensuring the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Mirollo and Strogatz model and the Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low-quality links.
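As an illustration of the pulse-coupled dynamics underlying such algorithms, here is a minimal discrete-phase simulation in the Mirollo-Strogatz style; the all-to-all topology, the multiplicative jump rule and all constants are assumptions of this sketch, not the paper's multiscale model:

```python
import random

def firefly_sync(n=10, cycle=50.0, jump=0.5, steps=6000, seed=3):
    """Pulse-coupled clocks: each advances one tick per step and flashes on
    reaching `cycle`; observers of a flash multiply their phase by (1+jump),
    and any observer pushed past the threshold flashes and merges with the
    group. Repeated absorptions drive the network toward a common phase."""
    rng = random.Random(seed)
    phase = [float(rng.randrange(int(cycle))) for _ in range(n)]
    for _ in range(steps):
        phase = [p + 1.0 for p in phase]
        if any(p >= cycle for p in phase):
            phase = [0.0 if p >= cycle or p * (1.0 + jump) >= cycle
                     else p * (1.0 + jump) for p in phase]
    return phase

phases = firefly_sync()
# after enough cycles the clocks collapse into one (or very few) phase groups
```

In a real WSN the flash is a broadcast packet and only neighbours observe it, which is where multi-hop topology and link quality, the paper's concern, come into play.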
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for the gamma and log-gamma functions of complex arguments are presented. The methods and algorithms reported include Chebyshev approximations, Padé expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
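Kuki's Algorithm 421 is not reproduced here, but a compact complex log-gamma in the same spirit can be sketched from Stirling's asymptotic series plus upward recursion (valid for Re z > 0; the shift point and the number of series terms are tuning choices):

```python
import cmath
import math

def complex_lgamma(z, shift=10.0):
    """Log-gamma for complex z with Re z > 0: move z rightward with the
    recurrence lgamma(z) = lgamma(z+1) - log z until Re z >= shift, then
    apply Stirling's series with three correction terms."""
    z = complex(z)
    acc = 0j
    while z.real < shift:
        acc -= cmath.log(z)
        z += 1
    series = 1 / (12 * z) - 1 / (360 * z ** 3) + 1 / (1260 * z ** 5)
    return (acc + (z - 0.5) * cmath.log(z) - z
            + 0.5 * math.log(2 * math.pi) + series)

# agrees with math.lgamma on the positive real axis and satisfies the
# recurrence lgamma(z + 1) = lgamma(z) + log z for complex arguments
```

Production implementations add a reflection formula for Re z <= 0 and tighter error control, which is where the algorithms surveyed in the paper differ.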
Complex Dynamics in Information Sharing Networks
NASA Astrophysics Data System (ADS)
Cronin, Bruce
This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six-year period. The efficiency of such implementation is a key business problem in IT systems of this type. Data from usage logs provides the basis for analysis of the dynamic evolution of social networks around the repository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power-law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case study demonstrating the utility of the broad methodological approach.
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
Determination of multifractal dimensions of complex networks by means of the sandbox algorithm
NASA Astrophysics Data System (ADS)
Liu, Jin-Long; Yu, Zu-Guo; Anh, Vo
2015-02-01
Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A 159, 155-166 (1989)), for MFA of complex networks. First, we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning algorithm proposed by Furuya and Yakubo (Phys. Rev. E 84, 036118 (2011)), and the improved box-counting algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp. 2014, P02020 (2014)) by calculating the mass exponents τ(q) of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks. The comparison results show that the SB algorithm is the most effective and feasible algorithm to calculate the mass exponents τ(q) and to explore the multifractal behavior of complex networks. Then, we apply the SB algorithm to study the multifractal property of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, that of small-world networks is not obvious, and it almost does not exist in random networks.
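A sketch of the sandbox procedure on an unweighted graph: pick random centers, measure the mass M(r) of shortest-path balls, and read the mass exponent tau(q) off a log-log fit. The radii and center counts below are illustrative choices:

```python
import math
import random
from collections import deque

def ball_sizes(adj, center, rmax):
    """M(r): number of nodes within shortest-path distance r of `center`."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, rmax + 1)]

def sandbox_tau(adj, q, radii, n_centers=50, seed=7):
    """Sandbox estimate of tau(q): least-squares slope of
    ln <(M(r)/N)^(q-1)> versus ln(r/r_max), averaged over random centers."""
    rng = random.Random(seed)
    nodes = list(adj)
    N = len(nodes)
    rmax = max(radii)
    xs, ys = [], []
    for r in radii:
        masses = [(ball_sizes(adj, rng.choice(nodes), r)[r - 1] / N) ** (q - 1)
                  for _ in range(n_centers)]
        xs.append(math.log(r / rmax))
        ys.append(math.log(sum(masses) / n_centers))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

ring = {i: [(i - 1) % 200, (i + 1) % 200] for i in range(200)}
tau2 = sandbox_tau(ring, q=2, radii=[2, 4, 8, 16])
# ~0.9 for a ring: close to (q - 1) * D = 1 for a one-dimensional object,
# biased slightly low by the finite radii
```

Sweeping q and estimating the slope for each value traces out the full tau(q) spectrum; a nonlinear tau(q) is the signature of multifractality the paper looks for in scale-free networks.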
Fast registration algorithm using a variational principle for mutual information
NASA Astrophysics Data System (ADS)
Alexander, Murray E.; Summers, Randy
2003-05-01
A method is proposed for cross-modal image registration based on mutual information (MI) matching criteria. Both conventional and "normalized" MI are considered. MI may be expressed as a functional of a general image displacement field u. The variational principle for MI provides a field equation for u. The method employs a set of "registration points" consisting of a prescribed number of strongest edge points of the reference image, and minimizes an objective function D defined as the sum of the square residuals of the field equation for u at these points, where u is expressed as a sum over a set of basis functions (the affine model is presented here). D has a global minimum when the images are aligned, with a "basin of attraction" typically of width ~0.3 pixels. By pre-filtering with a low-pass filter, and using a multiresolution image pyramid, the basin may be significantly widened. The Levenberg-Marquardt algorithm is used to minimize D. Tests using randomly distributed misalignments of image pairs show that registration accuracy of 0.02 - 0.07 pixels is achieved, when using cubic B-splines for image representation, interpolation, and Parzen window estimation.
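The matching criterion itself is easy to sketch; here MI is estimated with a plain joint histogram rather than the Parzen-window estimator used in the paper, and the bin count is an arbitrary choice:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8, vmax=256):
    """MI (in nats) between two equal-length grayscale pixel sequences,
    estimated from a joint histogram over `bins` intensity bins."""
    pairs = [(a * bins // vmax, b * bins // vmax) for a, b in zip(img_a, img_b)]
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log(c * n / (pa[a] * pb[b]))
               for (a, b), c in pab.items())

img = [(i * 7) % 256 for i in range(1000)]
# MI of an image with itself equals the entropy of its binned intensities
# (about ln 8 nats here); MI with a constant image is exactly zero
```

The registration method evaluates how a criterion like this varies with the displacement field u at selected edge points, rather than recomputing it densely over the whole image.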
Uses of Color in Complex Information Displays
1985-02-01
Information Service database, the Lockheed Dialog System and the Defense File. The principal areas searched were: color perception, color coding, visual... Computerized databases were also identified. This search was multidisciplinary, covering relevant research in computer graphics, display technologies, human... saturated ones. This is true for all wavelengths, except spectral yellow (Chapanis & Halsey, 1955). While the scientific database remains incomplete
A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks
Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan
2015-01-01
Interference alignment (IA) is a novel technique that can effectively eliminate interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and the interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high-SNR regimes; however, its complexity increases dramatically as the number of users and antennas increases, limiting its applications in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm approximately points to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm suppresses the interference leakage more rapidly than the traditional AMIL algorithm, and achieves the same sum rate as the AMIL algorithm with far fewer iterations and less execution time. PMID:26230697
Information Access in Complex, Poorly Structured Information Spaces
1990-02-01
distributed and made available through News creates a serious information overload. The conceptual framework behind this research effort explores (a...is willing to generate it, whose structure is it?). The innovative system building effort (instantiating the conceptual framework as well as
NASA Astrophysics Data System (ADS)
Chen, Lei; Li, Dehua; Yang, Jie
2007-12-01
Constructing a virtual international strategy environment requires many kinds of information, such as economics, politics, military affairs, diplomacy, culture, and science. It is therefore very important to build a highly efficient system for automatic information extraction, classification, recombination and analysis management as the foundation and a component of a military strategy hall. This paper first uses an improved Boost algorithm to classify the collected initial information, and then uses a strategy-intelligence extraction algorithm to extract strategic intelligence from that information, helping strategists analyze it.
An Introduction to Genetic Algorithms and to Their Use in Information Retrieval.
ERIC Educational Resources Information Center
Jones, Gareth; And Others
1994-01-01
Genetic algorithms, a class of nondeterministic algorithms in which the role of chance makes the precise nature of a solution impossible to guarantee, seem to be well suited to combinatorial-optimization problems in information retrieval. Provides an introduction to techniques and characteristics of genetic algorithms and illustrates their…
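The GA machinery the abstract introduces (selection, crossover, mutation over candidate solutions) can be sketched on a stand-in combinatorial objective. The OneMax fitness below is a placeholder assumption, not an information-retrieval objective from the article:

```python
import random

random.seed(0)

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=60,
                      p_mut=0.02):
    """Minimal generational GA: size-2 tournament selection,
    one-point crossover, per-bit flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of size 2
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy objective: maximize the number of selected "relevant" terms (OneMax)
best = genetic_algorithm(sum, 20)
print(sum(best))
```

Because chance drives selection and mutation, reruns with different seeds give different (usually near-optimal) answers, which is exactly the nondeterminism the abstract describes.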
NASA Astrophysics Data System (ADS)
Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi
2016-09-01
The field of complex network clustering has gained considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures, within which the evolutionary algorithm is carried out, and individuals are eliminated according to the vector of the membranes. In the proposed method, two evaluation objectives, Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising.
NASA Astrophysics Data System (ADS)
Buiochi, F.; Kiyono, C. Y.; Peréz, N.; Adamowski, J. C.; Silva, E. C. N.
A new systematic and efficient algorithm to obtain the ten complex constants of piezoelectric materials belonging to the 6 mm symmetry class was developed. A finite element method routine was implemented in Matlab using eight-node axisymmetric elements. The algorithm generates the electrical conductance and resistance curves and calculates the quadratic difference between the experimental and numerical curves. Finally, to minimize this difference, an optimization algorithm based on the Method of Moving Asymptotes (MMA) is used. The algorithm is able to adjust the curves over a wide frequency range, obtaining the real and imaginary parts of the material properties simultaneously.
Convergence analysis of an augmented algorithm for fully complex-valued neural networks.
Xu, Dongpo; Zhang, Huisheng; Mandic, Danilo P
2015-09-01
This paper presents an augmented algorithm for fully complex-valued neural networks based on Wirtinger calculus, which simplifies the derivation of the algorithm and eliminates the Schwarz symmetry restriction on the activation functions. A unified mean value theorem is first established for general functions of complex variables, covering analytic functions, non-analytic functions and real-valued functions. Based on the introduced theorem, convergence results for the augmented algorithm are obtained under mild conditions. Simulations are provided to support the analysis.
NASA Astrophysics Data System (ADS)
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve problems that are inefficient to address in classical image information concealing, and quantum image information concealing has been widely explored in recent years. It can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography, built on clustering of uniform image blocks, is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data, and the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can strengthen the security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The LSQu-block information concealing algorithm for quantum images can be applied in many fields according to different needs.
Interactive Computational Algorithms for Acoustic Simulation in Complex Environments
2015-07-19
simulation for urban and other complex propagation environments. The PIs will also collaborate with Stephen Ketcham and Keith Wilson at USACE and...Albert, Keith Wilson, Dinesh Manocha. Validation of 3D numerical simulation for acoustic pulse propagation in an urban environment, The Journal of
Galas, David J; Sakhanenko, Nikita A; Skupin, Alexander; Ignac, Tomasz
2014-02-01
Context dependence is central to the description of complexity. Keying on the pairwise definition of "set complexity," we use an information theory approach to formulate general measures of systems complexity. We examine the properties of multivariable dependency starting with the concept of interaction information. We then present a new measure for unbiased detection of multivariable dependency, "differential interaction information." This quantity for two variables reduces to the pairwise "set complexity" previously proposed as a context-dependent measure of information in biological systems. We generalize it here to an arbitrary number of variables. Critical limiting properties of the "differential interaction information" are key to the generalization. This measure extends previous ideas about biological information and provides a more sophisticated basis for the study of complexity. The properties of "differential interaction information" also suggest new approaches to data analysis. Given a data set of system measurements, differential interaction information can provide a measure of collective dependence, which can be represented in hypergraphs describing complex system interaction patterns. We investigate this kind of analysis using simulated data sets. The conjoining of a generalized set complexity measure, multivariable dependency analysis, and hypergraphs is our central result. While our focus is on complex biological systems, our results are applicable to any complex system.
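The multivariable dependency measures the abstract builds on can be computed directly from subset entropies. The sketch below implements the classic interaction information via the alternating entropy sum (one common sign convention), as a minimal stand-in for the authors' generalized "differential interaction information":

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def interaction_information(data):
    """I(X1;...;Xk) from rows of joint samples, via the alternating
    sum over nonempty variable subsets S of (-1)^|S| * H(S)."""
    k = len(data[0])
    total = 0.0
    for r in range(1, k + 1):
        for idx in combinations(range(k), r):
            h = entropy([tuple(row[i] for i in idx) for row in data])
            total += (-1) ** r * h
    return -total

# XOR triple: pairwise independent variables that are jointly dependent
xor = [(x, y, x ^ y) for x in (0, 1) for y in (0, 1)]
print(interaction_information(xor))  # → -1.0 under this sign convention
```

The XOR example is the canonical case where pairwise measures see nothing (every pair is independent) yet the three-way measure is nonzero, which is why context-dependent, multivariable measures are needed at all.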
Tahara, Tatsuki; Shimozato, Yuki; Xia, Peng; Ito, Yasunori; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu; Kubota, Toshihiro
2012-08-27
We propose an image-reconstruction algorithm of parallel phase-shifting digital holography (PPSDH) which is a technique of single-shot phase-shifting interferometry. In the conventional algorithms in PPSDH, the residual 0th-order diffraction wave and the conjugate images cannot be removed completely and a part of space-bandwidth information is discarded. The proposed algorithm can remove these residual images by modifying the calculation of phase-shifting interferometry and by using Fourier transform technique, respectively. Then, several types of complex amplitudes are derived from a recorded hologram according to the directions in which the neighboring pixels used for carrying out the spatial phase-shifting interferometry are aligned. Several distributions are Fourier-transformed and wide space-bandwidth information of the object wave is obtained by selecting the spectrum among the Fourier-transformed images in each region of the spatial frequency domain and synthesizing a Fourier-transformed image from the spectrum.
NASA Astrophysics Data System (ADS)
Sahu, Swagatika; Mohanty, Saumendra; Srivastav, Richa
2013-01-01
Orthogonal Frequency Division Multiplexing (OFDM) is an emerging multi-carrier modulation scheme that has been adopted by several wireless standards such as IEEE 802.11a and HiperLAN2. A well-known problem of OFDM is its sensitivity to the frequency offset between the transmitted and received carrier frequencies. Carrier frequency offsets (CFOs) between the transmitter and the receiver destroy the orthogonality between carriers and degrade system performance significantly, because the offset introduces interference among the multiplicity of carriers in the OFDM signal. The conventional algorithms given by P. Moose and Schmidl describe how the carrier frequency offset of an OFDM system can be estimated using training sequences. Simulation results show that an improved carrier frequency offset estimation algorithm, which uses a complex training sequence, performs better than the conventional Moose and Schmidl algorithms: it effectively improves the frequency estimation accuracy and provides a wide acquisition range for the carrier frequency offset with low complexity. This paper presents BER comparisons of the conventional and improved algorithms for different real and complex modulation schemes under random carrier offsets, and also examines the BER performance of the improved algorithm at different CFOs.
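The training-sequence estimation idea can be sketched with the classic Moose estimator (repeat one training symbol; the CFO appears as the phase of the correlation between the two copies). This is the conventional baseline the abstract compares against, not the paper's improved algorithm; the signal parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64                       # samples per training symbol
eps = 0.13                   # true CFO, in subcarrier spacings (|eps| < 0.5)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # training symbol
tx = np.concatenate([s, s])  # two identical symbols, Moose-style

n = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * eps * n / N)   # channel applies the offset
rx += 0.01 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# Moose estimate: phase of the correlation between the two halves
eps_hat = np.angle(np.sum(np.conj(rx[:N]) * rx[N:])) / (2 * np.pi)
print(round(eps_hat, 3))  # → 0.13
```

The phase wraps at ±π, which is exactly the limited acquisition range (|eps| < 0.5 subcarrier spacings) that improved estimators aim to widen.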
Cognitive Complexity as a Determinant of Information Processing
ERIC Educational Resources Information Center
Stewin, L.; Anderson, C. C.
1974-01-01
Relationships between cognitive complexity as defined by the ITI [Interpersonal Topical Inventory (Tuckman, 1966)] and the CST [Conceptual Systems Test (Harvey, 1967)] and a number of other information processing variables were examined using 107 grade eleven students. (Editor)
Dai, James Y; Leblanc, Michael; Smith, Nicholas L; Psaty, Bruce; Kooperberg, Charles
2009-10-01
Association studies have been widely used to identify genetic liability variants for complex diseases. While scanning a chromosomal region one single nucleotide polymorphism (SNP) at a time may not fully explore linkage disequilibrium, haplotype analyses tend to require a fairly large number of parameters, thus potentially losing power. Clustering algorithms, such as the cladistic approach, have been proposed to reduce the dimensionality, yet they have important limitations. We propose a SNP-Haplotype Adaptive REgression (SHARE) algorithm that seeks the most informative set of SNPs for genetic association in a targeted candidate region by growing and shrinking haplotypes with one more or one fewer SNP in a stepwise fashion, and comparing prediction errors of different models via cross-validation. Depending on the evolutionary history of the disease mutations and the markers, this set may contain a single SNP or several SNPs that lay a foundation for haplotype analyses. Haplotype phase ambiguity is effectively accounted for by treating haplotype reconstruction as a part of the learning procedure. Simulations and a data application show that our method has improved power over existing methodologies and that the results are informative in the search for disease-causal loci.
NASA Astrophysics Data System (ADS)
Chernyavskiy, Andrey; Khamitov, Kamil; Teplov, Alexey; Voevodin, Vadim; Voevodin, Vladimir
2016-10-01
In recent years, quantum information technologies (QIT) have shown great development; however, their implementation faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part covers features such as sequential and parallel complexity, macro structure, and the visual information graph. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvement.
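The one-qubit transformation kernel the abstract analyzes can be sketched as a standard state-vector update: reshape the length-2^n amplitude vector into an n-dimensional tensor and contract the 2x2 gate with the target axis. This is a textbook simulation kernel, not the AlgoWiki implementation:

```python
import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` (0 = most significant)
    of an n-qubit state vector of length 2**n."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; restore order
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
X = np.array([[0, 1], [1, 0]])                # NOT

state = np.zeros(8, dtype=complex)
state[0] = 1.0                                # |000>
state = apply_1q(state, H, 0, 3)              # (|000> + |100>)/sqrt(2)
state = apply_1q(state, X, 2, 3)              # (|001> + |101>)/sqrt(2)
print(np.round(np.abs(state) ** 2, 3))
```

The memory-access pattern is what makes this kernel interesting for the locality analysis the abstract describes: the stride of the touched amplitude pairs is 2^(n-1-target), so different targets stress the memory hierarchy very differently.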
On the complexity and the information content of cosmic structures
NASA Astrophysics Data System (ADS)
Vazza, F.
2017-03-01
The emergence of cosmic structure is commonly considered one of the most complex phenomena in nature. However, this complexity has never been defined nor measured in a quantitative and objective way. In this work, we propose a method to measure the information content of cosmic structure and to quantify the complexity that emerges from it, based on Information Theory. The emergence of complex evolutionary patterns is studied with a statistical symbolic analysis of the datastream produced by state-of-the-art cosmological simulations of forming galaxy clusters. This powerful approach allows us to measure how many bits of information are necessary to predict the evolution of energy fields in a statistical way, and it offers a simple way to quantify when, where and how the cosmic gas behaves in complex ways. The most complex behaviours are found in the peripheral regions of galaxy clusters, where supersonic flows drive shocks and large energy fluctuations over a few tens of millions of years. Describing the evolution of magnetic energy requires at least twice as many bits as the other energy fields. When radiative cooling and feedback from galaxy formation are considered, the cosmic gas is overall found to double its degree of complexity. In the future, Cosmic Information Theory can significantly increase our understanding of the emergence of cosmic structure, as it represents an innovative framework to design and analyse complex simulations of the Universe in a simple, yet powerful way.
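The "bits needed to predict the evolution" idea can be sketched with block entropies of a symbolized datastream: discretize a time series into symbols, then estimate the entropy rate as the increment H(k) - H(k-1) of block entropies. The up/down symbolization and the toy series are assumptions, not the paper's scheme:

```python
from collections import Counter
from math import log2

def block_entropy(symbols, k):
    """Shannon entropy (bits) of length-k blocks in a symbol stream."""
    blocks = [tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)]
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in Counter(blocks).values())

# symbolize an energy series into 1 (grew) / 0 (shrank) between snapshots
series = [1.0, 1.2, 1.1, 1.4, 1.3, 1.5, 1.6, 1.4, 1.7, 1.6]
sym = [int(b > a) for a, b in zip(series, series[1:])]

# entropy-rate estimate: extra bits needed to predict one more sample
rate = block_entropy(sym, 2) - block_entropy(sym, 1)
print(round(rate, 3))
```

A perfectly periodic field would give a rate near zero (fully predictable), while a random one approaches 1 bit per sample; fields needing more bits, like the magnetic energy in the abstract, sit in between or above per-symbol-alphabet limits.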
An augmented extended Kalman filter algorithm for complex-valued recurrent neural networks.
Goh, Su Lee; Mandic, Danilo P
2007-04-01
An augmented complex-valued extended Kalman filter (ACEKF) algorithm for the class of nonlinear adaptive filters realized as fully connected recurrent neural networks is introduced. This is achieved based on some recent developments in the so-called augmented complex statistics and the use of general fully complex nonlinear activation functions within the neurons. This makes the ACEKF suitable for processing general complex-valued nonlinear and nonstationary signals and also bivariate signals with strong component correlations. Simulations on benchmark and real-world complex-valued signals support the approach.
Zaneveld, Jesse R. R.; Thurber, Rebecca L. V.
2014-01-01
Complex symbioses between animal or plant hosts and their associated microbiotas can involve thousands of species and millions of genes. Because of the number of interacting partners, it is often impractical to study all organisms or genes in these host-microbe symbioses individually. Yet new phylogenetic predictive methods can use the wealth of accumulated data on diverse model organisms to make inferences into the properties of less well-studied species and gene families. Predictive functional profiling methods use evolutionary models based on the properties of studied relatives to put bounds on the likely characteristics of an organism or gene that has not yet been studied in detail. These techniques have been applied to predict diverse features of host-associated microbial communities ranging from the enzymatic function of uncharacterized genes to the gene content of uncultured microorganisms. We consider these phylogenetically informed predictive techniques from disparate fields as examples of a general class of algorithms for Hidden State Prediction (HSP), and argue that HSP methods have broad value in predicting organismal traits in a variety of contexts, including the study of complex host-microbe symbioses. PMID:25202302
The lower bound on complexity of parallel branch-and-bound algorithm for subset sum problem
NASA Astrophysics Data System (ADS)
Kolpakov, Roman; Posypkin, Mikhail
2016-10-01
The subset sum problem is a particular case of the Boolean knapsack problem in which each item's price equals its weight. The problem can be stated informally as searching for the densest packing of a set of items into a box with limited capacity. Recently, coarse-grained parallelization approaches to the Branch-and-Bound (B&B) method have attracted attention due to the growing popularity of weakly connected distributed computing platforms. In this paper we consider one such approach for solving the subset sum problem. In the first stage, one of the processors (the manager) performs some number of B&B steps, generating a set of subproblems. In the second stage, the generated subproblems are sent to the other processors, one subproblem per processor. The processors completely solve the received subproblems, and the manager collects all the obtained solutions and chooses the optimal one. For this algorithm we formally define the parallel execution model (the frontal scheme of parallelization) and the notion of frontal scheme complexity, and we study the frontal scheme complexity for a series of subset sum problems.
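The sequential B&B kernel that both the manager and the workers run can be sketched as a depth-first search with a simple remaining-weight bound. The bound choice here is a standard one and an assumption, not necessarily the authors':

```python
def subset_sum_bb(weights, capacity):
    """Best achievable sum <= capacity, by depth-first
    branch-and-bound with a remaining-weight bound."""
    weights = sorted(weights, reverse=True)
    suffix = [0] * (len(weights) + 1)        # suffix[i] = sum(weights[i:])
    for i in range(len(weights) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + weights[i]
    best = 0

    def branch(i, current):
        nonlocal best
        best = max(best, current)
        if i == len(weights) or best == capacity:
            return
        if current + suffix[i] <= best:          # bound: cannot beat incumbent
            return
        if current + weights[i] <= capacity:     # branch: take item i
            branch(i + 1, current + weights[i])
        branch(i + 1, current)                   # branch: skip item i

    branch(0, 0)
    return best

print(subset_sum_bb([7, 9, 3, 5, 13], 20))  # → 20 (7 + 13)
```

In the frontal scheme the abstract describes, the manager would stop this recursion after a fixed number of steps and ship each open (i, current) node to a worker as an independent subproblem.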
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Fast Multiscale Algorithms for Information Representation and Fusion
2012-07-01
5.1 Experiment: LIDAR Dataset (MSVD using nearest neighbors...implementation of the new multiscale SVD (MSVD) algorithms. We applied the MSVD to a publicly available LIDAR dataset for the purposes of distinguishing...between vegetation and the forest floor. The final results are presented in this report (initial results were reported in the previous quarterly report
Measurement of Information-Based Complexity in Listening.
ERIC Educational Resources Information Center
Bishop, Walton B.
When people say that what they hear is "over their heads," they are describing a severe information-based complexity (I-BC) problem. They cannot understand what is said because some of the information needed is missing, contaminated, and/or costly to obtain. Students often face these I-BC problems, and teachers often exacerbate them. Yet…
Overcoming the Superprincipal Complex: Shared and Informed Decision Making.
ERIC Educational Resources Information Center
Chamley, John D.; And Others
1992-01-01
To overcome the superprincipal complex, principals must become expert in processing information and making decisions. To make informed decisions most effectively, principals should employ participatory management, become process consultants, and incorporate the Situation-Target-Proposal (STP) method for resolving problems. Otherwise, change will…
An improved label propagation algorithm using average node energy in complex networks
NASA Astrophysics Data System (ADS)
Peng, Hao; Zhao, Dandan; Li, Lin; Lu, Jianfeng; Han, Jianmin; Wu, Songyang
2016-10-01
Detecting overlapping community structure can give significant insight into the structural and functional properties of complex networks. In this Letter, we propose an improved label propagation algorithm (LPA) to uncover overlapping community structure. After mapping nodes into random variables, the algorithm calculates the variance of each node and the proposed average node energy. Nodes whose variances are less than a tunable threshold are regarded as bridge nodes, and changing the given threshold can uncover latent bridge nodes. Simulation results on real-world and artificial networks show that the improved algorithm is efficient in revealing overlapping community structures.
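The baseline that such improvements build on is plain label propagation: every node repeatedly adopts the most frequent label among its neighbors until labels stabilize. The sketch below is the basic LPA with a deterministic tie-break (keep own label if tied, else take the largest), not the paper's variance/energy variant:

```python
from collections import Counter

def label_propagation(adj, max_rounds=50):
    """Deterministic LPA variant: each node adopts the most frequent
    neighbor label, keeping its own on ties, else taking the largest."""
    labels = {v: v for v in adj}
    for _ in range(max_rounds):
        changed = False
        for v in adj:                       # asynchronous sweep
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            cands = {l for l, c in counts.items() if c == top}
            new = labels[v] if labels[v] in cands else max(cands)
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:
            break
    return labels

# two triangles joined by a single bridge edge (2-3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))  # → {0: 2, 1: 2, 2: 2, 3: 5, 4: 5, 5: 5}
```

The bridge nodes 2 and 3 are exactly where plain LPA is fragile, which motivates detecting them explicitly (via variance and node energy in the abstract) before propagating labels.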
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and the l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, etc., which makes it very appealing for real-time implementation.
NASA Astrophysics Data System (ADS)
Yu, Fahong; Li, Wenping; He, Feng; Yu, Bolin; Xia, Xiaoyun; Ma, Longhua
2016-12-01
It is important to discover the potential community structure when analyzing complex networks. In this paper, an estimation of distribution algorithm with a local sampling strategy for community detection in complex networks is presented to optimize the modularity density function. In the proposed algorithm, the evolution probability model is built from eminent individuals selected by a simulated annealing mechanism, and a local sampling strategy based on a local similarity model is adopted to improve both the speed and the accuracy of detecting community structure in complex networks. At the same time, a more general version of the criterion function with a tunable parameter λ is used to avoid the resolution limit. Experiments on synthetic and real-life networks demonstrate the performance of the proposed algorithm; comparison of the experimental results with those of several state-of-the-art methods shows that the proposed algorithm is considerably efficient and competitive.
A novel approach to characterize information radiation in complex networks
NASA Astrophysics Data System (ADS)
Wang, Xiaoyang; Wang, Ying; Zhu, Lin; Li, Chao
2016-06-01
Traditional research on information dissemination is mostly based on virus-spreading models in which information is spread with some probability; this does not match reality very well, because the information we receive is always more or less than what was sent. In order to quantitatively describe variations in the amount of information during the spreading process, this article proposes a safety information radiation model on the basis of communication theory, combined with relevant theories of complex networks. This model comprehensively considers the various factors that influence safety information as it radiates through the network, and introduces concepts from the communication-theory perspective, such as the radiation gain function, receiving gain function, information retaining capacity and information second-reception capacity, to describe the safety information radiation process between nodes and dynamically investigate the states of network nodes. On a micro level, this article analyzes the influence of various initial conditions and parameters on safety information radiation through simulations of the new model. The simulations reveal that this novel approach can reflect the variation of the safety information quantity of each node in a complex network, and that the scale-free network has better "radiation explosive power", while the small-world network has better "radiation staying power". The results also show that it is efficient to improve the overall performance of network security by selecting nodes with high degrees as the information source, refining and simplifying the information, increasing the information second-reception capacity and decreasing the noise. In a word, this article lays the foundation for further research on the interactions of information and energy between internal components within complex systems.
Hardware-software complex of informing passengers of forecasted route transport arrival at stop
NASA Astrophysics Data System (ADS)
Pogrebnoy, V. Yu; Pushkarev, M. I.; Fadeev, A. S.
2017-02-01
The paper presents a hardware-software complex for informing passengers of the forecasted route transport arrival. A client-server architecture of the forecasting information system is presented and an electronic information board prototype is described. The scheme of information transfer and processing is illustrated and described, from the receipt of navigation telemetry data from a transport vehicle to the display of the forecasted arrival time of passenger public transport on the electronic board at the stop. Methods and algorithms for determining the current location of a transport vehicle in the city route network are considered in detail. A description of the proposed model for forecasting transport vehicle arrival times at stops is given. The obtained result is applied in Tomsk for forecasting and displaying arrival time information at stops.
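The arrival-time forecast at the heart of such a system can be sketched in its most naive form: sum historical mean travel times over the remaining route segments plus a fixed dwell time per stop. This is a deliberately simple stand-in for illustration, not the paper's forecasting model, and all parameter values are assumptions:

```python
def eta_seconds(stops_ahead, segment_times, dwell_time=20.0):
    """Naive arrival forecast for a vehicle that is `stops_ahead` stops
    away: sum of historical mean travel times (seconds) for the
    remaining segments, plus a fixed dwell time per intermediate stop."""
    return sum(segment_times[:stops_ahead]) + dwell_time * stops_ahead

# historical mean segment times along the route, in seconds
segment_times = [90.0, 120.0, 60.0, 80.0]
print(eta_seconds(3, segment_times))  # → 330.0
```

A production system would replace the static means with forecasts conditioned on telemetry (current position, speed, time of day), which is the role of the model described in the abstract.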
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2016-02-01
To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.
Novel algorithm by low complexity filter on retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Rostampour, Samad
2011-10-01
This article presents a new method to detect blood vessels in digital retinal images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries that are very brittle. The research was done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background, giving a good contrast between vessels and background. The complexity is very low and extraneous image content is eliminated. The second phase, processing, uses a Bayesian method, a supervised classification technique that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a sample outside the DRIVE database exhibiting retinopathy, and a good result was obtained.
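The supervised Bayesian step the abstract describes (classify each pixel by class-conditional mean and variance of intensity) amounts to a two-class Gaussian classifier. The sketch below uses made-up class statistics and priors purely for illustration; they are assumptions, not values from the article:

```python
from math import log, pi

def gaussian_log_pdf(x, mean, var):
    """Log of the univariate Gaussian density."""
    return -0.5 * log(2 * pi * var) - (x - mean) ** 2 / (2 * var)

def classify(intensity, stats):
    """Pick the class (vessel/background) maximizing the posterior:
    log prior + Gaussian log-likelihood of the pixel intensity."""
    return max(stats, key=lambda c: log(stats[c]["prior"]) +
               gaussian_log_pdf(intensity, stats[c]["mean"], stats[c]["var"]))

# per-class intensity statistics, e.g. estimated from labeled training pixels
stats = {
    "vessel":     {"mean": 60.0,  "var": 150.0, "prior": 0.12},
    "background": {"mean": 170.0, "var": 400.0, "prior": 0.88},
}
print(classify(55.0, stats), classify(160.0, stats))  # → vessel background
```

The low prior on "vessel" reflects that vessels cover only a small fraction of retinal pixels, which biases ambiguous intensities toward the background class.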
Information Center Complex publications and presentations, 1971-1980
Gill, A.B.; Hawthorne, S.W.
1981-08-01
This indexed bibliography lists publications and presentations of the Information Center Complex, Information Division, Oak Ridge National Laboratory, from 1971 through 1980. The 659 entries cover such topics as toxicology, air and water pollution, management and transportation of hazardous wastes, energy resources and conservation, and information science. Publications range in length from 1 page to 3502 pages and include topical reports, books, journal articles, fact sheets, and newsletters. Author, title, and group indexes are provided. Annual updates are planned.
Information Center Complex publications and presentations, 1971-1982
Hawthorne, S.W.; Johnson, A.B.
1984-02-01
This indexed bibliography lists publications and presentations of the staff of the Information Center Complex, Information Division, Oak Ridge National Laboratory, from 1971 through 1982. Entries cover such topics as toxicology, air and water pollution, management and transportation of hazardous wastes, energy resources and conservation, and information science. Publications range in length from 1 page to nearly 4000 pages and include topical reports, books, journal articles, fact sheets, and newsletters. Author, title, and group indexes are provided. Annual supplements are planned.
A Survey of Stemming Algorithms in Information Retrieval
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angélica; Imbert, Ricardo; Ramírez, Jaime
2014-01-01
Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…
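Stemming of the kind this survey covers can be illustrated with a minimal suffix-stripping rule set; this toy is far simpler than real stemmers such as Porter's algorithm, and the rule list is invented for illustration.

```python
def stem(word):
    # strip the first matching English suffix, longest rules first (toy rule set);
    # keep at least a 3-letter stem so short words survive intact
    suffixes = ["ization", "ational", "edly", "tion", "ing", "ies", "ed", "es", "s"]
    for suf in suffixes:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word
```

Mapping "retrieving" and "retrieves" to a common stem is what lets an index match morphological variants of a query term.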
Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.
Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S
2013-01-01
The use of Geographic Information Systems has increased considerably since the eighties and nineties, and shortest-path search is one of their most demanding applications. Several studies of shortest-path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest-path algorithms, but it is not well suited to searching large graphs. For this reason, several authors have proposed modifications of Dijkstra's algorithm that use heuristics to reduce the run time of shortest-path search. One of the most widely used heuristic algorithms is A*, whose main goal is to reduce run time by reducing the search space. This article proposes a modification of Dijkstra's shortest-path algorithm that operates on reduced graphs. It shows that the cost of the path found by this approach equals the cost of the path found by Dijkstra's algorithm on the original graph. The results of finding the shortest path with the proposed algorithm, Dijkstra's algorithm, and the A* algorithm are compared. This comparison shows that, with the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than with heuristic algorithms.
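The baseline the paper modifies, Dijkstra's algorithm, can be sketched with a binary heap; the road graph below is an invented example, not data from the paper.

```python
import heapq

def dijkstra(graph, source, target):
    # graph: dict mapping node -> list of (neighbor, weight) pairs
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter route was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # target unreachable

roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
```

Reduced-graph variants like the one proposed run the same procedure over a smaller graph while preserving the optimal path cost.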
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
Research on Quantum Algorithms at the Institute for Quantum Information
2009-10-17
developed earlier by Aliferis, Gottesman, and Preskill to encompass leakage-reduction units, such as those based on quantum teleportation. They also… (Grant W91INF-05-I-0294.) The central goals of our project are (1) to bring large-scale quantum computers closer to realization by proposing and analyzing new schemes for
Using multiple perspectives to suppress information and complexity
Kelsey, R.L. |; Webster, R.B.; Hartley, R.T.
1998-09-01
Dissemination of battlespace information involves getting information to particular warfighters that is both useful and in a form that facilitates the tasks of those particular warfighters. There are two issues which motivate this problem of dissemination. The first issue deals with disseminating pertinent information to a particular warfighter. This can be thought of as information suppression. The second issue deals with facilitating the use of the information by tailoring the computer interface to the specific tasks of an individual warfighter. This can be thought of as interface complexity suppression. This paper presents a framework for suppressing information using an object-based knowledge representation methodology. This methodology has the ability to represent knowledge and information in multiple perspectives. Information can be suppressed by creating a perspective specific to an individual warfighter. In this way, only the information pertinent and useful to a warfighter is made available to that warfighter. Information is not removed, lost, or changed, but spread among multiple perspectives. Interface complexity is managed in a similar manner. Rather than have one generalized computer interface to access all information, the computer interface can be divided into interface elements. Interface elements can then be selected and arranged into a perspective-specific interface. This is done in a manner to facilitate completion of tasks contained in that perspective. A basic battlespace domain containing ground and air elements and associated warfighters is used to exercise the methodology.
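The multiple-perspective idea above can be sketched as projecting one shared object store through per-role views, so information is filtered rather than removed; the roles, objects, and attribute names here are invented for illustration, not from the described methodology.

```python
# shared battlespace objects; nothing is removed or changed, only filtered per perspective
objects = [
    {"id": "tank-1", "domain": "ground", "pos": (3, 4), "fuel": 0.7},
    {"id": "jet-7", "domain": "air", "pos": (10, 2), "altitude": 9000},
    {"id": "truck-2", "domain": "ground", "pos": (5, 5), "fuel": 0.4},
]

# a perspective names the domain and the attributes one warfighter needs
perspectives = {
    "ground-commander": {"domain": "ground", "fields": ("id", "pos", "fuel")},
    "air-controller": {"domain": "air", "fields": ("id", "pos", "altitude")},
}

def view(role):
    # project the shared store onto a single warfighter's perspective
    p = perspectives[role]
    return [
        {f: obj[f] for f in p["fields"]}
        for obj in objects
        if obj["domain"] == p["domain"]
    ]
```

Interface complexity can be suppressed the same way, by selecting only the interface elements named in a perspective.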
[A comprehensive method for diagnosing mandibular injury based on information technologies].
Korotkikh, N G; Bakhmet'ev, V I; Shalaev, O Iu; Antimenko, O O
2004-01-01
A special method for the comprehensive diagnosis of mandibular injury, based on information technologies, is presented. The mechanisms of injury formation were studied in 109 patients of the hospital's cranio-maxillofacial surgery department. Two main types of maxillofacial injury were identified. The first type: a fall from the height of one's own stature. The second type: a blow from a blunt object. A decrease in the number of inflammatory complications of jaw fractures was noted with the use of the new algorithms based on information technologies.
Do the Visual Complexity Algorithms Match the Generalization Process in Geographical Displays?
NASA Astrophysics Data System (ADS)
Brychtová, A.; Çöltekin, A.; Pászto, V.
2016-06-01
In this study, we first develop the hypothesis that existing quantitative visual complexity measures will overall reflect the level of cartographic generalization, and then test this hypothesis. Specifically, we first selected common geovisualization types (i.e., cartographic maps, hybrid maps, satellite images and shaded relief maps) and retrieved examples as provided by Google Maps, OpenStreetMap and SchweizMobil by swisstopo. The selected geovisualizations vary in cartographic design choices, scene contents and levels of generalization. Following this, we applied one of Rosenholtz et al.'s (2007) visual clutter algorithms to obtain quantitative visual complexity scores for screenshots of the selected maps. We hypothesized that visual complexity should remain constant across generalization levels; however, the algorithm suggested that the complexity of small-scale displays (less detailed) is higher than that of large-scale (highly detailed) ones. We also observed vast differences in visual complexity among map providers, which we attribute to their varying approaches to cartographic design and the generalization process. Our efforts will contribute towards creating recommendations as to how visual complexity algorithms could be optimized for cartographic products and eventually utilized as part of the cartographic design process to assess visual complexity.
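A crude stand-in for such a visual complexity score, not Rosenholtz et al.'s feature-congestion measure but a simple edge-density proxy on a grayscale grid, can be sketched as follows; the images and threshold are invented for illustration.

```python
def edge_density(image, threshold=30):
    # fraction of neighbouring pixel pairs whose intensity difference
    # exceeds a threshold; busier (less generalized) displays score higher
    rows, cols = len(image), len(image[0])
    edges = total = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    total += 1
                    if abs(image[r][c] - image[rr][cc]) > threshold:
                        edges += 1
    return edges / total

flat = [[100] * 4 for _ in range(4)]                              # heavily generalized
busy = [[(r * 97 + c * 53) % 256 for c in range(4)] for r in range(4)]  # cluttered
```

A score like this ranks displays in the same direction the paper discusses: more detail, more measured complexity.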
Infrared image non-rigid registration based on regional information entropy demons algorithm
NASA Astrophysics Data System (ADS)
Lu, Chaoliang; Ma, Lihua; Yu, Ming; Cui, Shumin; Wu, Qingrong
2015-02-01
Infrared imaging fault detection, an ideal non-contact, non-destructive testing method, is applied to circuit board fault detection. Infrared images obtained by a handheld infrared camera with a wide-angle lens exhibit both rigid and non-rigid deformations. To solve this problem, a new demons algorithm based on regional information entropy is proposed. The new method overcomes the shortcoming of the traditional demons algorithm of being sensitive to intensity. First, an information entropy image is obtained by computing the regional information entropy of the image. Then, the deformation between the two images is calculated in the same way as in the demons algorithm. Experimental results demonstrate that the proposed algorithm is more robust than the traditional demons algorithm when registering images with inconsistent intensities. Accurate registration between intensity-inconsistent infrared images provides strong support for temperature contrast analysis.
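The regional information entropy image mentioned above can be sketched by computing a Shannon entropy over a sliding window of binned intensities; the window radius and bin count below are illustrative assumptions, not the paper's parameters.

```python
import math

def window_entropy(image, r, c, radius=1, bins=8, levels=256):
    # Shannon entropy of binned intensities in a (2*radius+1)^2 window,
    # clipped at the image borders
    counts = {}
    n = 0
    for rr in range(max(0, r - radius), min(len(image), r + radius + 1)):
        for cc in range(max(0, c - radius), min(len(image[0]), c + radius + 1)):
            b = image[rr][cc] * bins // levels
            counts[b] = counts.get(b, 0) + 1
            n += 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def entropy_image(image):
    # per-pixel regional entropy; flat regions score 0, textured regions higher
    return [
        [window_entropy(image, r, c) for c in range(len(image[0]))]
        for r in range(len(image))
    ]
```

Because entropy depends on the local intensity distribution rather than absolute intensity, registering entropy images is less sensitive to brightness differences between the two inputs.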
ERIC Educational Resources Information Center
Chen, Hsinchun
1995-01-01
Presents an overview of artificial-intelligence-based inductive learning techniques and their use in information science research. Three methods are discussed: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms. The knowledge representations and algorithms of these methods are examined in the context of…
Dynamics of information diffusion and its applications on complex networks
NASA Astrophysics Data System (ADS)
Zhang, Zi-Ke; Liu, Chuang; Zhan, Xiu-Xiu; Lu, Xin; Zhang, Chu-Xu; Zhang, Yi-Cheng
2016-09-01
The ongoing rapid expansion of the World Wide Web (WWW) greatly increases the volume of information transmitted from heterogeneous individuals to various systems. Information diffusion has been studied extensively by a broad range of communities, including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and empirical studies, a unification and comparison of the different theories and approaches is lacking, which impedes further advances. In this article, we review recent developments in information diffusion and discuss the major challenges. We compare and evaluate the available models and algorithms, investigating their physical roles and optimization designs. Potential impacts and future directions are discussed. We emphasize that information diffusion has great scientific depth and combines diverse research fields, which makes it interesting for physicists as well as interdisciplinary researchers.
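One of the simplest diffusion models compared in reviews like this, the independent cascade, can be sketched as follows; the chain network and spreading probability are invented for illustration.

```python
import random

def independent_cascade(graph, seeds, p, rng):
    # each newly informed node gets exactly one chance to inform each neighbour,
    # succeeding independently with probability p
    informed = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in informed and rng.random() < p:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return informed

chain = {0: [1], 1: [2], 2: [3], 3: []}
```

Varying p and the seed set over a real topology is the kind of experiment such models support.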
Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network
Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun
2017-01-01
Data acquisition is the foundation of soft sensing and data fusion. Distributed data acquisition and its synchronization are important technologies for ensuring the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming to reduce the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of the algorithm. Compared with the Mirollo and Strogatz model and the Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low-quality links. PMID:28282899
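The firefly-synchronization idea can be illustrated with a toy all-to-all discrete-phase model: every oscillator ticks around a cycle, and whenever one flashes, the others nudge their phase one tick toward the flashing group. This is a simplification for illustration, not the authors' multiscale algorithm or the Reachback Firefly Algorithm.

```python
def step(phases, period=10):
    # advance every oscillator one tick; wrapping to 0 counts as a flash
    phases = [(p + 1) % period for p in phases]
    if any(p == 0 for p in phases):
        # every listener moves its phase one tick toward the flashing group
        adjusted = []
        for p in phases:
            if p == 0:
                adjusted.append(0)                  # already flashing
            elif p <= period // 2:
                adjusted.append(p - 1)              # recently flashed: fall back
            else:
                adjusted.append((p + 1) % period)   # about to flash: jump ahead
        phases = adjusted
    return phases

def synchronize(phases, period=10, max_steps=200):
    # iterate until all oscillators share one phase, i.e. flash together
    for t in range(max_steps):
        phases = step(phases, period)
        if len(set(phases)) == 1:
            return t + 1
    return None  # did not converge within max_steps
```

In this toy the relative phase offset shrinks by one tick per flash, so all-to-all networks lock step in a bounded number of cycles.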
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
NASA Astrophysics Data System (ADS)
Vargas, David L.
Emerging quantum simulator technologies provide a new challenge to quantum many body theory. Quantifying the emergent order in and predicting the dynamics of such complex quantum systems requires a new approach. We develop such an approach based on complex network analysis of quantum mutual information. First, we establish the usefulness of quantum mutual information complex networks by reproducing the phase diagrams of transverse Ising and Bose-Hubbard models. By quantifying the complexity of quantum cellular automata we then demonstrate the applicability of complex network theory to non-equilibrium quantum dynamics. We conclude with a study of student collaboration networks, correlating a student's role in a collaboration network with their grades. This work thus initiates a quantitative theory of quantum complexity and provides a new tool for physics education research. (Abstract shortened by ProQuest.).
NASA Astrophysics Data System (ADS)
Yan, Menglong; Blaschke, Thomas; Tang, Hongzhao; Xiao, Chenchao; Sun, Xian; Zhang, Daobing; Fu, Kun
2017-03-01
Airborne laser scanning (ALS) is a technique used to obtain Digital Surface Models (DSM) and Digital Terrain Models (DTM) efficiently, and filtering is the key procedure used to derive a DTM from point clouds. Generating seed points is an initial step for most filtering algorithms, whereas existing algorithms usually define a regular window size to generate seed points. This may lead to an inadequate density of seed points and further introduce type I errors, especially in steep terrain and forested areas. In this study, we propose the use of object-based analysis to derive surface complexity information from ALS datasets, which can then be used to improve seed point generation. We assume that an area is complex if it is composed of many small objects, with no buildings within the area. Using these assumptions, we propose and implement a new segmentation algorithm based on a grid index, which we call the Edge and Slope Restricted Region Growing (ESRGG) algorithm. Surface complexity information is obtained by statistical analysis of the number of objects derived by segmentation in each area. Then, for complex areas, a smaller window size is defined to generate seed points. Experimental results show that the proposed algorithm greatly improves the filtering results in complex areas, especially in steep terrain and forested areas.
Holledge gauge failure testing using concurrent information processing algorithm
Weeks, G.E.; Daniel, W.E.; Edwards, R.E.; Jannarone, R.J.; Joshi, S.N.; Palakodety, S.S.; Qian, D.
1996-04-11
For several decades, computerized information processing systems and human information processing models have developed with a good deal of mutual influence. Any comprehensive psychology text in this decade uses terms that originated in the computer industry, such as "cache" and "memory", to describe human information processing. Likewise, many engineers today are using "artificial intelligence" and "artificial neural network" computing tools that originated as models of human thought to solve industrial problems. This paper concerns a recently developed human information processing model, called "concurrent information processing" (CIP), and a related set of computing tools for solving industrial problems. The problem of focus is adaptive gauge monitoring; the application is pneumatic pressure repeaters (Holledge gauges) used to measure liquid level and density in the Defense Waste Processing Facility and the Integrated DWPF Melter System.
NASA Technical Reports Server (NTRS)
Chatfield, David C.; Reeves, Melissa S.; Truhlar, Donald G.; Duneczky, Csilla; Schwenke, David W.
1992-01-01
Complex dense matrices corresponding to the D + H2 and O + HD reactions were solved using a complex generalized minimal residual (GMRes) algorithm described by Saad and Schultz (1986) and Saad (1990). To provide a test case with a different structure, the H + H2 system was also considered. It is shown that the computational effort for solutions with the GMRes algorithm depends on the dimension of the linear system, the total energy of the scattering problem, and the accuracy criterion. In several cases with dimensions in the range 1110-5632, the GMRes algorithm outperformed the LAPACK direct solver, with speedups for the linear equation solution as large as a factor of 23.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and it communicates with a common global data memory. A new graph theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit
Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph
2015-09-01
The RISMC approach is developing an advanced set of methodologies and algorithms to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on event-tree and fault-tree methods, the RISMC approach largely employs system simulator codes coupled with stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., the uncertain parameters) in order to estimate stochastic quantities such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires a series of computationally expensive simulation runs over a large set of uncertain parameters. These analyses are affected by two issues. First, the space of possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, which precludes fully analyzing the impact of uncertainties on the system dynamics. Second, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. These algorithms infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample, making it possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how adaptive sampling can be performed using the RISMC toolkit and highlight its advantages over more classical sampling approaches such as Monte Carlo. We employ RAVEN to perform these statistical analyses, using both analytical cases and another RISMC code, RELAP-7.
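The contrast with blind Monte Carlo sampling can be sketched with a toy one-dimensional rule that places each new sample at the centre of the largest unexplored gap, a crude surrogate for "sample where we know least". All details here are illustrative, not the RISMC or RAVEN algorithms.

```python
def adaptive_samples(n, lo=0.0, hi=1.0):
    # greedily add the point farthest from all existing samples,
    # i.e. the centre of the widest gap between consecutive samples
    points = [lo, hi]
    for _ in range(n):
        pts = sorted(points)
        widest = max(range(len(pts) - 1), key=lambda i: pts[i + 1] - pts[i])
        points.append((pts[widest] + pts[widest + 1]) / 2)
    return sorted(points)
```

Unlike random draws, which can cluster and leave regions unvisited, this rule spreads a small sample budget evenly; real adaptive schemes weight the choice by surrogate-predicted risk significance as well.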
Hunt, Simon; Meng, Qinggang; Hinde, Chris; Huang, Tingwen
2014-01-01
This paper looks at consensus algorithms for agent cooperation with unmanned aerial vehicles. The foundation is the consensus-based bundle algorithm, which is extended to allow multi-agent tasks that require agents to cooperate in completing individual tasks. Inspiration is taken from the cognitive behaviours of eusocial animals for cooperation and improved assignments: the behaviours observed in bees and ants inspire decentralised algorithms that let groups of agents adapt to changing task demand. Further extensions improve the handling of task complexity by the agents, with added equipment requirements and task dependencies. We address the problems of handling these challenges and improve the efficiency of the algorithm for these requirements, while decreasing the communication cost with a new data structure. The proposed algorithm converges to a conflict-free, feasible solution, which previous algorithms are unable to guarantee. Furthermore, the algorithm takes into account heterogeneous agents and deadlocking, and provides a method to store assignments for a dynamic environment. Simulation results demonstrate reduced data usage and communication time in reaching a consensus on multi-agent tasks.
Hoyer, Dirk; Frank, Birgit; Pompe, Bernd; Schmidt, Hendrik; Werdan, Karl; Müller-Werdan, Ursula; Baranowski, Rafal; Zebrowski, Jan J; Meissner, Winfried; Kletzin, Ulf; Adler, Daniela; Adler, Steffen; Blickhan, Reinhard
2006-07-01
In the last two decades conventional linear methods for biosignal analysis have been substantially extended by non-stationary, non-linear, and complexity approaches. So far, complexity is usually assessed with regard to one single time scale, disregarding complex physiology organised on different time scales. This shortcoming was overcome and medically evaluated by information flow functions developed in our research group in collaboration with several theoretical, experimental, and clinical partners. In the present work, the information flow is introduced and typical information flow characteristics are demonstrated. The prognostic value of autonomic information flow (AIF), which reflects communication in the cardiovascular system, was shown in patients with multiple organ dysfunction syndrome and in patients with heart failure. Gait information flow (GIF), which reflects communication in the motor control system during walking, was introduced to discriminate between controls and elderly patients suffering from low back pain. The applications presented for the theoretically based approach of information flow confirm its value for the identification of complex physiological systems. The medical relevance has to be confirmed by comprehensive clinical studies. These information flow measures substantially extend the established linear and complexity measures in biosignal analysis.
NASA Astrophysics Data System (ADS)
Popov, A.; Zolotarev, V.; Bychkov, S.
2016-11-01
This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies is provided. As part of the study, the experimental data from simulation modeling were compared with data from the functioning of a real information system. A hypothesis of the homogeneity of the logical structure of information systems was formulated, enabling the presented algorithm to be reconfigured; more specifically, to be transformed into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning, and the algorithm can serve as a means of providing information security.
Semantic Predications for Complex Information Needs in Biomedical Literature
Cameron, Delroy; Kavuluru, Ramakanth; Bodenreider, Olivier; Mendes, Pablo N.; Sheth, Amit P.; Thirunarayan, Krishnaprasad
2015-01-01
Many complex information needs that arise in biomedical disciplines require exploring multiple documents in order to obtain information. While traditional information retrieval techniques that return a single ranked list of documents are quite common for such tasks, they may not always be adequate. The main issue is that ranked lists typically impose a significant burden on users to filter out irrelevant documents. Additionally, users must intuitively reformulate their search query when relevant documents have not been highly ranked. Furthermore, even after interesting documents have been selected, very few mechanisms exist that enable document-to-document transitions. In this paper, we demonstrate the utility of assertions extracted from biomedical text (called semantic predications) to facilitate retrieving relevant documents for complex information needs. Our approach offers an alternative to query reformulation by establishing a framework for transitioning from one document to another. We evaluate this novel knowledge-driven approach using precision and recall metrics on the 2006 TREC Genomics Track. PMID:25699291
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks.
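The idea of partitioning links rather than nodes can be sketched by merging adjacent edges whose far endpoints have similar neighbourhoods; this simple Jaccard threshold with union-find is only an illustration, not the paper's integer-programming model or genetic algorithm.

```python
def link_communities(edges, threshold=0.33):
    # build inclusive neighbourhoods
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)

    parent = {e: e for e in edges}  # union-find over edges

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e

    def union(a, b):
        parent[find(a)] = find(b)

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    # merge edges that share a node and have similar far-end neighbourhoods
    for i, e1 in enumerate(edges):
        for e2 in edges[i + 1:]:
            shared = set(e1) & set(e2)
            if len(shared) != 1:
                continue
            k = shared.pop()
            a = (set(e1) - {k}).pop()  # far endpoint of e1
            b = (set(e2) - {k}).pop()  # far endpoint of e2
            if jaccard(nbrs[a] | {a}, nbrs[b] | {b}) >= threshold:
                union(e1, e2)

    groups = {}
    for e in edges:
        groups.setdefault(find(e), set()).add(e)
    return list(groups.values())
```

Because a node's edges can land in different groups, nodes shared between communities are overlapping members, which is exactly what link-based partitioning buys over node-based partitioning.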
Information theoretical quantification of cooperativity in signalling complexes
Lenaerts, Tom; Ferkinghoff-Borg, Jesper; Schymkowitz, Joost; Rousseau, Frederic
2009-01-01
Background Intra-cellular information exchange, propelled by cascades of interacting signalling proteins, is essential for the proper functioning and survival of cells. Now that the interactome of several organisms is being mapped and several structural mechanisms of cooperativity at the molecular level in proteins have been elucidated, the formalization of this fundamental quantity, i.e. information, in these very diverse biological contexts becomes feasible. Results We show here that Shannon's mutual information quantifies information in biological systems and more specifically the cooperativity inherent to the assembly of macromolecular complexes. We show how protein complexes can be considered as particular instances of noisy communication channels. Further we show, using a portion of the p27 regulatory pathway, how classical equilibrium thermodynamic quantities such as binding affinities and chemical potentials can be used to quantify information exchange but also to determine engineering properties such as channel noise and channel capacity. As such, this information measure identifies and quantifies those protein concentrations that render the biochemical system most effective in switching between the active and inactive state of the intracellular process. Conclusion The proposed framework provides a new and original approach to analyse the effects of cooperativity in the assembly of macromolecular complexes. It shows the conditions, provided by the protein concentrations, for which a particular system acts most effectively, i.e. exchanges the most information. As such this framework opens the possibility of grasping biological qualities such as system sensitivity, robustness or plasticity directly in terms of their effect on information exchange. Although these parameters might also be derived using classical thermodynamic parameters, a recasting of biological signalling in terms of information exchange offers an alternative framework for visualising network
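The quantity this framework builds on, Shannon's mutual information, can be computed directly from a discrete joint distribution; the two toy channels below (one noiseless, one useless) are invented for illustration, not the p27 pathway data.

```python
import math

def mutual_information(joint):
    # joint: dict mapping (x, y) -> probability; returns I(X;Y) in bits
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(
        p * math.log2(p / (px[x] * py[y]))
        for (x, y), p in joint.items()
        if p > 0
    )

# a noiseless binary channel: the output always equals the input
noiseless = {("on", "on"): 0.5, ("off", "off"): 0.5}
# a useless channel: the output is independent of the input
useless = {(x, y): 0.25 for x in ("on", "off") for y in ("on", "off")}
```

In the paper's setting, the joint distribution would come from binding equilibria, and scanning protein concentrations would reveal where the channel transmits the most information.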
A fully complex-valued radial basis function network and its learning algorithm.
Savitha, R; Suresh, S; Sundararajan, N
2009-08-01
In this paper, a fully complex-valued radial basis function (FC-RBF) network with a fully complex-valued activation function is proposed, and its complex-valued gradient descent learning algorithm is developed. The fully complex activation function of the proposed network, sech(.), satisfies all the properties needed for a complex-valued activation function and has Gaussian-like characteristics. It maps C^n -> C, unlike the activation functions of existing complex-valued RBF networks, which map C^n -> R. Since the performance of the complex RBF network depends on the number of neurons and the initialization of network parameters, we propose a K-means clustering based neuron selection and center initialization scheme. First, we present a study on convergence using the complex XOR problem. Next, we present a synthetic function approximation problem and the two-spiral classification problem. Finally, we present the results for two practical applications, viz., a non-minimum phase equalization and an adaptive beam-forming problem. The performance of the network was compared with other well-known complex-valued RBF networks available in the literature, viz., the split-complex CRBF, CMRAN and the CELM. The results indicate that the proposed fully complex-valued network has better convergence, approximation and classification ability.
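The C^n -> C property can be illustrated with a toy forward pass in which each hidden neuron feeds a complex scalar into sech, so the response stays complex-valued throughout. The exact hidden-layer form is defined in the paper; the squared-difference argument, widths, and weights below are simplifying assumptions for illustration only.

```python
import cmath

def sech(z):
    # fully complex hyperbolic secant activation
    return 1.0 / cmath.cosh(z)

def fc_rbf_forward(x, centers, widths, weights):
    # toy forward pass: each hidden neuron maps the complex input vector
    # to a complex scalar, applies sech, and the output is a weighted sum,
    # so the whole map is C^n -> C
    out = 0j
    for c, s, w in zip(centers, widths, weights):
        z = sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / s  # complex scalar
        out += w * sech(z)
    return out

centers = [(1 + 1j, 0j)]
widths = [1 + 0j]
weights = [2 + 0j]
```

At an input equal to a center, the activation argument is zero, sech(0) = 1, and the neuron contributes exactly its output weight.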
The Geometric Cluster Algorithm: Rejection-Free Monte Carlo Simulation of Complex Fluids
NASA Astrophysics Data System (ADS)
Luijten, Erik
2005-03-01
The study of complex fluids is an area of intense research activity, in which exciting and counter-intuitive behavior continues to be uncovered. Ironically, one of the very factors responsible for such interesting properties, namely the presence of multiple relevant time and length scales, often greatly complicates the accurate theoretical calculations and computer simulations that could explain the observations. We have recently developed a new Monte Carlo simulation method [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004); see also Physics Today, March 2004, pp. 25-27] that overcomes this problem for several classes of complex fluids. Our approach can accelerate simulations by orders of magnitude by introducing nonlocal, collective moves of the constituents. Strikingly, these cluster Monte Carlo moves are proposed in such a manner that the algorithm is rejection-free. The identification of the clusters is based upon geometric symmetries and can be considered as the off-lattice generalization of the widely used Swendsen-Wang and Wolff algorithms for lattice spin models. While phrased originally for complex fluids that are governed by the Boltzmann distribution, the geometric cluster algorithm can be used to efficiently sample configurations from an arbitrary underlying distribution function and may thus be applied in a variety of other areas. In addition, I will briefly discuss various extensions of the original algorithm, including methods to influence the size of the clusters that are generated and ways to introduce density fluctuations.
Incremental Multi-Scale Search Algorithm for Dynamic Path Planning With Low Worst-Case Complexity.
Yibiao Lu; Xiaoming Huo; Arslan, O; Tsiotras, P
2011-12-01
Path-planning (equivalently, path-finding) problems are fundamental in many applications, such as transportation, VLSI design, robot navigation, and many more. In this paper, we consider dynamic shortest path-planning problems on a graph with a single endpoint pair and with potentially changing edge weights over time. Several algorithms exist in the literature that solve this problem, notably among them the Lifelong Planning A* (LPA*) algorithm. LPA* is an incremental search algorithm that replans the path when there are changes in the environment. In numerical experiments, however, it was observed that the performance of LPA* is sensitive to the number of vertex expansions required to update the graph when an edge weight value changes or when a vertex is added or deleted. Although, in most cases, the classical LPA* requires a relatively small number of updates, in some other cases the amount of work required by LPA* to find the optimal path can be overwhelming. To address this issue, in this paper, we propose an extension of the baseline LPA* algorithm that makes efficient use of a multiscale representation of the environment. This multiscale representation allows one to quickly localize the changed edges, and subsequently update the priority queue efficiently. This incremental multiscale algorithm leads to an improvement both in terms of robustness and computational complexity - in the worst case - when compared to the classical LPA*. Numerical experiments validate the aforementioned claims.
Li, Peng; He, Tingting; Hu, Xiaohua; Zhao, Junmin; Shen, Xianjun; Zhang, Ming; Wang, Yan
2014-06-01
A novel algorithm based on Connected Affinity Clique Extension (CACE) for mining overlapping functional modules in a protein interaction network is proposed in this paper. In this approach, the value of protein connected affinity, which is inferred from protein complexes, is interpreted as the reliability and possibility of interaction. The protein interaction network is constructed as a weighted graph, and the weight is dependent on the connected affinity coefficient. The experimental results of our CACE on two test data sets show that CACE can detect functional modules much more effectively and accurately than the other state-of-the-art algorithms CPM and IPC-MCE.
A direct D-bar reconstruction algorithm for recovering a complex conductivity in 2D
NASA Astrophysics Data System (ADS)
Hamilton, S. J.; Herrera, C. N. L.; Mueller, J. L.; Von Herrmann, A.
2012-09-01
A direct reconstruction algorithm for complex conductivities in W^{2,∞}(Ω), where Ω is a bounded, simply connected Lipschitz domain in R^2, is presented. The framework is based on the uniqueness proof by Francini (2000 Inverse Problems 16 107-19), but the equations relating the Dirichlet-to-Neumann map to the scattering transform and the exponentially growing solutions are not present in that work, and are derived here. The algorithm constitutes the first D-bar method for the reconstruction of conductivities and permittivities in two dimensions. Reconstructions of numerically simulated chest phantoms with discontinuities at the organ boundaries are included.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
Representing Uncertain Geographical Information with Algorithmic Map Caricatures
NASA Astrophysics Data System (ADS)
Brunsdon, Chris
2016-04-01
A great deal of geographical information - including the results of data analysis - is imprecise in some way. For example, the results of geostatistical interpolation should consist not only of point estimates of the value of some quantity at points in space, but also of confidence intervals or standard errors of these estimates. Similarly, mappings of contour lines derived from such interpolations will also be characterised by uncertainty. However, most computerized cartography tools are designed to provide 'crisp' representations of geographical information, such as sharply drawn lines, or clearly delineated areas. In this talk, the use of 'fuzzy' or 'sketchy' cartographic tools will be demonstrated - where maps have a hand-drawn appearance and the degree of 'roughness' and other related characteristics can be used to convey the degree of uncertainty associated with the estimated quantities being mapped. The tools used to do this are available as an R package, which will be described in the talk.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
PREFACE: Complex Networks: from Biology to Information Technology
NASA Astrophysics Data System (ADS)
Barrat, A.; Boccaletti, S.; Caldarelli, G.; Chessa, A.; Latora, V.; Motter, A. E.
2008-06-01
The field of complex networks is one of the most active areas in contemporary statistical physics. Ten years after seminal work initiated the modern study of networks, interest in the field is in fact still growing, as indicated by the ever increasing number of publications in network science. The reason for such a resounding success is most likely the simplicity and broad significance of the approach that, through graph theory, allows researchers to address a variety of different complex systems within a common framework. This special issue comprises a selection of contributions presented at the workshop 'Complex Networks: from Biology to Information Technology' held in July 2007 in Pula (Cagliari), Italy as a satellite of the general conference STATPHYS23. The contributions cover a wide range of problems that are currently among the most important questions in the area of complex networks and that are likely to stimulate future research. The issue is organised into four sections. The first two sections describe 'methods' to study the structure and the dynamics of complex networks, respectively. After this methodological part, the issue proceeds with a section on applications to biological systems. The issue closes with a section concentrating on applications to the study of social and technological networks. The first section, entitled Methods: The Structure, consists of six contributions focused on the characterisation and analysis of structural properties of complex networks: The paper Motif-based communities in complex networks by Arenas et al is a study of the occurrence of characteristic small subgraphs in complex networks. These subgraphs, known as motifs, are used to define general classes of nodes and their communities by extending the mathematical expression of the Newman-Girvan modularity. The same line of research, aimed at characterising network structure through the analysis of particular subgraphs, is explored by Bianconi and Gulbahce in Algorithm
Feature weighted naïve Bayes algorithm for information retrieval of enterprise systems
NASA Astrophysics Data System (ADS)
Wang, Li; Ji, Ping; Qi, Jing; Shan, Siqing; Bi, Zhuming; Deng, Weiguo; Zhang, Naijing
2014-01-01
Automated information retrieval is critical for enterprise information systems to acquire knowledge from the vast amount of data sets. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm due to its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption on the independence of terms is the main reason for an unsatisfactory classification in many real-world applications. To overcome the limitations, the dependent factors are considered by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm in the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted, and the comparisons with other classification methods have shown that the proposed method has outperformed other existing algorithms in terms of precision and index recall rate.
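The core idea of weighting naive Bayes term counts by TF-IDF can be sketched as follows. This is a minimal illustration of the general technique only; the article's improved TF-IDF variant, which also incorporates distribution information, is not reproduced here, and all names and the smoothing choice are ours:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """Multinomial naive Bayes trained on TF-IDF-weighted counts.
    docs: list of (token_list, label) pairs."""
    n_docs = len(docs)
    df = Counter()                              # document frequency per term
    for tokens, _ in docs:
        df.update(set(tokens))
    idf = {t: math.log(n_docs / df[t]) for t in df}

    weights = defaultdict(lambda: defaultdict(float))   # label -> term -> weight
    priors = Counter(label for _, label in docs)
    for tokens, label in docs:
        for t, f in Counter(tokens).items():
            weights[label][t] += f * idf[t]     # TF-IDF instead of raw count
    return priors, weights, idf

def predict(priors, weights, idf, tokens, alpha=1.0):
    """Pick the label maximizing log prior + sum of smoothed log likelihoods."""
    vocab = set(idf)
    best, best_lp = None, float("-inf")
    for label, prior in priors.items():
        total = sum(weights[label].values()) + alpha * len(vocab)
        lp = math.log(prior) + sum(
            math.log((weights[label].get(t, 0.0) + alpha) / total)
            for t in tokens if t in vocab)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Because the weights enter the likelihoods exactly where raw term frequencies normally would, the classifier keeps naive Bayes' simplicity while down-weighting terms that occur in every class.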
Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen
2012-10-01
This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over a walking distance.
Experiments in Discourse Analysis Impact on Information Classification and Retrieval Algorithms.
ERIC Educational Resources Information Center
Morato, Jorge; Llorens, J.; Genova, G.; Moreiro, J. A.
2003-01-01
Discusses the inclusion of contextual information in indexing and retrieval systems to improve results and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information and retrieval and classification algorithms. (Author/LRW)
Informational Complexity and Functional Activity of RNA Structures
Carothers, James M.; Oestreich, Stephanie C.; Davis, Jonathan H.
2004-01-01
Very little is known about the distribution of functional DNA, RNA, and protein molecules in sequence space. The question of how the number and complexity of distinct solutions to a particular biochemical problem varies with activity is an important aspect of this general problem. Here we present a comparison of the structures and activities of eleven distinct GTP-binding RNAs (aptamers). By experimentally measuring the amount of information required to specify each optimal binding structure, we show that defining a structure capable of 10-fold tighter binding requires approximately 10 additional bits of information. This increase in information content is equivalent to specifying the identity of five additional nucleotide positions and corresponds to an ∼1000-fold decrease in abundance in a sample of random sequences. We observe a similar relationship between structural complexity and activity in a comparison of two catalytic RNAs (ribozyme ligases), raising the possibility of a general relationship between the complexity of RNA structures and their functional activity. Describing how information varies with activity in other heteropolymers, both biological and synthetic, may lead to an objective means of comparing their functional properties. This approach could be useful in predicting the functional utility of novel heteropolymers. PMID:15099096
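The quoted figures are mutually consistent and easy to check: at log2(4) = 2 bits per fully specified nucleotide, 10 additional bits correspond to five positions and a 2^-10 ≈ 1000-fold drop in abundance among random sequences:

```python
import math

# Each fully specified nucleotide position carries log2(4) = 2 bits.
bits_per_nt = math.log2(4)

extra_bits = 10                                # for 10-fold tighter binding
extra_positions = extra_bits / bits_per_nt     # five nucleotide positions

# A structure needing I extra bits is 2**-I as abundant in random sequence.
fold_decrease = 2 ** extra_bits                # 1024, i.e. ~1000-fold rarer
```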
Marucci, Evandro A.; Neves, Leandro A.; Valêncio, Carlo R.; Pinto, Alex R.; Cansian, Adriano M.; de Souza, Rogeria C. G.; Shiyou, Yang; Machado, José M.
2014-01-01
With the advance of genomic research, the number of sequences involved in comparative methods has grown immensely. Among them are methods for similarity calculation, which are used by many bioinformatics applications. Due to the huge amount of data, the union of low-complexity methods with the use of parallel computing is becoming desirable. k-mers counting is a very efficient method with good biological results. In this work, the development of a parallel algorithm for multiple sequence similarity calculation using the k-mers counting method is proposed. Tests show that the algorithm presents very good scalability and a nearly linear speedup. For 14 nodes, a 12x speedup was obtained. This algorithm can be used in the parallelization of some multiple sequence alignment tools, such as MAFFT and MUSCLE. PMID:25140318
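The serial kernel of a k-mers counting similarity is short enough to sketch. The normalization below is one common convention, not necessarily the paper's exact formula, and the parallel version would simply distribute sequence pairs across nodes:

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count every overlapping k-mer in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_similarity(a, b, k=3):
    """Shared k-mer count, normalized by the smaller total so the
    score lies in [0, 1] (1.0 for identical sequences)."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    shared = sum(min(ca[m], cb[m]) for m in ca if m in cb)
    return shared / min(sum(ca.values()), sum(cb.values()))
```

Because only k-mer multisets are compared, no alignment is needed, which is what keeps the method low-complexity.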
On the Fractality of Complex Networks: Covering Problem, Algorithms and Ahlfors Regularity
Wang, Lihong; Wang, Qin; Xi, Lifeng; Chen, Jin; Wang, Songjing; Bao, Liulu; Yu, Zhouyu; Zhao, Luming
2017-01-01
In this paper, we revisit the fractality of complex network by investigating three dimensions with respect to minimum box-covering, minimum ball-covering and average volume of balls. The first two dimensions are calculated through the minimum box-covering problem and minimum ball-covering problem. For minimum ball-covering problem, we prove its NP-completeness and propose several heuristic algorithms on its feasible solution, and we also compare the performance of these algorithms. For the third dimension, we introduce the random ball-volume algorithm. We introduce the notion of Ahlfors regularity of networks and prove that above three dimensions are the same if networks are Ahlfors regular. We also provide a class of networks satisfying Ahlfors regularity. PMID:28128289
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
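The idea of scoring each text symbol by votes, rather than computing an edit distance over whole strings, can be illustrated with a toy scheme. This is our own stand-in for exposition, not the authors' exact consensus measure: every alignment of the pattern casts a vote for each text symbol it matches, and windows whose symbols accumulate high scores are reported as approximate instances:

```python
def consensus_scores(text, pattern):
    """Per-symbol vote totals: each alignment offset of the pattern
    votes for the text symbols it matches at that offset."""
    m = len(pattern)
    scores = [0] * len(text)
    for i in range(len(text) - m + 1):      # each alignment offset
        for j in range(m):
            if text[i + j] == pattern[j]:
                scores[i + j] += 1          # the symbol receives one vote
    return scores

def find_matches(text, pattern, threshold):
    """Start positions whose window's mean score clears the threshold."""
    m = len(pattern)
    s = consensus_scores(text, pattern)
    return [i for i in range(len(text) - m + 1)
            if sum(s[i:i + m]) / m >= threshold]
```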
NASA Technical Reports Server (NTRS)
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd(λ) from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
Information processing using a single dynamical node as complex system
Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.
2011-01-01
Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
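The time-multiplexing trick behind the single-node reservoir can be sketched in a few lines. This is an illustrative toy, not the paper's electronic implementation: one nonlinear node is visited once per "virtual node" slot, its feedback arrives one full delay later, and the input is spread over the slots by a fixed random mask (all parameter names here are ours):

```python
import math, random

def delay_reservoir(inputs, n_virtual=20, eta=0.5, gamma=0.8, seed=0):
    """Minimal single-node delay-line reservoir.
    inputs: sequence of scalar input samples.
    Returns one state vector (the n_virtual virtual-node values) per sample."""
    rng = random.Random(seed)
    mask = [rng.choice([-1.0, 1.0]) for _ in range(n_virtual)]
    delay_line = [0.0] * n_virtual          # virtual-node states, one delay old
    states = []
    for u in inputs:
        new = []
        for i in range(n_virtual):
            fed_back = delay_line[i]        # feedback from one delay ago
            # single nonlinear node, visited once per virtual-node slot
            new.append(math.tanh(eta * fed_back + gamma * mask[i] * u))
        delay_line = new
        states.append(new)
    return states
```

For an actual task (e.g. the speech benchmark), these state vectors would be fed to a simple trained linear readout, as in standard reservoir computing.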
Hou Chin, Jia; Ratnavelu, Kuru
2017-01-01
Community structure is an important feature of a complex network, where detection of the community structure can shed some light on the properties of such a complex network. Amongst the proposed community detection methods, the label propagation algorithm (LPA) emerges as an effective detection method due to its time efficiency. Despite this advantage in computational time, the performance of LPA is affected by randomness in the algorithm. A modified LPA, called CLPA-GNR, was proposed recently and it succeeded in handling the randomness issues in the LPA. However, it did not remove the tendency for trivial detection in networks with a weak community structure. In this paper, an improved CLPA-GNR is therefore proposed. In the new algorithm, the unassigned and assigned nodes are updated synchronously while the assigned nodes are updated asynchronously. A similarity score, based on the Sørensen-Dice index, is implemented to detect the initial communities and for breaking ties during the propagation process. Constraints are utilised during the label propagation and community merging processes. The performance of the proposed algorithm is evaluated on various benchmark and real-world networks. We find that it is able to avoid trivial detection while showing substantial improvement in the quality of detection. PMID:28374836
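For reference, the baseline LPA that CLPA-GNR improves upon fits in a few lines. This is the classic algorithm, not the paper's variant with Sørensen-Dice tie-breaking and constrained propagation; the randomness visible here in node ordering and tie-breaking is exactly what the modified algorithms set out to tame:

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_iters=100):
    """Classic LPA: every node repeatedly adopts the most common label
    among its neighbours; ties are broken at random.
    adj: dict mapping each node to a list of neighbours."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}            # start with unique labels
    nodes = list(adj)
    for _ in range(max_iters):
        rng.shuffle(nodes)                  # asynchronous, random order
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:
                continue                    # isolated node keeps its label
            top = max(counts.values())
            best = rng.choice([l for l, c in counts.items() if c == top])
            if best != labels[v]:
                labels[v], changed = best, True
        if not changed:
            break                           # converged
    return labels
```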
Conway, Mike; Berg, Richard L.; Carrell, David; Denny, Joshua C.; Kho, Abel N.; Kullo, Iftikhar J.; Linneman, James G.; Pacheco, Jennifer A.; Peissig, Peggy; Rasmussen, Luke; Weston, Noah; Chute, Christopher G.; Pathak, Jyotishman
2011-01-01
The need for formal representations of eligibility criteria for clinical trials – and for phenotyping more generally – has been recognized for some time. Indeed, the availability of a formal computable representation that adequately reflects the types of data and logic evidenced in trial designs is a prerequisite for the automatic identification of study-eligible patients from Electronic Health Records. As part of the wider process of representation development, this paper reports on an analysis of fourteen Electronic Health Record oriented phenotyping algorithms (developed as part of the eMERGE project) in terms of their constituent data elements, types of logic used and temporal characteristics. We discovered that the majority of eMERGE algorithms analyzed include complex, nested boolean logic and negation, with several dependent on cardinality constraints and complex temporal logic. Insights gained from the study will be used to augment the CDISC Protocol Representation Model. PMID:22195079
Ahl, Richard E; Keil, Frank C
2016-09-26
Four studies explored the abilities of 80 adults and 180 children (4-9 years), from predominantly middle-class families in the Northeastern United States, to use information about machines' observable functional capacities to infer their internal, "hidden" mechanistic complexity. Children as young as 4 and 5 years old used machines' numbers of functions as indications of complexity and matched machines performing more functions with more complex "insides" (Study 1). However, only older children (6 and older) and adults used machines' functional diversity alone as an indication of complexity (Studies 2-4). The ability to use functional diversity as a complexity cue therefore emerges during the early school years, well before the use of diversity in most categorical induction tasks.
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
A Scheme to Share Information via Employing Discrete Algorithm to Quantum States
NASA Astrophysics Data System (ADS)
Kang, Guo-Dong; Fang, Mao-Fa
2011-02-01
We propose a protocol for information sharing between two legitimate parties (Bob and Alice) via public-key cryptography. In particular, we specialize the protocol by employing a discrete algorithm under mod that maps integers to quantum states via photon rotations. Based on this algorithm, we find that the protocol is secure under various classes of attacks. Specifically, owing to the algorithm, the security of the classical privacy contained in the quantum public-key and the corresponding ciphertext is guaranteed. The protocol is also robust against the impersonation attack and the active wiretapping attack by the design of a particular checking process; thus the protocol is valid.
Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.
Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector
2016-03-01
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.
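A common, much cruder stand-in for Kolmogorov complexity is compressed length. The sketch below is not the coding theorem method or the acss package; it only illustrates the description-length intuition, and it works on exactly the longer strings where, as the abstract notes, compression is usable at all (the coding theorem method exists because compressors fail on strings of length 2-11):

```python
import zlib

def compressed_length(s):
    """Bytes of zlib-compressed s: a rough upper bound on the string's
    description length, usable only for strings long enough to give
    the compressor something to work with."""
    return len(zlib.compress(s.encode(), 9))

regular = "ab" * 50                       # highly patterned, 100 characters
irregular = "the quick brown fox jumps over the lazy dog 0123456789"
# The patterned string compresses far better despite being longer:
assert compressed_length(regular) < compressed_length(irregular)
```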
Algorithmic information theory and the hidden variable question
NASA Technical Reports Server (NTRS)
Fuchs, Christopher
1992-01-01
The admissibility of certain nonlocal hidden-variable theories is examined via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exist a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l, r). From the associated probability measure, the Shannon entropies H_n and H̄_n for the strings l_n and the pairs (l_n, r_n) may be formed. It is shown that such a theory requires that |H̄_n - H_n| be bounded, contrasting the quantum mechanical prediction that it grows with n.
Deciphering the Minimal Algorithm for Development and Information-genesis
NASA Astrophysics Data System (ADS)
Li, Zhiyuan; Tang, Chao; Li, Hao
During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a ``standard worm'' in silico: from the single zygotic cell to about the 500-cell stage, the lineage, position, cell-cell contact and gene expression dynamics were quantified for each cell in order to investigate the principles underlying these extensive data. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules that are capable of running a dynamic model recapitulating the early fate decisions during C. elegans development. We further formalized C. elegans embryogenesis in the language of information genesis. Analysis of the data and model uncovered the global landscape of development in cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as the essential trade-offs in development, and demonstrated general strategies for building multicellular organisms.
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
A fuzzy Petri-net-based mode identification algorithm for fault diagnosis of complex systems
NASA Astrophysics Data System (ADS)
Propes, Nicholas C.; Vachtsevanos, George
2003-08-01
Complex dynamical systems such as aircraft, manufacturing systems, chillers, motor vehicles, submarines, etc. exhibit continuous and event-driven dynamics. These systems undergo several discrete operating modes from startup to shutdown. For example, a certain shipboard system may be operating at half load or full load or may be at start-up or shutdown. Of particular interest are extreme or "shock" operating conditions, which tend to severely impact fault diagnosis or the progression of a fault leading to a failure. Fault conditions are strongly dependent on the operating mode. Therefore, it is essential that in any diagnostic/prognostic architecture, the operating mode be identified as accurately as possible so that such functions as feature extraction, diagnostics, prognostics, etc. can be correlated with the predominant operating conditions. This paper introduces a mode identification methodology that incorporates both time- and event-driven information about the process. A fuzzy Petri net is used to represent the possible successive mode transitions and to detect events from processed sensor signals signifying a mode change. The operating mode is initialized and verified by analysis of the time-driven dynamics through a fuzzy logic classifier. An evidence combiner module is used to combine the results from both the fuzzy Petri net and the fuzzy logic classifier to determine the mode. Unlike most event-driven mode identifiers, this architecture will provide automatic mode initialization through the fuzzy logic classifier and robustness through the combining of evidence of the two algorithms. The mode identification methodology is applied to an AC Plant typically found as a component of a shipboard system.
Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions
NASA Astrophysics Data System (ADS)
Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.
We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), the I-D, D-L and I-J planes, and the Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes, not only the ones which are commonly revealed by the energy, such as the reactant/product (R/P) and the transition state (TS), but also those that are not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction bears the same information-theoretical features as the LMC and FS measures, whereas the identity SN2 exchange reaction does not show a simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.
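The LMC shape complexity combines two of the single measures named above. A minimal discrete sketch, using one common convention (disequilibrium D as the sum of squared probabilities and exponential entropy L = exp(S) with S in nats; the paper works with continuous one-particle densities instead):

```python
from math import log, exp

def lmc_shape_complexity(p):
    """Discrete analogue of the LMC shape complexity C = D * L, where
    D = sum(p_i^2) is the disequilibrium and L = exp(S) is the exponential
    entropy, S being the Shannon entropy in nats. A sketch of one common
    convention, not the continuous-density form used in the paper."""
    S = -sum(pi * log(pi) for pi in p if pi > 0)
    D = sum(pi * pi for pi in p)
    return D * exp(S)

print(lmc_shape_complexity([0.25] * 4))   # uniform: C = 1, minimal complexity
print(lmc_shape_complexity([0.9, 0.1]))   # skewed: C > 1
```

The product structure captures the idea that complexity requires both spread (large L) and concentration (large D), vanishing-order effects that neither measure shows alone.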
Information driven self-organization of complex robotic behaviors.
Martius, Georg; Der, Ralf; Ay, Nihat
2013-01-01
Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predicting information (TiPI) which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems to scale well.
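Predictive information is the mutual information between the past and the future of the sensorimotor process. A toy one-step estimate on a discretized series (a sketch; the paper's TiPI is a time-local variant for nonstationary continuous dynamics, not this symbol-counting estimator):

```python
from collections import Counter
from math import log2

def predictive_information(series, k=1):
    """One-step approximation of predictive information: the mutual
    information (bits) between a symbol and the symbol k steps later in a
    discretized series. A crude stand-in for the excess entropy."""
    pairs = list(zip(series[:-k], series[k:]))
    n = len(pairs)
    p_joint = Counter(pairs)
    p_past = Counter(x for x, _ in pairs)
    p_future = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in p_joint.items():
        pxy = c / n
        mi += pxy * log2(pxy * n * n / (p_past[x] * p_future[y]))
    return mi

# A perfectly predictable alternating process carries about 1 bit of PI,
# while a constant process carries none.
print(predictive_information([0, 1] * 50))
print(predictive_information([0] * 100))
```

Maximizing such a quantity rewards behavior that is both varied (high entropy) and predictable, which is the intuition behind using PI as a drive for self-organization.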
Mutual information model for link prediction in heterogeneous complex networks
Shakibian, Hadi; Moghadam Charkari, Nasrollah
2017-01-01
Recently, a number of meta-path based similarity indices like PathSim, HeteSim, and random walk have been proposed for link prediction in heterogeneous complex networks. However, these indices suffer from two major drawbacks. First, they depend primarily on the connectivity degrees of node pairs, without considering the further information provided by the given meta-path. Second, most of them require a single, usually symmetric, meta-path to be chosen in advance, so employing a set of different meta-paths is not straightforward. To tackle these problems, we propose a mutual information model for link prediction in heterogeneous complex networks. The proposed model, called the Meta-path based Mutual Information Index (MMI), introduces meta-path based link entropy to estimate the link likelihood and can be applied over a set of available meta-paths. This estimation measures the amount of information carried by the paths instead of the amount of connectivity between the node pairs. The experimental results on a bibliography network show that MMI obtains high prediction accuracy compared with other popular similarity indices. PMID:28344326
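The general idea of scoring links by path information rather than raw connectivity can be sketched as follows. This is a generic illustration on a homogeneous toy graph, not the paper's MMI formula: it scores a candidate link by the self-information of a two-hop random walk between its endpoints.

```python
from math import log2

def two_hop_link_information(u, v, adj):
    """Hedged sketch of an information-based link score (not the paper's
    exact MMI index): the self-information -log2(p) of a uniform two-hop
    random walk from u reaching v. Rarer events carry more bits, so a
    lower score suggests a more likely link."""
    p = 0.0
    for w in adj[u]:
        if v in adj[w]:
            p += (1 / len(adj[u])) * (1 / len(adj[w]))
    return float("inf") if p == 0 else -log2(p)

adj = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "d"}, "d": {"b", "c"},
}
# a and d share two 2-hop paths (via b and c): finite, moderate information.
print(two_hop_link_information("a", "d", adj))
```

In the heterogeneous setting of the paper, the walk would instead be constrained to follow a given meta-path, and scores from several meta-paths would be combined.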
NASA Technical Reports Server (NTRS)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient (in terms of wall clock time) and scalable parallel implementations of the algorithms.
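The storage argument above follows directly from the structure of the Kalman filter: the forecast error covariance P is an n x n matrix, so n > 10^6 variables implies more than 10^12 entries. A minimal scalar sketch of the predict/update cycle (an illustration, not the GEOS DAS/PSAS formulation):

```python
def kalman_scalar(zs, q, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with identity dynamics, illustrating
    the predict/update cycle. For an n-variable state the scalar p becomes
    an n x n covariance matrix P (10^6 variables -> ~10^12 entries, the
    teraword-scale storage problem described above). A sketch only."""
    x, p = x0, p0
    for z in zs:
        p = p + q                 # predict: forecast error covariance grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update the estimate with the observation
        p = (1 - k) * p           # update the error covariance
    return x, p

# Noisy observations of a true value of 1.0; the estimate converges toward it.
x, p = kalman_scalar([1.1, 0.9, 1.05, 0.95, 1.0], q=0.01, r=0.1)
print(x, p)
```

In the matrix case each update also costs O(n^3) arithmetic, which is what pushes a full atmospheric Kalman filter toward petaflop/s proportions.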
NASA Astrophysics Data System (ADS)
Giannakis, Dimitrios; Majda, Andrew J.; Horenko, Illia
2012-10-01
Many problems in complex dynamical systems involve metastable regimes despite nearly Gaussian statistics with underlying dynamics that is very different from the more familiar flows of molecular dynamics. There is significant theoretical and applied interest in developing systematic coarse-grained descriptions of the dynamics, as well as assessing their skill for both short- and long-range prediction. Clustering algorithms, combined with finite-state processes for the regime transitions, are a natural way to build such models objectively from data generated by either the true model or an imperfect model. The main theme of this paper is the development of new practical criteria to assess the predictability of regimes and the predictive skill of such coarse-grained approximations through empirical information theory in stationary and periodically-forced environments. These criteria are tested on instructive idealized stochastic models utilizing K-means clustering in conjunction with running-average smoothing of the training and initial data for forecasts. A perspective on these clustering algorithms is explored here with independent interest, where improvement in the information content of finite-state partitions of phase space is a natural outcome of low-pass filtering through running averages. In applications with time-periodic equilibrium statistics, recently developed finite-element, bounded-variation algorithms for nonstationary autoregressive models are shown to substantially improve predictive skill beyond standard autoregressive models.
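The paper pairs K-means clustering with running-average smoothing of the training data. A pure-Python toy of that preprocessing-plus-clustering step on a two-regime series (a sketch; the paper's models, criteria, and smoothing details are more elaborate):

```python
import random

def running_average(xs, w):
    """Low-pass filter the series with a length-w running mean."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

def kmeans_1d(xs, k, iters=50, seed=0):
    """Plain Lloyd's K-means on scalars, the clustering step used to define
    finite-state regime partitions. Returns the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)

# Two noisy metastable regimes around 0 and 5; running-average smoothing
# sharpens the partition before clustering.
rng = random.Random(1)
series = ([rng.gauss(0, 1) for _ in range(200)]
          + [rng.gauss(5, 1) for _ in range(200)])
print(kmeans_1d(running_average(series, 10), k=2))
```

The low-pass filtering is the point emphasized in the abstract: averaging suppresses within-regime fluctuations, so the finite-state partition recovered by clustering carries more information about the regime sequence.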
Encoding techniques for complex information structures in connectionist systems
NASA Technical Reports Server (NTRS)
Barnden, John; Srinivas, Kankanahalli
1990-01-01
Two general information encoding techniques called relative position encoding and pattern similarity association are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short term information processing of the sort needed in common sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. To clarify some issues and to provide evidence that the techniques are indeed useful in practice, the particular, simple forms that relative position encoding and pattern similarity association take in the authors' own connectionist system, Conposit, are discussed.
Integrated computational and conceptual solutions for complex environmental information management
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2016-06-01
This paper presents the recent results of the integration of computational and conceptual solutions for the complex case of environmental information management. The solution for the major goal of creating and developing long-term multi-disciplinary knowledge resources, with conceptual and computational support, was achieved by implementing and integrating key components: long-term knowledge resources providing the structures required for universal knowledge creation, documentation, and preservation; universal multi-disciplinary and multi-lingual conceptual knowledge and classification, especially references to the Universal Decimal Classification (UDC); sustainable workflows for environmental information management; and computational support for dynamical use, processing, and advanced scientific computing with Integrated Information and Computing System (IICS) components and High End Computing (HEC) resources.
Bayesian Case-deletion Model Complexity and Information Criterion
Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia
2015-01-01
We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578
Wu, Jian; Peng, Dao-Li
2011-04-01
Analysis of spectral differences among tree species and improvement of classification algorithms are the difficult points in extracting tree species information from remote sensing images, and are also the keys to improving accuracy of tree species information extraction in areas of farmland returned to forest. TM images were selected for this study, and the spectral indexes that could distinguish tree species information were screened by analyzing the tree species spectra. Afterwards, the tree species information was extracted using an improved support vector machine algorithm. Although errors and confusion exist, this method shows satisfying results, with an overall accuracy of 81.7%; the corresponding result of the traditional method is 72.5%. The method in this paper achieves a more precise extraction of tree species information, and the results can meet the demands of accurate monitoring and decision-making. This method is significant for the rapid assessment of project quality.
NASA Astrophysics Data System (ADS)
Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu
2016-09-01
Genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy, and reduce the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over experience-based spectral regions were established in order to assess the feasibility of genetic algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to these 12 bands as variables, a PLS model of the SPAD values of the corn leaves was established, with r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by PLS, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
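The band-selection step can be sketched with a toy genetic algorithm. This is a hedged stand-in for the paper's GA-PLS setup: chromosomes are band bitmasks, and for simplicity the fitness is the absolute correlation between the mean of the chosen bands and the target (a real GA-PLS would refit a PLS model per chromosome); the data are synthetic.

```python
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def ga_band_selection(X, y, n_bands, pop=30, gens=40, seed=0):
    """Toy GA for band (feature) selection: keep the fitter half each
    generation, refill by one-point crossover plus occasional mutation."""
    rng = random.Random(seed)
    def fitness(mask):
        idx = [i for i in range(n_bands) if mask[i]]
        if not idx:
            return 0.0
        feats = [sum(row[i] for i in idx) / len(idx) for row in X]
        return abs(corr(feats, y))
    population = [[rng.randint(0, 1) for _ in range(n_bands)]
                  for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fitness, reverse=True)[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bands)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:            # mutation
                j = rng.randrange(n_bands)
                child[j] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Synthetic "spectra": only bands 2 and 5 drive the target variable.
rng = random.Random(1)
X = [[rng.random() for _ in range(8)] for _ in range(100)]
y = [row[2] + row[5] + rng.gauss(0, 0.05) for row in X]
best = ga_band_selection(X, y, n_bands=8)
print(best)  # the informative bands 2 and 5 should be selected
```

The payoff mirrors the abstract's point: the selected subset predicts the target with far fewer variables than the full spectrum.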
SNP Markers as Additional Information to Resolve Complex Kinship Cases
Pontes, M. Lurdes; Fondevila, Manuel; Laréu, Maria Victoria; Medeiros, Rui
2015-01-01
Summary Background DNA profiling with sets of highly polymorphic autosomal short tandem repeat (STR) markers has been applied in various aspects of human identification in forensic casework for nearly 20 years. However, in some cases of complex kinship investigation, the information provided by the conventionally used STR markers is not enough, often resulting in low likelihood ratio (LR) calculations. In these cases, it becomes necessary to increase the number of loci under analysis to reach adequate LRs. Recently, it has been proposed that single nucleotide polymorphisms (SNPs) could be used as a supportive tool to STR typing, eventually even replacing the methods/markers now employed. Methods In this work, we describe the results obtained in 7 revised complex paternity cases when applying a battery of STRs, as well as 52 human identification SNPs (SNPforID 52plex identification panel), using a SNaPshot methodology followed by capillary electrophoresis. Results Our results show that the analysis of SNPs as a complement to STR typing in forensic casework applications would increase total PI values, and the corresponding Essen-Möller W values, by at least a factor of 4. Conclusions We demonstrated that SNP genotyping can be a key complement to STR information in challenging casework of disputed paternity, such as close-relative individualization or complex pedigrees subject to endogamous relations. PMID:26733770
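The arithmetic behind adding loci is simple: for independent loci, per-locus likelihood ratios multiply into the total paternity index PI, and Essen-Möller's W = PI / (PI + 1). A sketch with hypothetical LR values (not taken from the paper's cases):

```python
from math import prod

def combined_index_and_w(likelihood_ratios):
    """Combine independent per-locus likelihood ratios into a total
    paternity index PI and Essen-Moller's probability W = PI / (PI + 1)."""
    pi = prod(likelihood_ratios)
    return pi, pi / (pi + 1)

str_lrs = [2.1, 1.8, 3.0, 1.2]          # hypothetical STR per-locus LRs
snp_lrs = [1.4, 1.6, 1.1, 1.9, 1.3]     # hypothetical supplementary SNP LRs
pi_str, w_str = combined_index_and_w(str_lrs)
pi_all, w_all = combined_index_and_w(str_lrs + snp_lrs)
print(pi_all / pi_str)  # factor gained by adding the SNP loci
```

Even individually weak SNP loci (LRs near 1) compound multiplicatively, which is why a 52plex panel can lift an inconclusive STR-only PI into a decisive range.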
Statistical physics of networks, information and complex systems
Ecke, Robert E
2009-01-01
In this project we explore the mathematical methods and concepts of statistical physics that are finding abundant applications across the scientific and technological spectrum, from soft condensed matter systems and bioinformatics to economic and social systems. Our approach exploits the considerable similarity of concepts between statistical physics and computer science, allowing for a powerful multi-disciplinary approach that draws its strength from cross-fertilization and multiple interactions of researchers with different backgrounds. The work on this project takes advantage of the newly appreciated connection between computer science and statistics and addresses important problems in data storage, decoding, optimization, the information processing properties of the brain, the interface between quantum and classical information science, the verification of large software programs, modeling of complex systems including disease epidemiology, resource distribution issues, and the nature of highly fluctuating complex systems. Common themes that the project has been emphasizing are (i) neural computation, (ii) network theory and its applications, and (iii) a statistical physics approach to information theory. The project's efforts focus on the general problem of optimization and variational techniques, algorithm development and information theoretic approaches to quantum systems. These efforts are responsible for fruitful collaborations and the nucleation of science efforts that span multiple divisions such as EES, CCS, D, T, ISR and P. This project supports the DOE mission in Energy Security and Nuclear Non-Proliferation by developing novel information science tools for communication, sensing, and interacting complex networks such as the internet or energy distribution system. The work also supports programs in Threat Reduction and Homeland Security.
A Result on the Computational Complexity of Heuristic Estimates for the A Algorithm.
1983-01-01
[Only OCR fragments of this report's abstract and references survive. The recoverable content indicates that the algorithms are compared according to the criterion "number of node expansions", and that the cited works include Hart, Nilsson and Raphael (1968) and Kibler, "Natural Generation of Admissible Heuristics", Technical Report TR-188 (1982).]
NASA Astrophysics Data System (ADS)
Khorasanizade, Sh.; Sousa, J. M. M.
2016-03-01
A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). Boundaries are formed in this algorithm with chains of lines obtained from the decomposition of two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed at steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, has been carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithms, namely in cases involving complex geometries and moving bodies.
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
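For reference, the classical Givens-rotation QR that the heap-transform method is set against can be sketched in a few lines. This is the standard real-matrix baseline (not the paper's heap-transform algorithm): each rotation zeroes one subdiagonal entry, and accumulating the rotations yields Q.

```python
from math import hypot

def givens_qr(A):
    """Classical Givens-rotation QR of a real n x n matrix: returns (Q, R)
    with A = Q R, R upper triangular and Q orthogonal. A baseline sketch,
    not the heap-transform method described above."""
    n = len(A)
    R = [row[:] for row in A]
    Qt = [[float(i == j) for j in range(n)] for i in range(n)]  # Q transpose
    for j in range(n):
        for i in range(n - 1, j, -1):
            a, b = R[i - 1][j], R[i][j]
            if b == 0.0:
                continue                      # entry already zero
            r = hypot(a, b)
            c, s = a / r, b / r               # rotation zeroing R[i][j]
            for M in (R, Qt):                 # rotate rows i-1 and i
                for k in range(n):
                    M[i - 1][k], M[i][k] = (c * M[i - 1][k] + s * M[i][k],
                                            -s * M[i - 1][k] + c * M[i][k])
    Q = [[Qt[j][i] for j in range(n)] for i in range(n)]
    return Q, R

Q, R = givens_qr([[4.0, 1.0], [3.0, 2.0]])
print(R)  # upper triangular, with Q @ R recovering the input
```

The complex case replaces (c, s) by a unitary 2 x 2 rotation with a complex s, which is where the paper's analytical heap-transform equations depart from this scheme.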
[Detection of QRS complexes using wavelet transformation and golden section search algorithm].
Chen, Wenli; Mo, Zhiwen; Guo, Wen
2009-08-01
The extraction and identification of ECG (electrocardiogram) signal characteristic parameters are the basis of ECG analysis and diagnosis. Fast and precise detection of QRS complexes is very important in ECG signal analysis, for it is a prerequisite for the calculation of correlated parameters as well as for correct diagnosis. In our work, the modulus maxima of the wavelet transform are first applied to the detection of QRS complexes in the ECG signal. When mis-detections or missed detections occur, we use the Golden Section Search algorithm to adjust the threshold of maxima determination. The correct detection rate of the QRS complexes reaches 99.6% on MIT-BIH ECG data.
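The threshold-adjustment step relies on golden section search, which locates the minimum of a unimodal objective without derivatives. A sketch on a toy detection-error curve (an illustration; the actual objective in the paper is the detection error over ECG recordings, not this quadratic):

```python
from math import sqrt

INV_PHI = (sqrt(5) - 1) / 2  # 1/phi, about 0.618

def golden_section_search(f, lo, hi, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [lo, hi].
    Each step shrinks the bracket by the factor 1/phi while reusing one
    interior function evaluation."""
    a, b = lo, hi
    c, d = b - INV_PHI * (b - a), a + INV_PHI * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - INV_PHI * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INV_PHI * (b - a)
            fd = f(d)
    return (a + b) / 2

# Toy detection-error curve whose best threshold is at 0.4.
err = lambda t: (t - 0.4) ** 2 + 0.05
print(round(golden_section_search(err, 0.0, 1.0), 4))  # → 0.4
```

Reusing one interior point per iteration is what makes the method cheap enough to rerun whenever the maxima threshold needs retuning.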
NASA Astrophysics Data System (ADS)
Schwenk, Kurt; Huber, Felix
2015-10-01
Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
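The sequential starting point for the design above is the classic two-pass CCL algorithm. A minimal pure-Python sketch with union-find label merging (the software baseline, not the FPGA pipeline itself):

```python
def label_components(img):
    """Classic two-pass connected-component labeling (4-connectivity):
    pass 1 assigns provisional labels and records equivalences in a
    union-find structure; pass 2 resolves them into a final label mask."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                           # union-find over labels (0 unused)
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    nxt = 1
    for y in range(h):                     # pass 1: provisional labels
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = find(up)
                parent[find(left)] = find(up)   # record equivalence
            elif up or left:
                labels[y][x] = up or left
            else:
                parent.append(nxt)              # new provisional label
                labels[y][x] = nxt
                nxt += 1
    for y in range(h):                     # pass 2: resolve equivalences
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
mask = label_components(img)
print(len({v for row in mask for v in row if v}))  # → 2 components
```

The data-dependent union-find step is exactly the part that resists parallelisation, which motivates the stop-and-go pipeline organisation described in the abstract.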
Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.
Bae, Juhee; Watson, Benjamin
2014-12-01
In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver those cues to viewers using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well-structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real-world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator.
Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity
Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.
2013-01-01
Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of
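Since all results are normalized to Hurst exponents, one of the compared estimators is easy to sketch. Below is the aggregated-variance method on synthetic white noise (an illustration of the estimator only, not the paper's fMRI processing pipeline): block means of size m have variance scaling as m^(2H - 2), and H is read off a log-log least-squares slope.

```python
import random
from math import log
from statistics import mean, variance

def hurst_aggregated_variance(xs, block_sizes=(2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst exponent: fit the slope of
    log var(block means of size m) against log m, then H = 1 + slope / 2."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        means = [mean(xs[i:i + m]) for i in range(0, len(xs) - m + 1, m)]
        logs_m.append(log(m))
        logs_v.append(log(variance(means)))
    mm, mv = mean(logs_m), mean(logs_v)
    slope = (sum((a - mm) * (b - mv) for a, b in zip(logs_m, logs_v))
             / sum((a - mm) ** 2 for a in logs_m))
    return 1 + slope / 2

random.seed(0)
white_noise = [random.gauss(0, 1) for _ in range(4096)]
print(hurst_aggregated_variance(white_noise))  # near 0.5 for white noise
```

Notably, aggregated variance was among the poorest performers in the paper's fMRI comparison, precisely because short, artifact-laden time series violate the clean scaling this estimator assumes.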
Minimal classical communication and measurement complexity for quantum information splitting
NASA Astrophysics Data System (ADS)
Zhang, Zhan-jun; Cheung, Chi-Yee
2008-01-01
We present two quantum information splitting schemes using respectively tripartite GHZ and asymmetric W states as quantum channels. We show that if the secret state is chosen from a special ensemble and known to the sender (Alice), then she can split and distribute it to the receivers Bob and Charlie by performing only a single-qubit measurement and broadcasting a one-cbit message. It is clear that no other schemes could possibly achieve the same goal with simpler measurement and less classical communication. In comparison, existing schemes work for arbitrary quantum states which need not be known to Alice; however she is required to perform a two-qubit Bell measurement and communicate a two-cbit message. Hence there is a trade-off between flexibility and measurement complexity plus classical resource. In situations where our schemes are applicable, they will greatly reduce the measurement complexity and at the same time cut the communication overhead by one half.
Schulte-Herbrueggen, T.; Spoerl, A.; Glaser, S.J.; Khaneja, N.
2005-10-15
In this paper, we demonstrate how optimal control methods can be used to speed up the implementation of modules of quantum algorithms or quantum simulations in networks of coupled qubits. The gain is most prominent in realistic cases, where the qubits are not all mutually coupled. Thus the shortest times obtained depend on the coupling topology as well as on the characteristic ratio of the time scales for local controls vs nonlocal (i.e., coupling) evolutions in the specific experimental setting. Relating these minimal times to the number of qubits gives the tightest known upper bounds to the actual time complexity of the quantum modules. As will be shown, time complexity is a more realistic measure of the experimental cost than the usual gate complexity. In the limit of fast local controls (as, e.g., in NMR), time-optimized realizations are shown for the quantum Fourier transform (QFT) and the multiply controlled NOT gate (C^{n-1}NOT) in various coupling topologies of n qubits. The speed-ups are substantial: in a chain of six qubits the quantum Fourier transform so far obtained by optimal control is more than eight times faster than the standard decomposition into controlled phase, Hadamard and SWAP gates, while the C^{n-1}NOT gate for a completely coupled network of six qubits is nearly seven times faster.
Heidt, Alexander M; Spangenberg, Dirk-Mathys; Brügmann, Michael; Rohwer, Erich G; Feurer, Thomas
2016-11-01
We demonstrate that time-domain ptychography, a recently introduced iterative ultrafast pulse retrieval algorithm, has properties well suited for the reconstruction of complex light pulses with large time-bandwidth products from a cross-correlation frequency-resolved optical gating (XFROG) measurement. It achieves temporal resolution on the scale of a single optical cycle using long probe pulses and low sampling rates. In comparison to existing algorithms, ptychography minimizes the data to be recorded and processed, and significantly reduces the computational time of the reconstruction. Experimentally, we measure the temporal waveform of an octave-spanning, 3.5 ps long, supercontinuum pulse generated in photonic crystal fiber, resolving features as short as 5.7 fs with sub-fs resolution and 30 dB dynamic range using 100 fs probe pulses and similarly large delay steps.
Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Chen, Chao; Xie, Yun; Dai, Zong; Zou, Xiao-Yong
2013-04-05
In the post-genome era, one of the most important and challenging tasks is to identify the subcellular localizations of protein complexes, and further elucidate their functions in human health, with applications to understanding disease mechanisms, diagnosis and therapy. Although various experimental approaches have been developed and employed to identify the subcellular localizations of protein complexes, the laboratory technologies fall far behind the rapid accumulation of protein complexes. Therefore, it is highly desirable to develop a computational method to rapidly and reliably identify the subcellular localizations of protein complexes. In this study, a novel method is proposed for predicting subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm. Protein complexes are modeled as weighted graphs containing nodes and edges, where nodes represent proteins, edges represent protein-protein interactions and weights are descriptors of protein primary structures. Some topological structure features are proposed and adopted to characterize protein complexes based on graph theory. Random forest is employed to construct a model and predict subcellular localizations of protein complexes. Accuracies on a training set by a 10-fold cross-validation test for predicting plasma membrane/membrane attached, cytoplasm and nucleus are 84.78%, 71.30%, and 82.00%, respectively. Accuracies for the independent test set are 81.31%, 69.95% and 81.00%, respectively. These high prediction accuracies exhibit the state-of-the-art performance of the current method. It is anticipated that the proposed method may become a useful high-throughput tool and play a complementary role to the existing experimental techniques in identifying subcellular localizations of mammalian protein complexes. The source code of Matlab and the dataset can be obtained freely on request from the authors.
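As a hedged illustration of the graph-theoretic featurization step (the descriptor set below is a simplification; the paper's descriptors additionally weight edges by protein primary-structure properties), a complex can be encoded as a fixed-length vector of topological statistics before being passed to a classifier such as a random forest:

```python
def graph_features(edges):
    """Simple topological descriptors of a protein complex modeled as an
    undirected graph: node/edge counts, density, mean degree, and mean
    local clustering coefficient."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    n = len(adj)
    m = len(edges)
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    mean_degree = 2 * m / n

    def clustering(u):
        # fraction of a node's neighbour pairs that are themselves linked
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        return 2 * links / (k * (k - 1))

    mean_clust = sum(clustering(u) for u in adj) / n
    return {"nodes": n, "edges": m, "density": density,
            "mean_degree": mean_degree, "mean_clustering": mean_clust}

# toy 4-protein complex: a triangle A-B-C with a pendant protein D
feats = graph_features([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
```

Vectors like `feats` (one per complex) would then serve as rows of the training matrix for the classifier.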
A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.
ERIC Educational Resources Information Center
Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind
1999-01-01
Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...
Development and evaluation of a predictive algorithm for telerobotic task complexity
NASA Technical Reports Server (NTRS)
Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.
1993-01-01
There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
Manikandan, P; Ramyachitra, D; Banupriya, D
2016-04-15
Proteins show their functional activity by interacting with other proteins and forms protein complexes since it is playing an important role in cellular organization and function. To understand the higher order protein organization, overlapping is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Most of the clustering algorithms do not consider the weighted as well as overlapping complexes. In this research, Prorank based Fuzzy algorithm has been proposed to find the overlapping protein complexes. The Fuzzy detection algorithm is incorporated in the Prorank algorithm after ranking step to find the overlapping community. The proposed algorithm executes in an iterative manner to compute the probability of robust clusters. The proposed and the existing algorithms were tested on different datasets such as PPI-D1, PPI-D2, Collins, DIP, Krogan Core and Krogan-Extended, gene expression such as GSE7645, GSE22269, GSE26923, pathways such as Meiosis, MAPK, Cell Cycle, phenotypes such as Yeast Heterogeneous and Yeast Homogeneous datasets. The experimental results show that the proposed algorithm predicts protein complexes with better accuracy compared to other state of art algorithms.
Complexity vs. simplicity: groundwater model ranking using information criteria.
Engelhardt, I; De Aguinaga, J G; Mikat, H; Schüth, C; Liedl, R
2014-01-01
A groundwater model characterized by a lack of field data about hydraulic model parameters and boundary conditions, combined with many observation data sets for calibration purposes, was investigated with respect to model uncertainty. Seven different conceptual models with a stepwise increase from 0 to 30 adjustable parameters were calibrated using PEST. Residuals, sensitivities, the Akaike information criterion (AIC and AICc), the Bayesian information criterion (BIC), and Kashyap's information criterion (KIC) were calculated for a set of seven inverse calibrated models of increasing complexity. Finally, the likelihood of each model was computed. Comparing only the residuals of the different conceptual models leads to overparameterization and a loss of certainty in the conceptual model approach. The model employing only uncalibrated hydraulic parameters, estimated from sedimentological information, obtained the worst AIC, BIC, and KIC values. Using only sedimentological data to derive hydraulic parameters introduces a systematic error into the simulation results and cannot be recommended for generating a valuable model. For numerical investigations with high numbers of calibration data, the BIC and KIC select as optimal a simpler model than the AIC. The model with 15 adjusted parameters was evaluated by AIC as the best option and obtained a likelihood of 98%. The AIC disregards the potential model structure error, and the selection of the KIC is therefore more appropriate. Sensitivities to piezometric heads were highest for the model with only five adjustable parameters, and sensitivity coefficients were directly influenced by the changes in extracted groundwater volumes.
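The criteria used for this ranking have standard closed forms. A minimal sketch under the usual Gaussian least-squares assumption, where -2 ln L = n ln(rss/n) up to a constant; the rss, n, and k values below are hypothetical, not the study's:

```python
import math

def information_criteria(rss, n, k):
    """AIC, small-sample-corrected AICc, and BIC for a least-squares model
    with k adjustable parameters, n observations, and residual sum of
    squares rss (Gaussian likelihood; constants omitted)."""
    neg2lnL = n * math.log(rss / n)
    aic = neg2lnL + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # penalizes small n / large k
    bic = neg2lnL + k * math.log(n)
    return {"AIC": aic, "AICc": aicc, "BIC": bic}

# hypothetical comparison: a parsimonious model vs an overparameterized one
simple = information_criteria(rss=12.0, n=100, k=5)
complex_ = information_criteria(rss=10.0, n=100, k=30)
```

Here the complex model fits slightly better (lower rss) but its parameter penalty dominates, so all three criteria prefer the simpler model; BIC penalizes extra parameters more strongly than AIC once n exceeds about 8.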
NASA Astrophysics Data System (ADS)
Zhang, Chun; Fei, Shu-Min; Zhou, Xing-Peng
2012-12-01
In this paper, we explore the technology of tracking a group of targets with correlated motions in a wireless sensor network. Since a group of targets moves collectively and is restricted within a limited region, it is not worth consuming scarce resources of sensors in computing the trajectory of each single target. Hence, in this paper, the problem is modeled as tracking a geographical continuous region covered by all targets. A tracking algorithm is proposed to estimate the region covered by the target group in each sampling period. Based on the locations of sensors and the azimuthal angle of arrival (AOA) information, the estimated region covering all the group members is obtained. Algorithm analysis provides the fundamental limits to the accuracy of localizing a target group. Simulation results show that the proposed algorithm is superior to the existing hull algorithm due to the reduction in estimation error, which is between 10% and 40% of the hull algorithm, with a similar density of sensors. And when the density of sensors increases, the localization accuracy of the proposed algorithm improves dramatically.
Deconvolution of complex spectra into components by the bee swarm algorithm
NASA Astrophysics Data System (ADS)
Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh
2016-05-01
The bee swarm algorithm is adapted to the problem of deconvolving complex spectral contours into components. A comparison is carried out between biological concepts relating to the behaviour of bees in a colony and mathematical concepts relating to the quality of the obtained solutions (mean square error, random solutions in each iteration). Model experiments, realized on the example of a signal representing a sum of three Lorentzian contours of various intensities and half-widths, confirm the efficiency of the proposed approach.
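A minimal sketch of the idea, assuming a simplified bees-style search (random scouts plus local refinement around the best sites; the paper's exact operators and settings are not reproduced here), fitting a single Lorentzian contour by mean square error:

```python
import random

def lorentzian(x, a, x0, w):
    """Lorentzian contour: intensity a, centre x0, half-width w."""
    return a * w ** 2 / ((x - x0) ** 2 + w ** 2)

def mse(params, xs, ys):
    """Mean square error of a sum-of-Lorentzians model; params is a flat
    list of (a, x0, w) triples."""
    comps = [params[i:i + 3] for i in range(0, len(params), 3)]
    return sum((sum(lorentzian(x, a, x0, w) for a, x0, w in comps) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

def bee_search(xs, ys, n_components, n_scouts=40, n_best=5, iters=200,
               bounds=(0.0, 10.0), seed=1):
    """Simplified bees-style search: random scouts explore the parameter
    space, the best sites are refined locally, and the global best is kept."""
    rnd = random.Random(seed)
    dim = 3 * n_components
    new = lambda: [rnd.uniform(*bounds) for _ in range(dim)]
    swarm = sorted((new() for _ in range(n_scouts)),
                   key=lambda p: mse(p, xs, ys))
    best = swarm[0]
    for _ in range(iters):
        for site in swarm[:n_best]:                   # local refinement
            cand = [v + rnd.gauss(0.0, 0.1) for v in site]
            if mse(cand, xs, ys) < mse(best, xs, ys):
                best = cand
        swarm = sorted(swarm[:n_best] + [new() for _ in range(n_scouts - n_best)],
                       key=lambda p: mse(p, xs, ys))
        if mse(swarm[0], xs, ys) < mse(best, xs, ys):
            best = swarm[0]
    return best

# synthetic single-Lorentzian "spectrum" (parameters are illustrative)
xs = [0.1 * i for i in range(81)]
ys = [lorentzian(x, 5.0, 4.0, 1.0) for x in xs]
best = bee_search(xs, ys, n_components=1)
```

Extending `n_components` to 3 reproduces the setting of the model experiments; the search then works in a 9-dimensional parameter space.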
NASA Astrophysics Data System (ADS)
Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido
2015-12-01
The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks.
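The flat-partition normalized mutual information that the hierarchical version generalizes can be sketched as follows (a standard construction, not the authors' code; partitions are given as per-node label lists):

```python
import math
from collections import Counter

def nmi(part_a, part_b):
    """Normalized mutual information between two flat partitions of the
    same node set: 2*I(A;B) / (H(A) + H(B)), in [0, 1]."""
    n = len(part_a)
    pa = Counter(part_a)
    pb = Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    mi = sum((c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in joint.items())
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    if ha == 0 and hb == 0:
        return 1.0          # both partitions trivial: perfectly aligned
    return 2 * mi / (ha + hb)

identical = nmi([0, 0, 1, 1], [1, 1, 0, 0])     # same split, relabelled
independent = nmi([0, 1, 0, 1], [0, 0, 1, 1])   # statistically unrelated
```

NMI is invariant to label permutations (hence `identical` is 1), and vanishes for independent partitions; the hierarchical mutual information extends this comparison from single partitions to whole hierarchies.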
Bhattacharya, Mahua; Das, Arpita
2011-01-01
Medical image fusion has been used to derive useful complementary information from multimodal images. The step prior to fusion is registration, or proper alignment of the test images, for accurate extraction of detail information. For this purpose, the images to be fused are geometrically aligned using mutual information (MI) as the similarity metric, followed by a genetic algorithm to maximize MI. The proposed fusion strategy, incorporating a multi-resolution approach, extracts finer details from the test images and improves the quality of the composite fused image. The proposed fusion approach is independent of any manual marking or knowledge of fiducial points and starts the procedure automatically. The performance of the proposed genetic-based fusion methodology is compared with a fuzzy clustering algorithm-based fusion approach, and the experimental results show that the genetic-based fusion technique improves the quality of the fused image significantly over the fuzzy approaches.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse bandwidth characteristics and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for their transmission. If these video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles, by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
Exploring the Structural Complexity of Intermetallic Compounds by an Adaptive Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zhao, X.; Nguyen, M. C.; Zhang, W. Y.; Wang, C. Z.; Kramer, M. J.; Sellmyer, D. J.; Li, X. Z.; Zhang, F.; Ke, L. Q.; Antropov, V. P.; Ho, K. M.
2014-01-01
Solving the crystal structures of novel phases with nanoscale dimensions resulting from rapid quenching is difficult due to disorder and competing polymorphic phases. Advances in computer speed and algorithm sophistication have now made it feasible to predict the crystal structure of an unknown phase without any assumptions on the Bravais lattice type, atom basis, or unit cell dimensions, providing a novel approach to aid experiments in exploring complex materials with nanoscale grains. This approach is demonstrated by solving a long-standing puzzle in the complex crystal structures of the orthorhombic, rhombohedral, and hexagonal polymorphs close to the Zr2Co11 intermetallic compound. From our calculations, we identified the hard magnetic phase and the origin of high coercivity in this compound, thus guiding further development of these materials for use as high performance permanent magnets without rare-earth elements.
Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm.
Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin
2015-01-01
With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data will increase the complexity of classification, which is often utilized in the models, and thus reduce efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve the population diversity and prevent local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of datasets was reduced from 77 to 10 and that the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071
NASA Astrophysics Data System (ADS)
Terekhov, Andrew V.
2015-04-01
A spectral-difference parallel algorithm for modeling acoustic and elastic wave fields for the 2.5D geometry in the presence of irregular surface topography is considered. The initial boundary-value problem is transformed to a series of boundary-value problems for elliptic equations via the integral Laguerre transform with respect to time. For solving difference equations, it is proposed to use efficient parallel procedures based on the fast Fourier transform and the dichotomy algorithm, which was designed for solving systems of linear algebraic equations (SLAEs) with tridiagonal and block-tridiagonal matrices. A modification of the dichotomy algorithm for diagonally dominant matrices, which makes it possible to reduce the time of preparatory computations and increase scalability of the method relative to the number of processors, is considered. The influence of different methods of curved boundary approximation on the quality of solution is investigated; practical evaluation of accuracy is performed. Calculations of the wave field with the use of high-resolution meshes for the Canadian Foothills medium model are presented. Implementation of the complex frequency-shifted PML boundary conditions for a dynamic elasticity problem is considered in the context of the spectral-difference approach.
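The tridiagonal SLAEs at the core of such schemes have a sequential O(n) baseline, the Thomas algorithm, which the dichotomy algorithm parallelizes; a minimal sketch of that baseline (the dichotomy algorithm itself is not reproduced here):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused) and right-hand side d,
    by forward elimination and back substitution in O(n)."""
    n = len(b)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / denom) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# diagonally dominant test system with known solution x = [1, 2, 3]
x = thomas_solve([0.0, 1.0, 1.0],    # sub-diagonal
                 [4.0, 4.0, 4.0],    # diagonal
                 [1.0, 1.0, 0.0],    # super-diagonal
                 [6.0, 12.0, 14.0])  # right-hand side
```

Forward elimination is inherently sequential, which is exactly why diagonal dominance is exploited by the dichotomy algorithm to recover parallel scalability.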
A multi-agent genetic algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Li, Zhangtao; Liu, Jing
2016-05-01
Complex networks are popularly used to represent many practical systems in the domains of biology and sociology, and community structure is one of the most important network attributes, one which has received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best known quality functions measuring the quality of the communities of a network. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, coded by a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators are designed, namely a split-and-merging-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation, and a self-learning operator, to increase the modularity value. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. Systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms, and can detect communities with high speed, accuracy and stability.
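The objective being maximized is the standard Newman-Girvan modularity, Q = (1/2m) Σ_ij [A_ij - k_i k_j / 2m] δ(c_i, c_j); a minimal per-community sketch for an undirected, unweighted network (a standard formula, not the MAGA-Net code):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity of a partition: for each community,
    the fraction of edges inside it minus the fraction expected if
    edges were rewired at random preserving degrees."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    inside = {}   # intra-community edge counts
    for u, v in edges:
        if communities[u] == communities[v]:
            c = communities[u]
            inside[c] = inside.get(c, 0) + 1
    dsum = {}     # total degree per community
    for node, d in deg.items():
        c = communities[node]
        dsum[c] = dsum.get(c, 0) + d
    return sum(inside.get(c, 0) / m - (dsum.get(c, 0) / (2 * m)) ** 2
               for c in set(communities.values()))

# two triangles joined by a single bridge edge: a clear 2-community split
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comms = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
Q = modularity(edges, comms)
```

In MAGA-Net each agent encodes one `communities`-style division, and the genetic operators search for the division with the largest Q.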
An algorithm to find critical execution paths of software based on complex network
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Bing; Ren, Rong; Ren, Jiadong
2015-01-01
Critical execution paths play an important role in software systems in terms of reducing the amount of test data, detecting vulnerabilities in the software structure, and analyzing software reliability. However, no efficient methods to discover them exist so far. Thus, in this paper, a complex network-based software algorithm is put forward to find critical execution paths (FCEP) in a software execution network. First, by analyzing the number of sources and sinks in FCEP, the software execution network is divided into AOE subgraphs, and meanwhile a Software Execution Network Serialization (SENS) approach is designed to generate the execution path set in each AOE subgraph, which not only reduces the influence of ring structures on path generation, but also guarantees the integrity of the nodes in the network. Second, according to a novel path similarity metric, a similarity matrix is created to calculate the similarity among sets of path sequences. Third, an efficient method is used to cluster paths through the similarity matrices, and the maximum-length path in each cluster is extracted as the critical execution path. Finally, a set of critical execution paths is derived. The experimental results show that the FCEP algorithm is efficient in mining critical execution paths under a software complex network.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
A Novel Algorithm for the Precise Calculation of the Maximal Information Coefficient
Zhang, Yi; Jia, Shili; Huang, Haiyun; Qiu, Jiqing; Zhou, Changjie
2014-01-01
Measuring associations is an important scientific task. A novel measurement method maximal information coefficient (MIC) was proposed to identify a broad class of associations. As foreseen by its authors, MIC implementation algorithm ApproxMaxMI is not always convergent to real MIC values. An algorithm called SG (Simulated annealing and Genetic) was developed to facilitate the optimal calculation of MIC, and the convergence of SG was proved based on Markov theory. When run on fruit fly data set including 1,000,000 pairs of gene expression profiles, the mean squared difference between SG and the exhaustive algorithm is 0.00075499, compared with 0.1834 in the case of ApproxMaxMI. The software SGMIC and its manual are freely available at http://lxy.depart.hebust.edu.cn/SGMIC/SGMIC.htm. PMID:25322794
NASA Astrophysics Data System (ADS)
Neophytou, Neophytos; Xu, Fang; Mueller, Klaus
2007-03-01
Three-dimensional computed tomography (CT) is a compute-intensive process, due to the large amounts of source and destination data, and this limits the speed at which a reconstruction can be obtained. There are two main approaches to cope with this problem: (i) lowering the overall computational complexity via algorithmic means, and/or (ii) running CT on specialized high-performance hardware. Since the latter requires considerable capital investment into rather inflexible hardware, the former option is all one has typically available in a traditional CPU-based computing environment. However, the emergence of programmable commodity graphics hardware (GPUs) has changed this situation in a decisive way. In this paper, we show that GPUs represent a commodity high-performance parallel architecture that resonates very well with the computational structure and operations inherent to CT. Using formal arguments as well as experiments we demonstrate that GPU-based 'brute-force' CT (i.e., CT at regular complexity) can be significantly faster than CPU-based as well as GPU-based CT with optimal complexity, at least for practical data sizes. Therefore, the answer to the title question: "Can GPU-based processing beat complexity optimization for CT?" is "Absolutely!"
Methods of Information Geometry to model complex shapes
NASA Astrophysics Data System (ADS)
De Sanctis, A.; Gattone, S. A.
2016-09-01
In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus makes it possible to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape predictions. As an application, we study the evolution of the rat skull shape. A future application in Ophthalmology is introduced.
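For reference, in the univariate case the Fisher information of N(μ, σ²) gives the explicit Fisher-Rao line element (the bivariate landmark case is built from the same Fisher information construction):

```latex
% Fisher information for N(\mu,\sigma^2): I(\mu,\sigma) = \mathrm{diag}(1/\sigma^2,\, 2/\sigma^2),
% so the Fisher--Rao line element on the parameter manifold is
ds^{2} \;=\; \frac{d\mu^{2}}{\sigma^{2}} \;+\; \frac{2\,d\sigma^{2}}{\sigma^{2}}.
```

Up to the rescaling μ → μ/√2 this is the hyperbolic (Poincaré half-plane) metric, so Fisher-Rao geodesics between Gaussians can be computed in closed form.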
Zhu, Kai; Shirts, Michael R; Friesner, Richard A; Jacobson, Matthew P
2007-03-01
We optimize a truncated Newton (TN) minimization algorithm and computer package, TNPACK, developed for macromolecular minimizations by applying multiscale methods, analogous to those used in molecular dynamics (e.g., r-RESPA). The molecular mechanics forces are divided into short- and long-range components, with the long-range forces updated only intermittently in the iterative evaluations. This algorithm, which we refer to as MSTN, is implemented as a modification to the TNPACK package and is tested on energy minimizations of protein loops, entire proteins, and protein-ligand complexes and compared with the unmodified truncated Newton algorithm, a quasi-Newton algorithm (LBFGS), and a conjugate gradient algorithm (CG+). In vacuum minimizations, the speedup of MSTN relative to the unmodified TN algorithm (TNPACK) depends on system size and the distance cutoffs used for defining the short- and long-range interactions and the long-range force updating frequency, but it is 4 to 5 times greater in the work reported here. This algorithm works best for the minimization of small portions of a protein and shows some degradation (speedup factor of 2-3) for the minimization of entire proteins. The MSTN algorithm is faster than the quasi-Newton and conjugate gradient algorithms by approximately 1 order of magnitude. We also present a modification of the algorithm which permits minimizations with a generalized Born implicit solvent model, using a self-consistent procedure that increases the computational expense, relative to a vacuum, by only a small factor (∼3-4).
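The short/long-range splitting can be sketched in a plain gradient-descent setting (illustrative only; MSTN applies the idea inside a truncated Newton iteration, which is not reproduced here). The expensive "long-range" gradient is recomputed only intermittently and reused in between:

```python
def minimize_multiscale(x, grad_short, grad_long, steps=200, lr=0.05,
                        long_every=5):
    """Gradient descent in which the expensive long-range gradient is
    recomputed only every `long_every` steps and reused in between,
    mimicking the r-RESPA-style force splitting."""
    g_long = grad_long(x)
    for step in range(steps):
        if step % long_every == 0:
            g_long = grad_long(x)          # intermittent expensive update
        g = [gs + gl for gs, gl in zip(grad_short(x), g_long)]
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# toy objective f(x) = sum(x_i^2); its gradient 2x is split into two halves
grad_short = lambda x: list(x)    # cheap "short-range" half of the gradient
grad_long = lambda x: list(x)     # expensive "long-range" half (stand-in)
xmin = minimize_multiscale([1.0, -2.0], grad_short, grad_long)
```

The stale long-range term introduces a small lag, but for smooth objectives the iteration still converges while paying the expensive evaluation only once per `long_every` steps, which is the source of the reported speedups.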
Study of information hiding algorithm based on GHM and color transfer theory
NASA Astrophysics Data System (ADS)
Ren, Shuai; Mu, De-Jun; Zhang, Tao; Hu, Wei
2009-11-01
Exploiting the fact that, after a first-order GHM multiwavelet transform, the energy of an image gathers and spreads over four components (LL2, LH2, HL2, and HH2) of the sub-image, and using the color-control ability of the lαβ color space from color transfer theory (CTT), an information hiding algorithm based on GHM-CTT is proposed. In this scheme, the robust parameters are embedded in LL2, the hidden information is set in LH2 and HL2 with RAID4, and a fragile sign is set in HH2. The consistency between the embedded data bits’ order and the embedded code of the sub-image is improved by using chaotic mapping and a genetic algorithm. Experimental results indicate that GHM-CTT increases imperceptibility by 15.72% on average and robustness by at least 18.89%.
Reduced-complexity algorithms for data assimilation of large-scale systems
NASA Astrophysics Data System (ADS)
Chandrasekar, Jaganath
Data assimilation is the use of measurement data to improve estimates of the state of dynamical systems using mathematical models. Estimates from models alone are inherently imperfect due to the presence of unknown inputs that affect dynamical systems and model uncertainties. Thus, data assimilation is used in many applications, from satellite tracking to biological systems monitoring. As the complexity of the underlying model increases, so does the complexity of the data assimilation technique. This dissertation considers reduced-complexity algorithms for data assimilation of large-scale systems. For linear discrete-time systems, an estimator that injects data into only a specified subset of the state estimates is considered. Bounds on the performance of the new filter are obtained, and conditions that guarantee the asymptotic stability of the new filter for linear time-invariant systems are derived. We then derive a reduced-order estimator that uses a reduced-order model to propagate the estimator state using a finite-horizon cost, so that solutions of algebraic Riccati and Lyapunov equations are not required. Finally, a reduced-rank square-root filter that propagates only a few columns of the square root of the state-error covariance is developed. Specifically, the columns are chosen from the Cholesky factor of the state-error covariance. Next, data assimilation algorithms for nonlinear systems are considered. We first compare the performance of two suboptimal estimation algorithms, the extended Kalman filter and the unscented Kalman filter. To reduce the computational requirements, variations of the unscented Kalman filter with reduced ensembles are suggested. Specifically, a reduced-rank unscented Kalman filter is introduced whose ensemble members are chosen according to the Cholesky decomposition of the square root of the pseudo-error covariance. Finally, a reduced-order model is used to propagate the pseudo-error covariance, while the full-order model is used to
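The reduced-rank square-root idea, keeping only a few columns of the Cholesky factor of the state-error covariance, can be sketched as follows. This is an assumed simplification for illustration (column selection by norm), not the dissertation's algorithm.

```python
import numpy as np

# Sketch: approximate an SPD state-error covariance P by retaining only the
# r largest-norm columns of its Cholesky factor S (P = S @ S.T), the storage
# trick behind a reduced-rank square-root filter.

def reduced_rank_sqrt(P, r):
    S = np.linalg.cholesky(P)                 # lower-triangular, P = S @ S.T
    norms = np.linalg.norm(S, axis=0)
    keep = np.argsort(norms)[::-1][:r]        # columns with largest norm
    return S[:, keep]                         # P ≈ S_r @ S_r.T

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
P = A @ A.T + 5.0 * np.eye(5)                 # a random SPD covariance
S_r = reduced_rank_sqrt(P, r=3)
P_approx = S_r @ S_r.T                        # rank-3 approximation of P
```

Dropping columns can only remove variance, so the approximation never overstates the uncertainty; keeping all columns recovers P exactly.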
Leliaert, Frederik; Verbruggen, Heroen; Wysor, Brian; De Clerck, Olivier
2009-10-01
DNA-based taxonomy provides a convenient and reliable tool for species delimitation, especially in organisms in which morphological discrimination is difficult or impossible, such as many algal taxa. A group with a long history of confusing species circumscriptions is the morphologically plastic Boodlea complex, comprising the marine green algal genera Boodlea, Cladophoropsis, Phyllodictyon and Struveopsis. In this study, we elucidate species boundaries in the Boodlea complex by analysing nrDNA internal transcribed spacer sequences from 175 specimens collected from a wide geographical range. Algorithmic methods of sequence-based species delineation were applied, including statistical parsimony network analysis, and a maximum likelihood approach that uses a mixed Yule-coalescent model and detects species boundaries based on differences in branching rates at the level of species and populations. Sequence analyses resulted in the recognition of 13 phylogenetic species, although we failed to detect sharp species boundaries, possibly as a result of incomplete reproductive isolation. We found considerable conflict between traditional and phylogenetic species definitions. Identical morphological forms were distributed in different clades (cryptic diversity), and at the same time most of the phylogenetic species contained a mixture of different morphologies (indicating intraspecific morphological variation). Sampling outside the morphological range of the Boodlea complex revealed that the enigmatic, sponge-associated Cladophoropsis (Spongocladia) vaucheriiformis also falls within the Boodlea complex. Given the observed evolutionary complexity and nomenclatural problems associated with establishing a Linnaean taxonomy for this group, we propose to discard provisionally the misleading morphospecies and genus names, and refer to clade numbers within a single genus, Boodlea.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem; it has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of fault reasoning cost as the objective function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and
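The Pareto optimal (non-dominated) set mentioned above is the standard output of any multi-objective optimizer. A minimal sketch of the extraction step, with the ant-colony machinery omitted and all objective values invented:

```python
# Extract the non-dominated (Pareto) set from candidate solutions.
# All objectives are to be minimized, as in the fault-reasoning model.

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(candidates)
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three are non-dominated.
```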
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Li, Bin
2016-03-01
Detecting communities is a challenging task in analyzing networks, and solving the community detection problem with evolutionary algorithms has been an active topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, negative ratio association and ratio cut, are minimized; together they overcome the resolution limitation of modularity. In the proposed algorithm, the nest location updating strategy and the abandon operator of cuckoo search are redefined in discrete form. A local search strategy and a clone operator are proposed to obtain an optimal initial population. Experimental results on synthetic and real-world networks show that the proposed algorithm outperforms other algorithms and can discover higher-quality community structure without prior information.
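The two objectives named in the abstract can be computed directly from the adjacency matrix. A sketch under assumed normalizations (within/between link weight divided by community size; the paper's exact definitions may differ in constant factors):

```python
import numpy as np

# Negative ratio association (NRA) and ratio cut (RC) for a node partition.
# Minimizing NRA favors dense communities; minimizing RC favors weak cuts.

def objectives(A, communities):
    n = A.shape[0]
    nra, rcut = 0.0, 0.0
    for c in communities:
        c = list(c)
        rest = [v for v in range(n) if v not in c]
        within = A[np.ix_(c, c)].sum()        # counts each internal edge twice
        between = A[np.ix_(c, rest)].sum()
        nra -= within / len(c)
        rcut += between / len(c)
    return nra, rcut

# Two triangles joined by a single edge (2 <-> 3): an obvious two-community graph.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
nra, rcut = objectives(A, [{0, 1, 2}, {3, 4, 5}])
# nra = -(6/3 + 6/3) = -4.0 and rcut = 1/3 + 1/3 ≈ 0.667 for the natural split.
```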
Sera White
2012-04-01
This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (~3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
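A back-of-envelope calculation (not the thesis's algorithm; the vehicle mass is an assumed value) shows why net elevation change matters at the scale of the errors reported above: the gravitational term alone, spread over a short hilly trip, amounts to tens or hundreds of Wh/mile.

```python
# Gravitational energy per mile implied by a trip's net elevation change.

G = 9.81          # m/s^2
J_PER_WH = 3600.0

def slope_energy_wh_per_mile(mass_kg, net_climb_m, trip_miles):
    """Potential-energy cost of the net climb, expressed per mile of trip."""
    energy_wh = mass_kg * G * net_climb_m / J_PER_WH
    return energy_wh / trip_miles

# A 1700 kg PHEV (assumed mass) climbing a net 100 m over a 5-mile trip:
extra = slope_energy_wh_per_mile(1700, 100.0, 5.0)
# ≈ 92.7 Wh/mile is unaccounted for if road slope is ignored.
```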
A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries
Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P
2003-12-15
We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.
Daneshmand, Hadi; Gomez-Rodriguez, Manuel; Song, Le; Schölkopf, Bernhard
2015-01-01
Information spreads across social and technological networks, but often the network structures are hidden from us and we only observe the traces left by the diffusion processes, called cascades. Can we recover the hidden network structures from these observed cascades? What kind of cascades and how many cascades do we need? Are there some network structures which are more difficult than others to recover? Can we design efficient inference algorithms with provable guarantees? Despite the increasing availability of cascade-data and methods for inferring networks from these data, a thorough theoretical understanding of the above questions remains largely unexplored in the literature. In this paper, we investigate the network structure inference problem for a general family of continuous-time diffusion models using an ℓ1-regularized likelihood maximization framework. We show that, as long as the cascade sampling process satisfies a natural incoherence condition, our framework can recover the correct network structure with high probability if we observe O(d^3 log N) cascades, where d is the maximum number of parents of a node and N is the total number of nodes. Moreover, we develop a simple and efficient soft-thresholding inference algorithm, which we use to illustrate the consequences of our theoretical results, and show that our framework outperforms other alternatives in practice. PMID:25932466
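The soft-thresholding operator at the heart of ℓ1-regularized estimation is simple enough to show directly. This is a sketch of the operator only, not the paper's full inference algorithm:

```python
import numpy as np

# Soft thresholding: shrink coefficients toward zero and set small ones exactly
# to zero, which is what produces a sparse (edge-selecting) network estimate.

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

w = np.array([3.0, -0.5, 1.2, 0.0])
w_sparse = soft_threshold(w, 1.0)
# ≈ [2.0, 0.0, 0.2, 0.0]: entries with |w| <= 1 are zeroed, the rest shrunk by 1.
```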
Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander
2015-05-22
This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information
Bearing fault component identification using information gain and machine learning algorithms
NASA Astrophysics Data System (ADS)
Vinay, Vakharia; Kumar, Gupta Vijay; Kumar, Kankar Pavan
2015-04-01
In the present study, an attempt has been made to identify various bearing faults using machine learning algorithms. Vibration signals obtained from faults in the inner race, outer race, and rolling element, and from combined faults, are considered. The raw vibration signal cannot be used directly, since vibration signals are masked by noise. To overcome this difficulty, a combined time-frequency domain method, the wavelet transform, is used. Further, a wavelet selection criterion based on minimum permutation entropy is employed to select the most appropriate base wavelet. Statistical features of the selected wavelet coefficients are calculated to form a feature vector. To reduce the size of the feature vector, the information gain attribute selection method is employed. The modified feature set is fed into machine learning algorithms such as random forest and self-organizing map to maximize fault identification efficiency. The results revealed that the attribute selection method improves the fault identification accuracy for bearing components.
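The information-gain criterion used above to prune the feature vector is easy to state: gain = H(class) - H(class | feature). A minimal sketch with invented data (the study's wavelet features are not shown):

```python
from collections import Counter
from math import log2

# Information gain of a discrete feature with respect to class labels.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels = ["inner", "inner", "outer", "outer"]   # made-up fault classes
f_good = ["hi", "hi", "lo", "lo"]               # separates the classes perfectly
f_bad  = ["hi", "lo", "hi", "lo"]               # carries no class information
# information_gain(f_good, labels) → 1.0; information_gain(f_bad, labels) → 0.0,
# so an attribute selector keeps f_good and drops f_bad.
```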
Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information
Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li
2016-01-01
Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918
VS-APPLE: A Virtual Screening Algorithm Using Promiscuous Protein-Ligand Complexes.
Okuno, Tatsuya; Kato, Koya; Terada, Tomoki P; Sasai, Masaki; Chikenji, George
2015-06-22
As the number of structurally resolved protein-ligand complexes increases, the ligand-binding pockets of many proteins have been found to accommodate multiple different compounds. Effective use of these structural data is important for developing virtual screening (VS) methods that identify bioactive compounds. Here, we introduce a VS method, VS-APPLE (Virtual Screening Algorithm using Promiscuous Protein-Ligand complExes), based on promiscuous protein-ligand binding structures. In VS-APPLE, multiple ligands bound to a pocket are combined into a query template for screening. Both the structural match between a test compound and the multiple-ligand template and the possible collisions between the test compound and the target protein are evaluated by an efficient geometric hashing method. The performance of VS-APPLE was examined on a filtered, clustered version of the Directory of Useful Decoys data set. In Area Under the Curve analyses of this data set, VS-APPLE outperformed several popular screening programs. Judging from the performance of VS-APPLE, the structural data of promiscuous protein-ligand bindings could be further analyzed and exploited for developing VS methods.
Applying complexity theory: a review to inform evaluation design.
Walton, Mat
2014-08-01
Complexity theory has increasingly been discussed and applied within evaluation literature over the past decade. This article reviews the discussion and use of complexity theory within academic journal literature. The aim is to identify the issues to be considered when applying complexity theory to evaluation. Reviewing 46 articles, two groups of themes are identified. The first group considers implications of applying complexity theory concepts for defining evaluation purpose, scope and units of analysis. The second group of themes consider methodology and method. Results provide a starting point for a configuration of an evaluation approach consistent with complexity theory, whilst also identifying a number of design considerations to be resolved within evaluation planning.
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize, and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select, from among the possible MMIs, those that capture the information in the multidimensional system response. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to illustrate the predictive capacity of the final MMIs and with real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
On the accuracy of a mutual information algorithm for PET-MR image registration
NASA Astrophysics Data System (ADS)
Karaiskos, P.; Malamitsi, J.; Andreou, J.; Prassopoulos, V.; Valotassiou, V.; Laspas, F.; Sandilos, P.; Torrens, M.
2009-07-01
Image registration has been increasingly used in radiation diagnosis and treatment planning as a means of information integration from different imaging modalities (e.g. MRI, PET, CT). Especially for brain lesions, accurate 3D registration and fusion of MR and PET images can provide comprehensive information about the patient under study by relating functional information from PET images to the detailed anatomical information available in MR images. However, direct PET-MR image fusion in soft tissue is complicated mainly due to the lack of conspicuous anatomical features in PET images. This study describes the implementation and validation of a mutual information registration algorithm for this purpose. Ten patients with brain lesions underwent MR and PET/CT scanning. MR-PET registration was performed a) based on the well validated MR-CT registration technique and copying the transformation to the PET images derived from the PET/CT scan (MR/PET/CT registration method) and b) directly from the MR and PET images without taking into account the CT images (MR/PET registration method). In order to check the registration accuracy of the MR/PET method, the lesion (target) was contoured in the PET images and it was transferred to the MR images using both the above methods. The MR/PET/CT method served as the gold standard for target contouring. Target contours derived by the MR/PET method were compared with the gold standard target contours for each patient and the deviation between the two contours was used to estimate the accuracy of the PET-MR registration method. This deviation was less than 3 mm (i.e. comparable to the imaging voxel of the PET/CT scanning) for 9/10 of the cases studied. Results show that the mutual information algorithm used is able to perform the PET-MR registration reliably and accurately.
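The similarity measure behind mutual-information registration can be sketched from a joint intensity histogram. This is illustrative only (random toy images, assumed bin count); the clinical implementation used in the study is not shown:

```python
import numpy as np

# Mutual information of two images from their joint intensity histogram (nats).
# Registration algorithms seek the spatial transform that maximizes this value.

def mutual_information(img_a, img_b, bins=16):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
aligned  = mutual_information(img, img)                   # identical images: high MI
shuffled = mutual_information(img, rng.random((64, 64)))  # unrelated images: near zero
```

Perfectly aligned identical images give MI equal to the (binned) image entropy, while statistically unrelated images give MI near zero, which is why the measure works across modalities such as PET and MR.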
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…
Kao, Chung-Feng; Chuang, Li-Chung; Kuo, Po-Hsiu
2014-10-01
Many susceptibility genes for complex traits have been identified without conclusive findings. There is a strong need to integrate the rapidly accumulating genomic data from multi-dimensional platforms and to conduct risk evaluation for potential therapeutic and diagnostic uses. We set up an algorithm to computationally search for an optimal weight-vector over the various data sources while minimizing potential noise. Through a gene-prioritization framework, combined scores for the resulting prioritized gene-set were calculated using a genome-wide association (GWA) dataset, followed by evaluation using a weighted genetic risk score and risk-attributed information in an independent GWA dataset. The significance of association in the GWA data was corrected for gene length. Enriched functional pathways were identified for the prioritized gene-set using Gene Ontology analysis. We illustrated our framework with bipolar disorder. 233 prioritized genes were identified from 10,830 candidates curated from six platforms. The prioritized genes were significantly enriched (P_adjusted < 1 × 10^-5) in 18 biological functions and molecular mechanisms, including membrane, synaptic transmission, transmission of nerve impulse, integral to membrane, and plasma membrane. Our risk evaluation demonstrated a higher weighted genetic risk score in bipolar patients than in controls (P-values ranged from 0.002 to 3.8 × 10^-6). More risk information (71%) was extracted from the prioritized genes for bipolar illness than from other candidate-gene sets. Our evidence-based prioritized gene-set provides an opportunity to explore the complex network and to conduct follow-up basic and clinical studies of complex traits.
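A weighted genetic risk score of the kind used above is just a weighted sum of per-SNP risk-allele counts. A toy computation with invented genotypes and weights (the paper's prioritization-derived weights are not shown):

```python
import numpy as np

# Weighted genetic risk score (wGRS): risk-allele counts (0/1/2) per SNP,
# weighted by per-SNP effect sizes, e.g. log odds ratios.

def weighted_grs(genotypes, weights):
    """genotypes: (n_subjects, n_snps) risk-allele counts; weights: (n_snps,)."""
    return np.asarray(genotypes, dtype=float) @ np.asarray(weights, dtype=float)

genotypes = np.array([[0, 1, 2],
                      [2, 2, 1]])
weights = np.log(np.array([1.2, 1.1, 1.4]))   # per-SNP odds ratios on the log scale
scores = weighted_grs(genotypes, weights)
# The second subject carries more risk alleles at the stronger SNPs,
# so scores[1] > scores[0].
```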
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, to precisely characterize the spatial distribution of regional soil quality, the main factors affecting soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were considered; mutual information theory was adopted to select the main environmental factors, and the See5.0 decision tree algorithm was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables, and for the former model, whether expressed as a decision tree or as decision rules, the prediction accuracy was above 80%. For continuous and categorical data alike, mutual information theory integrated with a decision tree can not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: it appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
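The iterative loop described above (initialize moments from the systematic record, then repeatedly fold in the expected moments of the censored below-threshold years) can be sketched for a normal distribution. This is a deliberate simplification: the paper fits the log-Pearson type III, and the normality, the synthetic data, and the iteration count here are all assumptions.

```python
import numpy as np
from math import erf, exp, pi, sqrt

# Simplified EMA sketch. "n_below" historical years are known only to lie below
# the perception threshold T; recorded historical peaks enter as ordinary data.

def _phi(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def _Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ema_normal(systematic, hist_peaks, n_below, T, iters=200):
    data = np.concatenate([systematic, hist_peaks])
    mu, sigma = np.mean(systematic), np.std(systematic, ddof=1)  # initial estimates
    n_total = len(data) + n_below
    for _ in range(iters):
        b = (T - mu) / sigma                         # standardized threshold
        r = _phi(b) / _Phi(b)
        e_below = mu - sigma * r                     # E[X | X < T]
        v_below = sigma**2 * (1.0 - b * r - r * r)   # Var[X | X < T]
        m1 = (data.sum() + n_below * e_below) / n_total
        m2 = (np.sum((data - m1) ** 2)
              + n_below * (v_below + (e_below - m1) ** 2)) / n_total
        mu, sigma = m1, sqrt(m2)                     # updated moment estimates
    return mu, sigma

rng = np.random.default_rng(2)
flows = rng.normal(100.0, 20.0, size=2000)           # synthetic annual peaks
T = 140.0
systematic, historical = flows[:50], flows[50:]
mu_hat, sigma_hat = ema_normal(systematic, historical[historical >= T],
                               int((historical < T).sum()), T)
# mu_hat, sigma_hat should land near the true (100, 20) despite 97% censoring.
```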
Devine, Sean D
2016-02-01
Replication can be envisaged as a computational process that is able to generate and maintain order far from equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact a system. The capability of replicated structures to access high-quality energy and eject disorder allows Landauer's principle, in conjunction with algorithmic information theory, to quantify the entropy requirements of maintaining a system far from equilibrium. Using Landauer's principle, where destabilising processes operating under the second law of thermodynamics change the information content, or algorithmic entropy, of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside intervention. Both diversity in replicated structures and the coupling of different replicated systems increase the ability of the system (or systems) to self-regulate in a changing environment, as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour.
Gopinath, T; Kumar, Anil
2006-12-01
Hadamard spectroscopy has earlier been used to speed-up multi-dimensional NMR experiments. In this work, we speed-up the two-dimensional quantum computing scheme, by using Hadamard spectroscopy in the indirect dimension, resulting in a scheme which is faster and requires the Fourier transformation only in the direct dimension. Two and three qubit quantum gates are implemented with an extra observer qubit. We also use one-dimensional Hadamard spectroscopy for binary information storage by spatial encoding and implementation of a parallel search algorithm.
Zheng, Ying; Yeh, Chen-Wei; Yang, Chi-Da; Jang, Shi-Shang; Chu, I-Ming
2007-08-31
Biological information generated by high-throughput technology has made the systems approach feasible for many biological problems. Using this approach, metabolic pathway optimization has been successfully applied to amino acid production. However, in this technique, gene modifications of the metabolic control architecture as well as enzyme expression levels are coupled, resulting in a mixed integer nonlinear programming problem. Furthermore, the stoichiometric complexity of the metabolic pathway, along with the strongly nonlinear behaviour of the regulatory kinetic models, produces a highly rugged contour in the overall optimization problem. There may exist local optima that achieve the same level of production as the global optimum through different flux distributions. The purpose of this work is to develop a novel stochastic optimization approach, the information guided genetic algorithm (IGA), to discover local optima with different levels of modification of the regulatory loop and different production rates. The novelty of this work lies in combining information theory, local search, and clustering analysis to discover, among the qualified solutions, the local optima that have physical meaning.
Determining the Complexity of the Quantum Adiabatic Algorithm using Quantum Monte Carlo Simulations
2012-12-18
This project studied how efficiently a quantum computer could solve optimization problems using the quantum adiabatic algorithm (QAA). Comparisons were made with a classical heuristic algorithm, WalkSAT. A preliminary study was also made to see if the ... Subject terms: quantum adiabatic algorithm, optimization, Monte Carlo, quantum computer, satisfiability problems, spin glass.
2011-01-01
Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549
Application of Fisher Information to Complex Dynamic Systems (Tucson)
Fisher information was developed by the statistician Ronald Fisher as a measure of the information obtainable from data being used to fit a related parameter. Starting from the work of Ronald Fisher1 and B. Roy Frieden2, we have developed Fisher information as a measure of order ...
Application of Fisher Information to Complex Dynamic Systems
Fisher information was developed by the statistician Ronald Fisher as a measure of the information obtainable from data being used to fit a related parameter. Starting from the work of Ronald Fisher1 and B. Roy Frieden2, we have developed Fisher information as a measure of order ...
NASA Astrophysics Data System (ADS)
Buscema, Massimo; Asadi-Zeydabadi, Masoud; Lodwick, Weldon; Breda, Marco
2016-04-01
Significant applications, such as differentiating Alzheimer's disease from dementia, data mining of social media, or extracting the structural composition of drug cartels, are often modeled as graphs. The structural or topological complexity of a graph, or the lack of it, is quite often useful in understanding and, more importantly, resolving the problem. We propose a new index, which we call the H0 function, to measure the structural/topological complexity of a graph. To do this, we introduce the concept of graph pruning and its associated algorithm, which is used in the development of our measure. We illustrate the behavior of our measure, the H0 function, through different examples found in the appendix. These examples indicate that the H0 function captures useful and important characteristics of a graph. Here, we restrict ourselves to undirected graphs.
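The abstract does not spell out the pruning rule; as a hypothetical illustration, one common form of graph pruning iteratively strips degree-1 (leaf) vertices, and the number of passes needed until nothing more can be removed gives a crude structural-depth signal, with cycles surviving as an irreducible core. The function and representation below are ours.

```python
def prune_leaves(adj):
    """Iteratively remove degree-1 (and isolated) vertices from an
    undirected graph, returning (number of pruning passes, remaining core).

    `adj` is a dict: vertex -> set of neighbours.  A tree is pruned away
    entirely, while every cycle survives all passes.
    """
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    passes = 0
    while True:
        leaves = [v for v, nb in adj.items() if len(nb) <= 1]
        if not leaves:
            return passes, adj                    # nothing left to prune
        for v in leaves:
            for u in adj[v]:
                adj[u].discard(v)                 # detach v from neighbours
            del adj[v]
        passes += 1
```

On a 4-vertex path the endpoints go in pass one and the middle pair in pass two, leaving an empty core; a triangle is never pruned.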
Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo
2013-09-01
An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is employed to seek potential intrinsic low-dimensional structures of high-dimensional hyperspectral data and reduce the original data into a low-dimensional space. Secondly, spatial preprocessing is used to enhance each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperforms the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR (SPPNFINDR) methods.
Cryptanalysis on a scheme to share information via employing a discrete algorithm to quantum states
NASA Astrophysics Data System (ADS)
Amellal, H.; Meslouhi, A.; El Baz, M.; Hassouni, Y.; El Allati, A.
2017-03-01
Recently, Yang and Hwang [Int. J. Theor. Phys. 53, 224 (2014)] demonstrated that the scheme to share information via employing a discrete algorithm to quantum states presented by Kang and Fang [Commun. Theor. Phys. 55, 239 (2011)] suffers from a major vulnerability allowing an eavesdropper to perform a measure-and-resend attack. By introducing an additional checking-state framework, the authors proposed an improved protocol to overcome this weakness. This work calls the invoked vulnerability into question in order to clarify a misinterpretation in the protocol stages. We also introduce a possible information-leakage strategy, known as a faked-state attack, that succeeds despite the proposed improvement, which means that the same security problem may persist. Finally, an upgraded technique is introduced in order to enhance the security of the transmission.
Network algorithmics and the emergence of information integration in cortical models
NASA Astrophysics Data System (ADS)
Nathan, Andre; Barbosa, Valmir C.
2011-07-01
An information-theoretic framework known as integrated information theory (IIT) has been introduced recently for the study of the emergence of consciousness in the brain [D. Balduzzi and G. Tononi, PLoS Comput. Biol. 4, e1000091 (2008)]. IIT purports that this phenomenon is to be equated with the generation of information by the brain surpassing the information that the brain’s constituents already generate independently of one another. IIT is not fully plausible in its modeling assumptions nor is it testable due to severe combinatorial growth embedded in its key definitions. Here we introduce an alternative to IIT which, while inspired by similar information-theoretic principles, seeks to address some of IIT’s shortcomings to some extent. Our alternative framework uses the same network-algorithmic cortical model we introduced earlier [A. Nathan and V. C. Barbosa, Phys. Rev. E 81, 021916 (2010)] and, to allow for somewhat improved testability relative to IIT, adopts the well-known notions of information gain and total correlation applied to a set of variables representing the reachability of neurons by messages in the model’s dynamics. We argue that these two quantities relate to each other in a way that can be used to quantify the system’s efficiency in generating information beyond that which does not depend on integration. We give computational results on our cortical model and on variants thereof that are either structurally random in the sense of an Erdős-Rényi random directed graph or structurally deterministic. We have found that our cortical model stands out with respect to the others in the sense that many of its instances are capable of integrating information more efficiently than most of those others’ instances.
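Total correlation, one of the two quantities the framework adopts, can be computed directly from samples as the gap between the sum of marginal entropies and the joint entropy. A minimal sketch (function names ours):

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def total_correlation(columns):
    """C(X1..Xk) = sum_i H(Xi) - H(X1,...,Xk): the information the
    variables generate jointly beyond what they generate independently."""
    joint = list(zip(*columns))            # joint outcomes, row by row
    return sum(entropy(col) for col in columns) - entropy(joint)
```

Two perfectly correlated binary variables share one full bit, while two independent ones have zero total correlation, which is the sense in which the quantity isolates "integrated" information.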
A NEW FRAMEWORK FOR URBAN SUSTAINABILITY ASSESSMENTS: LINKING COMPLEXITY, INFORMATION AND POLICY
Urban systems emerge as distinct entities from the complex interactions among social, economic and cultural attributes, and information, energy and material stocks and flows that operate on different temporal and spatial scales. Such complexity poses a challenge to identify the...
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has been a notable problem aggravated by the global nurse turnover crisis. The more dissatisfied nurses are with their working environment, the more likely they are to leave, and current undesirable work schedules are partly responsible for that working condition. Basically, there is a lack of complementarity between the head nurse's responsibilities and the nurses' needs. In particular, given the issue of strong nurse preferences, the key challenge in nurse scheduling is the failure to encourage tolerance between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve while satisfying diverse nurse requests and upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the basic EA are discussed and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which can be handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation, as well as flexibility in the search, corresponding to the principles of exploration and exploitation.
An efficient algorithm to accelerate the discovery of complex material formulations
NASA Astrophysics Data System (ADS)
Brell, George; Li, Genyuan; Rabitz, Herschel
2010-05-01
The identification of complex multicomponent material formulations that possess specific optimal properties is a challenging task in materials discovery. The high dimensional composition space needs to be adequately sampled and the properties measured with the goal of efficiently identifying effective formulations. This task must also take into account mass fraction and possibly other constraints placed on the material components. Either combinatorial or noncombinatorial sampling of the composition space may be employed in practice. This paper introduces random sampling-high dimensional model representation (RS-HDMR) as an algorithmic tool to facilitate these nonlinear multivariate problems. RS-HDMR serves as a means to accurately interpolate over sampled materials, and simulations of the technique show that it can be very efficient. A variety of simulations is carried out modeling multicomponent→property relationships, and the results show that the number of sampled materials to attain a given level of accuracy for a predicted property does not significantly depend on the number of components in the formulation. Although RS-HDMR best operates in the laboratory by guided iterative rounds of random sampling of the composition space along with property observation, the technique was tested successfully on two existing databases of a seven component phosphor material and a four component deNOx catalyst for reduction of NO with C3H6.
NASA Astrophysics Data System (ADS)
Erlingis, J. M.; Gourley, J. J.; Kirstetter, P.; Anagnostou, E. N.; Kalogiros, J. A.; Anagnostou, M.
2015-12-01
An Intensive Observation Period (IOP) for the Integrated Precipitation and Hydrology Experiment (IPHEx), part of NASA's Ground Validation campaign for the Global Precipitation Measurement Mission satellite, took place from May-June 2014 in the Smoky Mountains of western North Carolina. The National Severe Storms Laboratory's mobile dual-pol X-band radar, NOXP, was deployed in the Pigeon River Basin during this time and employed various scanning strategies, including more than 1000 Range Height Indicator (RHI) scans in coordination with another radar and research aircraft. Rain gauges and disdrometers were also positioned within the basin to verify precipitation estimates and the estimation of microphysical parameters. The performance of the SCOP-ME post-processing algorithm on NOXP data is compared with real-time and near real-time precipitation estimates with varying spatial resolutions and quality control measures (Stage IV gauge-corrected radar estimates, Multi-Radar/Multi-Sensor System Quantitative Precipitation Estimates, and CMORPH satellite estimates) to assess the utility of a gap-filling radar in complex terrain. Additionally, the RHI scans collected in this IOP provide a valuable opportunity to examine the evolution of microphysical characteristics of convective and stratiform precipitation as they impinge on terrain. To further the understanding of orographically enhanced precipitation, multiple storms for which RHI data are available are considered.
NASA Astrophysics Data System (ADS)
Liao, Yen-Che; Kao, Honn; Rosenberger, Andreas; Hsu, Shu-Kun; Huang, Bor-Shouh
2012-06-01
Conventional earthquake location methods depend critically on the correct identification of seismic phases and their arrival times from seismograms. Accurate phase picking is particularly difficult for aftershocks that occur closely in time and space, mostly because of the ambiguity of correlating the same phase at different stations. In this study, we introduce an improved Source-Scanning Algorithm (ISSA) for the purpose of delineating the complex distribution of aftershocks without time-consuming and labour-intensive phase-picking procedures. The improvements include the application of a ground motion analyser to separate P and S waves, the automatic adjustment of time windows for 'brightness' calculation based on the scanning resolution and a modified brightness function to combine constraints from multiple phases. Synthetic experiments simulating a challenging scenario are conducted to demonstrate the robustness of the ISSA. The method is applied to a field data set selected from the ocean-bottom-seismograph records of an offshore aftershock sequence southwest of Taiwan. Although visual inspection of the seismograms is ambiguous, our ISSA analysis clearly delineates two events that can best explain the observed waveform pattern.
Diago, Luis A; Morell, Persy; Aguilera, Longendri; Moreno, Ernesto
2007-01-01
Background The number of algorithms available to predict ligand-protein interactions is large and ever-increasing. The number of test cases used to validate these methods is usually small and problem dependent. Recently, several databases have been released for further understanding of protein-ligand interactions, having the Protein Data Bank as backend support. Nevertheless, it appears to be difficult to test docking methods on a large variety of complexes. In this paper we report the development of a new database of protein-ligand complexes tailored for testing of docking algorithms. Methods Using a new definition of molecular contact, small ligands contained in the 2005 PDB edition were identified and processed. The database was enriched in molecular properties. In particular, an automated typing of ligand atoms was performed. A filtering procedure was applied to select a non-redundant dataset of complexes. Data mining was performed to obtain information on the frequencies of different types of atomic contacts. Docking simulations were run with the program DOCK. Results We compiled a large database of small ligand-protein complexes, enriched with different calculated properties, that currently contains more than 6000 non-redundant structures. As an example to demonstrate the value of the new database, we derived a new set of chemical matching rules to be used in the context of the program DOCK, based on contact frequencies between ligand atoms and points representing the protein surface, and proved their enhanced efficiency with respect to the default set of rules included in that program. Conclusion The new database constitutes a valuable resource for the development of knowledge-based docking algorithms and for testing docking programs on large sets of protein-ligand complexes. The new chemical matching rules proposed in this work significantly increase the success rate in DOCKing simulations. The database developed in this work is available at . PMID:17718923
NOAA's Honua: Visualizations of Complex Environmental Information in Formal and Informal Education
NASA Astrophysics Data System (ADS)
McBride, M. A.; Stovall, W. K.; Lewinski, S.; Bennett, S.
2010-12-01
The National Oceanic and Atmospheric Administration (NOAA) Pacific Services Center supports a data visualization program, called NOAA's Honua, for the presentation of geophysical processes and environmental data in both formal and informal education settings using 3-D technology. Many display systems are available for the virtual representation of global environmental data, including Google Earth, NASA World Wind, and ESRI's ArcGIS Explorer. All present global data on virtual 3-D platforms using industry standard vector and raster data sources. Other products project earth system data on 3-D spherical platforms: NOAA's Science on a Sphere, Global Imagination's Magic Planet, and the OmniGlobe spherical display system. The NOAA Pacific Services Center provides resources for formal education in the form of lesson plans that cover ocean, climate, and hazards science. Components of NOAA's Honua also utilize spherical display systems for public outreach in a variety of venues, including conferences, community events, and science learning centers. In these settings, NOAA's Honua combines written narratives and accompanying audio in an interactive kiosk. Web-based 3-D interactive components are available and complement both the formal and informal education components. The strength of this program is that complex geophysical processes are presented in intuitive and compelling formats that are readily accessible via the Internet and can be viewed at science centers and museums.
Work-Facilitating Information Visualization Techniques for Complex Wastewater Systems
NASA Astrophysics Data System (ADS)
Ebert, Achim; Einsfeld, Katja
The design and the operation of urban drainage systems and wastewater treatment plants (WWTP) have become increasingly complex. This complexity is due to increased requirements concerning process technology as well as technical, environmental, economical, and occupational safety aspects. The plant operator has access not only to some timeworn files and measured parameters but also to numerous on-line and off-line parameters that characterize the current state of the plant in detail. Moreover, expert databases and specific support pages of plant manufacturers are accessible through the World Wide Web. Thus, the operator is overwhelmed with predominantly unstructured data.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Nicolazzi, E L; Biffani, S; Jansen, G
2013-04-01
Routine genomic evaluations frequently include a preliminary imputation step, requiring high accuracy and reduced computing time. A new algorithm, PedImpute (http://dekoppel.eu/pedimpute/), was developed and compared with findhap (http://aipl.arsusda.gov/software/findhap/) and BEAGLE (http://faculty.washington.edu/browning/beagle/beagle.html), using 19,904 Holstein genotypes from a 4-country international collaboration (United States, Canada, UK, and Italy). Different scenarios were evaluated on a sample subset that included only single nucleotide polymorphisms from the Bovine low-density (LD) Illumina BeadChip (Illumina Inc., San Diego, CA). Comparative criteria were computing time, percentage of missing alleles, percentage of wrongly imputed alleles, and the allelic squared correlation. Imputation accuracy on ungenotyped animals was also analyzed. The algorithm PedImpute was slightly more accurate and faster than findhap and BEAGLE when sire, dam, and maternal grandsire were genotyped at high density. On the other hand, BEAGLE performed better than both PedImpute and findhap for animals with at least one close relative not genotyped or genotyped at low density. However, computing time and resources using BEAGLE were incompatible with routine genomic evaluations in Italy. Error rate and allelic squared correlation attained by PedImpute ranged from 0.2 to 1.1% and from 96.6 to 99.3%, respectively. When complete genomic information on sire, dam, and maternal grandsire is available, as expected to be the case in the near future in (at least) dairy cattle, and considering the accuracies obtained and the computation time required, PedImpute represents a valuable choice in routine evaluations among the algorithms tested.
Scale effects on information content and complexity of streamflows
Technology Transfer Automated Retrieval System (TEKTRAN)
Understanding temporal and spatial variations of streamflows is important for flood forecasting, water resources management, and revealing interactions between hydrologic processes (e.g., precipitation, evapotranspiration, and soil water and groundwater flows.) The information theory has been used i...
Complexity and information flow analysis for multi-threaded programs
NASA Astrophysics Data System (ADS)
Ngo, Tri Minh; Huisman, Marieke
2017-01-01
This paper studies the security of multi-threaded programs. We combine two methods, i.e., qualitative and quantitative security analysis, to check whether a multi-threaded program is secure or not. In this paper, besides reviewing classical analysis models, we present a novel model of quantitative analysis where the attacker is able to select the scheduling policy. This model does not follow the traditional information-theoretic channel setting. Our analysis first studies what extra information an attacker can get if he knows the scheduler's choices, and then integrates this information into the transition system modeling the program execution. Via a case study, we compare this approach with the traditional information-theoretic models, and show that this approach gives results that better match intuition.
Phenylketonuria and Complex Spatial Visualization: An Analysis of Information Processing.
ERIC Educational Resources Information Center
Brunner, Robert L.; And Others
1987-01-01
The study of the ability of 16 early treated phenylketonuric (PKU) patients (ages 6-23 years) to solve complex spatial problems suggested that choice of problem-solving strategy, attention span, and accuracy of mental representation may be affected in PKU patients, despite efforts to maintain well-controlled phenylalanine concentrations in the…
NASA Astrophysics Data System (ADS)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial
2012-11-01
ICES Report 12-43, November 2012. Functional Entropy Variables: A New Methodology for Deriving Thermodynamically Consistent Algorithms for Complex Fluids. Gomez, John A. Evans, Thomas J.R. Hughes, and Chad M. Landis.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1979-01-01
Navigation position estimates are based on range information from a randomly located DME and MLS back azimuth angular information. MLS volumetric coverage checks are performed to ensure that proper navigation inputs are being utilized. These algorithms and volumetric checks were designed so that they could be added to most existing area navigation systems with minimum software modification.
2012-01-01
The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms. PMID:22413926
A new fast algorithm for computing complex number-theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
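The underlying idea, exact circular convolution via a number-theoretic transform modulo a Mersenne prime, can be sketched directly. The paper's transform works over GF(q²) with a high-radix FFT; the simplified illustration below works over GF(q) with q = 2¹³ − 1 and a naive O(N²) transform, so it shows the arithmetic, not the paper's speed.

```python
def ntt(a, w, p):
    """Naive O(N^2) number-theoretic transform over GF(p)."""
    n = len(a)
    return [sum(a[j] * pow(w, i * j, p) for j in range(n)) % p
            for i in range(n)]

def circular_convolution(a, b, p):
    """Exact circular convolution of integer sequences via the NTT:
    transform both inputs, multiply pointwise, inverse-transform."""
    n = len(a)
    assert (p - 1) % n == 0, "need an order-n root of unity mod p"
    # find an element of multiplicative order exactly n
    w = next(g for g in range(2, p)
             if pow(g, n, p) == 1
             and all(pow(g, k, p) != 1 for k in range(1, n)))
    A, B = ntt(a, w, p), ntt(b, w, p)
    C = [(x * y) % p for x, y in zip(A, B)]
    w_inv = pow(w, p - 2, p)          # Fermat inverse: p is prime
    n_inv = pow(n, p - 2, p)
    return [(n_inv * v) % p for v in ntt(C, w_inv, p)]
```

Because all arithmetic is modular and exact, the convolution is free of the rounding error a floating-point FFT would introduce.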
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in algorithms to speed up computations. They mostly rely on orthogonal space subdivision or hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve actual speed-up. In the case of a convex polygon in E2, a simple Point-in-Polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved similarly.
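For the E2 case, the O(log N) point-in-convex-polygon test mentioned above can be realized by binary search over the triangle fan from one vertex. This is the standard logarithmic technique the abstract compares against, not the paper's O(1) subdivision method; the code below assumes a counter-clockwise vertex order.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o): >0 if b is left of ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(pt, poly):
    """O(log N) inclusion test; poly is a CCW-ordered list of vertices."""
    n = len(poly)
    # reject points outside the angular wedge at poly[0]
    if cross(poly[0], poly[1], pt) < 0:
        return False
    if cross(poly[0], poly[n - 1], pt) > 0:
        return False
    # binary search for the fan triangle containing pt
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], pt) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], pt) >= 0
```

Only two cross products bound the search and one more finishes it, so the cost is logarithmic in the vertex count, versus the linear scan of the naive test.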
NASA Astrophysics Data System (ADS)
Xie, Li; Li, Guangyao; Xiao, Mang; Peng, Lei
2016-04-01
Various kinds of remote sensing image classification algorithms have been developed to adapt to the rapid growth of remote sensing data. Conventional methods typically have restrictions in either classification accuracy or computational efficiency. To overcome these difficulties, a new solution for remote sensing image classification is presented in this study. A discretization algorithm based on information entropy is applied to extract features from the data set, and a vector space model (VSM) is employed as the feature representation algorithm. Because of the simple structure of the feature space, the training rate is accelerated. The performance of the proposed method is compared with two other algorithms: the back-propagation neural network (BPNN) method and the ant colony optimization (ACO) method. Experimental results confirm that the proposed method is superior to the other algorithms in terms of classification accuracy and computational efficiency.
Comments on "A robust fuzzy local information C-means clustering algorithm".
Celik, Turgay; Lee, Hwee Kuan
2013-03-01
In a recent paper, Krinidis and Chatzis proposed a variation of the fuzzy c-means algorithm for image clustering, in which local spatial and gray-level information are incorporated in a fuzzy way through an energy function, and local minimizers of this energy function are used to obtain the fuzzy membership of each pixel and the cluster centers. In this paper, it is shown that the iterative minimizers of Krinidis and Chatzis for the fuzzy memberships and the cluster centers are not necessarily true local minimizers of their designed energy function. Thus, their iterations fail to converge to the correct local minima of the energy function, not because they become trapped in local minima, but because of the design of the energy function itself.
Algorithms for biomagnetic source imaging with prior anatomical and physiological information
Hughett, Paul William
1995-12-01
This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
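As a rough illustration of the kind of estimator OCLIM generalizes, the sketch below implements the standard linear MMSE (Bayesian/Wiener-type) solution for b = Ax + n with a Gaussian prior on x; the function name and interface are hypothetical, not taken from the dissertation:

```python
import numpy as np

def mmse_linear_estimate(A, b, mu_x, C_x, C_n):
    """MMSE estimate of x in b = A x + n, with prior x ~ N(mu_x, C_x)
    and noise n ~ N(0, C_n):

        x_hat = mu_x + (A^T C_n^-1 A + C_x^-1)^-1 A^T C_n^-1 (b - A mu_x)

    With a flat prior this reduces to weighted least squares; with
    diagonal covariances it reduces to a Wiener-style estimator.
    """
    Cn_inv = np.linalg.inv(C_n)
    P = np.linalg.inv(A.T @ Cn_inv @ A + np.linalg.inv(C_x))  # posterior cov.
    return mu_x + P @ A.T @ Cn_inv @ (b - A @ mu_x)
```

The prior covariance C_x is what encodes "probable source positions and amplitudes": sources believed inactive get small prior variance and are pulled toward their prior mean.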
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for developing a logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, along with algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large-scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased greatly, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic-information collection methods, vehicle information extraction from high-resolution remote sensing imagery has the advantages of high resolution and wide coverage, which is of great guiding significance to urban planning, transportation management, travel route choice and so on. Firstly, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, histogram equalization and linear enhancement were applied to the preprocessing results in order to obtain the optimal threshold for image segmentation. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. Then, the above two processing results were combined. Finally, geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright-vehicle extraction and dark-vehicle extraction, and the extraction results of the two kinds of vehicles were combined to obtain the final results. The experimental results demonstrate that the proposed algorithm achieves high precision in vehicle information extraction for different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
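The vegetation/water suppression step can be illustrated with a minimal sketch of the two indices; the thresholds and function names are illustrative assumptions, and the NDWI here follows the common Green/NIR (McFeeters) form:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-12)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + 1e-12)

def suppress_vegetation_and_water(image, ndvi_map, ndwi_map,
                                  veg_thresh=0.3, water_thresh=0.3):
    """Zero out pixels classified as vegetation or water, leaving
    candidate road/vehicle pixels for the later extraction stages."""
    keep = (ndvi_map < veg_thresh) & (ndwi_map < water_thresh)
    return np.where(keep, image, 0)
```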
ERIC Educational Resources Information Center
Puerta Melguizo, Mari Carmen; Vidya, Uti; van Oostendorp, Herre
2012-01-01
We studied the effects of menu type, navigation path complexity and spatial ability on information retrieval performance and web disorientation or lostness. Two innovative aspects were included: (a) navigation path relevance and (b) information gathering tasks. As expected we found that, when measuring aspects directly related to navigation…
Validating Information Complexity Questionnaires Using Travel Web Sites
2009-07-01
from Dallas, TX, to Yellowstone National Park on particular dates; 2) buy an airline ticket for one person from Oklahoma City, OK, to Chicago, IL, on...
Measures of Information Complexity and the Implications for Automation Design
2004-10-01
Miller, & Lane, 1998). Morçöl and Asche (1993) used this index to measure the creativity of persons in several social groups. The results indicated... Langton C (1991). Life at the edge of chaos. In: Langton C, Taylor C, Farmer J, Rasmussen S, eds. Proceedings Artificial Life II; Redwood City, California: Addison-Wesley; 41-91. McCabe (1976). A complexity measure. IEEE Transactions on Software Engineering; SE-2:308-20. Morçöl G, Asche M (1993
NASA Technical Reports Server (NTRS)
Wang, Lui; Valenzuela-Rendon, Manuel
1993-01-01
The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model and the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near-optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
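A minimal genetic algorithm of the kind described can be sketched on a toy bit-string fitness; this stands in for the paper's resupply simulator and encoding, and all parameters are illustrative:

```python
import random

def evolve(fitness, genome_len, pop_size=30, generations=60,
           p_mut=0.05, seed=42):
    """Minimal generational GA: tournament selection, one-point
    crossover, bit-flip mutation, and elitism (the best schedule
    found so far always survives, so fitness is monotone)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                                     # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)       # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]                     # one-point crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

In the resupply setting, the fitness function would be replaced by a call to the flight-schedule simulator, and robustness could be rewarded by evaluating each candidate under small perturbations of the flight manifest.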
Xia, Peng; Shimozato, Yuki; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu
2013-01-01
We propose an image reconstruction algorithm for recovering high-frequency information in parallel phase-shifting digital holography. The proposed algorithm applies three kinds of interpolations and generates three different object waves. A Fourier transform is applied to each object wave, and the spatial-frequency domain is divided into 3×3 segments for each Fourier-transformed object wave. For each segment address, the segment with the least interpolation error among the three object waves is extracted. The extracted segments are combined to generate an information-enhanced spatial-frequency spectrum of the object wave, which is then inversely Fourier transformed. In this way the high-frequency information of the reconstructed image is recovered. The effectiveness of the proposed algorithm was verified by a numerical simulation and an experiment.
Beyer, Hans-Georg
2014-01-01
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy optimizing the expected value of the objective functions leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state of the art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals in the asymptotic limit-up to a scalar factor-the inverse of the Hessian of the objective function considered.
Algorithmic information content, Church-Turing thesis, physical entropy, and Maxwell's demon
Zurek, W.H.
1990-01-01
Measurements convert alternative possibilities of potential outcomes into the definiteness of "the record" -- data describing the actual outcome. The resulting decrease of statistical entropy has been, since the inception of Maxwell's demon, regarded as a threat to the second law of thermodynamics. For, when the statistical entropy is employed as the measure of the useful work which can be extracted from the system, its decrease by the information-gathering actions of the observer would lead one to believe that, at least from the observer's viewpoint, the second law can be violated. I show that the decrease of ignorance does not necessarily lead to the lowering of disorder of the measured physical system. Measurements can only convert uncertainty (quantified by the statistical entropy) into randomness of the outcome (given by the algorithmic information content of the data). The ability to extract useful work is measured by physical entropy, which is equal to the sum of these two measures of disorder. Physical entropy so defined is, on average, constant in the course of the measurements carried out by the observer on an equilibrium system. 27 refs., 6 figs.
Van Beurden, Eric K; Kia, Annie M; Zask, Avigdor; Dietrich, Uta; Rose, Lauren
2013-03-01
Health promotion addresses issues from the simple (with well-known cause/effect links) to the highly complex (webs and loops of cause/effect with unpredictable, emergent properties). Yet there is no conceptual framework within its theory base to help identify approaches appropriate to the level of complexity. The default approach favours reductionism--the assumption that reducing a system to its parts will inform whole system behaviour. Such an approach can yield useful knowledge, yet is inadequate where issues have multiple interacting causes, such as social determinants of health. To address complex issues, there is a need for a conceptual framework that helps choose action that is appropriate to context. This paper presents the Cynefin Framework, informed by complexity science--the study of Complex Adaptive Systems (CAS). It introduces key CAS concepts and reviews the emergence and implications of 'complex' approaches within health promotion. It explains the framework and its use with examples from contemporary practice, and sets it within the context of related bodies of health promotion theory. The Cynefin Framework, especially when used as a sense-making tool, can help practitioners understand the complexity of issues, identify appropriate strategies and avoid the pitfalls of applying reductionist approaches to complex situations. The urgency to address critical issues such as climate change and the social determinants of health calls for us to engage with complexity science. The Cynefin Framework helps practitioners make the shift, and enables those already engaged in complex approaches to communicate the value and meaning of their work in a system that privileges reductionist approaches.
Supramolecular chemistry: from molecular information towards self-organization and complex matter
NASA Astrophysics Data System (ADS)
Lehn, Jean-Marie
2004-03-01
Molecular chemistry has developed a wide range of very powerful procedures for constructing ever more sophisticated molecules from atoms linked by covalent bonds. Beyond molecular chemistry lies supramolecular chemistry, which aims at developing highly complex chemical systems from components interacting via non-covalent intermolecular forces. By the appropriate manipulation of these interactions, supramolecular chemistry became progressively the chemistry of molecular information, involving the storage of information at the molecular level, in the structural features, and its retrieval, transfer, and processing at the supramolecular level, through molecular recognition processes operating via specific interactional algorithms. This has paved the way towards apprehending chemistry also as an information science. Numerous receptors capable of recognizing, i.e. selectively binding, specific substrates have been developed, based on the molecular information stored in the interacting species. Suitably functionalized receptors may perform supramolecular catalysis and selective transport processes. In combination with polymolecular organization, recognition opens ways towards the design of molecular and supramolecular devices based on functional (photoactive, electroactive, ionoactive, etc) components. A step beyond preorganization consists in the design of systems undergoing self-organization, i.e. systems capable of spontaneously generating well-defined supramolecular architectures by self-assembly from their components. Self-organization processes, directed by the molecular information stored in the components and read out at the supramolecular level through specific interactions, represent the operation of programmed chemical systems. They have been implemented for the generation of a variety of discrete functional architectures of either organic or inorganic nature. Self-organization processes also give access to advanced supramolecular materials, such as
Abduallah, Yasser; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel
2017-01-01
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool. PMID:28243601
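The information-theoretic core of such inference can be sketched by scoring candidate GRN edges with histogram-estimated mutual information between expression time series; this is a serial stand-in for the paper's MapReduce formulation, and the bin count and function names are assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (in bits) between two
    expression time series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0                      # avoid log(0) on empty cells
    return float((pxy[nz] *
                  np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

def score_edges(expr):
    """expr: genes x timepoints matrix; returns a symmetric matrix of
    pairwise MI scores, which a thresholding step would turn into edges."""
    g = expr.shape[0]
    mi = np.zeros((g, g))
    for i in range(g):
        for j in range(i + 1, g):
            mi[i, j] = mi[j, i] = mutual_information(expr[i], expr[j])
    return mi
```

In a MapReduce setting, each mapper would compute the MI scores for a shard of gene pairs and the reducer would assemble the score matrix.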
Abduallah, Yasser; Turki, Turki; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel; Wang, Jason T L
2017-01-01
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool.
Maximum likelihood: Extracting unbiased information from complex networks
NASA Astrophysics Data System (ADS)
Garlaschelli, Diego; Loffredo, Maria I.
2008-07-01
The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
Clark, G A
2004-06-08
In general, the phase retrieval from modulus problem is very difficult. In this report, we solve the difficult but somewhat more tractable case in which we constrain the solution to a minimum-phase reconstruction. We exploit the real- and imaginary-part sufficiency properties of the Fourier and Hilbert transforms of causal sequences to develop an algorithm for reconstructing spectral phase given only the spectral modulus. The algorithm uses homomorphic signal processing methods with the complex cepstrum. The formal problem of interest is: given measurements of only the modulus |H(k)| (no phase) of the Discrete Fourier Transform (DFT) of a real, finite-length, stable, causal time-domain signal h(n), compute a minimum-phase reconstruction ĥ(n) of the signal. Then compute the phase of ĥ(n) using a DFT, and exploit the result as an estimate of the phase of h(n). The development of the algorithm is quite involved, but the final algorithm and its implementation are very simple. This work was motivated by a phase retrieval from modulus problem that arose in LLNL Defense Sciences Engineering Division (DSED) projects in lightning protection for buildings. The measurements are limited to modulus-only spectra from a spectrum analyzer. However, it is desired to perform system identification on the building to compute impulse responses and transfer functions that describe the amount of lightning energy that will be transferred from the outside of the building to the inside. This calculation requires knowledge of the entire signals (both modulus and phase). The algorithm and software described in this report are proposed as an approach to phase retrieval that can be used for programmatic needs. This report presents a brief tutorial description of the mathematical problem and the derivation of the phase retrieval algorithm. The efficacy of the theory is demonstrated using simulated signals that meet the assumptions of the algorithm. We see that for
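The cepstral minimum-phase reconstruction described above can be sketched with the standard homomorphic recipe (assuming a spectrum with no zeros on the unit circle); this follows the textbook method, not necessarily the report's exact implementation:

```python
import numpy as np

def min_phase_from_modulus(mag):
    """Reconstruct a minimum-phase spectrum from a modulus-only DFT.

    The real cepstrum of log|H| is folded onto the causal side; for a
    minimum-phase signal this recovers the complex cepstrum, so
    exp(FFT(folded cepstrum)) gives log-magnitude AND minimum-phase
    phase in one step.
    """
    n = len(mag)
    c = np.fft.ifft(np.log(mag)).real      # real cepstrum of log|H|
    fold = np.zeros(n)
    fold[0] = c[0]
    fold[1:n // 2] = 2 * c[1:n // 2]       # double the strictly causal part
    if n % 2 == 0:
        fold[n // 2] = c[n // 2]           # Nyquist term kept once
    return np.exp(np.fft.fft(fold))        # minimum-phase H(k)
```

For an input signal that is already minimum phase, the inverse DFT of the returned spectrum reproduces the original sequence (up to cepstral aliasing, which shrinks with the DFT length).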
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is appropriate for any type of network, and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank centralities. The Ing process converges in strongly connected networks, with speed depending on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. Comparisons with eight renowned centralities in simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information-spreading strategies. PMID:28117424
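The gist of such an iterative gathering process can be sketched as a damped neighbour-score aggregation; the transformation matrix here is simply the adjacency transpose with a uniform prior mixed in, an illustrative simplification of the authors' more general parameterization:

```python
import numpy as np

def ing_scores(adj, prior=None, alpha=0.85, iterations=50):
    """Iteratively gather neighbour information:

        s_{t+1} = alpha * A^T s_t + (1 - alpha) * p,  renormalized.

    p is the a-priori score vector (uniform if None). As alpha -> 1 and
    iterations grow, on a strongly connected graph this approaches the
    leading eigenvector of A^T, i.e. eigenvector centrality, matching
    the limit case noted in the abstract.
    """
    n = adj.shape[0]
    p = np.ones(n) / n if prior is None else prior / prior.sum()
    s = p.copy()
    for _ in range(iterations):
        s = alpha * (adj.T @ s) + (1 - alpha) * p
        s /= s.sum()
    return s
```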
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is appropriate for any type of network, and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank centralities. The Ing process converges in strongly connected networks, with speed depending on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. Comparisons with eight renowned centralities in simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information-spreading strategies.
A modular low-complexity ECG delineation algorithm for real-time embedded systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2017-02-17
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at run time across a wide range of modes and sampling rates: from an ultra-low-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode in case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been tuned using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography (CSE) committee in the high-accuracy mode, except for the P-wave onset, for which the algorithm exceeds the agreed tolerance by only a fraction of the sample duration. The computational load on an ultra-low-power 8-MHz TI MSP430-series microcontroller ranges from 0.2 to 8.5% depending on the mode used.
High-order algorithms for compressible reacting flow with complex chemistry
NASA Astrophysics Data System (ADS)
Emmett, Matthew; Zhang, Weiqun; Bell, John B.
2014-05-01
In this paper we describe a numerical algorithm for integrating the multicomponent, reacting, compressible Navier-Stokes equations, targeted for direct numerical simulation of combustion phenomena. The algorithm addresses two shortcomings of previous methods. First, it incorporates an eighth-order narrow stencil approximation of diffusive terms that reduces the communication compared to existing methods and removes the need to use a filtering algorithm to remove Nyquist frequency oscillations that are not damped with traditional approaches. The methodology also incorporates a multirate temporal integration strategy that provides an efficient mechanism for treating chemical mechanisms that are stiff relative to fluid dynamical time-scales. The overall methodology is eighth order in space with options for fourth order to eighth order in time. The implementation uses a hybrid programming model designed for effective utilisation of many-core architectures. We present numerical results demonstrating the convergence properties of the algorithm with realistic chemical kinetics and illustrating its performance characteristics. We also present a validation example showing that the algorithm matches detailed results obtained with an established low Mach number solver.
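An eighth-order narrow-stencil first-derivative approximation of the kind referred to can be sketched with the standard central-difference weights; the solver's actual operators and boundary treatment are of course more involved:

```python
import numpy as np

# Eighth-order centred finite-difference weights for the first
# derivative on a uniform grid, offsets -4..+4.
W8 = np.array([1/280, -4/105, 1/5, -4/5, 0.0, 4/5, -1/5, 4/105, -1/280])

def d1_eighth_order(f, h):
    """First derivative of periodic samples f with grid spacing h,
    using the 9-point (eighth-order) central stencil."""
    df = np.zeros_like(f)
    for k, w in zip(range(-4, 5), W8):
        df += w * np.roll(f, -k)   # np.roll(f, -k)[i] == f[(i + k) % n]
    return df / h
```

On smooth periodic data the truncation error scales as h^8, so even a modest grid resolves derivatives to near machine precision.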
2012-09-13
2.4 Computational Complexity ... 2.5 Transportation Mode Selection... allowing the decision maker to trade off increases in the value obtained versus the number of arcs used. 9. Computational complexity proofs for the MASP... computational complexity, and transportation mode selection. Chapter 3 is a tutorial on Value Focused Thinking for Supply Chain Applications
NASA Technical Reports Server (NTRS)
Knox, C. E.; Vicroy, D. D.; Scanlon, C.
1984-01-01
Simulation and flight tests were conducted to compare the accuracy of two algorithms designed to compute a position estimate with an airborne navigation computer. Both algorithms used ILS localizer and DME radio signals to compute a position difference vector to be used as an input to the navigation computer's position estimate filter. The results of these tests show that the position estimate accuracy and response to artificially induced errors are improved when the position estimate is computed by an algorithm that geometrically combines DME and ILS localizer information to form a single component of error, rather than by an algorithm that produces two independent components of error, one from a DME input and the other from the ILS localizer input.
NASA Astrophysics Data System (ADS)
Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng
2015-10-01
The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit a wide-band performance, giving rise to the difficulty in obtaining the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircrafts further enhances this performance, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Forty-one impacts in three typical categories are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impact on complex composite structures with an obviously improved accuracy.
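The core MUSIC step the abstract builds on can be sketched for a toy narrowband case: project candidate steering vectors onto the noise subspace of the sample covariance and scan for the peak. The uniform linear array, half-wavelength spacing, and single-source scenario are illustrative assumptions; the paper's single-frequency extraction and SFCBR re-estimation steps are not modeled.

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda, angles_deg):
    """Scan a MUSIC pseudo-spectrum over candidate arrival angles."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, : M - n_sources]                 # noise-subspace eigenvectors
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(th))
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# Toy scenario: one narrowband source at +20 degrees on an 8-element,
# half-wavelength-spaced array (values are assumptions for illustration).
rng = np.random.default_rng(0)
M, N, true_deg = 8, 200, 20.0
steer = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(true_deg)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steer, s) + 0.01 * (rng.standard_normal((M, N))
                                 + 1j * rng.standard_normal((M, N)))
angles = np.arange(-90, 91)
peak = angles[np.argmax(music_spectrum(X, 1, 0.5, angles))]
```

On structures, the "steering vector" additionally depends on the phase velocity, which is why the abstract stresses its measurement precision.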
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1987-01-01
The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatial distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
A geometry-based adaptive unstructured grid generation algorithm for complex geological media
NASA Astrophysics Data System (ADS)
Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh
2014-07-01
In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations in grid resolution. The proposed grid generation algorithm presents a strategy for definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. The grid adapted to the fracture geometry gave results identical to those of the fine grid. The adapted grid employed 88.2% less CPU time when compared to the solutions obtained by the fine grid.
An efficient finite-element algorithm for 3D layered complex structure modelling.
Sahalos, J N; Kyriacou, G A; Vafiadis, E
1994-05-01
In this paper an efficient finite-element method (FEM) algorithm for complicated three-dimensional (3D) layered type models has been developed. Its unique feature is that it can handle, with memory requirements within the abilities of a simple PC, arbitrarily shaped 3D elements. This task is achieved by storing only the non-zero coefficients of the sparse FEM system of equations. The algorithm is applied to the solution of the Laplace equation in models with up to 79 layers of trilinear general hexahedron elements. The system of equations is solved with the Gauss-Seidel iterative technique.
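The storage idea in the abstract above, keeping only the non-zero coefficients of the sparse FEM system and solving with Gauss-Seidel iteration, can be sketched as follows. The dict-of-dicts storage and the 1D Laplace test matrix are illustrative assumptions, not the paper's hexahedral-element system.

```python
def gauss_seidel(A_sparse, b, n_iter=200):
    """Solve A x = b where A_sparse[i][j] stores only non-zero coefficients."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        for i in range(n):
            s = sum(v * x[j] for j, v in A_sparse[i].items() if j != i)
            x[i] = (b[i] - s) / A_sparse[i][i]
    return x

# 1D Laplace test matrix (diagonally dominant, so Gauss-Seidel converges).
n = 5
A = {i: {j: (2.0 if i == j else -1.0)
         for j in (i - 1, i, i + 1) if 0 <= j < n} for i in range(n)}
b = [1.0] * n
x = gauss_seidel(A, b)
```

The memory payoff is the one the abstract claims: a banded n-by-n system needs O(n) stored coefficients instead of O(n^2), which is what keeps a 79-layer model within a simple PC's memory.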
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
The Influence of Information Acquisition on the Complex Dynamics of Market Competition
NASA Astrophysics Data System (ADS)
Guo, Zhanbing; Ma, Junhai
In this paper, we build a dynamical game model with three bounded rational players (firms) to study the influence of information on the complex dynamics of market competition, where the useful information concerns the rival's real decision. In this dynamical game model, one information-sharing team is composed of two firms; they acquire and share the information about their common competitor but make their own decisions separately, and the amount of information acquired by this information-sharing team determines the estimation accuracy of the rival's real decision. Based on this dynamical game model and some creative 3D diagrams, the influence of the amount of information on the complex dynamics of market competition such as local dynamics, global dynamics and profits is studied. These results have significant theoretical and practical value for understanding the influence of information.
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-10-05
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS.
Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier
2012-01-01
Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects in human and natural activities. Maintaining an updated spatial database with the occurred changes allows a better monitoring of the Earth’s resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed; then different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a change detection multisource fusion process, which allows generating a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. Then, the obtained results are evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proved efficient for identifying the change detection index with the highest contribution. PMID:22737023
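The index-threshold-fuse pipeline described above can be sketched on synthetic data. The two indices (differencing and log-ratio), the mean-plus-two-sigma thresholding rule, and majority-vote fusion are all assumptions for illustration; the paper's actual thresholding algorithms and fusion process are more elaborate.

```python
import numpy as np

def change_masks(img1, img2):
    """Two simple change indices, each thresholded at mean + 2*std."""
    diff = np.abs(img1 - img2)                              # differencing index
    ratio = np.abs(np.log((img1 + 1e-6) / (img2 + 1e-6)))   # log-ratio index
    return [idx > idx.mean() + 2 * idx.std() for idx in (diff, ratio)]

def fuse(masks):
    """Majority-vote fusion of the per-index change/no_change masks."""
    return np.sum(masks, axis=0) > len(masks) / 2

rng = np.random.default_rng(1)
a = 0.1 + rng.random((32, 32))                 # first acquisition
b = a + 0.02 * rng.standard_normal((32, 32))   # second acquisition, small noise
b[8:12, 8:12] += 5.0                           # a genuine change patch
cd = fuse(change_masks(a, b))                  # single fused CD result
```

Fusion suppresses false alarms that only one index produces, which is the motivation for combining several indices into a single CD result.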
Sim, Kwang Mong; Guo, Yuanyuan; Shi, Benyun
2009-02-01
Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper provides mathematical proofs to show that an agent's optimal strategy can be computed using its opponent's reserve price (RP) and deadline. The impetus of this work is to use the synergy of Bayesian learning (BL) and a genetic algorithm (GA) to determine an agent's optimal strategy in negotiation (N) with incomplete information. BLGAN adopts: 1) BL and a deadline-estimation process for estimating an opponent's RP and deadline and 2) GA for generating a proposal at each negotiation round. Learning the RP and deadline of an opponent enables the GA in BLGAN to reduce the size of its search space (SP) by adaptively focusing its search on a specific region in the space of all possible proposals. SP is dynamically defined as a region around an agent's proposal P at each negotiation round. P is generated using the agent's optimal strategy determined using its estimations of its opponent's RP and deadline. Hence, the GA in BLGAN is more likely to generate proposals that are closer to the proposal generated by the optimal strategy. Using GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully, and achieved: 1) higher utilities and better combined negotiation outcomes (CNOs) than agents that only adopt GA to generate their proposals, 2) higher utilities than agents that adopt BL to learn only RP, and 3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.
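The "GA searching a region around the strategy-generated proposal P" idea can be sketched with a tiny GA whose population is seeded near P, so estimation errors in P can still be corrected. The scalar proposal space, fitness function, search radius, and operators below are toy assumptions, not BLGAN's actual design.

```python
import random

def ga_refine(p_current, fitness, radius=5.0, pop=20, gens=30, seed=0):
    """Search a region of radius `radius` around the current proposal."""
    rng = random.Random(seed)
    population = [p_current + rng.uniform(-radius, radius) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)       # rank by fitness
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            children.append((a + b) / 2 + rng.gauss(0, radius / 10))  # crossover + mutation
        population = parents + children
    return max(population, key=fitness)

# Toy fitness peaked at the (unknown to the agent) best proposal, 52.0;
# the strategy's estimate, 50.0, is slightly off.
best = ga_refine(50.0, lambda x: -(x - 52.0) ** 2)
```

Seeding near P rather than over the whole proposal space is what shrinks the search space SP, as the abstract describes.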
Enhancing a diffusion algorithm for 4D image segmentation using local information
NASA Astrophysics Data System (ADS)
Lösel, Philipp; Heuveline, Vincent
2016-03-01
Inspired by the diffusion of a particle, we present a novel approach for performing a semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements in a synchrotron X-ray microtomograph. Given a small number of 2D-slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be at a certain time at a specific position in the dataset or determine the probability approximately by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. Starting a great number of random walks in each labeled pixel, a voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label where the random walks most likely started from. Due to the high scalability of random walks, this approach is suitable for high throughput measurements. Additionally, we describe an interactively adjusted active contours slice by slice method considering local information, where we start with one manually labeled slice and move forward in any direction. This approach is superior in accuracy to the diffusion algorithm but inferior in the number of tedious manual processing steps required. The methods were applied on 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
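The hit-counting idea above can be sketched in one dimension: launch many random walks from each labeled seed and assign every pixel to the label whose walks hit it most often. The 1D "image", uniform step weights, and walk counts are toy assumptions; the paper's intelligently weighted walks use local image information.

```python
import numpy as np

def diffusion_segment(n_pixels, seeds, n_walks=400, n_steps=60, seed=6):
    """Label each pixel by the seed whose random walks hit it most often."""
    rng = np.random.default_rng(seed)
    hits = np.zeros((len(seeds), n_pixels))
    for li, start in enumerate(seeds):
        for _ in range(n_walks):
            pos = start
            for step in rng.choice((-1, 1), size=n_steps):
                pos = min(max(pos + step, 0), n_pixels - 1)  # reflect at borders
                hits[li, pos] += 1
    return hits.argmax(axis=0)

labels = diffusion_segment(20, seeds=[2, 17])
```

Because each walk is independent, the hit counting parallelizes trivially, which is the scalability property the abstract relies on for high-throughput use.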
2004-12-01
…employ genetic algorithms. In principle, calculation of the free energy change upon binding of two proteins should allow determination of the… Genetic Algorithm Approach to Protein Docking in CAPRI round 1. Proteins 52: 10-14. Glaser, F., Pupko, T., Paz, I., Bell, R.E., Bechor-Shental, D.… genetic algorithm and an empirical binding free energy function. Journal of Computational Chemistry 19: 1639-1662. Palma, P.N., Krippahl, L., Wampler…
ERIC Educational Resources Information Center
Losee, Robert M.
1996-01-01
The grammars of natural languages may be learned by using genetic algorithm systems such as LUST (Linguistics Using Sexual Techniques) that reproduce and mutate grammatical rules and parts-of-speech tags. In document retrieval or filtering systems, applying tags to the list of terms representing a document provides additional information about…
Recchia, Gabriel; Jones, Michael N
2009-08-01
Computational models of lexical semantics, such as latent semantic analysis, can automatically generate semantic similarity measures between words from statistical redundancies in text. These measures are useful for experimental stimulus selection and for evaluating a model's cognitive plausibility as a mechanism that people might use to organize meaning in memory. Although humans are exposed to enormous quantities of speech, practical constraints limit the amount of data that many current computational models can learn from. We follow up on previous work evaluating a simple metric of pointwise mutual information. Controlling for confounds in previous work, we demonstrate that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models. We also present a simple tool for building simple and scalable models from large corpora quickly and efficiently.
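The pointwise mutual information metric the abstract evaluates can be sketched directly from co-occurrence counts. The toy corpus, window size, and normalization below are illustrative assumptions; the paper's contribution concerns training the same kind of metric on very large corpora.

```python
import math
from collections import Counter

def pmi_scores(tokens, window=2):
    """Return a PMI function built from windowed co-occurrence counts."""
    word_counts = Counter(tokens)
    pair_counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pair_counts[tuple(sorted((w, tokens[j])))] += 1
    n = len(tokens)
    n_pairs = sum(pair_counts.values())

    def pmi(a, b):
        c = pair_counts[tuple(sorted((a, b)))]
        if c == 0:
            return float("-inf")            # never co-occurred in the window
        return math.log2((c / n_pairs) /
                         ((word_counts[a] / n) * (word_counts[b] / n)))

    return pmi

corpus = "the cat sat on the mat the dog sat on the rug".split()
pmi = pmi_scores(corpus)
```

Words that co-occur more often than their individual frequencies predict get positive PMI, which is the statistical redundancy the model exploits as a similarity signal.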
NASA Astrophysics Data System (ADS)
Crow, W. T.; Wagner, W.
2009-12-01
Applying basic data assimilation techniques to the evaluation of remote-sensing products can clarify the impact of sensor design issues on the value of retrievals for hydrologic applications. For instance, the impact of incidence angle on the accuracy of radar surface soil moisture retrievals is largely unknown due to discrepancies in theoretical backscatter models as well as limitations in the availability of sufficiently-extensive ground-based soil moisture observations for validation purposes. In this presentation we will describe and apply a data assimilation evaluation technique for scatterometer-based surface soil moisture retrievals that does not require ground-based soil moisture observations to examine the sensitivity of retrieval skill to variations in incidence angle. Past results with the approach have shown that it is capable of detecting relative variations in the correlation between anomalies in remotely-sensed surface soil moisture retrievals and ground-truth soil moisture measurements. Application of the evaluation approach to the TU-Wien WARP5.0 European Space Radar (ERS) soil moisture data set over two regional-scale (~1000 km) domains in the Southern United States indicates a relative reduction in anomaly correlation-based skill of between 20% and 30% when moving between the lowest (< 26 degrees) and highest ERS (> 50 degrees) incidence angle ranges. These changes in anomaly-based correlation provide a useful proxy for relative variations in the value of estimates for data assimilation applications and can therefore be used to inform the design of appropriate retrieval algorithms. For example, the observed sensitivity of correlation-based skill with incidence angle is in approximate agreement with soil moisture retrieval uncertainty predictions made using the WARP5.0 backscatter model. However, the coupling of a bare soil backscatter model with the so-called "vegetation water cloud" model is shown to generally over-estimate the impact of
NASA Astrophysics Data System (ADS)
Tóth, Gergely
2007-08-01
The projection of complex interactions onto simple distance-dependent or angle-dependent classical mechanical functions is a long-standing theoretical challenge in the field of computational sciences concerning biomolecules, colloids, aggregates and simple systems as well. The construction of an effective potential may be based on theoretical assumptions, on the application of fitting procedures on experimental data and on the simplification of complex molecular simulations. Recently, a force-matching method was elaborated to project the data of Car-Parrinello ab initio molecular dynamics simulations onto two-particle classical interactions (Izvekov et al 2004 J. Chem. Phys. 120 10896). We have developed a potential-matching algorithm as a practical analogue of this force-matching method. The algorithm requires a large number of configurations (particle positions) and a single value of the potential energy for each configuration. We show the details of the algorithm and the test calculations on simple systems. The test calculation on water showed an example in which a similar structure was obtained for qualitatively different pair interactions. The application of the algorithm on reverse Monte Carlo configurations was tried as well. We detected inconsistencies in a part of our calculations. We found that the coarse graining of potentials cannot be performed perfectly both for the structural and the thermodynamic data. For example, if one applies an inverse method with an input of the pair-correlation function, it provides energetics data for the configurations uniquely. These energetics data can be different from the desired ones obtained by all atom simulations, as occurred in the testing of our potential-matching method.
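The potential-matching setup described above, many configurations, one potential-energy value each, can be sketched as a linear least-squares fit when the pair potential is expanded in a basis. The inverse-power basis, Lennard-Jones-like coefficients, and toy configurations are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def design_row(positions, powers=(12, 6)):
    """Sum the basis functions r**-p over all particle pairs of one configuration."""
    row = np.zeros(len(powers))
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            row += np.array([r ** -p for p in powers])
    return row

rng = np.random.default_rng(2)
true_c = np.array([4.0, -4.0])                  # Lennard-Jones-like coefficients
base = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
configs = [base + 0.05 * rng.standard_normal((4, 3)) for _ in range(50)]
A = np.array([design_row(c) for c in configs])  # one row per configuration
E = A @ true_c                                  # one total energy per configuration
fit_c, *_ = np.linalg.lstsq(A, E, rcond=None)   # recover the pair coefficients
```

When the true energetics are not exactly pairwise, as in the abstract's ab initio data, the same fit returns the best pairwise projection, and the residual quantifies what coarse graining cannot capture.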
ERIC Educational Resources Information Center
Williamson, David J.
2011-01-01
The specific problem addressed in this study was the low success rate of information technology (IT) projects in the U.S. Due to the abstract nature and inherent complexity of software development, IT projects are among the most complex projects encountered. Most existing schools of project management theory are based on the rational systems…
NASA Technical Reports Server (NTRS)
Freedman, Ellis; Ryan, Robert; Pagnutti, Mary; Holekamp, Kara; Gasser, Gerald; Carver, David; Greer, Randy
2007-01-01
Spectral Dark Subtraction (SDS) provides good ground reflectance estimates across a variety of atmospheric conditions with no knowledge of those conditions. The algorithm may be sensitive to errors from stray light, calibration, and excessive haze/water vapor. SDS seems to provide better estimates than traditional algorithms using on-site atmospheric measurements much of the time.
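As a hedged illustration of the family of techniques SDS belongs to, a dark-object subtraction can be sketched: estimate a per-band haze level from the darkest pixels and subtract it before converting to reflectance. The per-band minimum estimator and the synthetic scene below are assumptions, not the SDS algorithm itself.

```python
import numpy as np

def dark_subtract(radiance):
    """Subtract a per-band dark level estimated from the darkest pixel."""
    dark = radiance.min(axis=(0, 1), keepdims=True)   # per-band haze estimate
    return radiance - dark

rng = np.random.default_rng(7)
# Synthetic 8x8 scene, 3 bands, with band-dependent additive haze.
scene = rng.random((8, 8, 3)) + np.array([0.2, 0.1, 0.05])
corrected = dark_subtract(scene)
```

The appeal, as in the abstract, is that the atmospheric term is estimated from the image itself, with no on-site atmospheric measurements.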
NASA Astrophysics Data System (ADS)
You, Tao; Cheng, Hui-Min; Ning, Yi-Zi; Shia, Ben-Chang; Zhang, Zhong-Yuan
2016-12-01
Like clustering analysis, community detection aims at assigning nodes in a network into different communities. Fdp is a recently proposed density-based clustering algorithm that does not need the number of clusters as prior input, and its result is insensitive to its parameter. However, Fdp cannot be directly applied to community detection due to its inability to recognize the community centers in the network. To solve the problem, a new community detection method (named IsoFdp) is proposed in this paper. First, we use IsoMap technique to map the network data into a low dimensional manifold which can reveal diverse pair-wised similarity. Then Fdp is applied to detect the communities in the network. An improved partition density function is proposed to select the proper number of communities automatically. We test our method on both synthetic and real-world networks, and the results demonstrate the effectiveness of our algorithm over the state-of-the-art methods.
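The Fdp (density peaks) core that IsoFdp builds on can be sketched: each point gets a local density rho and a distance delta to the nearest point of higher density, and cluster centers are the points where both are large. The Gaussian density kernel, cutoff, and two-blob data are illustrative assumptions; the IsoMap embedding and partition-density model selection are not reproduced.

```python
import numpy as np

def rho_delta(X, dc):
    """Local density rho and distance delta to the nearest higher-density point."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.exp(-((D / dc) ** 2)).sum(axis=1) - 1.0   # Gaussian-kernel density
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if len(higher) == 0 else D[i, higher].min()
    return rho, delta

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.1, (30, 2)),    # cluster around (0, 0)
               rng.normal(3, 0.1, (30, 2))])   # cluster around (3, 3)
rho, delta = rho_delta(X, dc=0.3)
centers = np.argsort(rho * delta)[-2:]         # the two best center candidates
```

IsoFdp's insight is that on raw network data the rho-delta criterion misidentifies centers, which is why the IsoMap embedding is applied first.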
Study of high speed complex number algorithms. [for determining antenna for field radiation patterns
NASA Technical Reports Server (NTRS)
Heisler, R.
1981-01-01
A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.
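The subset-evaluation idea can be sketched: rather than computing the full 3D DFT, evaluate only the output samples lying on the desired planar cut, directly from the definition. The tiny array size and the chosen cut are illustrative assumptions; the paper's algorithm is a far more efficient selective evaluation than this direct sum.

```python
import numpy as np

def dft3_subset(f, triples):
    """Evaluate the 3D DFT of f only at the requested (kx, ky, kz) samples."""
    nx, ny, nz = f.shape
    x, y, z = np.arange(nx), np.arange(ny), np.arange(nz)
    out = []
    for kx, ky, kz in triples:
        phase = np.exp(-2j * np.pi * np.add.outer(
            np.add.outer(kx * x / nx, ky * y / ny), kz * z / nz))
        out.append((f * phase).sum())
    return np.array(out)

rng = np.random.default_rng(4)
f = rng.random((4, 4, 4))
cut = [(k, k, 0) for k in range(4)]        # samples along one planar cut
subset = dft3_subset(f, cut)
full = np.fft.fftn(f)                      # reference: the full 3D transform
```

The payoff mirrors the abstract: when only one cut is needed, the transform cost can be made negligible next to computing the surface currents.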
NASA Astrophysics Data System (ADS)
Teramae, Hiroyuki; Maruo, Yasuko Y.
2015-12-01
We try to optimize the structures of monoethanolamine (MEA), MEA dimer, MEA + two water molecules, and MEA dimer + four water molecules as the model of MEA in aqueous solutions using the Hamiltonian algorithm. We found the most stable MEA backbones are all gauche structures. The MEA in aqueous solution seems to exist as dimer or larger aggregates. As the base, the water molecule would be more important than another MEA because of the hydrogen bond networks.
2014-12-01
…provide the expected accuracy improvement, and a new method is sought. This thesis reformulates the equations used in computing the CAF, in order to account for the…noticeable. This work reformulates the CAF equations used in the CAFMAP algorithm, such that the collector-emitter geometry is updated with each successive…
Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2008-01-01
We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field-of-view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, which includes realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few hundred eV, should be achievable for a fully optimized device.
Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A
2015-01-01
Decision making in the patient with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add any more damage to that of the disease itself. The adequacy of the clinical interventions consists of only offering those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient and to perform only those allowed by the patient or representative. In this article, the use of an algorithm is proposed that should serve to help health professionals in this decision making process.
NASA Astrophysics Data System (ADS)
Aurin, Dirk Alexander
2011-12-01
The optical properties of the sea determine how light penetrates to depth, interacts with water-borne constituents, and re-emerges as scattered rays. By inversion, quantifying change in the spectral light field as it reflects from the sea unlocks information about the water's optical properties, which can then be used to quantify the suspended and dissolved biogeochemical constituents in the water. Retrieving bio-optical properties is relatively straightforward for the open ocean where phytoplankton-derived materials dominate ocean color. In contrast, the presence of land-derived material contributes significantly to the optical signature of nearshore waters, making the development of ocean color algorithms considerably more challenging. A hypothesis of this research is that characterization of the spectral nature of bio-optical properties in these optically complex waters facilitates optimization of semi-analytical algorithms for retrieving these properties. The main goal of this research is to develop an ocean color remote sensing algorithm for the highly turbid, estuarine waters of Long Island Sound (LIS). Bio-optical data collected in LIS showed it to be strongly influenced by the surrounding watershed and characterized by exceptionally high absorption associated with phytoplankton, non-algal particulate material, and chromophoric dissolved material compared to other coastal environments world-wide. Variability in the magnitudes of inherent optical properties, IOPs (e.g. absorption, scattering and attenuation coefficients), is explained by local influences such as major river outflows, as well as seasonal changes. Nevertheless, ocean color parameters describing the spectral shape of IOPs---parameters to which algorithms optimization is sensitive---are fairly constant across the region, possibly a result of the homogenizing influence of vigorous tidal and subtidal mixing or relative regional homogeneity in the biogeochemical nature of terrigenous material. Field
Atzei, A; Luchetti, R; Garagnani, L
2017-01-01
The classical definition of 'Palmer Type IB' triangular fibrocartilage complex tear, includes a spectrum of clinical conditions. This review highlights the clinical and arthroscopic criteria that enable us to categorize five classes on a treatment-oriented classification system of triangular fibrocartilage complex peripheral tears. Class 1 lesions represent isolated tears of the distal triangular fibrocartilage complex without distal radio-ulnar joint instability and are amenable to arthroscopic suture. Class 2 tears include rupture of both the distal triangular fibrocartilage complex and proximal attachments of the triangular fibrocartilage complex to the fovea. Class 3 tears constitute isolated ruptures of the proximal attachment of the triangular fibrocartilage complex to the fovea; they are not visible at radio-carpal arthroscopy. Both Class 2 and Class 3 tears are diagnosed with a positive hook test and are typically associated with distal radio-ulnar joint instability. If required, treatment is through reattachment of the distal radio-ulnar ligament insertions to the fovea. Class 4 lesions are irreparable tears due to the size of the defect or to poor tissue quality and, if required, treatment is through distal radio-ulnar ligament reconstruction with tendon graft. Class 5 tears are associated with distal radio-ulnar joint arthritis and can only be treated with salvage procedures. This subdivision of type IB triangular fibrocartilage complex tear provides more insights in the pathomechanics and treatment strategies.
Multicriteria Analysis: Managing Complexity in Selecting a Student-Information System.
ERIC Educational Resources Information Center
Blanchard, William; And Others
1989-01-01
The complexity of Seattle University's decision to replace three separate computerized student information systems with one integrated system was managed with a multicriteria method for evaluating alternatives. The method both managed a large amount of information and reduced people's resistance to change. (MSE)
Using measures of information content and complexity of time series as hydrologic metrics
Technology Transfer Automated Retrieval System (TEKTRAN)
The information theory has been previously used to develop metrics that allowed to characterize temporal patterns in soil moisture dynamics, and to evaluate and to compare performance of soil water flow models. The objective of this study was to apply information and complexity measures to characte...
Combining complexity measures of EEG data: multiplying measures reveal previously hidden information
Burns, Thomas; Rajan, Ramesh
2015-01-01
Many studies have noted significant differences among human electroencephalograph (EEG) results when participants or patients are exposed to different stimuli, undertaking different tasks, or affected by conditions such as epilepsy or Alzheimer's disease. Such studies often use only one or two measures of complexity and do not regularly justify their choice of measure beyond the fact that it has been used in previous studies. If more measures were added to such studies, however, more complete information might be found about these reported differences. Such information might be useful in confirming the existence or extent of such differences, or in understanding their physiological bases. In this study we analysed publicly available EEG data using a range of complexity measures to determine how well the measures correlated with one another. The complexity measures did not all significantly correlate, suggesting that different measures captured unique features of the EEG signals and thus revealed information which other measures were unable to detect. The results from this analysis therefore suggest that combinations of complexity measures reveal unique information beyond that captured by any single measure of complexity in EEG data. For this reason, researchers using individual complexity measures for EEG data should consider using combinations of measures to more completely account for any differences they observe and to ensure the robustness of any relationships identified. PMID:26594331
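As a sketch of why combining measures can help, here are two standard complexity measures that respond to different signal properties (the implementations and the median-binarization choice are assumptions, not the paper's exact measures):

```python
import math

def shannon_entropy(xs, bins=8):
    # Histogram-based Shannon entropy (bits) of a real-valued signal.
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def lempel_ziv(xs):
    # Lempel-Ziv complexity of the signal binarized about its median:
    # the number of distinct "phrases" met scanning left to right.
    med = sorted(xs)[len(xs) // 2]
    s = ''.join('1' if x > med else '0' for x in xs)
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        while s[i:j] in phrases and j <= len(s):
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

# The two measures need not agree: comparing them across signals shows
# which differences each one picks up.
flat, ramp = [1.0] * 64, [float(i) for i in range(64)]
```

Correlating several such measures across EEG segments, as the study does, then reveals whether they carry redundant or complementary information.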
Spencer, W.A.; Goode, S.R.
1997-10-01
ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and porous media make solute transport in the medium more complicated still. An appropriate method for describing this complexity is essential when studying solute transport and conversion in porous media. Because information entropy can measure uncertainty and disorder, we used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that entropy increased with the complexity of the solute transport process. For the point sources, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time passed, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X) and approximately coincided with the plume centroid. With increasing time, the spatial variability and complexity of the solute concentration grew, which increased both the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the entropy obtained for continuous input was higher than for instantaneous input. Due to the increase of the average lithofacies length, media continuity increased, flow and
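The entropy measure used in that study can be sketched directly: treat the normalized concentration field as a probability distribution over grid cells and take its Shannon entropy (the grid values below are illustrative):

```python
import math

def plume_entropy(conc):
    # Shannon entropy of a solute plume: normalize the concentration
    # field to a probability distribution over grid cells.
    total = sum(conc)
    return -sum(c / total * math.log(c / total) for c in conc if c > 0)

# A plume spread evenly over 5 cells is maximally disordered (entropy
# log 5); all mass concentrated in a single cell gives zero entropy.
compact = [0.0, 0.0, 10.0, 0.0, 0.0]
spread = [2.0] * 5
```

As a plume disperses over time, mass spreads over more cells and this entropy grows, matching the trend the abstract reports.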
Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro
2015-01-01
Recent medical information systems are moving towards real-time monitoring models that care for patients anytime and anywhere through ECG signals. However, wireless communications impose several limitations, such as data distortion and limited bandwidth. To overcome these limitations, this research focuses on compression. Little work has been done to develop a compression algorithm specialized for ECG data transmission in real-time monitoring wireless networks, and recently proposed algorithms are not well suited to ECG signals. This paper therefore presents an improved algorithm, EDLZW, for efficient ECG data transmission. Results showed that the EDLZW compression ratio was 8.66, roughly four times better than other compression methods in wide use today.
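As a rough sketch of the dictionary-based compression that EDLZW builds on, here is plain textbook LZW applied to a repetitive ECG-like byte stream (EDLZW itself is specialized for ECG; the sample bytes and the simple ratio definition are illustrative):

```python
def lzw_compress(data):
    # Textbook LZW: grow a dictionary of previously seen byte strings
    # and emit one integer code per longest known match.
    table = {bytes([i]): i for i in range(256)}
    w, out = b'', []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

# A repetitive ECG-like byte stream compresses well; the ratio here is
# input bytes per emitted code (code width ignored for simplicity).
beatish = bytes([100, 102, 108, 102, 100] * 200)
ratio = len(beatish) / len(lzw_compress(beatish))
```

Because ECG waveforms repeat beat to beat, dictionary coders of this family find long matches quickly, which is what makes an ECG-specialized variant attractive.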
NASA Technical Reports Server (NTRS)
Hoang, TY
1994-01-01
A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This navigation algorithm blends various navigation data collected during terminal-area approach of an instrumented helicopter. Navigation data collected include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were post-flight processed, the navigation algorithm was designed for real-time analysis. The design of the navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten meter absolute positional requirement. The navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the navigation algorithm. The first segment recorded a small dynamic maneuver in the lateral plane, while the second segment recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
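The blend of INS velocity and DGPS position fixes described above can be illustrated with a one-dimensional toy filter (the noise parameters are illustrative; the paper's filter has nine states and a data-rejection scheme):

```python
def kalman_1d(z_pos, vel, dt, r=1.0, q=0.01):
    # Scalar Kalman filter: predict position by dead-reckoning an
    # INS-style velocity, then correct with a DGPS-style position fix.
    # (A toy 1-D stand-in for the paper's nine-state filter.)
    x, p = z_pos[0], 1.0                      # state estimate and variance
    est = [x]
    for z, v in zip(z_pos[1:], vel[1:]):
        x, p = x + v * dt, p + q              # predict
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # update with the fix
        est.append(x)
    return est

# Constant-velocity pass: fixes and prediction agree, so the filter
# tracks the 0.5 m/s track exactly.
track = kalman_1d([0.5 * i for i in range(10)], [0.5] * 10, 1.0)
```

The same predict/update structure, extended to nine states and run at the INS rate, is what lets the filter output smoothed positions between DGPS fixes.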
NASA Astrophysics Data System (ADS)
Wielgosz, A.; Brzeziński, A.; Böhm, S.
2016-12-01
The complex demodulation (CD) algorithm is an efficient tool for extracting the diurnal and subdiurnal components of Earth rotation from routine VLBI observations (Brzeziński, 2012). This algorithm was implemented by Böhm et al. (2012b) in a dedicated version of the VLBI analysis software VieVS. The authors processed around 3700 geodetic 24-hour observing sessions spanning 1984.0-2010.5 and simultaneously estimated time series of the long-period components as well as the diurnal, semidiurnal, terdiurnal and quarterdiurnal components of polar motion (PM) and universal time UT1. This paper describes tests of the CD algorithm that check the consistency of the low-frequency components of PM and UT1 estimated by VieVS CD with those from the IERS and IVS combined solutions. Moreover, the retrograde diurnal component of PM demodulated from VLBI observations has been compared to the celestial pole offset series included in the IERS and IVS solutions. For all three components we found good agreement between the results based on the CD approach and those based on the standard parameterization recommended by the IERS Conventions (IERS, 2010) and applied by the IERS and IVS. We conclude that applying the CD parameterization in VLBI data analysis does not change those components of EOP which are included in the standard adjustment, while enabling simultaneous estimation of the high-frequency components from the routine VLBI observations. Moreover, we expect that the CD algorithm can also be implemented in the analysis of other space geodetic observations, such as GNSS or SLR, enabling retrieval of subdiurnal EOP signals from past data.
NASA Astrophysics Data System (ADS)
Kikuchi, N.; Yoshida, Y.; Uchino, O.; Morino, I.; Yokota, T.
2016-11-01
We present an algorithm for retrieving column-averaged dry air mole fraction of carbon dioxide (XCO2) and methane (XCH4) from reflected spectra in the shortwave infrared (SWIR) measured by the TANSO-FTS (Thermal And Near infrared Sensor for carbon Observation Fourier Transform Spectrometer) sensor on board the Greenhouse gases Observing SATellite (GOSAT). The algorithm uses the two linear polarizations observed by TANSO-FTS to improve corrections to the interference effects of atmospheric aerosols, which degrade the accuracy in the retrieved greenhouse gas concentrations. To account for polarization by the land surface reflection in the forward model, we introduced a bidirectional reflection matrix model that has two parameters to be retrieved simultaneously with other state parameters. The accuracy in XCO2 and XCH4 values retrieved with the algorithm was evaluated by using simulated retrievals over both land and ocean, focusing on the capability of the algorithm to correct imperfect prior knowledge of aerosols. To do this, we first generated simulated TANSO-FTS spectra using a global distribution of aerosols computed by the aerosol transport model SPRINTARS. Then the simulated spectra were submitted to the algorithms as measurements both with and without polarization information, adopting a priori profiles of aerosols that differ from the true profiles. We found that the accuracy of XCO2 and XCH4, as well as profiles of aerosols, retrieved with polarization information was considerably improved over values retrieved without polarization information, for simulated observations over land with aerosol optical thickness greater than 0.1 at 1.6 μm.
1983-06-01
more likely under one hypothesis than the other) (Johnson, Cavanagh, Spooner, & Samet, 1973). These two should multiply to derive the total information... bias since the valence of each cue should be insensitive to the relative contributions of reliability and diagnosticity (Johnson, Cavanagh, Spooner... 1970, 73, 422-432. Johnson, E. M., Cavanagh, C., Spooner, R. L., & Samet, M. G. Utilization of reliability measurements in Bayesian inference: models
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function that makes full use of peak intensity information and thus enhances identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
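A minimal sketch of the binomial idea behind ProVerB-style scoring (not the published scoring function; `p_random` stands for an assumed per-peak chance-match probability):

```python
from math import comb

def binomial_match_score(n_peaks, n_matched, p_random):
    # Survival function P(X >= n_matched) for X ~ Binomial(n_peaks,
    # p_random): the chance of matching at least that many peaks by
    # accident. Smaller values mean a more convincing identification.
    return sum(comb(n_peaks, k) * p_random ** k * (1 - p_random) ** (n_peaks - k)
               for k in range(n_matched, n_peaks + 1))
```

Weighting each match by its peak intensity, as ProVerB does, refines this basic count-based probability.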
Balance between Noise and Information Flow Maximizes Set Complexity of Network Dynamics
Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti
2013-01-01
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with a use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near to the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
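A noisy Boolean-network update of the kind studied above can be sketched as follows (the network size, connectivity and noise level are illustrative, not the paper's ensemble):

```python
import random

def rbn_step(state, funcs, inputs, noise=0.0, rng=random):
    # One synchronous update of a Boolean network; with probability
    # `noise` each node's freshly computed bit is flipped.
    nxt = []
    for f, ins in zip(funcs, inputs):
        v = f[tuple(state[i] for i in ins)]
        if rng.random() < noise:
            v = 1 - v
        nxt.append(v)
    return nxt

# Build a small K=2 random Boolean network and run it with weak noise.
rng = random.Random(0)
n, k = 8, 2
inputs = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
funcs = [{(a, b): rng.randrange(2) for a in (0, 1) for b in (0, 1)}
         for _ in range(n)]
state = [rng.randrange(2) for _ in range(n)]
for _ in range(50):
    state = rbn_step(state, funcs, inputs, noise=0.05, rng=rng)
```

Tuning `noise` against the contracting deterministic dynamics is, per the abstract, what keeps such a network away from its attractor and near maximal set complexity.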
NASA Astrophysics Data System (ADS)
Gao, Min; Huang, Shutao; Zhong, Xia
2009-09-01
The establishment of a multi-source database was designed to promote the informatics process of the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on computer software and hardware. Based on an analysis of the data resources of the Beishan area, Gansu Province, and combining GIS technologies and methods, this paper discusses technical approaches for managing, fully sharing and rapidly retrieving the information resources of this area using the open-source library GDAL and a quadtree algorithm, with particular attention to the characteristics of the existing data resources, spatial data retrieval algorithm theory, and the design and implementation of the programs.
NASA Astrophysics Data System (ADS)
Gao, Min; Huang, Shutao; Zhong, Xia
2010-11-01
The establishment of a multi-source database was designed to promote the informatics process of the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on computer software and hardware. Based on an analysis of the data resources of the Beishan area, Gansu Province, and combining GIS technologies and methods, this paper discusses technical approaches for managing, fully sharing and rapidly retrieving the information resources of this area using the open-source library GDAL and a quadtree algorithm, with particular attention to the characteristics of the existing data resources, spatial data retrieval algorithm theory, and the design and implementation of the programs.
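The quadtree retrieval idea mentioned above can be sketched in a few lines (a generic point quadtree, not the paper's GDAL-based implementation; bounds and node capacity are illustrative):

```python
class QuadTree:
    # Minimal point quadtree over the unit square: insert points, then
    # retrieve those inside an axis-aligned query rectangle.
    def __init__(self, x0=0.0, y0=0.0, x1=1.0, y1=1.0, cap=4):
        self.box, self.cap = (x0, y0, x1, y1), cap
        self.points, self.kids = [], None

    def insert(self, p):
        x0, y0, x1, y1 = self.box
        if not (x0 <= p[0] <= x1 and y0 <= p[1] <= y1):
            return False
        if self.kids is None and len(self.points) < self.cap:
            self.points.append(p)
            return True
        if self.kids is None:                  # split the node
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            self.kids = [QuadTree(x0, y0, mx, my, self.cap),
                         QuadTree(mx, y0, x1, my, self.cap),
                         QuadTree(x0, my, mx, y1, self.cap),
                         QuadTree(mx, my, x1, y1, self.cap)]
            for q in self.points:
                any(kid.insert(q) for kid in self.kids)
            self.points = []
        return any(kid.insert(p) for kid in self.kids)

    def query(self, qx0, qy0, qx1, qy1):
        x0, y0, x1, y1 = self.box
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []                          # no overlap: prune subtree
        hits = [p for p in self.points
                if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1]
        for kid in self.kids or []:
            hits += kid.query(qx0, qy0, qx1, qy1)
        return hits

# Index a 9x9 grid of points and query the lower-left corner.
tree = QuadTree()
for i in range(1, 10):
    for j in range(1, 10):
        tree.insert((i / 10, j / 10))
corner = tree.query(0, 0, 0.35, 0.35)
```

Pruning whole subtrees whose bounding boxes miss the query window is what makes quadtree retrieval fast over large spatial databases.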
Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Biamonte, Jacob
2016-10-01
Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
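The spectral entropy at the core of this framework can be sketched by normalizing a graph's Laplacian spectrum to unit trace and taking its Shannon entropy (here demonstrated on the complete graph, whose spectrum is known analytically, rather than with a numerical eigensolver):

```python
import math

def spectral_entropy(laplacian_eigs):
    # Von Neumann-style entropy of a graph: normalize the Laplacian
    # spectrum to unit trace and take the Shannon entropy of the
    # resulting eigenvalue distribution.
    t = sum(laplacian_eigs)
    return -sum(e / t * math.log(e / t) for e in laplacian_eigs if e > 0)

# Complete graph K_n: Laplacian spectrum is {0} plus n with
# multiplicity n-1, giving entropy log(n-1).
n = 6
eigs = [0.0] + [float(n)] * (n - 1)
```

Divergences between two such normalized spectra then yield the network distances used in the paper for clustering and model selection.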
Defining and Detecting Complex Peak Relationships in Mass Spectral Data: The Mz.unity Algorithm.
Mahieu, Nathaniel G; Spalding, Jonathan L; Gelman, Susan J; Patti, Gary J
2016-09-20
Analysis of a single analyte by mass spectrometry can result in the detection of more than 100 degenerate peaks. These degenerate peaks complicate spectral interpretation and are challenging to annotate. In mass spectrometry-based metabolomics, this degeneracy leads to inflated false discovery rates, data sets containing an order of magnitude more features than analytes, and an inefficient use of resources during data analysis. Although software has been introduced to annotate spectral degeneracy, current approaches are unable to represent several important classes of peak relationships. These include heterodimers and higher complex adducts, distal fragments, relationships between peaks in different polarities, and complex adducts between features and background peaks. Here we outline sources of peak degeneracy in mass spectra that are not annotated by current approaches and introduce a software package called mz.unity to detect these relationships in accurate mass data. Using mz.unity, we find that data sets contain many more complex relationships than we anticipated. Examples include the adduct of glutamate and nicotinamide adenine dinucleotide (NAD), fragments of NAD detected in the same or opposite polarities, and the adduct of glutamate and a background peak. Further, the complex relationships we identify show that several assumptions commonly made when interpreting mass spectral degeneracy do not hold in general. These contributions provide new tools and insight to aid in the annotation of complex spectral relationships and provide a foundation for improved data set identification. Mz.unity is an R package and is freely available at https://github.com/nathaniel-mahieu/mz.unity as well as our laboratory Web site http://pattilab.wustl.edu/software/ .
NASA Astrophysics Data System (ADS)
Johar, F. M.; Azmin, F. A.; Shibghatullah, A. S.; Suaidi, M. K.; Ahmad, B. H.; Abd Aziz, M. Z. A.; Salleh, S. N.; Shukor, M. Md
2014-04-01
Attenuation of GSM, GPS and personal communication signals by regular-shaped energy-saving glass coatings leads to poor communication inside buildings; the transmitted signal is very weak. A new type of band-pass frequency selective surface (FSS) for energy-saving glass applications is presented in this paper for one unit cell. The numerical periodic method of moments, following a previous study, is applied to determine a new optimum design for a one-unit-cell energy-saving glass coating structure. An optimization technique based on the genetic algorithm (GA) is used to improve the return loss and the transmitted signal. The unit cell of the FSS is designed and simulated using the CST Microwave Studio software for the industrial, scientific and medical (ISM) bands. A unique, irregular shape of energy-saving glass coating structure is obtained with lower return loss and an improved transmission coefficient.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and runs in polynomial time (i.e., efficiently), though it is parametric. The only requirement is sufficient computational power, which is controlled by the parameter α∈N. Nevertheless, it is proved here that the probability of requiring a value of α>k to obtain a solution for a random graph decreases exponentially, P(α>k) ≤ 2^(-(k+1)), making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
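For contrast with the parametric method described above, here is the standard certificate-producing approach to 3-coloring, plain backtracking, which yields a "yes" certificate (a coloring) but only an exhaustive-search "no", and is exponential in the worst case:

```python
def three_color(n, edges):
    # Backtracking search for a proper 3-coloring of an n-vertex graph.
    # Returns a coloring (a "yes" certificate) or None if none exists.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n

    def go(v):
        if v == n:
            return True
        for c in range(3):
            # Try color c if no already-colored neighbor uses it.
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if go(v + 1):
                    return True
        color[v] = -1
        return False

    return color[:] if go(0) else None

# K4 needs four colors, so it has no 3-coloring certificate.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

The paper's contribution is precisely to replace the implicit exhaustive "no" with an explicit rigorous non-existence proof.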
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Because of absorption, the light scattering properties of absorbing particles differ from those of non-absorbing ones. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex-shaped particles has been reported. In this paper, the surface integral equation (SIE) method with the multilevel fast multipole algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangular patches to model the whole surface of the particle, so computational resource needs grow much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computing platform. Without loss of generality, we choose the computation of the scattering matrix elements of absorbing dust particles as an example. A comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i and different orientations is studied.
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Partially received codes and the corresponding columns of the parity-check matrix can be punctured to reduce the calculation complexity, adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code is obtained after five iterations.
Complexity Analysis and Algorithms for Optimal Resource Allocation in Wireless Networks
2012-09-01
independent orthogonal signaling such as OFDM. The general formulation will exploit the concept of 'interference alignment', which is known to provide... substantial rate gain over OFDM signaling for general interference channels. We have successfully analyzed the complexity to characterize the optimal... DSM problem formulation which allows correlated signaling rather than being restricted to the conventional independent orthogonal signaling such as OFDM
Comparison of CPU and GPU based coding on low-complexity algorithms for display signals
NASA Astrophysics Data System (ADS)
Richter, Thomas; Simon, Sven
2013-09-01
Graphics Processing Units (GPUs) are freely programmable massively parallel general-purpose processing units and thus offer the opportunity to off-load heavy computations from the CPU to the GPU. One application for GPU programming is image compression, where the massively parallel nature of GPUs promises high speed benefits. This article analyzes the predicaments of data-parallel image coding using the example of two high-throughput coding algorithms. The codecs discussed here were designed to answer a call from the Video Electronics Standards Association (VESA), and require only minimal buffering at encoder and decoder side while avoiding any pixel-based feedback loops that would limit the operating frequency of hardware implementations. Comparing CPU and GPU implementations of the codecs shows that GPU-based codecs are usually not considerably faster, or perform only with less than ideal rate-distortion performance. Analyzing the details of this result provides theoretical evidence that for any coding engine either parts of the entropy coding and bit-stream build-up must remain serial, or rate-distortion penalties must be paid when offloading all computations onto the GPU.
The Additive and Logical Complexities of Linear and Bilinear Arithmetic Algorithms,
1981-06-01
of vectors, CV, cyclic convolution of vectors, CCV, and matrix multiplication, MM, are among the central problems of algebraic complexity theory, cf... a basic set of vertices of D. (L0(D) could be the set of all vertices of D of indegree 0 but for D = D(A) we always choose L0(D) as the set of... the directions of edges of D(A) and choose the set of input-variables of A as the basic set L0(D) of the new digraph, D(A). Then our construction
Link prediction in complex networks based on an information allocation index
NASA Astrophysics Data System (ADS)
Pei, Panpan; Liu, Bo; Jiao, Licheng
2017-03-01
An important issue in link prediction for complex networks is making full use of different kinds of available information simultaneously. To tackle this issue, an information-theoretic model has recently been proposed and a novel Neighbor Set Information index (NSI) has been designed. Motivated by this work, we propose a more general information-theoretic model that further distinguishes the contributions from different variables of the available features. Then, by introducing a resource allocation process into the model, we design a new index based on neighbor sets with a virtual information allocation process: the Neighbor Set Information Allocation index (NSIA). Experimental studies on real-world networks from disparate fields indicate that NSIA performs well compared with NSI as well as other typical proximity indices.
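The resource-allocation process that NSIA borrows can be illustrated by the classic resource-allocation (RA) proximity index (NSIA itself is more elaborate; this is only the underlying baseline idea):

```python
def resource_allocation(adj, x, y):
    # Resource-allocation index: each common neighbor z of x and y
    # passes on 1/deg(z) of a unit of "resource"; higher scores mark
    # likelier missing links.
    return sum(1 / len(adj[z]) for z in adj[x] & adj[y])

# Toy graph as an adjacency dict of neighbor sets.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

Penalizing high-degree common neighbors in this way is what distinguishes RA-style indices from a plain common-neighbor count.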
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check its dependency on the frequency range used, respectively.
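Stripped of the FEM model and the MMA optimizer, the core of the procedure is a gradient-based least-squares fit of model curves to measured ones. A toy sketch with a synthetic linear "conductance" model and finite-difference gradient descent (model, data, and step sizes are illustrative stand-ins, not the paper's pipeline):

```python
# Fit model parameters by minimizing the quadratic misfit between
# "experimental" and model curves, the same objective shape as above.
def model(freq, a, b):
    return a * freq + b  # stand-in for the FEM-predicted conductance

freqs = [1.0, 2.0, 3.0, 4.0]
exp_data = [model(f, 2.0, 0.5) for f in freqs]  # synthetic "measurement"

def misfit(a, b):
    return sum((model(f, a, b) - d) ** 2 for f, d in zip(freqs, exp_data))

# Plain gradient descent with central finite-difference gradients.
a, b, lr, h = 0.0, 0.0, 0.01, 1e-6
for _ in range(5000):
    ga = (misfit(a + h, b) - misfit(a - h, b)) / (2 * h)
    gb = (misfit(a, b + h) - misfit(a, b - h)) / (2 * h)
    a, b = a - lr * ga, b - lr * gb

print(round(a, 3), round(b, 3))  # converges toward a=2.0, b=0.5
```

The paper's restart heuristic corresponds to re-running such a loop from new initial values whenever the fit stalls in an unphysical solution.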
Low-complexity, high-speed, and high-dynamic range time-to-impact algorithm
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-10-01
We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation that is the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to implementation in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was first described in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which give the time-to-impact as well as the possible transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be obtained at a rate of 10 kHz with today's technology.
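Under a constant-approach-speed assumption, time-to-impact follows directly from a feature's image-plane position relative to the focus of expansion and its rate of outward motion, tau = x / (dx/dt). A minimal sketch of that estimate (an illustrative toy, not the NSIP hardware pipeline):

```python
def time_to_impact(x_prev, x_curr, dt):
    """Estimate time-to-impact from a tracked feature's 1-D image-plane
    position relative to the focus of expansion: tau = x / (dx/dt).
    Assumes constant approach speed between frames."""
    dxdt = (x_curr - x_prev) / dt
    return x_curr / dxdt

# A feature at 10 px moves outward to 10.5 px in one frame at 1 kHz (dt = 1 ms):
tau = time_to_impact(10.0, 10.5, 0.001)
print(tau)  # ~0.021 s until impact
```

Tracking many simple feature points along the 1-D line and combining their tau estimates is what makes the approach robust despite its low complexity.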
NASA Astrophysics Data System (ADS)
Dao, P.; Heinrich-Josties, E.; Boroson, T.
2016-09-01
Automated detection of changes in GEO satellites using photometry is fundamentally dependent on near-real-time association of non-resolved signatures and object identification. Non-statistical algorithms that rely on fixed positional boundaries for associating objects often result in mistags [1]. Photometry has been proposed to reduce the occurrence of mistags. In past attempts to include photometry, (1) the problem of correlation (with the catalog) has been decoupled from the photometry-based detection of change and mistagging, and (2) positional information has not been considered simultaneously with photometry. The technique used in this study addresses both problems. It takes advantage of the fusion of both types of information and processes all information concurrently in a single statistics-based framework. This study demonstrates with Las Cumbres Observatory Global Telescope Network (LCOGT) data that metric information (right ascension and declination), photometry, and the GP element set can be used concurrently to confidently associate (identify) GEO objects. All algorithms can easily be put into a framework to process data in near real time.
ERIC Educational Resources Information Center
Blanchard, William; And Others
Seattle University recently decided to replace three separate, computerized student-information systems with a single, integrated system. The complexity of this decision was managed with a multicriteria method that was used to evaluate alternative systems. The method took into account the many and sometimes conflicting concerns of the people who…
Linguistic Complexity and Information Structure in Korean: Evidence from Eye-Tracking during Reading
ERIC Educational Resources Information Center
Lee, Yoonhyoung; Lee, Hanjung; Gordon, Peter C.
2007-01-01
The nature of the memory processes that support language comprehension and the manner in which information packaging influences online sentence processing were investigated in three experiments that used eye-tracking during reading to measure the ease of understanding complex sentences in Korean. All three experiments examined reading of embedded…
ERIC Educational Resources Information Center
Williams, Diane L.; Minshew, Nancy J.; Goldstein, Gerald
2015-01-01
More than 20 years ago, Minshew and colleagues proposed the Complex Information Processing model of autism in which the impairment is characterized as a generalized deficit involving multiple modalities and cognitive domains that depend on distributed cortical systems responsible for higher order abilities. Subsequent behavioral work revealed a…
ERIC Educational Resources Information Center
Tomasino, Arthur P.
2013-01-01
In spite of the best efforts of researchers and practitioners, Information Systems (IS) developers are having problems "getting it right". IS developments are challenged by the emergence of unanticipated IS characteristics undermining managers' ability to predict and manage IS change. Because IS are complex, development formulas, best…
ERIC Educational Resources Information Center
Booker, Queen Esther
2009-01-01
An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…
Piro, M. H. A.; Simunovic, S.
2016-03-17
Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum and that this is achieved with satisfactory computational performance becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N^3) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.
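The "false belief of equilibrium" pitfall can be illustrated without any thermodynamics: a single local minimization of a non-convex function may stall in a local minimum, while even a crude multistart strategy (far simpler than the Branch-and-Bound variant reviewed in the abstract) recovers the global one. A sketch on a 1-D double-well stand-in for a non-convex Gibbs energy surface:

```python
# Non-convex double well: local minima near x ~ +1.35 and x ~ -1.47,
# with the global minimum on the negative side.
def f(x):
    return x**4 - 4 * x**2 + x

def local_min(x, lr=0.01, iters=2000):
    """Plain gradient descent: converges only to the basin it starts in."""
    for _ in range(iters):
        x -= lr * (4 * x**3 - 8 * x + 1)  # f'(x)
    return x

single = local_min(2.0)                    # starts in the wrong basin
starts = [-2.0, -1.0, 0.0, 1.0, 2.0]
best = min((local_min(s) for s in starts), key=f)

print(f(best) < f(single))  # True: multistart escapes the false minimum
```

Real Gibbs-energy minimizers face the same geometry in many dimensions, plus mass-balance and charge-neutrality constraints, which is why the global-search machinery matters.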
Ginsburg, Avi; Ben-Nun, Tal; Asor, Roi; Shemesh, Asaf; Ringel, Israel; Raviv, Uri
2016-08-22
In many biochemical processes large biomolecular assemblies play important roles. X-ray scattering is a label-free bulk method that can probe the structure of large self-assembled complexes in solution. As we demonstrate in this paper, solution X-ray scattering can measure complex supramolecular assemblies at high sensitivity and resolution. At high resolution, however, data analysis of larger complexes is computationally demanding. We present an efficient method to compute the scattering curves from complex structures over a wide range of scattering angles. In our computational method, structures are defined as hierarchical trees in which repeating subunits are docked into their assembly symmetries, describing the manner in which subunits repeat in the structure (in other words, the locations and orientations of the repeating subunits). The amplitude of the assembly is calculated by computing the amplitudes of the basic subunits on 3D reciprocal-space grids, moving up in the hierarchy, calculating the grids of larger structures, and repeating this process for all the leaves and nodes of the tree. For very large structures, we developed a hybrid method that sums grids of smaller subunits in order to avoid numerical artifacts. We developed protocols for obtaining high-resolution solution X-ray scattering data from taxol-free microtubules at a wide range of scattering angles. We then validated our method by adequately modeling these high-resolution data. The higher speed and accuracy of our method, compared with existing methods, are demonstrated for smaller structures: a short microtubule and tobacco mosaic virus. Our algorithm may be integrated into various structure prediction computational tools, simulations, and theoretical models, and provides a means for testing their predicted structural models, by calculating the expected X-ray scattering curve and comparing it with experimental data.
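At its simplest, the hierarchical idea amounts to computing one subunit amplitude and reusing it across all placements: the assembly amplitude is the subunit amplitude summed with phase factors exp(i q·r_j) over the subunit positions. A toy sketch with a made-up spherically symmetric form factor (subunit orientations and the paper's reciprocal-space grids are omitted for brevity):

```python
import cmath

def subunit_amplitude(q):
    """Toy spherically symmetric form factor; a stand-in for a subunit
    amplitude that would really be tabulated on a reciprocal-space grid."""
    return 1.0 / (1.0 + sum(c * c for c in q))

def assembly_amplitude(q, positions):
    """Sum the (shared) subunit amplitude over placements with phase
    factors exp(i q . r): the core step of the hierarchical summation."""
    phase = lambda r: cmath.exp(1j * sum(qc * rc for qc, rc in zip(q, r)))
    return subunit_amplitude(q) * sum(phase(r) for r in positions)

# Two identical subunits along z: the intensity shows interference
# that depends on the 5-unit spacing.
positions = [(0.0, 0.0, 0.0), (0.0, 0.0, 5.0)]
q = (0.0, 0.0, 0.4)
intensity = abs(assembly_amplitude(q, positions)) ** 2
print(intensity)
```

In the actual method this summation is performed level by level up the tree, so a microtubule's thousands of tubulin copies reuse a handful of precomputed grids.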
Molecular dynamics of protein kinase-inhibitor complexes: a valid structural information.
Caballero, Julio; Alzate-Morales, Jans H
2012-01-01
Protein kinases (PKs) are key components of protein phosphorylation based signaling networks in eukaryotic cells. They have been identified as being implicated in many diseases. High-resolution X-ray crystallographic data exist for many PKs and, in many cases, these structures are co-complexed with inhibitors. Although this valuable information confirms the precise structure of PKs and their complexes, it ignores the dynamic movements of the structures, which are relevant for explaining the affinities and selectivity of the ligands, characterizing the thermodynamics of the solvated complexes, and deriving predictive models. Atomistic molecular dynamics (MD) simulations present a convenient way to study PK-inhibitor complexes and have been increasingly used in recent years in structure-based drug design. MD is a very useful computational method and a great counterpart for experimentalists, helping them to derive important additional molecular information and enabling them to follow and understand the structure and dynamics of protein-ligand systems in extreme molecular detail, on scales where the motion of individual atoms can be tracked. MD can be used to sample dynamic molecular processes, and can be complemented with more advanced computational methods (e.g., free energy calculations, structure-activity relationship analysis). This review focuses on the most common applications of MD simulations to the study of PK-inhibitor complexes. Our aim is that researchers working on the design of PK inhibitors be aware of the benefits of this powerful tool in the design of potent and selective PK inhibitors.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
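The headline robustness claim can be illustrated with a toy (plain gradient iteration rather than a genuine trust-region method, so this shows only the flavor of the result): minimizing f(x) = x^2 still converges when every gradient evaluation carries up to 50% relative error.

```python
import random

# Minimizing f(x) = x^2 with gradients corrupted by up to 50% relative
# error. Each step multiplies x by a factor in [0.7, 0.9], so the
# iterates contract despite the noise.
random.seed(0)

def noisy_grad(x):
    return 2.0 * x * (1.0 + random.uniform(-0.5, 0.5))  # 50% relative error

x, lr = 10.0, 0.1
for _ in range(200):
    x -= lr * noisy_grad(x)

print(abs(x) < 1e-6)  # True: convergence survives large gradient errors
```

A trust-region method achieves the same tolerance adaptively, by shrinking the region whenever the (inexact) model stops predicting the actual reduction.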
Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan
2016-01-01
Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site designs and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean squared deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model to de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library of 1,491 proteins, and four were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized, catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic
NASA Astrophysics Data System (ADS)
Luo, Xi-Liu; Wang, Jiang; Han, Chun-Xiao; Deng, Bin; Wei, Xi-Le; Bian, Hong-Rui
2012-02-01
As a convenient approach to the characterization of cerebral cortex electrical information, electroencephalography (EEG) has potential clinical application in monitoring the effects of acupuncture. In this paper, a method combining the mutual information method and the Lempel-Ziv complexity method (MILZC) is proposed to investigate the effects of acupuncture on the complexity of information exchanges between different brain regions based on EEGs. In the experiments, eight subjects are manually acupunctured at the 'Zusanli' acupuncture point (ST-36) with different frequencies (i.e., 50, 100, 150, and 200 times/min) and the EEGs are recorded simultaneously. First, MILZC values are compared in general. Then average brain connections are used to quantify the effectiveness of acupuncture under the above four frequencies. Finally, significance index P values are used to study the spatiality of the acupuncture effect on the brain. Three main findings are obtained: (i) MILZC values increase during acupuncture; (ii) manual acupunctures (MAs) at 100 times/min and 150 times/min are more effective than at 50 times/min and 200 times/min; (iii) contralateral hemisphere activation is more prominent than ipsilateral hemisphere activation. All these findings suggest that acupuncture increases the complexity of information exchange in the brain, and that the MILZC method can successfully describe these changes.
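The Lempel-Ziv component of MILZC can be sketched compactly: LZ76 complexity counts the distinct phrases in a left-to-right parsing of a symbol sequence, so a regular (e.g. binarized EEG) signal yields a low count and an irregular one a higher count. A minimal implementation of that counting step (not the full MILZC pipeline, which also involves mutual information between channels):

```python
def lempel_ziv_complexity(s):
    """LZ76 phrase count of a symbol string: at each position, extend the
    current phrase while it already occurs in the preceding text, then
    start a new phrase. More phrases = less regular sequence."""
    i, c, n = 0, 0, len(s)
    while i < n:
        length = 1
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

print(lempel_ziv_complexity("0" * 16))           # low: highly regular
print(lempel_ziv_complexity("0110100110010110"))  # higher: less regular
```

Applied to binarized EEG segments, a rise in this count during acupuncture is exactly the kind of change the MILZC values above summarize.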
NASA Astrophysics Data System (ADS)
Javaheri Javid, Mohammad Ali; Blackwell, Tim; Zimmer, Robert; Majid al-Rifaie, Mohammad
2016-04-01
Shannon entropy fails to discriminate structurally different patterns in two-dimensional images. We have adapted the information gain measure and Kolmogorov complexity to overcome the shortcomings of entropy as a measure of image structure. The measures are customised to robustly quantify the complexity of images resulting from multi-state cellular automata (CA). Experiments with a two-dimensional multi-state cellular automaton demonstrate that these measures are able to predict some of the structural characteristics, symmetry, and orientation of CA-generated patterns.
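The failure of entropy noted above is easy to reproduce: the pixel-histogram entropy depends only on how many cells take each value, not on their arrangement, so a checkerboard and a shuffled copy of it score identically. A small sketch:

```python
import math
import random

def pixel_entropy(img):
    """Shannon entropy (bits) of the pixel-value histogram. It is blind
    to spatial structure: only value counts matter."""
    flat = [v for row in img for v in row]
    n = len(flat)
    probs = [flat.count(v) / n for v in set(flat)]
    return -sum(p * math.log2(p) for p in probs)

# A structured checkerboard and a scrambled version of it: structurally
# very different images, identical histograms, hence identical entropy.
checker = [[(r + c) % 2 for c in range(8)] for r in range(8)]
cells = [v for row in checker for v in row]
random.seed(1)
random.shuffle(cells)
scrambled = [cells[r * 8:(r + 1) * 8] for r in range(8)]

print(pixel_entropy(checker), pixel_entropy(scrambled))  # both 1.0 bit
```

Structure-sensitive measures such as information gain between neighboring cells, or compression-based Kolmogorov complexity estimates, separate the two images where entropy cannot.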
Computer/information security design approaches for Complex 21/Reconfiguration facilities
Hunteman, W.J.; Zack, N.R.; Jaeger, C.D.
1993-08-01
Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues addressed in the integrated design effort for the uranium/lithium, plutonium, and plutonium high explosive/assembly facilities.
2013-01-01
Background Adequate health literacy is important for people to maintain good health and manage diseases and injuries. Educational text, either retrieved from the Internet or provided by a doctor’s office, is a popular method to communicate health-related information. Unfortunately, it is difficult to write text that is easy to understand, and existing approaches, mostly the application of readability formulas, have not convincingly been shown to reduce the difficulty of text. Objective To develop an evidence-based writer support tool to improve perceived and actual text difficulty. To this end, we are developing and testing algorithms that automatically identify difficult sections in text and provide appropriate, easier alternatives; algorithms that effectively reduce text difficulty will be included in the support tool. This work describes the user evaluation with an independent writer of an automated simplification algorithm using term familiarity. Methods Term familiarity indicates how easy words are for readers and is estimated using term frequencies in the Google Web Corpus. Unfamiliar words are algorithmically identified and tagged for potential replacement. Easier alternatives consisting of synonyms, hypernyms, definitions, and semantic types are extracted from WordNet, the Unified Medical Language System (UMLS), and Wiktionary and ranked for a writer to choose from to simplify the text. We conducted a controlled user study with a representative writer who used our simplification algorithm to simplify texts. We tested the impact with representative consumers. The key independent variable of our study is lexical simplification, and we measured its effect on both perceived and actual text difficulty. Participants were recruited from Amazon’s Mechanical Turk website. Perceived difficulty was measured with 1 metric, a 5-point Likert scale. Actual difficulty was measured with 3 metrics: 5 multiple-choice questions alongside each text to measure understanding
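The term-familiarity step reduces to flagging words whose corpus frequency falls below a threshold. A toy sketch (the frequency table and threshold are illustrative stand-ins for the Google Web Corpus counts used in the study):

```python
# Illustrative corpus frequencies; the study derives these from the
# Google Web Corpus rather than a hand-made table.
corpus_freq = {
    "heart": 98_000_000,
    "attack": 71_000_000,
    "myocardial": 1_200_000,
    "infarction": 900_000,
}

def flag_unfamiliar(text, threshold=5_000_000):
    """Return the words a writer should consider replacing: those whose
    corpus frequency (familiarity proxy) falls below the threshold."""
    return [w for w in text.lower().split()
            if corpus_freq.get(w, 0) < threshold]

print(flag_unfamiliar("myocardial infarction"))  # ['myocardial', 'infarction']
print(flag_unfamiliar("heart attack"))           # []
```

The tool then offers ranked alternatives for each flagged word (synonyms, hypernyms, definitions) drawn from WordNet, the UMLS, and Wiktionary.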
Esquivel, Rodolfo O; Angulo, Juan Carlos; Antolín, Juan; Dehesa, Jesús S; López-Rosa, Sheila; Flores-Gallegos, Nelson
2010-07-14
The Fisher-Shannon and LMC shape complexities and the Shannon-disequilibrium, Fisher-Shannon and Fisher-disequilibrium information planes, which consist of two localization-delocalization factors, are computed in both position and momentum spaces for the one-particle densities of 90 selected molecules of various chemical types, at the CISD/6-311++G(3df,2p) level of theory. We found that while the two measures of complexity show general trends only, the localization-delocalization planes clearly exhibit chemically significant patterns. Several molecular properties (energy, ionization potential, total dipole moment, hardness, electrophilicity) are analyzed and used to interpret and understand the chemical nature of the aforementioned composite information-theoretic measures. Our results show that these measures detect not only randomness or localization but also pattern and organization.
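Of the measures named above, the LMC complexity has a particularly compact discrete form, C = H * D: the Shannon entropy H times the disequilibrium D (squared distance from the uniform distribution). A sketch on discrete distributions (the paper evaluates continuous position- and momentum-space densities instead, where sums become integrals):

```python
import math

def lmc_complexity(probs):
    """Basic LMC complexity C = H * D for a discrete distribution.
    It vanishes for both extremes: the uniform distribution (D = 0)
    and a fully localized one (H = 0)."""
    n = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)   # Shannon entropy
    d = sum((p - 1.0 / n) ** 2 for p in probs)           # disequilibrium
    return h * d

print(lmc_complexity([0.25, 0.25, 0.25, 0.25]))  # 0.0: uniform, no structure
print(lmc_complexity([1.0, 0.0, 0.0, 0.0]))      # 0.0: fully localized
print(lmc_complexity([0.7, 0.1, 0.1, 0.1]) > 0)  # True: intermediate case
```

This "vanishes at both extremes" behavior is what lets such composite measures flag organization rather than mere randomness or mere localization.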
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the $13 million expansion to KSC's Visitor Complex, the new information center welcomes visitors to the Gateway to the Universe. The five large video walls provide an orientation video, with an introduction to the range of activities and exhibits, and honor the center's namesake, President John F. Kennedy. Other new attractions are an information center, a walk- through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
2010-01-01
Background The amount of available biological information is rapidly increasing and the focus of biological research has moved from single components to networks and even larger projects aiming at the analysis, modelling and simulation of biological networks as well as large scale comparison of cellular properties. It is therefore essential that biological knowledge is easily accessible. However, most information is contained in the written literature in an unstructured way, so that methods for the systematic extraction of knowledge directly from the primary literature have to be deployed. Description Here we present a text mining algorithm for the extraction of kinetic information such as KM, Ki, kcat etc. as well as associated information such as enzyme names, EC numbers, ligands, organisms, localisations, pH and temperatures. Using this rule- and dictionary-based approach, it was possible to extract 514,394 kinetic parameters of 13 categories (KM, Ki, kcat, kcat/KM, Vmax, IC50, S0.5, Kd, Ka, t1/2, pI, nH, specific activity, Vmax/KM) from about 17 million PubMed abstracts and combine them with other data in the abstract. A manual verification of approx. 1,000 randomly chosen results yielded a recall between 51% and 84% and a precision ranging from 55% to 96%, depending on the category searched. The results were stored in a database and are available as "KID the KInetic Database" via the internet. Conclusions The presented algorithm delivers a considerable amount of information and therefore may help to accelerate the research and the automated analysis required for today's systems biology approaches. The database obtained by analysing PubMed abstracts may be a valuable help in the field of chemical and biological kinetics. It is completely based upon text mining and therefore complements manually curated databases. The database is available at http://kid.tu-bs.de. The source code of the algorithm is provided under the GNU General Public Licence and available on
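A rule-based extraction of this kind can be caricatured in a few lines: a pattern that captures "KM" mentions with value and unit from abstract-like text. The real system uses far richer rules and dictionaries (and links parameters to enzymes, organisms, pH, and temperature); this regex is purely illustrative:

```python
import re

# Capture a KM mention followed (within a short non-numeric gap) by a
# value and a concentration unit.
KM_PATTERN = re.compile(
    r"\bK[Mm]\b[^0-9]{0,20}?(\d+(?:\.\d+)?)\s*(mM|uM|nM)")

text = ("The enzyme hydrolyzes sucrose with a KM of 2.5 mM at pH 7.0, "
        "whereas the mutant shows a KM value of 310 uM.")

for value, unit in KM_PATTERN.findall(text):
    print(value, unit)
```

Running such patterns over millions of abstracts, then verifying a random sample by hand, is what yields the recall and precision figures quoted above.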
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.
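A minimal PRP conjugate-gradient sketch with an n-step restart, applied to a 2-D convex quadratic; an exact line search on the quadratic stands in for the Armijo/Wolfe searches analyzed in the paper (illustrative only, not the authors' algorithm):

```python
A = [[4.0, 0.0], [0.0, 2.0]]  # f(x) = 0.5 x^T A x, minimizer at the origin

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def grad(x):
    return matvec(A, x)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [3.0, -4.0]
g = grad(x)
d = [-gi for gi in g]                      # first direction: steepest descent
for k in range(1, 50):
    Ad = matvec(A, d)
    alpha = -dot(g, d) / dot(d, Ad)        # exact line search on the quadratic
    x = [xi + alpha * di for xi, di in zip(x, d)]
    g_new = grad(x)
    # PRP formula: beta = g_new . (g_new - g) / ||g||^2
    beta = dot(g_new, [a - b for a, b in zip(g_new, g)]) / dot(g, g)
    if k % len(x) == 0 or beta < 0:
        beta = 0.0                          # restart: fall back to steepest descent
    d = [-gn + beta * di for gn, di in zip(g_new, d)]
    g = g_new
    if max(abs(v) for v in x) < 1e-12:
        break

print(max(abs(v) for v in x) < 1e-10)  # True after a handful of iterations
```

The periodic restart (every n = dimension steps, or whenever beta turns negative) is what the n-step quadratic convergence analysis above hinges on.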
NASA Astrophysics Data System (ADS)
Siregar, B.; Gunawan, D.; Andayani, U.; Sari Lubis, Elita; Fahmi, F.
2017-01-01
A food delivery system is one kind of geographical information system (GIS) that can be realized through a digitization process. The main challenges in a food delivery system are determining the shortest path and tracking the movement of the delivery vehicle. Therefore, to ensure that the digitization of the food delivery system is applied efficiently, a shortest-path determination facility and delivery-vehicle tracking need to be added. This research uses the A Star (A*) algorithm for determining the shortest path and location-based system (LBS) programming for tracking the moving delivery-vehicle object. The result is an integrated system that can be used by the delivery driver, the customer, and the administrator to simplify the food delivery process. Through the application of shortest-path determination and moving-vehicle tracking, the food delivery system can be executed within the scope of geographical information systems (GIS).
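The shortest-path core can be sketched with a textbook A* on a grid, using the Manhattan distance as an admissible heuristic (an illustrative sketch, not the authors' road-network implementation):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = blocked). Returns the
    shortest path length in steps, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    open_set = [(h(start), 0, start)]  # (f = cost + heuristic, cost, node)
    best = {start: 0}
    while open_set:
        _, cost, node = heapq.heappop(open_set)
        if node == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if cost + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = cost + 1
                    heapq.heappush(open_set,
                                   (cost + 1 + h((r, c)), cost + 1, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6: forced around the wall
```

On a real road network the grid becomes a weighted graph of intersections and the heuristic becomes straight-line distance, but the algorithm is unchanged.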
A novel seizure detection algorithm informed by hidden Markov model event states
NASA Astrophysics Data System (ADS)
Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian
2016-06-01
Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset, UEO). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce the false positive rate relative to current industry standards.
Rastogi, Ravi; Pawluk, Dianne T V
2013-01-01
An increasing amount of the information content used in school, work, and everyday living is presented in graphical form. Unfortunately, it is difficult for people who are blind or visually impaired to access this information, especially when many diagrams are needed. One problem is that details, even in relatively simple visual diagrams, can be very difficult to perceive using touch. With manually created tactile diagrams, these details are often presented in separate diagrams which must be selected from among others. Being able to actively zoom in on an area of a single diagram, so that the details can be presented at a reasonable size for exploration, seems a simpler approach for the user. However, directly using visual zooming methods has some limitations when applied haptically. Therefore, a new zooming method is proposed to avoid these pitfalls. A preliminary experiment was performed to examine the usefulness of the algorithm compared to not using zooming. The results showed that the number of correct responses improved with the developed zooming algorithm, and participants found it to be more usable than exploring without zooming.
Musical beauty and information compression: Complex to the ear but simple to the mind?
2011-01-01
Background The biological origin of music, its universal appeal across human cultures and the cause of its beauty remain mysteries. For example, why is Ludwig van Beethoven considered a musical genius but Kylie Minogue is not? Possible answers to these questions will be framed in the context of Information Theory. Presentation of the Hypothesis The entire life-long sensory data stream of a human is enormous. The adaptive solution to this problem of scale is information compression, thought to have evolved to better handle, interpret and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some point in evolution, the practice of successful information compression became linked to the physiological reward system. I hypothesise that the establishment of this "compression and pleasure" connection paved the way for musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio compression had become intrinsically pleasurable in its own right. Testing the Hypothesis For a range of compositions, empirically determine the relationship between the listener's pleasure and "lossless" audio compression. I hypothesise that enduring musical masterpieces will possess an interesting objective property: despite apparent complexity, they will also exhibit high compressibility. Implications of the Hypothesis Artistic masterpieces and deep Scientific insights share the common process of data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. Musical geniuses are skilled in composing music that appears highly complex to…
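The proposed test is easy to prototype with any lossless compressor standing in for audio compression; zlib is used below purely as an assumed proxy, and the byte strings are toy stand-ins for audio data.

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Compressed-to-raw size ratio; lower means more compressible."""
    return len(zlib.compress(data, 9)) / len(data)

# A repetitive "signal" compresses far better than incompressible noise; the
# hypothesis predicts that masterpieces pair apparent complexity with high
# compressibility.
repetitive = b"do re mi fa sol " * 256
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(len(repetitive)))
print(compressibility(repetitive), compressibility(noisy))
```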
Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems
Omitaomu, Olufemi A; Ganguly, Auroop R; Jiao, Yu
2009-01-01
In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, using fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required, not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
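The two measures are straightforward to compute from simulation output; a minimal plug-in estimate, with toy behavioural-mode series in place of the models' actual agent counts, might look like:

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a discrete sample."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """Information shared by two aligned series: H(X) + H(Y) - H(X, Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Toy behavioural-mode series for two "social classes" (illustrative values only):
a = ["pro", "neutral", "anti", "pro", "neutral", "anti"]
b = list(a)                                        # perfectly shared information
c = ["anti", "pro", "pro", "neutral", "anti", "neutral"]
print(mutual_information(a, b), mutual_information(a, c))
```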
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near-continuum regime, where the Knudsen number (Kn), which characterizes the degree of rarefaction, becomes small. In contrast, the Fokker-Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker-Planck collision operator, instead of performing the binary collisions employed by the DSMC method, integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general-purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin
2009-10-01
In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general, easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs, and we show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n²/N) and O(n ln ln n + n ln n/N) generations, respectively, to find the global optimum. Beyond these new results, our approach can also be applied directly to obtain results for some population-based EAs on other unimodal problems. Moreover, we discuss when the general approach is valid to provide tight bounds on the mean first hitting times and when our approach should be combined with problem-specific knowledge to get tight bounds. This is the first time a general idea for analyzing population-based EAs on unimodal problems has been discussed theoretically.
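As a concrete illustration of the setting analysed, a minimal (N + N) EA with bitwise mutation on OneMax can be sketched as follows; the truncation-style survivor selection and the parameters are illustrative choices, not the exact schemes the paper analyses.

```python
import random

def one_max(bits):
    return sum(bits)

def nn_ea_onemax(n=30, N=4, seed=1, max_gens=100000):
    """(N + N) EA sketch: N parents produce N offspring by bitwise mutation
    (flip probability 1/n), and the best N of the union survive. Returns the
    generation at which the all-ones optimum is first found."""
    rng = random.Random(seed)
    pop = [[rng.randrange(2) for _ in range(n)] for _ in range(N)]
    for gen in range(max_gens):
        if any(one_max(x) == n for x in pop):
            return gen
        offspring = [
            [b ^ (rng.random() < 1 / n) for b in rng.choice(pop)]
            for _ in range(N)
        ]
        pop = sorted(pop + offspring, key=one_max, reverse=True)[:N]
    return max_gens

print(nn_ea_onemax(n=30, N=4))
```

In line with the O(n ln n + n²/N) bound, the observed hitting time grows only mildly with n for fixed N.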
Improving model fidelity and sensitivity for complex systems through empirical information theory
Majda, Andrew J.; Gershgorin, Boris
2011-01-01
In many situations in contemporary science and engineering, the analysis and prediction of crucial phenomena often occur through complex dynamical equations that have significant model errors compared with the true signal in nature. Here, a systematic information theoretic framework is developed to improve model fidelity and sensitivity for complex systems, including perturbation formulas and multimodel ensembles that can be utilized to improve both aspects of model error simultaneously. A suite of unambiguous test models is utilized to demonstrate facets of the proposed framework. These results include simple examples of imperfect models with perfect equilibrium statistical fidelity where there are intrinsic natural barriers to improving imperfect model sensitivity. Linear stochastic models with multiple spatiotemporal scales are utilized to demonstrate this information theoretic approach to equilibrium sensitivity, the role of increasing spatial resolution in the information metric for model error, and the ability of imperfect models to capture the true sensitivity. Finally, an instructive statistically nonlinear model with many degrees of freedom, mimicking the observed non-Gaussian statistical behavior of tracers in the atmosphere, is utilized here together with corresponding imperfect eddy-diffusivity parameterization models. They demonstrate the important role of additional stochastic forcing of imperfect models in order to systematically improve the information theoretic measures of fidelity and sensitivity developed here. PMID:21646534
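Frameworks of this kind typically score model error by the relative entropy between the true and model statistics; for one-dimensional Gaussians this has a closed form, sketched below (the framework itself works with far richer statistics, so this is only the simplest instance).

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy D(p || q) between 1-D Gaussians p and q (nats)."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# A model with perfect equilibrium statistical fidelity (matching mean and
# variance) incurs zero error; a biased, over-dispersed model does not.
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0
print(kl_gaussian(0.0, 1.0, 0.5, 2.0))
```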
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
On Using Genetic Algorithms for Multimodal Relevance Optimization in Information Retrieval.
ERIC Educational Resources Information Center
Boughanem, M.; Christment, C.; Tamine, L.
2002-01-01
Presents a genetic relevance optimization process performed in an information retrieval system that uses genetic techniques for solving multimodal problems (niching) and query reformulation techniques. Explains that the niching technique allows the process to reach different relevance regions of the document space, and that query reformulations…
Spatial and Social Diffusion of Information and Influence: Models and Algorithms
ERIC Educational Resources Information Center
Doo, Myungcheol
2012-01-01
In this dissertation research, we argue that spatial alarms and activity-based social networks are two fundamentally new types of information and influence diffusion channels. Such new channels have the potential of enriching our professional experiences and our personal life quality in many unprecedented ways. First, we develop an activity driven…
Abstracting meaning from complex information (gist reasoning) in adult traumatic brain injury.
Vas, Asha Kuppachi; Spence, Jeffrey; Chapman, Sandra Bond
2015-01-01
Gist reasoning (abstracting meaning from complex information) was compared between adults with moderate-to-severe traumatic brain injury (TBI, n = 30) at least one year post injury and healthy adults (n = 40). The study also examined the contribution of executive functions (working memory, inhibition, and switching) and memory (immediate recall and memory for facts) to gist reasoning. The correspondence between gist reasoning and daily function was also examined in the TBI group. Results indicated that the TBI group performed significantly lower than the control group on gist reasoning, even after adjusting for executive functions and memory. Executive function composite was positively associated with gist reasoning (p < .001). Additionally, performance on gist reasoning significantly predicted daily function in the TBI group beyond the predictive ability of executive function alone (p = .011). Synthesizing and abstracting meaning(s) from information (i.e., gist reasoning) could provide an informative index into higher order cognition and daily functionality.
ERIC Educational Resources Information Center
Grotzer, Tina A.; Tutwiler, M. Shane
2014-01-01
This article considers a set of well-researched default assumptions that people make in reasoning about complex causality and argues that, in part, they result from the forms of causal induction that we engage in and the type of information available in complex environments. It considers how information often falls outside our attentional frame…
NASA Technical Reports Server (NTRS)
Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John
1994-01-01
This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.
Pareschi, Fabio; Albertini, Pierluigi; Frattini, Giovanni; Mangia, Mauro; Rovatti, Riccardo; Setti, Gianluca
2016-02-01
We report the design and implementation of an Analog-to-Information Converter (AIC) based on Compressed Sensing (CS). The system is realized in a CMOS 180 nm technology and targets the acquisition of bio-signals with Nyquist frequency up to 100 kHz. To maximize performance and reduce hardware complexity, we co-design the hardware together with the acquisition and reconstruction algorithms. The resulting AIC outperforms previously proposed solutions mainly thanks to two key features. First, we adopt a novel method to deal with saturations in the computation of CS measurements. This allows no loss in performance even when 60% of measurements saturate. Second, the system is able to adapt itself to the energy distribution of the input by exploiting the so-called rakeness to maximize the amount of information contained in the measurements. With this approach, the 16 measurement channels integrated into a single device are expected to allow the acquisition and the correct reconstruction of most biomedical signals. As a case study, measurements on real electrocardiograms (ECGs) and electromyograms (EMGs) show that these signals can be reconstructed without any noticeable degradation at compression rates of 8 and 10, respectively.
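A toy version of CS acquisition with saturation handling can be sketched as follows; the Bernoulli sensing matrix and the drop-saturated-measurements policy are simplifying assumptions for illustration, not the chip's actual circuitry or its (better) saturation method.

```python
import random

def cs_measure(x, m, sat_level, seed=0):
    """Take m compressed-sensing measurements of x with a Bernoulli (+/-1)
    sensing matrix, then drop saturated ones. Dropping is a crude stand-in
    for the chip's saturation handling, which avoids this information loss."""
    rng = random.Random(seed)
    phi = [[rng.choice((-1, 1)) for _ in x] for _ in range(m)]
    y = [sum(p * xi for p, xi in zip(row, x)) for row in phi]
    kept = [(row, v) for row, v in zip(phi, y) if abs(v) < sat_level]
    return y, kept

n = 64
x = [0.0] * n
x[5], x[40] = 1.0, -0.5                      # sparse "bio-signal"
y, kept = cs_measure(x, m=8, sat_level=1.2)  # compression rate n/m = 8
print(len(y), len(kept))
```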
Cameron, Delroy; Sheth, Amit P.; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A.
2015-01-01
While contemporary semantic search systems offer to improve classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and “intelligible constructs” not typically modeled in ontologies. These intelligible constructs convey essential information that include notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving
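The two-level interpretation idea, a broad template whose slots are parsed by specific sub-patterns, can be sketched with regular expressions standing in for the CFG; the patterns and slot names below are illustrative, not PREDOSE's actual grammar.

```python
import re

# Top level: a broad template for a drug-use report.
# Low level: specific sub-patterns for the dosage and frequency slots.
DOSAGE = r"(?P<amount>\d+(?:\.\d+)?)\s*(?P<unit>mg|g|ml)"
FREQUENCY = r"(?P<times>\d+)x\s*(?:a|per)\s*(?P<period>day|week)"
TEMPLATE = re.compile(rf"took\s+{DOSAGE}\s+{FREQUENCY}")

def interpret(post):
    """Return the parsed slots of the first matching template, or None."""
    m = TEMPLATE.search(post.lower())
    return m.groupdict() if m else None

print(interpret("I took 30 mg 2x a day last month"))
print(interpret("nothing relevant here"))
```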
Schumacher, Johannes; Wunderle, Thomas; Fries, Pascal; Jäkel, Frank; Pipa, Gordon
2015-08-01
In neuroscience, data are typically generated from neural network activity. The resulting time series represent measurements from spatially distributed subsystems with complex interactions, weakly coupled to a high-dimensional global system. We present a statistical framework to estimate the direction of information flow and its delay in measurements from systems of this type. Informed by differential topology, gaussian process regression is employed to reconstruct measurements of putative driving systems from measurements of the driven systems. These reconstructions serve to estimate the delay of the interaction by means of an analytical criterion developed for this purpose. The model accounts for a range of possible sources of uncertainty, including temporally evolving intrinsic noise, while assuming complex nonlinear dependencies. Furthermore, we show that if information flow is delayed, this approach also allows for inference in strong coupling scenarios of systems exhibiting synchronization phenomena. The validity of the method is demonstrated with a variety of delay-coupled chaotic oscillators. In addition, we show that these results seamlessly transfer to local field potentials in cat visual cortex.
NASA Astrophysics Data System (ADS)
Mallas, Georgios; Brooks, Dana H.; Rosenthal, Amir; Vinegoni, Claudio; Calfon, Marcella A.; Razansky, R. Nika; Jaffer, Farouc A.; Ntziachristos, Vasilis
2011-03-01
Intravascular Near-Infrared Fluorescence (NIRF) imaging is a promising imaging modality to image vessel biology and high-risk plaques in vivo. We have developed a NIRF fiber optic catheter and have presented the ability to image atherosclerotic plaques in vivo, using appropriate NIR fluorescent probes. Our catheter consists of a 100/140 μm core/clad diameter housed in polyethylene tubing, emitting NIR laser light at a 90 degree angle compared to the fiber's axis. The system utilizes a rotational and a translational motor for true 2D imaging and operates in conjunction with a coaxial intravascular ultrasound (IVUS) device. IVUS datasets provide 3D images of the internal structure of arteries and are used in our system for anatomical mapping. Using the IVUS images, we are building an accurate hybrid fluorescence-IVUS data inversion scheme that takes into account photon propagation through the blood filled lumen. This hybrid imaging approach can then correct for the non-linear dependence of light intensity on the distance of the fluorescence region from the fiber tip, leading to quantitative imaging. The experimental and algorithmic developments will be presented and the effectiveness of the algorithm showcased with experimental results in both saline and blood-like preparations. The combined structural and molecular information obtained from these two imaging modalities are positioned to enable the accurate diagnosis of biologically high-risk atherosclerotic plaques in the coronary arteries that are responsible for heart attacks.
NASA Astrophysics Data System (ADS)
Albers, D. J.; Hripcsak, George
2012-03-01
This paper addresses how to calculate and interpret the time-delayed mutual information (TDMI) for a complex, diversely and sparsely measured, possibly non-stationary population of time series of unknown composition and origin. The primary vehicle used for this analysis is a comparison between the time-delayed mutual information averaged over the population and the time-delayed mutual information of an aggregated population (here, aggregation implies the population is conjoined before any statistical estimates are implemented). Through the use of information theoretic tools, a sequence of practically implementable calculations is detailed that allows the average and aggregate time-delayed mutual information to be interpreted. Moreover, these calculations can also be used to understand the degree of homo- or heterogeneity present in the population. To demonstrate that the proposed methods can be used in nearly any situation, they are applied and demonstrated on the time series of glucose measurements from two different subpopulations of individuals from the Columbia University Medical Center electronic health record repository, revealing a picture of the composition of the population as well as physiological features.
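A plug-in histogram estimate of TDMI on a symbolised series is enough to see the basic behaviour, such as a periodic signal's information reappearing at its period; the estimator below is a deliberately simple stand-in for the paper's machinery.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) = H(X) + H(Y) - H(X,Y) in bits."""
    n = len(xs)
    def H(vals):
        return -sum(c / n * math.log2(c / n) for c in Counter(vals).values())
    return H(xs) + H(ys) - H(list(zip(xs, ys)))

def tdmi(series, delay):
    """TDMI between x(t) and x(t + delay) for a symbolised series, delay >= 1."""
    return mutual_information(series[:-delay], series[delay:])

# A period-6 binary pattern: information about the future is fully recovered
# at the period (TDMI = 1 bit) but only partially at delay 1.
s = [0, 0, 1, 0, 1, 1] * 60
print(tdmi(s, 6), tdmi(s, 1))
```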
NASA Astrophysics Data System (ADS)
Tisdell, John G.; Ward, John R.; Capon, Tim
2004-09-01
This paper uses an experimental design that combines the use of an environmental levy with community involvement in the formation of group agreements and strategies to explore the impact of information and communication on water use in a complex heterogeneous environment. Participants in the experiments acted as farmers faced with monthly water demands, uncertain rainfall, possible crop loss, and the possibility of trading in water entitlements. The treatments included (1) no information on environmental consequences of extraction, (2) the provision of monthly aggregate environmental information, (3) the provision of monthly aggregate extraction information and a forum for discussion, and (4) the public provision of individual extraction information and a forum for discussion giving rise to potential verbal peer sanctions. To account for the impact of trade, the treatments were blocked into three market types: (1) no trade, (2) open call auctions, and (3) closed call auctions. The cost to the community of altering the natural flow regime to meet extractive demand was socialized through the imposition of an environmental levy equally imposed on all players.
The utility of accurate mass and LC elution time information in the analysis of complex proteomes
Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.
2005-08-01
Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity, due to the lack of specific modifications, and overestimates the expected complexity, since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time are representative of a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high-resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
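The uniqueness test itself reduces to checking whether any other peptide falls inside both the ppm mass window and the NET window; a sketch with made-up mass/NET values:

```python
def is_unique(peptide, others, ppm_tol=1.0, net_tol=0.01):
    """A (mass, NET) pair is a unique fingerprint if no other peptide falls
    inside both the mass window (ppm) and the NET window."""
    mass, net = peptide
    for m, t in others:
        if abs(m - mass) / mass * 1e6 <= ppm_tol and abs(t - net) <= net_tol:
            return False
    return True

# Toy values, not real tryptic peptides:
target = (1500.7300, 0.42)
database = [(1500.7295, 0.55),   # mass collides, NET separates it
            (1500.9100, 0.42),   # NET collides, mass separates it
            (1500.7310, 0.425)]  # collides in both windows at +/- 1 ppm
print(is_unique(target, database), is_unique(target, database, ppm_tol=0.1))
```

Tightening the ppm tolerance resolves the remaining collision, mirroring the abstract's point that more complex proteomes require more accurate measurements.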
‘Selfish herds’ of guppies follow complex movement rules, but not when information is limited
Kimbell, Helen S.; Morrell, Lesley J.
2015-01-01
Under the threat of predation, animals can decrease their level of risk by moving towards other individuals to form compact groups. A significant body of theoretical work has proposed multiple movement rules, varying in complexity, which might underlie this process of aggregation. However, if and how animals use these rules to form compact groups is still not well understood, and how environmental factors affect the use of these rules even less so. Here, we evaluate the success of different movement rules, by comparing their predictions with the movement seen when shoals of guppies (Poecilia reticulata) form under the threat of predation. We repeated the experiment in a turbid environment to assess how the use of the movement rules changed when visual information is reduced. During a simulated predator attack, guppies in clear water used complex rules that took multiple neighbours into account, forming compact groups. In turbid water, the difference between all rule predictions and fish movement paths increased, particularly for complex rules, and the resulting shoals were more fragmented than in clear water. We conclude that guppies are able to use complex rules to form dense aggregations, but that environmental factors can limit their ability to do so. PMID:26400742
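Two of the candidate movement rules commonly compared in this literature, moving toward the single nearest neighbour versus toward the centroid of the k nearest neighbours, can be sketched as follows; the positions, step size, and rule names are illustrative assumptions, not the paper's exact rule set.

```python
import math

def step_towards(pos, target, step=0.1):
    """Move a fixed step from pos toward target (step size is illustrative)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (pos[0] + step * dx / d, pos[1] + step * dy / d)

def nearest_neighbour_rule(pos, others):
    """Simple rule: head for the single nearest neighbour."""
    return step_towards(pos, min(others, key=lambda o: math.dist(pos, o)))

def k_centroid_rule(pos, others, k=3):
    """More complex rule: head for the centroid of the k nearest neighbours."""
    near = sorted(others, key=lambda o: math.dist(pos, o))[:k]
    cx = sum(p[0] for p in near) / len(near)
    cy = sum(p[1] for p in near) / len(near)
    return step_towards(pos, (cx, cy))

shoal = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (3.0, 0.5)]
print(nearest_neighbour_rule((0.0, 0.0), shoal))
print(k_centroid_rule((0.0, 0.0), shoal))
```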
Robust synchronization of complex networks with uncertain couplings and incomplete information
NASA Astrophysics Data System (ADS)
Wang, Fan; Liang, Jinling; Wang, Zidong; Alsaadi, Fuad E.
2016-07-01
The mean square exponential (MSE) synchronization problem is investigated in this paper for complex networks with simultaneous presence of uncertain couplings and incomplete information, which comprise both the randomly occurring delay and the randomly occurring non-linearities. The network considered is uncertain with time-varying stochastic couplings. The randomly occurring delay and non-linearities are modelled by two Bernoulli-distributed white sequences with known probabilities to better describe realistic complex networks. By utilizing the coordinate transformation, the addressed complex network can be exponentially synchronized in the mean square if the MSE stability of a transformed subsystem can be assured. The stability problem is studied firstly for the transformed subsystem based on the Lyapunov functional method. Then, an easy-to-verify sufficient criterion is established by further decomposing the transformed system, which embodies the joint impacts of the single-node dynamics, the network topology and the statistical quantities of the uncertainties on the synchronization of the complex network. Numerical examples are exploited to illustrate the effectiveness of the proposed methods.
Super resolution nano-information recording in a new hydrazone metal complex material
NASA Astrophysics Data System (ADS)
Zhang, Kui; Wei, Jingsong; Chen, Zhimin; Wei, Tao; Geng, Yongyou; Wang, Yang; Wu, Yiqun
2016-10-01
Laser thermal lithography, proposed a few years ago, has the advantages of breaking through the optical diffraction limit, operating in the far field and in air, and low production cost. In this paper, a new hydrazone metal complex is used as the laser thermal lithography material because it enables one-step fabrication of micro/nano structures without a mask or wet-etching process. Based on the laser thermal lithography method, super-resolution nano-information pits are directly written on the surface of hydrazone metal complex thin films. Pits with a minimum feature size of about 79 nm are successfully obtained, which is only about 1/7 of the writing spot size. Moreover, the reactive ion etching method can be applied to transfer the pits onto a silica substrate. These results suggest potential applications of the new material in high-density optical data storage and the semiconductor industry.
NASA Astrophysics Data System (ADS)
Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo
2009-03-01
We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition, we included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of the Jensen-Shannon divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
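The core quantifier combines the Jensen-Shannon divergence with entropies of word-frequency distributions; a minimal version on toy snippets (not the actual corpus) could read:

```python
import math
from collections import Counter

def word_dist(text):
    """Word-use frequency distribution of a text."""
    words = text.lower().split()
    return {w: c / len(words) for w, c in Counter(words).items()}

def entropy(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def jsd(p, q):
    """Jensen-Shannon divergence (bits) between two word distributions."""
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}
    return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

p = word_dist("to be or not to be")
q = word_dist("the lady doth protest too much")
print(jsd(p, p), jsd(p, q))  # 0 for identical texts; positive otherwise
```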
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher
2005-01-01
This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.
Markov and non-Markov processes in complex systems by the dynamical information entropy
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Gafarov, F. M.
1999-12-01
We consider the Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of two mutually dependent channels of entropy alternation, correlation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-term numeral and pattern human memory, and the effect of stress on the dynamical tapping test); random dynamics of RR intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and chaotic dynamics of the parameters of financial markets and ecological systems.
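The flavour of tracking entropy through time can be conveyed with a simple sliding-window Shannon entropy over a discretised signal; this is a stand-in sketch, not the DISE formalism itself, and the toy RR-interval series is invented.

```python
import math
from collections import Counter

def windowed_entropy(series, width=50, bins=8):
    """Shannon entropy (bits) of a sliding window over a discretised signal,
    a simple stand-in for tracking how information in a series (e.g. RR
    intervals) evolves in time."""
    lo, hi = min(series), max(series)
    sym = [min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1) for v in series]
    out = []
    for i in range(len(sym) - width + 1):
        counts = Counter(sym[i:i + width]).values()
        out.append(-sum(c / width * math.log2(c / width) for c in counts))
    return out

rr = [0.8 + 0.05 * math.sin(i / 3.0) for i in range(200)]  # toy RR-interval series
h = windowed_entropy(rr)
print(min(h), max(h))
```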
The newly expanded KSC Visitors Complex features a new ticket plaza, information center, exhibits an
NASA Technical Reports Server (NTRS)
1999-01-01
Part of the $13 million expansion of KSC's Visitor Complex, the new information center welcomes visitors to the Gateway to the Universe. The five large video walls provide an orientation video (shown here with photos of John Glenn on his historic Shuttle mission in October 1998), introduce the range of activities and exhibits, and honor the center's namesake, President John F. Kennedy. Other new additions include a walk-through Robot Scouts exhibit, a wildlife exhibit, and the film Quest for Life in a new 300-seat theater, plus an International Space Station-themed ticket plaza featuring a structure of overhanging solar panels and astronauts performing assembly tasks. The KSC Visitor Complex was inaugurated three decades ago and is now one of the top five tourist attractions in Florida. It is located on S.R. 407, east of I-95, within the Merritt Island National Wildlife Refuge.
NASA Astrophysics Data System (ADS)
Horn, Florian; Bayer, Florian; Pelzer, Georg; Rieger, Jens; Ritter, André; Weber, Thomas; Zang, Andrea; Michel, Thilo; Anton, Gisela
2014-03-01
Grating-based X-ray phase-contrast imaging is a promising imaging modality for increasing soft-tissue contrast in comparison to conventional attenuation-based radiography. Complementary and otherwise inaccessible information is provided by the dark-field image, which shows the sub-pixel-size granularity of the measured object. This could turn out to be especially useful in mammography, where tumourous tissue is associated with the presence of very small microcalcifications. In addition to the well-established image reconstruction process, an analysis method was introduced by Modregger [1] which is based on deconvolution of the underlying scattering distribution within a single pixel, revealing information about the sample. Subsequently, the different contrast modalities can be calculated from the scattering distribution. The method has already proved to deliver additional information in the higher moments of the scattering distribution and possibly reaches better image quality with respect to an increased contrast-to-noise ratio. Several measurements were carried out using melamine foams as phantoms. We analysed the dependency of the deconvolution-based method, with respect to the dark-field image, on different parameters such as dose, the number of iterations of the iterative deconvolution algorithm, and the dark-field signal. A disagreement was found in the reconstructed dark-field values between the FFT method and the iterative method. The resulting characteristics might be helpful in future applications.
NASA Astrophysics Data System (ADS)
Li, Si Jia; Cao, Xiang Yu; Xu, Li Ming; Zhou, Long Jian; Yang, Huan Huan; Han, Jiang Feng; Zhang, Zhao; Zhang, Di; Liu, Xiao; Zhang, Chen; Zheng, Yue Jun; Zhao, Yi
2016-11-01
We propose an ultra-broadband reflective metamaterial that controls the scattered electromagnetic fields based on a polarization converter. The unit cell of the polarization converter is composed of a three-layer substrate with a double metallic split-ring structure and a metal ground plane. The proposed polarization converter and its counterpart rotated by 90 degrees were employed as the “0” and “1” elements to design the digital reflective metamaterial. The numbers of “0” and “1” elements were chosen based on information entropy theory, and the optimized combinational format was then selected by a genetic optimization algorithm. The scattered electromagnetic fields are manipulated through destructive interference, which is attributed to the control of phase and amplitude by the proposed polarization converter. Simulated and experimental results indicate that the reflective metamaterial exhibits significant RCS reduction over an ultra-broad frequency band for both normal and oblique incidences.
Crolley, R.; Thompson, M.
2011-01-31
There has been a need for a faster and cheaper deployment model for information technology (IT) solutions to address waste management needs at US Department of Energy (DOE) complex sites for years. Budget constraints, challenges in deploying new technologies, frequent travel, and increased job demands for existing employees have prevented IT organizations from staying abreast of new technologies or deploying them quickly. Despite such challenges, IT organizations have added significant value to waste management handling through better worker safety, tracking, characterization, and disposition at DOE complex sites. Systems developed for site-specific missions have broad applicability to waste management challenges and in many cases have been expanded to meet other waste missions. Radio frequency identification (RFID) and global positioning satellite (GPS)-enabled solutions have reduced the risk of radiation exposure and safety risks. New web-based and mobile applications have enabled precision characterization and control of nuclear materials. These solutions have also improved operational efficiencies and shortened schedules, reduced cost, and improved regulatory compliance. Collaboration between US Department of Energy (DOE) complex sites is improving time to delivery and cost efficiencies for waste management missions with new information technologies (IT) such as wireless computing, global positioning satellite (GPS), and radio frequency identification (RFID). Integrated solutions developed at separate DOE complex sites by new technology Centers of Excellence (CoE) have increased material control and accountability, worker safety, and environmental sustainability. CoEs offer other DOE sister sites significant cost and time savings by leveraging their technology expertise in project scoping, implementation, and ongoing operations.
Sakhanenko, Nikita A.
2015-01-01
Information theory is valuable in multiple-variable analysis for being model-free and nonparametric, and for its modest sensitivity to undersampling. We previously introduced a general approach to finding multiple dependencies that provides accurate measures of levels of dependency for subsets of variables in a data set, which is significantly nonzero only if the subset of variables is collectively dependent. This is useful, however, only if we can avoid a combinatorial explosion of calculations for increasing numbers of variables. The proposed dependence measure for a subset of variables τ, the differential interaction information Δ(τ), has the property that some of the factors of Δ(τ) for subsets of τ are significantly nonzero when the full dependence includes more variables. We use this property to suppress the combinatorial explosion by following the “shadows” of multivariable dependency on smaller subsets. Rather than calculating the marginal entropies of all subsets at each degree level, we need to consider only calculations for subsets of variables with appropriate “shadows.” The number of calculations for n variables at a degree level of d therefore grows at a much smaller rate than the binomial coefficient (n, d), but depends on the parameters of the “shadows” calculation. This approach, avoiding a combinatorial explosion, enables the use of our multivariable measures on very large data sets. We demonstrate the method on simulated data sets, and characterize the effects of noise and sample numbers. In addition, we analyze a data set of a few thousand mutant yeast strains interacting with a few thousand chemical compounds. PMID:26335709
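The collective dependence described above can be illustrated with standard three-way interaction information, a simpler relative of the authors' Δ(τ) (this sketch is not their measure itself). The XOR triple is the canonical example of variables that are pairwise independent yet collectively dependent:

```python
from collections import Counter
from itertools import product
import math

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def interaction_information(xs, ys, zs):
    """Three-way interaction information:
    I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)."""
    return (entropy(xs) + entropy(ys) + entropy(zs)
            - entropy(list(zip(xs, ys)))
            - entropy(list(zip(xs, zs)))
            - entropy(list(zip(ys, zs)))
            + entropy(list(zip(xs, ys, zs))))

# XOR triple: every pair of variables is independent, yet the three
# together are fully (collectively) dependent.
xs, ys = zip(*product([0, 1], repeat=2))      # uniform over (x, y)
zs = tuple(x ^ y for x, y in zip(xs, ys))
print(interaction_information(xs, ys, zs))    # -1.0 (purely synergistic)
```

A significantly nonzero value on a subset whose lower-order dependencies vanish is exactly the kind of signal the "shadow" search exploits.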
ERIC Educational Resources Information Center
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S.
1997-01-01
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
NASA Astrophysics Data System (ADS)
Greisch, Jean Francois; Harding, Michael E.; Chmela, Jiri; Klopper, Willem M.; Schooss, Detlef; Kappes, Manfred M.
2016-06-01
The applications of lanthanoid complexes range from photovoltaics and light-emitting diodes to quantum memories and biological assays. Rationalizing their design requires a thorough understanding of intramolecular processes such as energy transfer, charge transfer, and non-radiative decay involving their subunits. Characterization of the excited states of such complexes benefits considerably from mass spectrometric methods, since the associated optical transitions and processes are strongly affected by stoichiometry, symmetry, and overall charge state. We report herein spectroscopic measurements on ensembles of ions trapped in the gas phase and soft-landed in neon matrices. Their interpretation is considerably facilitated by direct comparison with computations. The combination of energy- and time-resolved measurements on isolated species with density functional as well as ligand-field and Franck-Condon computations enables us to infer structural as well as dynamical information about the species studied. The approach is first illustrated for sets of model lanthanoid complexes whose structure and electronic properties are systematically varied via the substitution of one component (the lanthanoid or an alkali/alkaline-earth ion): (i) the systematic dependence of ligand-centered phosphorescence on the lanthanoid(III) promotion energy and its impact on sensitization, and (ii) structural changes induced by the substitution of alkali or alkaline-earth ions, in relation to structures inferred using ion mobility spectroscopy. The temperature dependence of sensitization is briefly discussed. The focus is then shifted to measurements involving europium complexes with doxycycline, an antibiotic of the tetracycline family. Besides discussing the complexes' structural and electronic features, we report on their use to monitor enzymatic processes involving hydrogen peroxide or biologically relevant molecules such as adenosine triphosphate (ATP).
Cognitive landscape and information: new perspectives to investigate the ecological complexity.
Farina, Almo; Bogaert, Jan; Schipani, Ileana
2005-01-01
Landscape ecology deals with ecological processes in their spatial context. It shares with ecosystem ecology primacy among emergent ecological disciplines. The aim of this contribution is to approach the definition of landscapes using cognitive paradigms. The neutral-based landscape (NbL), individual-based landscape (IbL) and observer-based landscape (ObL) are defined to explore the cognitive mechanisms. NbL represents the undecoded component of the cognitive matrix. IbL is the portion of landscape perceived by the biological sensors. ObL is the part of the cognitive matrix perceived using the cultural background of the observer. The perceived landscape (PL) is composed of the sum of these three approaches to landscape perception. Two further types of information (sensu Stonier) are recognized in this process of perception: the compressed information, as it is present inside the cognitive matrix, and the decompressed information that will structure the PL when a semiotic relationship operates between the organisms and the cognitive matrix. Scaling properties of these three PL components are recognized in space and time. In NbL scale seems irrelevant, in IbL perception is filtered by organismic scaling, and in ObL the spatio-temporal scale seems of major importance. In short, perception is scale-dependent. Combining the cognitive approach with information paradigms to study landscapes opens new perspectives in the interpretation of ecological complexity.
Zhang, Hai-Feng; Xie, Jia-Rong; Tang, Ming; Lai, Ying-Cheng
2014-12-01
The interplay between individual behaviors and epidemic dynamics in complex networks is a topic of recent interest. In particular, individuals can obtain different types of information about the disease and respond by altering their behaviors, and this can affect the spreading dynamics, possibly in a significant way. We propose a model where individuals' behavioral response is based on a generic type of local information, i.e., the number of neighbors that have been infected with the disease. Mathematically, the response can be characterized by a reduction in the transmission rate by a factor that depends on the number of infected neighbors. Utilizing the standard susceptible-infected-susceptible and susceptible-infected-recovered dynamical models for epidemic spreading, we derive a theoretical formula for the epidemic threshold and provide numerical verification. Our analysis places on a solid quantitative footing the intuition that individual behavioral response can in general suppress epidemic spreading. Furthermore, we find that the hub nodes play the role of a "double-edged sword" in that they can either suppress or promote an outbreak, depending on their responses to the epidemic, providing additional support for the idea that these nodes are key to controlling epidemic spreading in complex networks.
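A minimal simulation sketch of this class of model, assuming a discrete-time SIS process and one particular (hypothetical) response function in which each contact's transmission probability shrinks as (1 − alpha)^m with m infected neighbours; the paper's actual rate-reduction factor, network, and parameters may differ:

```python
import random

def er_graph(n, p, rng):
    """Erdos-Renyi adjacency lists built with the supplied RNG."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def sis_step(adj, infected, beta, mu, alpha):
    """One discrete-time SIS update with behavioural response: a susceptible
    node with m infected neighbours sees a per-contact transmission
    probability beta * (1 - alpha) ** m (assumed form); infected nodes
    recover with probability mu."""
    new_infected = set()
    for node, neighbours in enumerate(adj):
        if node in infected:
            if random.random() > mu:                 # fails to recover
                new_infected.add(node)
        else:
            m = sum(1 for nb in neighbours if nb in infected)
            p_contact = beta * (1 - alpha) ** m      # response suppresses transmission
            if m and random.random() < 1 - (1 - p_contact) ** m:
                new_infected.add(node)
    return new_infected

random.seed(1)
adj = er_graph(500, 0.02, random)
infected = set(range(25))                            # seed 5% infected
for _ in range(50):
    infected = sis_step(adj, infected, beta=0.2, mu=0.5, alpha=0.6)
print(f"endemic fraction with response: {len(infected) / 500:.2f}")
```

Sweeping `alpha` from 0 upward and recording the stationary prevalence gives a numerical picture of how the behavioural response shifts the effective epidemic threshold.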
Luo, Jiawei; Kuang, Ling
2014-10-01
Predicting essential proteins is highly significant because organisms cannot survive or develop if even one of these proteins is missing. Improvements in high-throughput technologies have resulted in a large number of available protein-protein interactions. By taking advantage of these interaction data, researchers have proposed many computational methods to identify essential proteins at the network level. Most of these approaches focus on the topology of a static protein interaction network. However, the protein interaction network changes with time and condition; this important inherent dynamics of the protein interaction network was overlooked by previous methods. In this paper, we introduce a new method named CDLC to predict essential proteins by integrating the dynamic local average connectivity and the in-degree of proteins in complexes. CDLC is applied to the protein interaction network of Saccharomyces cerevisiae. The results show that CDLC outperforms five other methods (Degree Centrality (DC), the Local Average Connectivity-based method (LAC), Sum of ECC (SoECC), PeC and Co-Expression Weighted by Clustering coefficient (CoEWC)). In particular, CDLC improves prediction precision by more than 45% compared with the DC method. CDLC is also compared with the latest algorithm, CEPPK, and a higher precision is achieved by CDLC. CDLC is available in the Supplementary materials. The default settings of the active threshold and alpha-parameter are 0.8 and 0.1, respectively.
Konovalova, Anna; Mitchell, Angela M; Silhavy, Thomas J
2016-01-01
Lipoprotein RcsF is the OM component of the Rcs envelope stress response. RcsF exists in complexes with β-barrel proteins (OMPs) allowing it to adopt a transmembrane orientation with a lipidated N-terminal domain on the cell surface and a periplasmic C-terminal domain. Here we report that mutations that remove BamE or alter a residue in the RcsF trans-lumen domain specifically prevent assembly of the interlocked complexes without inactivating either RcsF or the OMP. Using these mutations we demonstrate that these RcsF/OMP complexes are required for sensing OM outer leaflet stress. Using mutations that alter the positively charged surface-exposed domain, we show that RcsF monitors lateral interactions between lipopolysaccharide (LPS) molecules. When these interactions are disrupted by cationic antimicrobial peptides, or by the loss of negatively charged phosphate groups on the LPS molecule, this information is transduced to the RcsF C-terminal signaling domain located in the periplasm to activate the stress response. DOI: http://dx.doi.org/10.7554/eLife.15276.001 PMID:27282389
Borthwick, Kenneth M; Smelser, Diane T; Bock, Jonathan A; Elmore, James R; Ryer, Evan J; Ye, Zi; Pacheco, Jennifer A.; Carrell, David S.; Michalkiewicz, Michael; Thompson, William K; Pathak, Jyotishman; Bielinski, Suzette J; Denny, Joshua C; Linneman, James G; Peissig, Peggy L; Kho, Abel N; Gottesman, Omri; Parmar, Harpreet; Kullo, Iftikhar J; McCarty, Catherine A; Böttinger, Erwin P; Larson, Eric B; Jarvik, Gail P; Harley, John B; Bajwa, Tanvir; Franklin, David P; Carey, David J; Kuivaniemi, Helena; Tromp, Gerard
2015-01-01
Background and objective: We designed an algorithm to identify abdominal aortic aneurysm cases and controls from electronic health records to be shared and executed within the “electronic Medical Records and Genomics” (eMERGE) Network. Materials and methods: Structured Query Language (SQL) was used to script the algorithm, utilizing “Current Procedural Terminology” and “International Classification of Diseases” codes together with demographic and encounter data to classify individuals as case, control, or excluded. The algorithm was validated using blinded manual chart review at three eMERGE Network sites and one non-eMERGE Network site. Validation comprised evaluation of an equal number of predicted cases and controls selected at random from the algorithm predictions. After validation at the three eMERGE Network sites, the remaining eMERGE Network sites performed verification only. Finally, the algorithm was implemented as a workflow in the Konstanz Information Miner, which represented the logic graphically while retaining intermediate data for inspection at each node. The algorithm was configured to be independent of specific access to data and was exportable (without data) to other sites. Results: The algorithm demonstrated positive predictive values (PPV) of 92.8% (CI: 86.8-96.7) and 100% (CI: 97.0-100) for cases and controls, respectively. It also performed well outside the eMERGE Network. Implementation of the transportable executable algorithm as a Konstanz Information Miner workflow required much less effort than implementation from pseudo code, and ensured that the logic was as intended. Discussion and conclusion: This ePhenotyping algorithm identifies abdominal aortic aneurysm cases and controls from the electronic health record with the high case and control PPV necessary for research purposes, can be disseminated easily, and can be applied to high-throughput genetic and other studies. PMID:27054044
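The case/control/excluded logic of such an ePhenotyping algorithm can be sketched in a few lines. The code sets and thresholds below are illustrative placeholders, not the validated eMERGE lists:

```python
# Hypothetical code sets -- the real algorithm's CPT/ICD lists live in the
# published appendix and differ from these examples.
AAA_DX = {"441.3", "441.4"}      # ICD-9: ruptured / unruptured abdominal aortic aneurysm
AAA_PROC = {"34802", "34804"}    # CPT: endovascular AAA repair procedures
EXCLUDE_DX = {"441.0"}           # e.g. aortic dissection, which confounds case status

def classify(patient):
    """Classify one EHR record as 'case', 'control', or 'excluded'.
    `patient` is a dict with 'dx' and 'cpt' code sets plus 'age' and
    'n_visits' (encounter count) fields."""
    if patient["dx"] & EXCLUDE_DX:
        return "excluded"
    if patient["dx"] & AAA_DX or patient["cpt"] & AAA_PROC:
        return "case"
    # Controls need enough longitudinal data to make absence of the
    # diagnosis meaningful (placeholder thresholds).
    if patient["age"] >= 50 and patient["n_visits"] >= 2:
        return "control"
    return "excluded"

print(classify({"dx": {"441.4"}, "cpt": set(), "age": 71, "n_visits": 5}))  # case
```

Expressing the logic as a pure function over code sets is what makes the algorithm portable: each site maps its local schema onto the `dx`/`cpt` inputs and the classification itself never changes.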
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and concurrently simplest models of both the corresponding sub-systems and the system as a whole. In recent works, two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows one to construct nonlinear dynamic modes, but neglects delays of correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls essentially more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and increases with the growth of the mode time scale. In this report we combine these two methods in such a way that the developed algorithm allows constructing nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) multi-hundred-year globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different methods of decomposition and discuss the ability of nonlinear spatio-temporal modes to support construction of adequate and concurrently simplest ("optimal") models of climate systems.
Bassett, Danielle S; Greenfield, Daniel L; Meyer-Lindenberg, Andreas; Weinberger, Daniel R; Moore, Simon W; Bullmore, Edward T
2010-04-22
Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
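Rent's rule relates the number of boundary connections E of a partition block to the number of processing elements N it contains as E ≈ k·N^p. Given (N, E) samples from a recursive partition of a network, the Rent exponent p can be estimated by a log-log least-squares fit. A sketch on synthetic, noise-free samples (the graph-partitioning step that would produce such samples from a real connectome or circuit is omitted):

```python
import math

def rent_exponent(blocks):
    """Least-squares fit of log E = log k + p * log N over (N, E) samples,
    where N = nodes in a partition block and E = edges crossing its
    boundary. Returns the estimated Rent exponent p."""
    xs = [math.log(n) for n, _ in blocks]
    ys = [math.log(e) for _, e in blocks]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic check: samples drawn exactly from E = 3 * N^0.75 recover p = 0.75.
blocks = [(n, 3 * n ** 0.75) for n in (8, 16, 32, 64, 128, 256)]
print(round(rent_exponent(blocks), 2))  # 0.75
```

On real partitioning data the samples are noisy and the fitted p (between 0 and 1) indexes the interconnect topology's dimensionality, which is how the paper links brain networks to VLSI scaling.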
Power-law ansatz in complex systems: Excessive loss of information
NASA Astrophysics Data System (ADS)
Tsai, Sun-Ting; Chang, Chin-De; Chang, Ching-Hao; Tsai, Meng-Xue; Hsu, Nan-Jung; Hong, Tzay-Ming
2015-12-01
The ubiquity of power-law relations in empirical data reflects physicists' love of simple laws and of uncovering common causes among seemingly unrelated phenomena. However, many reported power laws lack statistical support and mechanistic backing, not to mention that discrepancies with real data are often explained away as corrections due to finite size or other variables. We propose a simple experiment and rigorous statistical procedures to look into these issues. Making use of the fact that the occurrence rate and pulse intensity of crumple sound obey a power law with an exponent that varies with material, we simulate a complex system with two driving mechanisms by crumpling two different sheets together. The probability function of the crumple sound is found to transition from two power-law terms to a bona fide power law as compaction increases. In addition to showing the vicinity of these two distributions in the phase space, this observation nicely demonstrates the effect of interactions in bringing about a subtle change in macroscopic behavior, and shows that more information may be retrieved if the data are subject to sorting. Our analyses are based on the Akaike information criterion, which is a direct measurement of information loss and emphasizes the need to strike a balance between model simplicity and goodness of fit. As a show of force, the Akaike information criterion also found the Gutenberg-Richter law for earthquakes, the scale-free model for a brain functional network, a two-dimensional sandpile, and solar flare intensity to suffer an excessive loss of information. They resemble more the crumpled-together ball at low compactions, in that there appear to be two driving mechanisms that take turns occurring.
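The AIC machinery the authors rely on is simple to state: AIC = 2k − 2 ln L̂ for a model with k parameters and maximized likelihood L̂, and the model with the smaller AIC loses less information. A sketch comparing a maximum-likelihood continuous power law against a lognormal on data that are genuinely lognormal; this reproduces the paper's point in miniature, not its datasets:

```python
import math
import random

def aic_power_law(data, xmin):
    """AIC of the MLE continuous power law p(x) = ((a-1)/xmin)(x/xmin)^-a,
    x >= xmin, with exponent a fitted by the Hill/Clauset estimator."""
    n = len(data)
    s = sum(math.log(x / xmin) for x in data)
    a = 1 + n / s                                    # MLE exponent
    ll = n * math.log((a - 1) / xmin) - a * s        # maximized log-likelihood
    return 2 * 1 - 2 * ll                            # one free parameter

def aic_lognormal(data):
    """AIC of the MLE lognormal (fit mean and variance of log x)."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    m = sum(logs) / n
    s2 = sum((l - m) ** 2 for l in logs) / n
    ll = -0.5 * n * (math.log(2 * math.pi * s2) + 1) - sum(logs)
    return 2 * 2 - 2 * ll                            # two free parameters

random.seed(42)
data = [math.exp(random.gauss(1.0, 0.5)) for _ in range(2000)]   # truly lognormal
xmin = min(data)
print(aic_lognormal(data) < aic_power_law(data, xmin))  # True: power law loses more information
```

This is the same verdict structure the authors apply to the Gutenberg-Richter law and the other datasets: fit each candidate by maximum likelihood, then let the AIC arbitrate between simplicity and goodness of fit.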
Gómez-Hernández, J Jaime
2006-01-01
It is difficult to define complexity in modeling. Complexity is often associated with uncertainty, since modeling uncertainty is an intrinsically difficult task. However, modeling uncertainty does not necessarily require complex models, in the sense of a model requiring an unmanageable number of degrees of freedom to characterize the aquifer. The relationship between complexity, uncertainty, heterogeneity, and stochastic modeling is not simple. Aquifer models should be able to quantify the uncertainty of their predictions, which can be done using stochastic models that produce heterogeneous realizations of aquifer parameters. This is the type of complexity addressed in this article.
The utility of accurate mass and LC elution time information in the analysis of complex proteomes
Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Smith, Richard D.
2007-01-01
Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/− 5 ppm and 1 ppm) and NET value (no constraint, +/− 0.05 and 0.01 on a 0–1 NET scale). The set of peptides both underestimates actual biological complexity, due to the lack of specific modifications, and overestimates the expected complexity, since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time are representative of a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/− 1 ppm and elution time measurements within +/− 0.01 NET. PMID:15979333
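The uniqueness test sketched in this abstract — a peptide is confidently identifiable when no other candidate shares both its accurate mass (within a ppm tolerance) and its NET (within an absolute tolerance) — can be illustrated directly. The peptide masses and NET values below are invented for the example, not taken from the paper.

```python
def is_unique(idx, peptides, ppm_tol=1.0, net_tol=0.01):
    # A (mass, NET) fingerprint is unique if no other peptide falls
    # inside both the mass window (+/- ppm_tol) and the NET window (+/- net_tol).
    mass, net = peptides[idx]
    for j, (m, t) in enumerate(peptides):
        if j != idx and abs(m - mass) / mass * 1e6 <= ppm_tol and abs(t - net) <= net_tol:
            return False
    return True

# Hypothetical fingerprints: (monoisotopic mass in Da, NET on the 0-1 scale)
peps = [(800.4123, 0.31), (800.4125, 0.55), (800.4124, 0.312), (1200.6001, 0.70)]
print([is_unique(i, peps) for i in range(len(peps))])  # → [False, True, False, True]
```

The first and third peptides collide (0.12 ppm apart, NET within 0.002), while the second is rescued by its NET despite being within 1 ppm of both — the complementarity the abstract describes.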
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting the traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule for seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for the ten different scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity respectively. Compared with enumeration based on non-dominated sorting to solve the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
Carvalho, Benilton S.; Bilevicius, Elizabeth; Alvim, Marina K. M.; Lopes-Cendes, Iscia
2017-01-01
Mesial temporal lobe epilepsy is the most common form of adult epilepsy in surgical series. Currently, the only characteristic used to predict poor response to clinical treatment in this syndrome is the presence of hippocampal sclerosis. Single nucleotide polymorphisms (SNPs) located in genes encoding drug transporter and metabolism proteins could influence response to therapy. Therefore, we aimed to evaluate whether combining information from clinical variables as well as SNPs in candidate genes could improve the accuracy of predicting response to drug therapy in patients with mesial temporal lobe epilepsy. For this, we divided 237 patients into two groups: 75 responsive and 162 refractory to antiepileptic drug therapy. We genotyped 119 SNPs in ABCB1, ABCC2, CYP1A1, CYP1A2, CYP1B1, CYP2C9, CYP2C19, CYP2D6, CYP2E1, CYP3A4, and CYP3A5 genes. We used 98 additional SNPs to evaluate population stratification. We assessed a first scenario using only clinical variables and a second one including SNP information. The random forests algorithm combined with leave-one-out cross-validation was used to identify the best predictive model in each scenario and compared their accuracies using the area under the curve statistic. Additionally, we built a variable importance plot to present the set of most relevant predictors on the best model. The selected best model included the presence of hippocampal sclerosis and 56 SNPs. Furthermore, including SNPs in the model improved accuracy from 0.4568 to 0.8177. Our findings suggest that adding genetic information provided by SNPs, located on drug transport and metabolism genes, can improve the accuracy for predicting which patients with mesial temporal lobe epilepsy are likely to be refractory to drug treatment, making it possible to identify patients who may benefit from epilepsy surgery sooner. PMID:28052106
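The evaluation scheme in the abstract above — compare a clinical-only scenario with a clinical-plus-SNP scenario under leave-one-out cross-validation — can be sketched with synthetic data. As an assumption for brevity, a 1-nearest-neighbour classifier stands in for the random forest, and the cohort size, features, and effect sizes are invented.

```python
import math
import random

def loo_accuracy(X, y):
    # Leave-one-out cross-validation with a 1-nearest-neighbour classifier
    # standing in for the random forest used in the study.
    hits = 0
    for i in range(len(X)):
        best, pred = float("inf"), None
        for j in range(len(X)):
            if j != i and math.dist(X[i], X[j]) < best:
                best, pred = math.dist(X[i], X[j]), y[j]
        hits += pred == y[i]
    return hits / len(X)

random.seed(1)
y = [random.randint(0, 1) for _ in range(60)]                 # responsive vs refractory
clinical = [[random.gauss(0, 1)] for _ in y]                  # uninformative on its own
combined = [[c[0], yi + random.gauss(0, 0.3)] for c, yi in zip(clinical, y)]
# Adding an informative genetic feature raises leave-one-out accuracy
print(loo_accuracy(clinical, y), loo_accuracy(combined, y))
```

The comparison mirrors the study's jump in predictive accuracy once SNP information enters the model; the paper additionally reports AUC rather than raw accuracy and ranks predictors by variable importance.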
Using complex networks towards information retrieval and diagnostics in multidimensional imaging
Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen
2015-01-01
We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks to multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices, where network parameters derived from fluctuations act as effective discriminators and diagnostic markers. PMID:26626047
Maximal entropy coverings and the information dimension of a complex network
NASA Astrophysics Data System (ADS)
Rosenberg, Eric
2017-02-01
Computing the information dimension dI of a complex network G requires covering G by a minimal collection of "boxes" of size s to obtain a set of probabilities, computing the entropy H(s), and quantifying how H(s) scales with log s. We show that to determine whether dI ≤ dB holds for G, where dB is the box counting dimension, it is not sufficient to determine a minimal covering for each s. We introduce the new notion of a maximal entropy minimal covering of G, and a corresponding new definition of dI. The use of maximal entropy minimal coverings in many cases enhances the ability to compute dI.
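The scaling computation described here — take the entropy H(s) of the box probabilities at each box size s and read dI off the slope of H(s) against log(1/s) — can be sketched as follows. The coverings below are hypothetical, with uniform box probabilities so that H(s) is simply the log of the number of boxes.

```python
import math

def entropy(probs):
    # Shannon entropy (natural log) of a box-probability distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def information_dimension(coverings):
    # coverings maps box size s -> list of box probabilities.
    # d_I is the least-squares slope of H(s) versus log(1/s).
    xs = [math.log(1.0 / s) for s in coverings]
    ys = [entropy(p) for p in coverings.values()]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Hypothetical minimal coverings at three box sizes, uniform probabilities:
# 64 boxes at s=1, 16 at s=2, 4 at s=4, so each doubling of s halves H(s)/ln 2.
cov = {1: [1 / 64] * 64, 2: [1 / 16] * 16, 4: [1 / 4] * 4}
print(round(information_dimension(cov), 6))  # → 2.0
```

The paper's point is precisely that the probabilities fed into H(s) depend on *which* minimal covering is chosen; a maximal entropy minimal covering fixes that ambiguity before this slope is taken.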
Zavala-Yoé, Ricardo; Ramírez-Mendoza, Ricardo; Cordero, Luz M
2015-01-01
Epilepsy imposes a major burden at the global level. Worldwide, about 1% of people suffer from epilepsy, and 30% of them (0.3%) are resistant to anticonvulsants. Among these, some childhood epilepsies, such as Doose syndrome (DS), are particularly difficult to deal with. Doose syndrome is a very complicated type of childhood cryptogenic refractory epilepsy (CCRE), traditionally studied through the analysis of complex electroencephalograms (EEG) by neurologists. CCRE are affections which evolve over the course of many years, and customarily, questions such as in which year the child was healthiest (fewest seizures), and in which region of the brain (channel) the affection has progressed most negatively, are very difficult or even impossible to answer, as a result of the quantity of EEG recorded over the patient's life. These questions can now be answered by applying entropies to the massive information contained in many EEG. CCRE cannot always be cured and, to the best of our knowledge, have not been investigated from a mathematical viewpoint. In this work, a set of 80 time series (distributed equally among four yearly recorded EEG) is studied in order to help pediatric neurologists understand better the evolution of this syndrome in the long term. Our contribution is to support multichannel long-term analysis of CCRE by observing simple entropy plots instead of studying long rolls of traditional EEG graphs. A comparative analysis among approximate entropy, sample entropy, our versions of multiscale entropy (MSE) and composite multiscale entropy revealed that our refined MSE was the most convenient complexity measure to describe DS. Additionally, a new entropy parameter is proposed, referred to as bivariate MSE (BMSE). Such BMSE will provide graphical information over a much longer term than MSE.
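Multiscale entropy, the family of measures compared in this abstract, coarse-grains the signal at each scale and then takes its sample entropy. The following is a minimal sketch of that pipeline on synthetic white noise; the tolerance r, embedding dimension m, and test signal are illustrative choices, not the paper's settings.

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): B counts pairs of length-m templates matching
    # within tolerance r (Chebyshev distance), A the same for length m+1.
    def matches(mm):
        templ = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(
            1
            for i in range(len(templ))
            for j in range(i + 1, len(templ))
            if max(abs(a - b) for a, b in zip(templ[i], templ[j])) <= r
        )
    return -math.log(matches(m + 1) / matches(m))

def multiscale_entropy(x, scales):
    out = []
    for s in scales:
        # Coarse-grain: average non-overlapping windows of length s
        cg = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
        out.append(sample_entropy(cg))
    return out

random.seed(2)
noise = [random.gauss(0, 1) for _ in range(400)]
mse = multiscale_entropy(noise, scales=[1, 2])
print(mse)
```

Plotting such per-scale entropies per EEG channel and per year is the kind of "simple entropy plot" the authors propose in place of reading raw EEG rolls; their refined MSE and bivariate BMSE variants elaborate on this basic recipe.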
Synchronization, TIGoRS, and Information Flow in Complex Systems: Dispositional Cellular Automata.
Sulis, William H
2016-04-01
Synchronization has a long history in physics, where it refers to the phase matching of two identical oscillators. This notion has been extensively studied in physics as well as in biology, where it has been applied to such widely varying phenomena as the flashing of fireflies and the firing of neurons in the brain. Human behaviour, however, may be recurrent, but it is not oscillatory, even though many physiological systems do exhibit oscillatory tendencies. Moreover, much of human behaviour is collaborative and cooperative, where the individual behaviours may be distinct yet contemporaneous (if not simultaneous) and taken collectively express some functionality. In the context of behaviour, the important aspect is the repeated co-occurrence in time of behaviours that facilitate the propagation of information or of functionality, regardless of whether or not these behaviours are similar or identical. An example of this weaker notion of synchronization is transient induced global response synchronization (TIGoRS). Previous work has shown that TIGoRS is a ubiquitous phenomenon among complex systems, enabling them to stably parse environmental transients into salient units to which they stably respond. This leads to the notion of Sulis machines, which emergently generate a primitive linguistic structure through their dynamics. This article reviews the notion of TIGoRS and its expression in several complex systems models including tempered neural networks, driven cellular automata and cocktail party automata. The emergent linguistics of Sulis machines are discussed. A new class of complex systems model, the dispositional cellular automaton, is introduced. A new metric for TIGoRS, the excess synchronization, is introduced and applied to the study of TIGoRS in dispositional cellular automata. It is shown that these automata exhibit a nonlinear synchronization response to certain perturbing transients.
Sittig, Dean F.; Singh, Hardeep
2011-01-01
Conceptual models have been developed to address challenges inherent in studying health information technology (HIT). This manuscript introduces an 8-dimensional model specifically designed to address the socio-technical challenges involved in design, development, implementation, use, and evaluation of HIT within complex adaptive healthcare systems. The 8 dimensions are not independent, sequential, or hierarchical, but rather are interdependent and interrelated concepts similar to compositions of other complex adaptive systems. Hardware and software computing infrastructure refers to equipment and software used to power, support, and operate clinical applications and devices. Clinical content refers to textual or numeric data and images that constitute the “language” of clinical applications. The human computer interface includes all aspects of the computer that users can see, touch, or hear as they interact with it. People refers to everyone who interacts in some way with the system, from developer to end-user, including potential patient-users. Workflow and communication are the processes or steps involved in assuring that patient care tasks are carried out effectively. Two additional dimensions of the model are internal organizational features (e.g., policies, procedures, and culture) and external rules and regulations, both of which may facilitate or constrain many aspects of the preceding dimensions. The final dimension is measurement and monitoring, which refers to the process of measuring and evaluating both intended and unintended consequences of HIT implementation and use. We illustrate how our model has been successfully applied in real-world complex adaptive settings to understand and improve HIT applications at various stages of development and implementation. PMID:20959322
Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X.
2015-01-01
Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while network information gathering is maximized. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput performance are also achieved as compared to the well-known traditional clustering protocol LEACH and recent energy harvesting aware clustering protocols. PMID:26712764
NASA Technical Reports Server (NTRS)
Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.
2011-01-01
Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Applications output included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provide essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.
NASA Astrophysics Data System (ADS)
Hou, W.; Wang, J.; Xu, X.; Leitch, J. W.; Delker, T.; Chen, G.
2015-12-01
This paper includes a series of studies that aim to develop a hyperspectral remote sensing technique for retrieving aerosol properties from GEO-TASO (Geostationary Trace gas and Aerosol Sensor Optimization), a newly developed instrument that measures the radiation at 0.4-0.7 μm wavelengths at a spectral resolution of 0.02 nm. The GEO-TASO instrument is a prototype of TEMPO (Tropospheric Emissions: Monitoring of Pollution), which will be launched in 2022 to measure aerosols, O3, and other trace gases from a geostationary orbit over North America. The theoretical framework of the optimized inversion algorithm and the information content analysis, such as the degrees of freedom for signal (DFS), will be discussed for hyperspectral remote sensing in visible bands, as well as the application to GEO-TASO, which has been mounted on the NASA HU-25C aircraft and has gathered several days of airborne hyperspectral data for our studies. Based on optimization theory, and in contrast to the traditional lookup table (LUT) retrieval technique, our inversion method retrieves the aerosol parameters and surface reflectance simultaneously: UNL-VRTM (UNified Linearized Radiative Transfer Model) is employed for the forward model and Jacobian calculations, while principal component analysis (PCA) is used to constrain the hyperspectral surface reflectance. The information content analysis provides theoretical guidance for the practical inversion study about which aerosol parameters can be retrieved from GEO-TASO hyperspectral remote sensing. The inversion is conducted iteratively until the modeled spectral radiance fits the GEO-TASO measurements, using a quasi-Newton method called L-BFGS-B (limited-memory BFGS with bound constraints). Finally, the retrieval results for aerosol optical depth and other aerosol parameters are compared against those retrieved by AERONET and/or in situ measurements such as those from DISCOVER-AQ during the aircraft campaign.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Williams, Diane L; Minshew, Nancy J; Goldstein, Gerald
2015-10-01
More than 20 years ago, Minshew and colleagues proposed the Complex Information Processing model of autism in which the impairment is characterized as a generalized deficit involving multiple modalities and cognitive domains that depend on distributed cortical systems responsible for higher order abilities. Subsequent behavioral work revealed a related dissociation between concept formation and concept identification in autism suggesting the lack of an underlying organizational structure to manage increases in processing loads. The results of a recent study supported the impact of this relative weakness in conceptual reasoning on adaptive functioning in children and adults with autism. In this study, we provide further evidence of the difficulty relatively able older adolescents and adults with autism have with conceptual reasoning and provide evidence that this characterizes their difference from age- and ability-matched controls with typical development better than their differences in language. For verbal adults with autism, language may serve as a bootstrap or compensatory mechanism for learning but cannot overcome an inherent weakness in concept formation that makes information processing challenging as task demands increase.
Isotopically non-stationary metabolic flux analysis: complex yet highly informative.
Wiechert, Wolfgang; Nöh, Katharina
2013-12-01
Metabolic flux analysis (MFA) using isotopic tracers aims at the experimental determination of in vivo reaction rates (fluxes). In recent years, the well-established 13C-MFA method based on metabolic and isotopic steady state was extended to INST-MFA (isotopically non-stationary MFA), which is performed in a transient labeling state. INST-MFA offers short-time experiments with a maximal information gain, and can moreover be applied to a wider range of growth conditions or organisms. Some of these conditions are not accessible by conventional methods. This comes at the price of significant methodological complexity involving high-frequency sampling and quenching, precise analysis of many samples and an extraordinary computational effort. This review gives a brief overview of basic principles, experimental workflows, and recent progress in this field. Special emphasis is laid on the trade-off between total effort and information gain, particularly on the suitability of INST-MFA for certain types of biological questions. In order to integrate INST-MFA as a viable method into the toolbox of MFA, some major challenges must be addressed in the coming years. These are discussed in the outlook.
Williams, Jonathan P; Lough, Julie Ann; Campuzano, Iain; Richardson, Keith; Sadler, Peter J
2009-11-01
We report the development of an enhanced algorithm for the calculation of collision cross-sections in combination with Travelling-Wave ion mobility mass spectrometry technology and its optimisation and evaluation through the analysis of an organoruthenium anticancer complex [(η6-biphenyl)Ru(II)(en)Cl]+. Excellent agreement was obtained between the experimentally determined and theoretically determined collision cross-sections of the complex and its major product ion formed via collision-induced dissociation. Collision cross-sections were also experimentally determined for adducts of this ruthenium complex with the single-stranded oligonucleotide hexamer d(CACGTG). Ion mobility tandem mass spectrometry measurements have allowed the binding sites for ruthenium on the oligonucleotide to be determined.
Online Community Detection for Large Complex Networks
Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian
2014-01-01
Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge, in the order in which the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm, while it gives competitive performance. PMID:25061683
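The modularity score such algorithms optimize can be computed directly from its standard definition, Q = Σ_c [e_c/m − (d_c/2m)²], where e_c is the number of intra-community edges and d_c the total degree of community c. The toy graph below (two triangles joined by one bridge edge) is an invented example, not one of the paper's 11 data sets.

```python
from collections import defaultdict

def modularity(edges, community):
    # Q = sum over communities c of e_c/m - (d_c / 2m)^2
    m = len(edges)
    intra = sum(1 for u, v in edges if community[u] == community[v])
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    ctot = defaultdict(int)
    for node, d in deg.items():
        ctot[community[node]] += d
    return intra / m - sum(t * t for t in ctot.values()) / (4 * m * m)

# Two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}   # split at the bridge
bad = {n: "a" for n in range(6)}                          # everything together
print(round(modularity(edges, good), 3), modularity(edges, bad))  # → 0.357 0.0
```

An online algorithm in the abstract's sense maintains such a partition incrementally, adjusting community labels in constant time per arriving edge instead of recomputing Q from scratch.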
NASA Astrophysics Data System (ADS)
Chaikovskaya, L.; Dubovik, O.; Litvinov, P.; Grudo, J.; Lopatsin, A.; Chaikovsky, A.; Denisov, S.
2015-01-01
Inversion algorithms and program packages recently created for processing data from ground-based radiometer spectral measurements, along with lidar multi-wavelength measurements, are extremely multiparametric. Therefore, it is very important to develop an efficient program module for computing the functions that model sun-radiometer measurements in the inversion procedure. In this paper, we present the analytical version of such an efficient algorithm and an analytical code in C++ designed for algorithm testing. The code computes multiple scattering of sunlight in the atmosphere. The output data are the angular patterns of radiance and linear polarization parameters at a preselected altitude. The atmosphere model with mixed aerosol and molecular scattering is approximated as a homogeneous atmosphere model. The algorithm testing has been carried out by comparing the computed data with accurate data obtained with a discrete-ordinate code. Errors in the estimates of downward radiance above the Earth's surface turned out to be within 10%-15%. The concept of the analytical solution construction was taken from the scalar problem of solar radiation transfer in the atmosphere, for which an approximate analytical solution was developed. Taking into account the fact that aerosol phase functions are highly forward elongated, the multi-component method of solving vector transfer equations and the small-angle approximation have been used. The generalization of the scalar approach to the polarization parameters is described.
McDonough, Ian M.; Nashiro, Kaoru
2014-01-01
An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent that neural complexity in the BOLD signal, as measured by multiscale entropy (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity—a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited different patterns of complexity from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
NASA Astrophysics Data System (ADS)
Yin, Zhendong; Zong, Zhiyuan; Sun, Hongjian; Wu, Zhilu; Yang, Zhutian
2012-12-01
In this article, an efficient multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) is proposed and investigated for direct-sequence ultrawideband systems under different channels: the additive white Gaussian noise channel and the IEEE 802.15.3a multipath channel. Two issues identified in the literature remain to be solved: the computational complexity of classical optimum multiuser detection (OMD) rises exponentially with the number of users, and the bit error rate (BER) performance of other sub-optimal multiuser detectors is not satisfactory. The proposed method can make a good tradeoff between complexity and performance through the various behaviors of artificial fishes in the simplified Euclidean solution space, which is constructed from the solutions of several sub-optimal multiuser detectors: the minimum mean square error detector, the decorrelating detector, and the successive interference cancellation detector. As a result of this novel scheme, the convergence speed of AFSA-MUD is greatly accelerated and the number of iterations is also significantly reduced. The experimental results demonstrate that the BER performance and the near-far effect resistance of the proposed algorithm are quite close to those of OMD, while its computational complexity is much lower than that of traditional OMD. Moreover, as the number of active users increases, the BER performance of AFSA-MUD remains almost the same as that of OMD.
Boenstrup, M; Feldheim, J; Heise, K; Gerloff, C; Hummel, F C
2014-09-01
Complex movements require the interplay of local activation and interareal communication of sensorimotor brain regions. This is reflected in a decrease of task-related spectral power over the sensorimotor cortices and an increase in functional connectivity, predominantly in the upper alpha band, in the electroencephalogram (EEG). In the present study, directionality of information flow was investigated using EEG recordings to gain a better understanding of the network architecture underlying the performance of complex sequential finger movements. This was assessed by means of the Granger causality-derived directed transfer function (DTF). As DTF measures the influence one signal exerts on another based on a time lag between them, it allows implications to be drawn on causal relationships. To reveal causal connections between brain regions that are specifically modulated by task complexity, we contrasted the performance of right-handed sequential finger movements of different complexities (simple, scale, complex) that were either pre-learned (memorized) or newly instructed. A complexity-dependent increase in information flow from mesial frontocentral to the left motor cortex and, less pronounced, also to the right motor cortex, specifically in the upper alpha range, was found. Effective coupling during sequences of high complexity was larger for memorized sequences than for novel sequences (P = 0.0037). These findings further support the role of mesial frontocentral areas in directing the primary motor cortex in the process of orchestrating complex movements and, in particular, learned sequences.
A Complex Network Model for Analyzing Railway Accidents Based on the Maximal Information Coefficient
NASA Astrophysics Data System (ADS)
Shao, Fu-Bo; Li, Ke-Ping
2016-10-01
It is an important issue to identify important influencing factors in railway accident analysis. In this paper, employing the maximal information coefficient (MIC), a good measure of dependence for two-variable relationships that can capture a wide range of associations, a complex network model for railway accident analysis is designed in which nodes denote factors of railway accidents and edges connect pairs of factors whose MIC values are greater than or equal to a dependence criterion. The variation of the network structure is studied: as the dependence criterion increases, the network approaches a scale-free network. Moreover, employing the proposed network, important influencing factors are identified; we find that the annual track density-gross tonnage factor is an important factor, being a cut vertex when the dependence criterion equals 0.3. From the network, it is also found that railway development is unbalanced across states, which is consistent with the facts. Supported by the Fundamental Research Funds for the Central Universities under Grant No. 2016YJS087, the National Natural Science Foundation of China under Grant No. U1434209, and the Research Foundation of State Key Laboratory of Railway Traffic Control and Safety, Beijing Jiaotong University under Grant No. RCS2016ZJ001
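The network construction described above thresholds a pairwise dependence score. A minimal sketch, with the absolute Pearson correlation standing in for MIC (a real MIC implementation, e.g. from the `minepy` package, could be plugged into the `dep` argument):

```python
def pearson(x, y):
    # plain Pearson correlation coefficient of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def dependence_network(factors, criterion, dep=lambda x, y: abs(pearson(x, y))):
    """Build an undirected graph over named factor series: connect two factors
    whenever their dependence score meets the criterion. `dep` is a stand-in
    for MIC here, not the paper's exact estimator."""
    names = list(factors)
    edges = set()
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if dep(factors[u], factors[v]) >= criterion:
                edges.add((u, v))
    return edges
```

Sweeping `criterion` upward and re-running reproduces the qualitative experiment in the abstract: the edge set thins and the degree distribution changes shape.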
2014-01-01
Background Children with medical complexity (CMC) are characterized by substantial family-identified service needs, chronic and severe conditions, functional limitations, and high health care use. Information exchange is critically important in high quality care of complex patients at high risk for poor care coordination. Written care plans for CMC are an excellent test case for how well information sharing is currently occurring. The purpose of this study was to identify the barriers to and facilitators of information sharing for CMC across providers, care settings, and families. Methods A qualitative study design with data analysis informed by a grounded theory approach was utilized. Two independent coders conducted secondary analysis of interviews with parents of CMC and health care professionals involved in the care of CMC, collected from two studies of healthcare service delivery for this population. Additional interviews were conducted with privacy officers of associated organizations to supplement these data. Emerging themes related to barriers and facilitators to information sharing were identified by the two coders and the research team, and a theory of facilitators and barriers to information exchange evolved. Results Barriers to information sharing were related to one of three major themes: (1) the lack of an integrated, accessible, secure platform on which summative health care information is stored, (2) fragmentation of the current health system, and (3) the lack of consistent policies, standards, and organizational priorities across organizations for information sharing. Facilitators of information sharing were related to improving accessibility to a common document, expanding the use of technology, and improving upon a structured communication plan. Conclusions Findings informed a model of how various barriers to information sharing interact to prevent optimal information sharing both within and across organizations and how the use of technology to
NASA Astrophysics Data System (ADS)
Urmanov, A. M.; Gribok, A. V.; Bozdogan, H.; Hines, J. W.; Uhrig, R. E.
2002-04-01
We propose an information complexity-based regularization parameter selection method for solution of ill conditioned inverse problems. The regularization parameter is selected to be the minimizer of the Kullback-Leibler (KL) distance between the unknown data-generating distribution and the fitted distribution. The KL distance is approximated by an information complexity criterion developed by Bozdogan. The method is not limited to the white Gaussian noise case. It can be extended to correlated and non-Gaussian noise. It can also account for possible model misspecification. We demonstrate the performance of the proposed method on a test problem from Hansen's regularization tools.
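The selection scheme above amounts to sweeping the regularization parameter and scoring each fit. A minimal Tikhonov sketch with a pluggable scoring function (the paper's ICOMP approximation of the KL distance would replace the placeholder score; the simple residual-plus-penalty score in the test below is purely illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def select_lambda(A, b, lams, score):
    """Return the lambda from the candidate grid that minimizes
    score(A, b, x_lam, lam); the paper plugs in an information-complexity
    (ICOMP) approximation of the KL distance at this point."""
    return min(lams, key=lambda lam: score(A, b, tikhonov_solve(A, b, lam), lam))
```

The same skeleton accommodates correlated or non-Gaussian noise by changing only the scoring function, which mirrors the flexibility claimed in the abstract.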
Tsanas, Athanasios; Zañartu, Matías; Little, Max A; Fox, Cynthia; Ramig, Lorraine O; Clifford, Gari D
2014-05-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F(0)) of speech signals. This study examines ten F(0) estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F(0) in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F(0) estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F(0) estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F(0) estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F(0) estimation is required.
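The KF fusion idea above can be illustrated with a scalar filter that, per frame, folds in one measurement per F0 tracker, trusting low-variance estimates more. This is a toy stand-in for the paper's adaptive quality-based weighting, not its exact scheme:

```python
def kalman_fuse(measurements, variances, q=1.0):
    """Scalar Kalman filter fusing per-frame F0 estimates from several trackers.
    measurements[t] and variances[t] list one (estimate, variance) per
    algorithm at frame t; q is the assumed process noise of the F0 trajectory."""
    x, p = measurements[0][0], variances[0][0]  # initialize from the first frame
    track = [x]
    for zs, rs in zip(measurements[1:], variances[1:]):
        p += q                      # predict: near-constant F0 plus drift q
        for z, r in zip(zs, rs):    # sequentially update with each tracker
            k = p / (p + r)         # Kalman gain: small r -> large weight
            x += k * (z - x)
            p *= (1 - k)
        track.append(x)
    return track
```

With symmetric errors around the true F0, the fused track stays near the truth even when individual trackers disagree, which is the intuition behind the reported ~16% accuracy gain.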
Global Complexity: Information, Chaos, and Control at ASIS 1996 Annual Meeting.
ERIC Educational Resources Information Center
Jacob, M. E. L.
1996-01-01
Discusses proceedings of the 1996 ASIS (American Society for Information Science) annual meeting in Baltimore (Maryland), including chaos theory; electronic universities; distance education; intellectual property, including information privacy on the Internet; the need for leadership in libraries and information centers; information warfare and…
An explanatory model of peer education within a complex medicines information exchange setting.
Klein, Linda A; Ritchie, Jan E; Nathan, Sally; Wutzke, Sonia
2014-06-01
Studies of the effectiveness and value of peer education abound, yet there is little theoretical understanding of what lay educators actually do to help their peers. Although different theories have been proposed to explain components of peer education, a more complete explanatory model has not been established empirically that encompasses the many aspects of peer education and how these may operate together. The Australian Seniors Quality Use of Medicines Peer Education Program was developed, in conjunction with community partners, to improve understanding and management of medicines among older people - an Australian and international priority. This research investigated how peer educators facilitated learning about quality use of medicines among older Australians. Participatory action research was undertaken with volunteer peer educators, using a multi-site case study design within eight geographically-defined locations. Qualitative data from 27 participatory meetings with peer educators included transcribed audio recordings and detailed observational and interpretive notes, which were analysed using a grounded theory approach. An explanatory model arising from the data grouped facilitation of peer learning into four broad mechanisms: using educator skills; offering a safe place to learn; pushing for change; and reflecting on self. Peer educators' life experience as older people who have taken medicines was identified as an overarching contributor to peer learning. As lay persons, peer educators understood the potential disempowerment felt when seeking medicines information from health professionals and so were able to provide unique learning experiences that encouraged others to be 'active partners' in their own medicines management. These timely findings are linked to existing education and behaviour change theories, but move beyond these by demonstrating how the different elements of what peer educators do fit together. In-depth examination of peer educators
QCCM Center for Quantum Algorithms
2008-10-17
Phillips, Andrew B; Merrill, Jacqueline
2012-01-01
Many complex markets such as banking and manufacturing have benefited significantly from technology adoption. Each of these complex markets experienced increased efficiency, quality, security, and customer involvement as a result of technology transformation in their industry. Healthcare has not benefited to the same extent. We provide initial findings from a policy analysis of complex markets and the features of these transformations that can influence health technology adoption and acceptance.
ERIC Educational Resources Information Center
Flood, Bernadette; Henman, Martin C.
2015-01-01
People with intellectual disabilities may be "invisible" to pharmacists. They are a complex group of patients many of whom have diabetes. Pharmacists may have little experience of the challenges faced by this high risk group of patients who may be prescribed high risk medications. This case report details information supplied by Pat, a…
ERIC Educational Resources Information Center
Doskey, Steven Craig
2014-01-01
This research presents an innovative means of gauging Systems Engineering effectiveness through a Systems Engineering Relative Effectiveness Index (SE REI) model. The SE REI model uses a Bayesian Belief Network to map causal relationships in government acquisitions of Complex Information Systems (CIS), enabling practitioners to identify and…
NASA Astrophysics Data System (ADS)
Soriano, Miguel C.; Zunino, Luciano; Rosso, Osvaldo A.; Mirasso, Claudio R.
2010-04-01
The time evolution of the output of a semiconductor laser subject to optical feedback can exhibit high-dimensional chaotic fluctuations. In this contribution, our aim is to quantify the complexity of the chaotic time-trace generated by a semiconductor laser subject to delayed optical feedback. To that end, we discuss the properties of two recently introduced complexity measures based on information theory, namely the permutation entropy (PE) and the statistical complexity measure (SCM). The PE and SCM are defined as a functional of a symbolic probability distribution, evaluated using the Bandt-Pompe recipe to assign a probability distribution function to the time series generated by the chaotic system. In order to evaluate the performance of these novel complexity quantifiers, we compare them to a more standard chaos quantifier, namely the Kolmogorov-Sinai entropy. Here, we present numerical results showing that the statistical complexity and the permutation entropy, evaluated at the different time-scales involved in the chaotic regime of the laser subject to optical feedback, give valuable information about the complexity of the laser dynamics.
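The permutation entropy referenced above follows the Bandt-Pompe recipe: each embedded window is mapped to the ordinal pattern that sorts it, and the Shannon entropy of the resulting pattern distribution is taken. A compact sketch:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy: map each embedded vector to the
    permutation (ordinal pattern) that sorts it, then take the Shannon
    entropy of the pattern distribution. Normalized by log(order!)."""
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = [x[i + j * delay] for j in range(order)]
        # argsort of the window; ties broken stably by position
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order)) if normalize else h
```

A monotone signal yields a single pattern and zero entropy, while a fully random signal visits all `order!` patterns uniformly and approaches 1; chaotic laser traces fall in between, and varying `delay` probes the different time scales mentioned in the abstract.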
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup.
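Of the interactions listed above, the Lennard-Jones pair energy is the simplest to sketch. A minimal serial reference version, with no cutoff, neighbor list, periodic boundaries, or Ewald summation (all of which the GPU code adds):

```python
def lennard_jones_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy over all particle pairs:
    U = sum over pairs of 4*eps*((sigma/r)^12 - (sigma/r)^6).
    positions is a list of (x, y, z) tuples in units of sigma."""
    u = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            s6 = (sigma * sigma / r2) ** 3   # (sigma/r)^6 via the squared distance
            u += 4.0 * epsilon * (s6 * s6 - s6)
    return u
```

This O(n^2) double loop is exactly the work a neighbor list prunes; the paper's reported 33-377 fold GPU speedups come from parallelizing the inner pair loop across threads.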
ERIC Educational Resources Information Center
Gourd, William
Confined to the interaction of complexity/simplicity of the stimulus play, this paper both focuses on the differing patterns of response between cognitively complex and cognitively simple persons to the characters in "The Homecoming" and "Private Lives" and attempts to determine the responses to specific characters or groups of…
ERIC Educational Resources Information Center
Hiebert, Elfrieda H.
2011-01-01
A focus of the Common Core State Standards/English Language Arts (CCSS/ELA) is that students become increasingly more capable with complex text over their school careers. This focus has redirected attention to the measurement of text complexity. Although CCSS/ELA suggests multiple criteria for this task, the standards offer a single measure of…
ERIC Educational Resources Information Center
Kuhn, John R., Jr.
2009-01-01
Drawing upon the theories of complexity and complex adaptive systems and the Singerian Inquiring System from C. West Churchman's seminal work "The Design of Inquiring Systems" the dissertation herein develops a systems design theory for continuous auditing systems. The dissertation consists of discussion of the two foundational theories,…
Effects of visualization on algorithm comprehension
NASA Astrophysics Data System (ADS)
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
NASA Technical Reports Server (NTRS)
Dasarathy, B. V.
1976-01-01
An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
Sobol-Shikler, Tal; Robinson, Peter
2010-07-01
We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance.
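The consolidation step above turns independent pairwise machines into one ranked list. A minimal sketch using simple vote counting (the paper's exact consolidation rule may differ; `pairwise_win` stands in for a trained pairwise classifier):

```python
from collections import Counter
from itertools import combinations

def rank_by_pairwise(classes, pairwise_win):
    """Consolidate independent pairwise comparisons into a single ranked list:
    each class scores one point per pairwise machine that prefers it.
    pairwise_win(a, b) returns whichever of a or b the machine chooses."""
    score = Counter({c: 0 for c in classes})
    for a, b in combinations(classes, 2):
        score[pairwise_win(a, b)] += 1
    return [c for c, _ in score.most_common()]
```

With nine affective-state groups this runs 36 pairwise machines, matching the count in the abstract, and the top of the list represents the inferred combination of co-occurring states.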
Accurate refinement of docked protein complexes using evolutionary information and deep learning.
Akbal-Delibas, Bahar; Farhoodi, Roshanak; Pomplun, Marc; Haspel, Nurit
2016-06-01
One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein-protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex, with lower root-mean-square-deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with a 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes, also produced by RosettaDock, for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose refinement candidates with lower RMSD values compared to the coarsely docked input structures.
NASA Astrophysics Data System (ADS)
Mazhari, N. S.; Momeni, Davood; Bahamonde, Sebastian; Faizal, Mir; Myrzakulov, Ratbay
2017-03-01
The holographic complexity and fidelity susceptibility have been defined as new quantities dual to different volumes in AdS. In this paper, we will use these new proposals to calculate both of these quantities for a variety of interesting deformations of AdS. We obtain the holographic complexity and fidelity susceptibility for an AdS black hole, Janus solution, a solution with cylindrical symmetry, an inhomogeneous background and a hyperscaling violating background. It is observed that the holographic complexity depends on the size of the subsystem for all these solutions and the fidelity susceptibility does not have any such dependence.
NASA Astrophysics Data System (ADS)
Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong
2015-07-01
The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation to impact force, run up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution, and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. The presented approach uses an iteration algorithm based on the Riemann integral method to search an approximate solution to the unknown flow surface. The established laws for vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels typically with irregular beds and superelevations can be taken into account, and the resulting approximation by the approach well replicates the direct integral solution. The approach is programmed in MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the solutions of the flow surface and the mean velocity well reproduce the investigated results. Discussion regarding the model sensitivity and the source of errors concludes the paper.
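The integration step described above can be illustrated with a Riemann sum over vertical strips of an irregular cross-section. A simplified sketch, assuming a power-law vertical velocity profile as a stand-in for the established profile laws the paper compares:

```python
def mean_cross_section_velocity(depths, surface_velocities, dx, profile_exponent=0.5):
    """Approximate the mean velocity in an irregular cross-section by a Riemann
    sum over vertical strips of width dx. Strip i has flow depth depths[i],
    surface velocity surface_velocities[i], and an assumed vertical profile
    u(z) = u_surface * (z / depth) ** profile_exponent. Returns Q / A."""
    area = discharge = 0.0
    for d, us in zip(depths, surface_velocities):
        if d <= 0:
            continue  # dry part of the irregular bed contributes nothing
        # depth-averaged velocity of the power-law profile:
        # (1/d) * integral_0^d us*(z/d)^a dz = us / (1 + a)
        u_mean = us / (1.0 + profile_exponent)
        area += d * dx
        discharge += u_mean * d * dx
    return discharge / area if area else 0.0
```

Because each strip carries its own depth and surface velocity, an asymmetric distribution across the channel (e.g. superelevation at a bend) raises the mean velocity relative to a symmetric assumption, which is the underestimation the paper addresses.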
A region labeling algorithm based on block
NASA Astrophysics Data System (ADS)
Wang, Jing
2009-10-01
The time performance of a region labeling algorithm is important for image processing. However, common region labeling algorithms cannot meet the requirements of real-time image processing. In this paper, a technique that uses blocks to record connected areas is proposed. With this technique, connectivity and target-related information can be computed in a single image scan. It records the coordinates of edge pixels, on both outer and inner edges, together with the label, and can then calculate each connected area's centroid, area, and gray level. Compared with other methods, this block-based region labeling algorithm is more efficient and meets the time requirements of real-time processing. Experimental results also validate the correctness and efficiency of the algorithm: it can detect any connected areas in binary images containing a variety of complex and intricate patterns. The block labeling algorithm is now used in a real-time image processing program.
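For comparison with the block-based scheme above, a baseline flood-fill region labeler that computes area and centroid per 4-connected region (this is the plain approach the paper improves on, not its single-scan block method):

```python
from collections import deque

def label_regions(img):
    """Label 4-connected foreground regions in a binary image (list of rows
    of 0/1) via breadth-first flood fill. Returns the label image and a dict
    mapping each label to its area and centroid (row, col)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    stats, next_label = {}, 1
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                area = sy = sx = 0
                q = deque([(y, x)])
                labels[y][x] = next_label
                while q:
                    cy, cx = q.popleft()
                    area += 1; sy += cy; sx += cx
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                stats[next_label] = {"area": area, "centroid": (sy / area, sx / area)}
                next_label += 1
    return labels, stats
```

The flood fill revisits pixels region by region; the paper's block-recording scheme gathers the same statistics (centroid, area, gray level) while scanning the image only once.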
Using Read-Alouds to Help Struggling Readers Access and Comprehend Complex, Informational Text
ERIC Educational Resources Information Center
Santoro, Lana Edwards; Baker, Scott K.; Fien, Hank; Smith, Jean Louise M.; Chard, David J.
2016-01-01
The use of informational texts in the elementary grades provides a context for helping students develop content understanding and domain knowledge across a wide range of subject matter. Reading informational text also provides students with the language of thought, foundational vocabulary that can be connected to other words, and technical content…
Changing constructions of informed consent: qualitative research and complex social worlds.
Miller, Tina; Boulton, Mary
2007-12-01
Informed consent is a concept which attempts to capture and convey what is regarded as the appropriate relationship between researcher and research participant. Definitions have traditionally emphasised respect for autonomy and the right to self-determination of the individual. However, the meaning of informed consent and the values on which it is based are grounded in society and the practicalities of social relationships. As society changes, so too do the meaning and practice of informed consent. In this paper, we trace the ways in which the meaning and practice of informed consent has changed over the last 35 years with reference to four qualitative studies of parenting and children in the UK which we have undertaken at different points in our research careers. We focus in particular on the shifting boundaries between the professional and personal, and changing expressions of agency and power in a context of heightened perceptions of risk in everyday life. We also discuss developments in information and communication technologies as a factor in changing both the formal requirements for and the situated practicalities of obtaining informed consent. We conclude by considering the implications for informed consent of both increasing bureaucratic regulation and increasingly sophisticated information and communication technologies and suggest strategies for rethinking and managing 'consent' in qualitative research practice.
Chakshusmathi, G.; Ratnaparkhi, Girish S.; Madhu, P. K.; Varadarajan, R.
1999-01-01
Ordered protein complexes are often formed from partially ordered fragments that are difficult to structurally characterize by conventional NMR and crystallographic techniques. We show that concentration-dependent hydrogen exchange studies of a fragment complex can provide structural information about the solution structures of the isolated fragments. This general methodology can be applied to any bimolecular or multimeric system. The experimental system used here consists of Ribonuclease S, a complex of two fragments of Ribonuclease A. Ribonuclease S and Ribonuclease A have identical three-dimensional structures but exhibit significant differences in their dynamics and stability. We show that the apparent large dynamic differences between Ribonuclease A and Ribonuclease S are caused by small amounts of free fragments in equilibrium with the folded complex, and that amide exchange rates in Ribonuclease S can be used to determine corresponding rates in the isolated fragments. The studies suggest that folded RNase A and the RNase S complex exhibit very similar dynamic behavior. Thus cleavage of a protein chain at a single site need not be accompanied by a large increase in flexibility of the complex relative to that of the uncleaved protein. PMID:10393919
Intervention complexity--a conceptual framework to inform priority-setting in health.
Gericke, Christian A; Kurowski, Christoph; Ranson, M Kent; Mills, Anne
2005-04-01
Health interventions vary substantially in the degree of effort required to implement them. To some extent this is apparent in their financial cost, but the nature and availability of non-financial resources is often of similar importance. In particular, human resource requirements are frequently a major constraint. We propose a conceptual framework for the analysis of interventions according to their degree of technical complexity; this complements the notion of institutional capacity in considering the feasibility of implementing an intervention. Interventions are categorized into four dimensions: characteristics of the basic intervention; characteristics of delivery; requirements on government capacity; and usage characteristics. The analysis of intervention complexity should lead to a better understanding of supply- and demand-side constraints to scaling up, indicate priorities for further research and development, and can point to potential areas for improvement of specific aspects of each intervention to close the gap between the complexity of an intervention and the capacity to implement it. The framework is illustrated using the examples of scaling up condom social marketing programmes, and the DOTS strategy for tuberculosis control in highly resource-constrained countries. The framework could be used as a tool for policy-makers, planners and programme managers when considering the expansion of existing projects or the introduction of new interventions. Intervention complexity thus complements the considerations of burden of disease, cost-effectiveness, affordability and political feasibility in health policy decision-making. Reducing the technical complexity of interventions will be crucial to meeting the health-related Millennium Development Goals.
Rosati, Paola; Grossi, Armando; Inserra, Alessandro; Ubertini, Graziamaria; Ferro, Giusy; Baldini Ferroli, Barbara; Martini, Ludovica; Cotzia, Daniela
2016-11-01
As devices for learning, smart-web support (SWS) multimedia hypertexts on the web now provide more versatile and interactive reading systems than those traditionally available in static printed texts. Designing similar tools for clinical practice would make complex scientific information easier to comprehend, and present the various therapeutic options to patients as minimally alarming graphical representations. In a pilot project we intend to produce a SWS tool for parents or tutors of children with primary differentiated thyroid cancer (DTC), a heretofore rare disease whose incidence has increased over recent years. The SWS hypertexts, "pre-digested" by the multidisciplinary team caring for these children, will be inserted in a single web page (canvas) including shared sheets explaining the best surgical options (decision aids). To make evidence-based information easier to understand and to aid information sharing, the decision aids will combine text and graphics. The canvas will store data for the multimedia files in a cloud storage system, opened via a link. To measure parents' and tutors' understanding and appreciation of the information provided on the web, the canvas will include questionnaires investigating satisfaction, any barriers encountered, and the type of surgical therapy chosen. The SWS tool should allow users to obtain all the information in a relatively short time and improve parents' and children's satisfaction with the surgical options proposed. The results obtained will be useful for developing similar SWS devices for other complex paediatric diseases.