An innovative thinking-based intelligent information fusion algorithm.
Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Its five key parts, namely information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully exploits the innovative thinking capability of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimal problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information. PMID:23956699
Information theoretic analysis of linear shift-invariant edge-detection operators
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2012-06-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, the main factors that affect soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were used to precisely obtain the spatial distribution characteristics of regional soil quality. Mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See 5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables, and, for the former model, whether expressed as a decision tree or as decision rules, the prediction accuracy was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
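A minimal sketch of the two-stage workflow described above: mutual information ranks the candidate environmental factors, and a decision tree is trained on the retained subset. scikit-learn's CART classifier stands in for See 5.0, and the synthetic data are placeholders, not the paper's soil data set.

```python
# Mutual-information feature screening followed by a decision tree (toy data).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # 8 candidate environmental factors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # soil-quality grade (toy label)

mi = mutual_info_classif(X, y, random_state=0)     # MI between each factor and the grade
selected = np.argsort(mi)[::-1][:4]                # keep the 4 most informative factors

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("selected factors:", selected, "accuracy:", tree.score(X_te, y_te))
```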
SDIA: A dynamic situation driven information fusion algorithm for cloud environment
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Tong; Wang, Jian
2017-09-01
Information fusion is an important issue in the information integration domain. In order to form a broadly applicable information fusion technology for complex and diverse situations, a new information fusion algorithm is proposed. Firstly, a fuzzy evaluation model of tag utility is proposed that can be used to compute the tag entropy. Secondly, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Thirdly, the similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Next, in order to reduce the time complexity of the tree-compatible matching algorithm, a fast ordered tree matching algorithm based on node entropy is proposed, which is used to support information fusion by ubiquitous situation. Since the algorithm evolves from the graph-theoretic unordered tree matching algorithm, it can improve the recall rate and precision rate of information fusion in the given situation. The information fusion algorithm is compared with the star and random tree matching algorithms, and the differences between the three algorithms are analyzed from the viewpoint of isomorphism, which demonstrates the novelty and applicability of the algorithm.
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to improve the subjective and objective quality of degraded images at low sampling rates effectively,save storage space and reduce computational complexity at the same time, this paper proposes a joint restoration algorithm of compressed sensing and two step iterative threshold shrinkage (TwIST). The algorithm applies the TwIST algorithm which used in image restoration to the compressed sensing theory. Then, a small amount of sparse high-frequency information is obtained in frequency domain. The TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality of degraded images while accurately restoring degraded images.
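An illustrative sketch of sparse recovery with a two-step iterative shrinkage/thresholding (TwIST-style) update, the core iteration behind the restoration scheme above. The random sensing matrix, the regularization weight, and the alpha/beta two-step weights are assumed demo values, not the paper's settings or the reference TwIST implementation.

```python
# TwIST-style sparse recovery from random linear measurements (toy problem).
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the data-fit gradient
lam = 0.01 * np.abs(A.T @ y).max()                  # heuristic regularization weight
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

alpha, beta = 1.8, 1.0                              # two-step weights (illustrative, tune in practice)
x_prev = np.zeros(n)
x = np.zeros(n)
for _ in range(500):
    ist = soft(x + (A.T @ (y - A @ x)) / L, lam / L)            # plain IST step
    x_prev, x = x, (1 - alpha) * x_prev + (alpha - beta) * x + beta * ist
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```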
Information theoretic analysis of edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end, information-theory-based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling and additive noise, that define the image gathering system. The edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there is no common tool that can be used to evaluate the performance of the different algorithms and to guide the selection of the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare the different edge detection operators in a common environment.
Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network
NASA Astrophysics Data System (ADS)
Oda, Akihiro; Nishi, Hiroaki
Recently, the importance of data sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchanging methods for data sharing. To reduce the traffic of broadcasted messages, reduction of the amount of redundant information is indispensable. In order to reduce packet loss in mobile ad-hoc networks, QoS-sensitive routing algorithms have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchanging algorithm based on a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcasting messages, and it can guarantee QoS in a wireless network environment. It can be applied to a routing algorithm in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and the neighborhood information is periodically broadcasted depending on this table. The proposed hash-based routing entry management using an extended MAC address can eliminate the overhead of message flooding. An analysis of hash value collisions determines the minimally required length of the hash values. Based on the verification of a mathematical theory, an optimum hash function and the required hash length can be derived. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing algorithm.
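A back-of-the-envelope sketch of the kind of hash-length analysis mentioned above: for n routing entries hashed into b-bit values, the birthday bound approximates the collision probability, and the smallest b keeping it below a target can be searched directly. The entry count and the target probability are assumed demo values, not figures from the paper.

```python
# Minimal hash length needed to keep the collision probability under a target.
import math

def collision_probability(n_entries: int, bits: int) -> float:
    # Birthday-bound approximation for n_entries values hashed into 2**bits slots.
    return 1.0 - math.exp(-n_entries * (n_entries - 1) / 2.0 ** (bits + 1))

def minimal_bits(n_entries: int, p_max: float) -> int:
    bits = 1
    while collision_probability(n_entries, bits) > p_max:
        bits += 1
    return bits

print(minimal_bits(n_entries=200, p_max=0.01))   # bits needed for 200 routing entries
```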
Genetic algorithm to solve the problems of lectures and practicums scheduling
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Apriani, R.; Sawaluddin; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.
2018-02-01
Generally, the scheduling process is done manually. However, this method has a low accuracy level and carries the possibility that one scheduled activity collides with another. When scheduling theory classes and practicums, numerous problems arise, such as collisions in a lecturer's teaching schedule, collisions between schedules, practicum schedules that collide with theory classes, and the limited number of available classrooms. In this research, a genetic algorithm is implemented to perform the theory class and practicum timetable scheduling process. The algorithm is used to process data containing lists of lecturers, courses, and classrooms obtained from the information technology department at the University of Sumatera Utara. The result of the scheduling process using the genetic algorithm is the most optimal timetable that conforms to the available time slots, classrooms, courses, and lecturer schedules.
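A toy sketch of the genetic algorithm idea described above: each chromosome assigns a (timeslot, room) pair to every course, and fitness counts scheduling collisions (same lecturer or same room in one slot). The course list and GA parameters are made-up placeholders, not the department's real data or the paper's encoding.

```python
# Genetic algorithm for a tiny timetabling instance.
import random

random.seed(0)
SLOTS, ROOMS = 20, 3
courses = [("algo", "lect_a"), ("db", "lect_a"), ("ai", "lect_b"),
           ("net", "lect_b"), ("os", "lect_c"), ("prak_ai", "lect_b")]

def random_chromosome():
    return [(random.randrange(SLOTS), random.randrange(ROOMS)) for _ in courses]

def collisions(chrom):
    clashes = 0
    for i in range(len(courses)):
        for j in range(i + 1, len(courses)):
            same_slot = chrom[i][0] == chrom[j][0]
            if same_slot and (chrom[i][1] == chrom[j][1] or
                              courses[i][1] == courses[j][1]):
                clashes += 1                       # room clash or lecturer clash
    return clashes

def evolve(pop_size=60, generations=200):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=collisions)
        if collisions(pop[0]) == 0:
            break
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(courses))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.3:              # mutation: reassign one course
                child[random.randrange(len(courses))] = (
                    random.randrange(SLOTS), random.randrange(ROOMS))
            children.append(child)
        pop = parents + children
    return min(pop, key=collisions)

best = evolve()
print("collisions:", collisions(best), "timetable:", best)
```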
Predictive minimum description length principle approach to inferring gene regulatory networks.
Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping
2011-01-01
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
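A minimal sketch of the MI/CMI edge-selection step described above, run on discretized toy expression data: pairs with mutual information above a threshold become candidate edges, and conditional mutual information prunes edges that are likely indirect. A fixed threshold stands in for the PMDL-chosen one, so only the information-theoretic filtering is illustrated, not the full algorithm.

```python
# MI-based candidate edges with CMI pruning of indirect interactions (toy data).
import numpy as np
from sklearn.metrics import mutual_info_score

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) for discrete label arrays, averaged over the states of Z."""
    total, n = 0.0, len(z)
    for state in np.unique(z):
        mask = z == state
        if mask.sum() > 1:
            total += mask.sum() / n * mutual_info_score(x[mask], y[mask])
    return total

rng = np.random.default_rng(2)
n_genes, n_samples = 6, 60
expr = rng.normal(size=(n_genes, n_samples))
expr[1] = expr[0] + 0.3 * rng.normal(size=n_samples)   # toy regulation 0 -> 1
expr[2] = expr[1] + 0.3 * rng.normal(size=n_samples)   # toy chain 0 -> 1 -> 2

# Discretize each gene into three expression levels (low/medium/high).
disc = np.array([np.digitize(g, np.quantile(g, [1 / 3, 2 / 3])) for g in expr])

threshold = 0.1                                        # stand-in for the PMDL-selected threshold
edges = {(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if mutual_info_score(disc[i], disc[j]) > threshold}
for i, j in sorted(edges):
    for k in range(n_genes):
        if k not in (i, j) and cond_mutual_info(disc[i], disc[j], disc[k]) < threshold:
            edges.discard((i, j))                      # likely indirect, mediated by gene k
            break
print("inferred undirected edges:", sorted(edges))
```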
Saeed, Faisal; Salim, Naomie; Abdo, Ammar
2013-07-01
Many consensus clustering methods have been applied in different areas such as pattern recognition, machine learning, information theory, and bioinformatics. However, few methods have been used for clustering chemical compounds. In this paper, an information theory and voting based algorithm (Adaptive Cumulative Voting-based Aggregation Algorithm, A-CVAA) was examined for combining multiple clusterings of chemical structures. The effectiveness of the clusterings was evaluated based on the ability of the clustering method to separate active from inactive molecules in each cluster, and the results were compared with Ward's method. The MDL Drug Data Report (MDDR) chemical dataset and the Maximum Unbiased Validation (MUV) dataset were used. Experiments suggest that the adaptive cumulative voting-based consensus method can improve the effectiveness of combining multiple clusterings of chemical structures.
2003-04-01
generally considered to be passive data. Instead the genetic material should be capable of being algorithmic information, that is, program code or...
Li, Jingchao; Cao, Yunpeng; Ying, Yulong; Li, Shuying
2016-01-01
Bearing failure is one of the dominant causes of failure and breakdowns in rotating machinery, leading to huge economic losses. Aiming at the nonstationary and nonlinear characteristics of bearing vibration signals as well as the complexity of the condition-indicating information distribution in the signals, a novel rolling element bearing fault diagnosis method based on multifractal theory and gray relation theory was proposed in this paper. Firstly, a generalized multifractal dimension algorithm was developed to extract the characteristic vectors of fault features from the bearing vibration signals, which can offer more meaningful and distinguishing information reflecting different bearing health statuses in comparison with the conventional single fractal dimension. After feature extraction by multifractal dimensions, an adaptive gray relation algorithm was applied to implement automated bearing fault pattern recognition. The experimental results show that the proposed method can identify various bearing fault types as well as severities effectively and accurately. PMID:28036329
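A sketch of the generalized (box-counting) multifractal dimension spectrum D_q used as the feature vector above, computed on a toy vibration signal. The box sizes, the set of q values, and the synthetic signal are illustrative choices, not the paper's settings or data.

```python
# Generalized multifractal dimensions D_q of a toy vibration signal.
import numpy as np

def generalized_dimensions(signal, qs, box_sizes):
    measure = np.abs(signal) / np.abs(signal).sum()      # normalized measure over time
    dims = []
    for q in qs:
        log_eps, log_chi = [], []
        for size in box_sizes:
            n_boxes = len(measure) // size
            p = measure[: n_boxes * size].reshape(n_boxes, size).sum(axis=1)
            p = p[p > 0]
            log_eps.append(np.log(size / len(measure)))  # relative box size
            log_chi.append(np.log(np.sum(p ** q)))       # partition function
        slope = np.polyfit(log_eps, log_chi, 1)[0]       # scaling exponent tau(q)
        dims.append(slope / (q - 1))                     # D_q = tau(q) / (q - 1)
    return np.array(dims)

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 4096)
signal = np.sin(2 * np.pi * 50 * t) * (1 + 0.5 * rng.normal(size=t.size))
qs = [-2, -1, 0.01, 2, 3, 4]                             # avoid q = 1 (needs a separate limit)
print(generalized_dimensions(signal, qs, box_sizes=[8, 16, 32, 64, 128]))
```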
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
NASA Astrophysics Data System (ADS)
Montoya Villena, Rafael
According to its title, the general objective of the Thesis is to develop a clear, simple and systematic methodology for programming PLC-type devices. With this aim in mind, we will use the following elements. Codification of all variable types: this element is very important since it allows us to work with little information, and the necessary rules are given to codify all types of phrases produced in industrial processes. An algorithm that describes process evolution, called the process D.F.: this is one of the most important contributions, since, together with information codification, it allows us to represent the process evolution graphically and with any design theory used. Theory selection: evidently, some kind of design method is necessary to obtain logic equations. For this particular case, we will use binodal theory, an ideal theory for wired technologies, since it can obtain highly reduced schemas for relatively simple automatisms, which means a minimum number of components is used. User program outline algorithm (D.F.P.): this is another necessary contribution and perhaps the most important one, since the logic equations resulting from binodal theory are compatible with the process evolution if wired technology is used, whether electric, electronic, pneumatic, etc. On the other hand, the performance characteristics of PLC devices mean that the order of the program instructions determines whether or not the automatism is valid, as we have proven in different articles and lectures at both national and international congresses. Therefore, we will codify any information concerning the process to be automated, graphically represent its temporal evolution and, applying binodal theory and the D.F.P. (previously adapted), succeed in making the logic equations compatible with the process to be automated and the device in which they will be implemented (a PLC in our case).
An information theory criteria based blind method for enumerating active users in DS-CDMA system
NASA Astrophysics Data System (ADS)
Samsami Khodadad, Farid; Abed Hodtani, Ghosheh
2014-11-01
In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator which has better performance at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. Subsequently, we propose an SNR-adaptive method based on subspace analysis and a trained genetic algorithm so as to attain the performance of both. Moreover, our method uses only a single antenna, in contrast to the previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
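A sketch of the classical eigenvalue-based AIC/MDL enumerators referred to above (Wax-Kailath style), applied to a sample covariance matrix. The array size, snapshot count, and the simulated two-user mixing model are assumptions for the demo, not the DS-CDMA multipath setting of the paper.

```python
# AIC/MDL enumeration of the number of sources from covariance eigenvalues.
import numpy as np

def enumerate_sources(R, n_snapshots):
    p = R.shape[0]
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]            # eigenvalues, descending
    aic, mdl = [], []
    for k in range(p):
        tail = eig[k:]
        geo = np.exp(np.mean(np.log(tail)))               # geometric mean of "noise" eigenvalues
        ari = np.mean(tail)                               # arithmetic mean
        neg_ll = n_snapshots * (p - k) * np.log(ari / geo)
        aic.append(2 * neg_ll + 2 * k * (2 * p - k))
        mdl.append(neg_ll + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(aic)), int(np.argmin(mdl))

rng = np.random.default_rng(4)
p, N, true_k = 8, 400, 2
A = rng.normal(size=(p, true_k))                          # toy mixing/signature matrix
S = rng.normal(size=(true_k, N))                          # user symbols
X = A @ S + 0.3 * rng.normal(size=(p, N))                 # noisy observations
R = X @ X.T / N                                           # sample covariance
print("estimated number of users (AIC, MDL):", enumerate_sources(R, N))
```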
Cao, Youfang; Wang, Lianjie; Xu, Kexue; Kou, Chunhai; Zhang, Yulei; Wei, Guifang; He, Junjian; Wang, Yunfang; Zhao, Liping
2005-07-26
A new algorithm for assessing similarity between primer and template has been developed based on the hypothesis that annealing of primer to template is an information transfer process. Primer sequence is converted to a vector of the full potential hydrogen numbers (3 for G or C, 2 for A or T), while template sequence is converted to a vector of the actual hydrogen bond numbers formed after primer annealing. The former is considered as source information and the latter destination information. An information coefficient is calculated as a measure for fidelity of this information transfer process and thus a measure of similarity between primer and potential annealing site on template. Successful prediction of PCR products from whole genomic sequences with a computer program based on the algorithm demonstrated the potential of this new algorithm in areas like in silico PCR and gene finding.
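A small sketch of the primer/template comparison described above: the primer is encoded by its full hydrogen-bond potential (3 for G or C, 2 for A or T) and an annealing site by the bonds actually formed. The mismatch scoring (0 bonds) and the ratio-style similarity used here are assumptions; the paper's exact information coefficient formula is not reproduced.

```python
# Hydrogen-bond vectors for a primer and a candidate annealing site.
HBONDS = {"G": 3, "C": 3, "A": 2, "T": 2}
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def bond_vectors(primer: str, site: str):
    source = [HBONDS[b] for b in primer]                      # potential bonds (source information)
    dest = [HBONDS[p] if COMPLEMENT[p] == s else 0            # bonds actually formed (destination)
            for p, s in zip(primer, site)]
    return source, dest

def similarity(primer: str, site: str) -> float:
    source, dest = bond_vectors(primer, site)
    return sum(dest) / sum(source)                            # stand-in for the information coefficient

print(similarity("ATGCGT", "TACGCA"))   # perfect annealing site -> 1.0
print(similarity("ATGCGT", "TACGCT"))   # one terminal mismatch -> < 1.0
```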
2002-08-01
the measurement noise, as well as the physical model of the forward-scattered electric field. The Bayesian algorithms for the Uncertain Permittivity... received at multiple sensors. In this research project a tissue-model-based signal-detection theory approach for the detection of mammary tumors in the... oriented information processors.
Security Criteria for Distributed Systems: Functional Requirements.
1995-09-01
Open Company Limited. Ziv, J. and A. Lempel. 1977. A Universal Algorithm for Sequential Data Compression. IEEE Transactions on Information Theory Vol... 3, SCF-5 DCF-7. Configurable Cryptographic Algorithms: (a) It shall be possible to configure the system such that the data confidentiality functions... use different cryptographic algorithms for different protocols (e.g., mail or interprocess communication data). (b) The modes of encryption...
Polcari, J.
2013-08-16
The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
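A small numerical sketch of the LLR viewpoint above for the simplest case: a known signal in white Gaussian noise, where the LLR of an observation reduces to a matched-filter statistic. The signal template, noise level, and sample size are arbitrary demo values, not quantities from the report.

```python
# Log likelihood ratio for a known signal in white Gaussian noise.
import numpy as np

rng = np.random.default_rng(5)
n, sigma = 64, 1.0
s = np.sin(2 * np.pi * np.arange(n) / 8)                     # known signal template

def log_likelihood_ratio(x):
    # log p(x | signal present) - log p(x | noise only) for N(0, sigma^2 I) noise
    return (x @ s - 0.5 * s @ s) / sigma ** 2

x_h1 = s + sigma * rng.normal(size=n)                        # signal-present observation
x_h0 = sigma * rng.normal(size=n)                            # noise-only observation
print("LLR under H1:", log_likelihood_ratio(x_h1))
print("LLR under H0:", log_likelihood_ratio(x_h0))
print("deflection (SNR-like) measure:", (s @ s) / sigma ** 2)
```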
A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network
Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue
2016-01-01
Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency. PMID:26907272
An Improved Image Matching Method Based on Surf Algorithm
NASA Astrophysics Data System (ADS)
Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.
2018-04-01
Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory, and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation model is applied to the two matching images, aiming at obtaining more color information during the matching process, and information entropy theory is used to extract the most informative content of the two matching images. Then the SURF algorithm is applied to detect and describe points from the images. Finally, constraint conditions, including Delaunay triangulation construction, a similarity function, and a projective invariant, are employed to eliminate mismatches and thus improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
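For reference, a generic SURF matching pipeline with a ratio test and RANSAC outlier rejection, as a baseline counterpart to the improved method above. It relies on the stock OpenCV SURF (available only in opencv-contrib builds), the image file names are assumptions, and it does not implement the paper's color-invariant transform, entropy weighting, or Delaunay/projective checks.

```python
# Baseline SURF matching: detect, ratio-test filter, RANSAC homography.
import cv2
import numpy as np

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)   # assumed input images
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # needs >= 4 matches
print("tentative matches:", len(good), "RANSAC inliers:", int(inlier_mask.sum()))
```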
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. Presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation Illustrates theoretical concepts with reference to practical applications Provides insights into the design parameters of real systems
A Mathematical Basis for the Safety Analysis of Conflict Prevention Algorithms
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Butler, Ricky W.; Munoz, Cesar A.; Dowek, Gilles
2009-01-01
In air traffic management systems, a conflict prevention system examines the traffic and provides ranges of guidance maneuvers that avoid conflicts. This guidance takes the form of ranges of track angles, vertical speeds, or ground speeds. These ranges may be assembled into prevention bands: maneuvers that should not be taken. Unlike conflict resolution systems, which presume that the aircraft already has a conflict, conflict prevention systems show conflicts for all maneuvers. Without conflict prevention information, a pilot might perform a maneuver that causes a near-term conflict. Because near-term conflicts can lead to safety concerns, strong verification of correct operation is required. This paper presents a mathematical framework to analyze the correctness of algorithms that produce conflict prevention information. This paper examines multiple mathematical approaches: iterative, vector algebraic, and trigonometric. The correctness theories are structured first to analyze conflict prevention information for all aircraft. Next, these theories are augmented to consider aircraft which will create a conflict within a given lookahead time. Certain key functions for a candidate algorithm that satisfy this mathematical basis are presented; however, the proof that a full algorithm using these functions completely satisfies the definition of safety is not provided.
Observation Uncertainty in Gaussian Sensor Networks
2006-01-23
Ziv, J., and Lempel, A. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory 23, 3 (1977), 337–343. ... using the Lempel-Ziv algorithm [42], context-tree weighting [41], or the Burrows-Wheeler Transform [4], [15], for example. These source codes will... and Computation (Monticello, IL, September 2004). [4] Burrows, M., and Wheeler, D. A block sorting lossless data compression algorithm. Tech...
A novel gene network inference algorithm using predictive minimum description length approach.
Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang
2010-05-28
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and that eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the PMDL principle is effective in determining the MI threshold and that the developed algorithm improves the precision of gene regulatory network inference. Based on the sensitivity analysis of all tested cases, an optimal CMI threshold value has been identified. Finally, it was observed that the performance of the algorithms saturates at a certain threshold of data size.
NIFTY - Numerical Information Field Theory. A versatile PYTHON library for signal inference
NASA Astrophysics Data System (ADS)
Selig, M.; Bell, M. R.; Junklewitz, H.; Oppermann, N.; Reinecke, M.; Greiner, M.; Pachajoa, C.; Enßlin, T. A.
2013-06-01
NIFTy (Numerical Information Field Theory) is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTy offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. Thereby, the correct normalization of operations on fields is taken care of automatically without concerning the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTy permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real world problems. The set of spaces on which NIFTy operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package is demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined. NIFTy homepage http://www.mpa-garching.mpg.de/ift/nifty/; Excerpts of this paper are part of the NIFTy source code and documentation.
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as that given by the centralized clustering algorithms.
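A sketch of the consensus step that underlies the distributed clustering algorithms above: each node repeatedly averages its local statistic with its neighbours' until all nodes agree on the network-wide value. The ring topology and Metropolis-style weights are assumptions for the demo; the full distributed k-means/fuzzy c-means logic is not reproduced.

```python
# Average consensus over a ring of sensor nodes.
import numpy as np

rng = np.random.default_rng(6)
n_nodes = 10
local_means = rng.normal(loc=5.0, scale=2.0, size=n_nodes)   # one local statistic per node

# Metropolis-style weights on a ring: each node talks to its two neighbours.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = local_means.copy()
for _ in range(200):
    x = W @ x                                                # one round of neighbour averaging
print("true average:", local_means.mean(), "consensus values:", x[:3])
```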
Liang, Zhaohui; Liu, Jun; Huang, Jimmy X; Zeng, Xing
2018-01-01
The genetic polymorphism of Cytochrome P450 (CYP450) is considered one of the main causes of adverse drug reactions (ADRs). In order to explore the latent correlations between ADRs and potentially corresponding single-nucleotide polymorphisms (SNPs) in CYP450, three algorithms based on information theory are used as the main method to predict the possible relations. The study uses a retrospective case-control design to explore the potential relation of ADRs to specific genomic locations and SNPs. Genomic data collected from 53 healthy volunteers are used for the analysis, while genomic data collected from another 30 healthy volunteers excluded from the study are used as the control group. The SNPs at five loci, CYP2D6*2, *10, *14 and CYP1A2*1C, *1F, are detected by the Applied Biosystems 3130xl. The raw data are processed by ChromasPro to detect the specific alleles at the above loci in each sample. The secondary data are reorganized and processed in R, combined with the ADR information from clinical reports. Three information theory based algorithms are implemented for the screening task: JMI, CMIM, and mRMR. If a SNP is selected by more than two algorithms, we are confident in concluding that it is related to the corresponding ADR. The selection results are compared with a control decision tree + LASSO regression model. In the study group where ADRs occur, 10 SNPs are considered relevant to the occurrence of a specific ADR by the combined information theory model. In comparison, only 5 SNPs are considered relevant to a specific ADR by the decision tree + LASSO regression model. In addition, the new method detects more relevant SNP-ADR pairs that are affected by both SNP and dosage. This implies that the new information theory based model is effective in discovering correlations between ADRs and CYP450 SNPs and is helpful in predicting the potentially vulnerable genotypes for some ADRs. The newly proposed information theory based model has superior performance in detecting the relation between SNPs and ADRs compared to the decision tree + LASSO regression model. The new model is more sensitive in detecting ADRs than the old method, while the old method is more reliable. Therefore, the criteria for selecting algorithms should depend on pragmatic needs.
Cybersemiotics: a transdisciplinary framework for information studies.
Brier, S
1998-04-01
This paper summarizes recent attempts by this author to create a transdisciplinary, non-Cartesian and non-reductionistic framework for information studies in natural, social, and technological systems. To confront, in a scientific way, the problems of modern information technology, where phenomenological man is dealing with socially constructed texts in algorithmically based digital bit-machines, we need a theoretical framework spanning from physics over biology and technological design to the phenomenological and social production of signification and meaning. I am working with such pragmatic theories as second order cybernetics (coupled with autopoiesis theory), Lakoff's biologically oriented cognitive semantics, Peirce's triadic semiotics, and Wittgenstein's pragmatic language game theory. A coherent synthesis of these theories is what the cybersemiotic framework attempts to accomplish.
ATR applications of minimax entropy models of texture and shape
NASA Astrophysics Data System (ADS)
Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.
2001-10-01
Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.
A simplified computational memory model from information processing.
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-11-23
This paper proposes a computational model for memory from the view of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biology and graph theory, and we develop an intra-modular network with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from the information processing view. PMID:27876847
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automatons (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
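A sketch of the first phase described above: states are coarsely partitioned by their backward depth, i.e. the length of the shortest path to an accepting state, computed by a backward BFS over reversed transitions. The example DFA is a made-up toy, and the hash-based refinement of phase two is omitted.

```python
# Coarse partition of DFA states by (accepting?, backward depth).
from collections import defaultdict, deque

def backward_depth_partition(transitions, accepting):
    # transitions: {state: {symbol: next_state}}
    reverse = defaultdict(set)
    for state, moves in transitions.items():
        for nxt in moves.values():
            reverse[nxt].add(state)
    depth = {s: 0 for s in accepting}
    queue = deque(accepting)
    while queue:                                   # backward BFS from accepting states
        s = queue.popleft()
        for pred in reverse[s]:
            if pred not in depth:
                depth[pred] = depth[s] + 1
                queue.append(pred)
    blocks = defaultdict(set)
    for state in transitions:
        blocks[(state in accepting, depth.get(state, -1))].add(state)
    return dict(blocks)

dfa = {0: {"a": 1, "b": 2}, 1: {"a": 1, "b": 3}, 2: {"a": 1, "b": 2}, 3: {"a": 3, "b": 3}}
print(backward_depth_partition(dfa, accepting={3}))
```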
Landau singularities from the amplituhedron
Dennen, T.; Prlina, I.; Spradlin, M.; ...
2017-06-28
We propose a simple geometric algorithm for determining the complete set of branch points of amplitudes in planar N = 4 super-Yang-Mills theory directly from the amplituhedron, without resorting to any particular representation in terms of local Feynman integrals. This represents a step towards translating integrands directly into integrals. In particular, the algorithm provides information about the symbol alphabets of general amplitudes. We illustrate the algorithm applied to the one- and two-loop MHV amplitudes.
Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy
Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao
2016-01-01
Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We performed our method on five different datasets and compared our method to five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000
Mathematical Fundamentals of Probabilistic Semantics for High-Level Fusion
2013-12-02
understanding of the fundamental aspects of uncertainty representation and reasoning that a theory of hard and soft high-level fusion must encompass. Successful completion requires an unbiased, in-depth... and soft information is the lack of a fundamental HLIF theory, backed by a consistent mathematical framework and supporting algorithms. Although there...
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so it is valuable to develop MC extraction algorithms. Based on the concepts of weighted subspace and optimization theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighted matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations.
NASA Astrophysics Data System (ADS)
Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu
2018-02-01
Traditional power forecasting models cannot efficiently take various factors into account, nor can they identify the relevant factors. In this paper, mutual information from information theory and the artificial intelligence random forests algorithm are introduced into medium- and long-term electricity demand prediction. Mutual information can identify the highly related factors based on the average mutual information between a variety of variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm was used to build forecasting models for the different industries according to their correlated factors. Electricity consumption data from Jiangsu Province are taken as a practical example, and the above methods are compared with methods that do not use mutual information or industry partitioning. The simulation results show that the above method is scientific, effective, and can provide higher prediction accuracy.
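A brief sketch of the two-stage scheme above: average mutual information screens the candidate drivers, and a random forest is then fitted on the retained drivers. The synthetic data and the 0.05 screening threshold are placeholders, not the Jiangsu Province data or the paper's settings.

```python
# Mutual-information screening followed by a random forest regressor (toy data).
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 6))                     # candidate factors (e.g. GDP, population, ...)
demand = 3 * X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=200)

mi = mutual_info_regression(X, demand, random_state=0)
kept = np.where(mi > 0.05)[0]                     # factors strongly related to demand
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, kept], demand)
print("kept factors:", kept, "in-sample R^2:", model.score(X[:, kept], demand))
```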
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent systems with a fixed communication topology in the presence of intermittent communication, using time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens over a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to the intermittent information transmissions. Time-scale theory provides a powerful tool to combine the continuous-time and discrete-time cases and to study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
Making Better Use of Bandwidth: Data Compression and Network Management Technologies
2005-01-01
data, the compression would not be a success. A key feature of the Lempel-Ziv family of algorithms is that the... citeseer.nj.nec.com/yu02motion.html. Ziv, J., and A. Lempel, “A Universal Algorithm for Sequential Data Compression,” IEEE Transactions on Information Theory, Vol. 23, 1977, pp. 337–342. ... probability models – Lempel-Ziv – Prediction by partial matching. The central component of a lossless compression algorithm...
Hierarchical clustering using mutual information
NASA Astrophysics Data System (ADS)
Kraskov, A.; Stögbauer, H.; Andrzejak, R. G.; Grassberger, P.
2005-04-01
We present a conceptually simple method for hierarchical clustering of data called mutual information clustering (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: The MI between three objects X, Y, and Z is equal to the sum of the MI between X and Y, plus the MI between Z and the combined object (XY). We use this both in the Shannon (probabilistic) version of information theory and in the Kolmogorov (algorithmic) version. We apply our method to the construction of phylogenetic trees from mitochondrial DNA sequences and to the output of independent components analysis (ICA) as illustrated with the ECG of a pregnant woman.
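A compact sketch of the MIC idea above for discrete variables: the similarity is the mutual information, and when two objects are merged their joint labels form the combined object (XY), as in the grouping property. The random toy variables replace the DNA and ICA data used in the paper, and the joint-label encoding is an implementation convenience.

```python
# Agglomerative clustering with mutual information as the similarity (toy data).
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(8)
base = rng.integers(0, 4, size=500)
objects = {                                        # four discrete "signals"
    "x1": base,
    "x2": (base + rng.integers(0, 2, size=500)) % 4,   # noisy copy of x1
    "y1": rng.integers(0, 4, size=500),
    "y2": rng.integers(0, 4, size=500),
}

def combine(a, b):
    return a * (b.max() + 1) + b                   # encode the joint variable (XY)

clusters = dict(objects)
while len(clusters) > 1:
    names = list(clusters)
    pairs = [(mutual_info_score(clusters[a], clusters[b]), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    mi, a, b = max(pairs)                          # merge the most similar pair
    print(f"merge {a} + {b}  (MI = {mi:.3f})")
    clusters[f"({a}{b})"] = combine(clusters.pop(a), clusters.pop(b))
```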
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, are developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
NASA Astrophysics Data System (ADS)
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
With the purpose of reinforcing the correlation analysis of threat factors in risk assessment, a dynamic assessment method of safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into it. By clustering all particles and using each cluster centroid as the representative in subsequent computations, the amount of calculation is reduced. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management and control strategy.
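The abstract above does not give the exact clustering step, so the following is only a hedged sketch of the general idea: replace a large set of weighted particles by a few k-means centroids, each carrying the aggregated weight of its cluster. All names and parameters (the value of k, the toy particle cloud) are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_particles(particles, weights, k=15, seed=0):
    """Replace N weighted particles by k weighted centroids (illustrative sketch).

    particles : (N, d) array of particle states
    weights   : (N,) normalized importance weights
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(particles, sample_weight=weights)      # weight-aware clustering
    agg = np.zeros(k)
    for label, w in zip(km.labels_, weights):     # each centroid keeps its cluster's mass
        agg[label] += w
    return km.cluster_centers_, agg

# toy particle cloud in a 3-dimensional threat-indicator space
rng = np.random.default_rng(0)
parts = rng.normal(size=(2000, 3))
w = rng.random(2000)
w /= w.sum()
centroids, cw = reduce_particles(parts, w)
print(centroids.shape, round(cw.sum(), 6))        # (15, 3) and 1.0
```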
NASA Astrophysics Data System (ADS)
Zhang, D.; Zhang, W. Y.
2017-08-01
Evacuation planning is an important activity in disaster management. It has to be planned in advance due to the unpredictable occurrence of disasters, and it is necessary that evacuation plans be as close as possible to the real evacuation work. However, evacuation planning is extremely challenging because of the inherent uncertainty of the required information. The problem considered here is a vehicle routing problem arising in public-transit-based evacuation. In this paper, the demand at each evacuation point is a fuzzy number, and the routing selection for each point is based on a fuzzy credibility preference index. This paper obtains an approximate optimal solution for this problem with a genetic algorithm based on fuzzy reliability theory. Finally, the algorithm is applied to an optimization model, and the experimental results show that the algorithm is effective.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automatons (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
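The paper's precise definition of backward depth is not reproduced in the abstract; one plausible reading, sketched below, is the length of the shortest word that drives a state into an accepting state (a BFS on reversed transitions). Since equivalent states accept the same suffix language, they necessarily share this depth, so grouping by it yields a valid coarse initial partition that a later refinement step (e.g., by hash table) can sharpen.

```python
from collections import deque, defaultdict

def backward_depths(transitions, accepting):
    """Shortest distance from each state to an accepting state, via BFS on
    reversed transitions. transitions: dict (state, symbol) -> state."""
    reverse = defaultdict(list)
    for (src, _sym), dst in transitions.items():
        reverse[dst].append(src)
    depth = {q: 0 for q in accepting}
    queue = deque(accepting)
    while queue:
        q = queue.popleft()
        for p in reverse[q]:
            if p not in depth:
                depth[p] = depth[q] + 1
                queue.append(p)
    return depth                      # states missing here accept nothing (dead states)

def coarse_partition(transitions, accepting, n_states):
    """Group states by backward depth; each block would be refined afterwards."""
    depth = backward_depths(transitions, accepting)
    blocks = defaultdict(set)
    for q in range(n_states):
        blocks[depth.get(q, -1)].add(q)   # -1 marks dead states
    return list(blocks.values())

# toy DFA over {a, b}: states 0..3, accepting state 3
T = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 3, (1, 'b'): 2,
     (2, 'a'): 1, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
print(coarse_partition(T, accepting=[3], n_states=4))   # [{0}, {1, 2}, {3}]
```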
NASA Technical Reports Server (NTRS)
Sandell, N. R., Jr.; Athans, M.
1975-01-01
The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.
Modeling and Performance Optimization of Large-Scale Data-Communication Networks.
1981-06-01
Retrieved excerpts: "...modifies the flow assignment to satisfy end-to-end delay constraints. Rationale for Min-Hop strategy: the Min-Hop algorithm proposed in this..." Cited reference: Ho, Y., M. Kastner, and E. Wong, "Teams, market signalling, and information theory," IEEE Trans. Automat. Contr., vol. AC-23.
Change detection of bitemporal multispectral images based on FCM and D-S theory
NASA Astrophysics Data System (ADS)
Shi, Aiye; Gao, Guirong; Shen, Shaohong
2016-12-01
In this paper, we propose a change detection method for bitemporal multispectral images based on D-S theory and the fuzzy c-means (FCM) algorithm. Firstly, the uncertainty and certainty regions are determined by a thresholding method applied to the magnitude of the difference image (MDI) and the spectral angle information (SAI) of the bitemporal images. Secondly, the FCM algorithm is applied to the MDI and SAI in the uncertainty region, respectively. Then, the basic probability assignment (BPA) functions of the changed and unchanged classes are obtained from the fuzzy membership values of the FCM algorithm. In addition, the optimal value of the fuzzy exponent of FCM is adaptively determined by the degree of conflict between the MDI and SAI in the uncertainty region. Finally, D-S theory is applied to obtain a new fuzzy partition matrix for the uncertainty region, from which the change map is obtained. Experiments on bitemporal Landsat TM images and bitemporal SPOT images validate that the proposed method is effective.
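How the paper maps FCM memberships to basic probability assignments is only summarized above, so the sketch below makes an explicit assumption: a fixed discount mass is reserved for the ignorance set, and the remaining mass follows the membership value. The Dempster combination step itself is standard.

```python
def dempster_combine(m1, m2):
    """Dempster's rule on the frame {'C','U'} with ignorance 'T' = {C, U}.
    m1, m2: dicts with keys 'C', 'U', 'T' whose values sum to 1."""
    def inter(a, b):
        if a == 'T':
            return b
        if b == 'T':
            return a
        return a if a == b else None       # None = empty intersection
    combined = {'C': 0.0, 'U': 0.0, 'T': 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = inter(a, b)
            if c is None:
                conflict += wa * wb
            else:
                combined[c] += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def bpa_from_membership(u_changed, discount=0.1):
    """Hypothetical mapping: reserve a fixed mass for the ignorance set 'T'."""
    return {'C': (1 - discount) * u_changed,
            'U': (1 - discount) * (1 - u_changed),
            'T': discount}

# example: MDI says a pixel is probably changed, SAI is less certain
m_mdi = bpa_from_membership(0.85)
m_sai = bpa_from_membership(0.60)
print(dempster_combine(m_mdi, m_sai))
```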
Power allocation for SWIPT in K-user interference channels using game theory
NASA Astrophysics Data System (ADS)
Wen, Zhigang; Liu, Ying; Liu, Xiaoqing; Li, Shan; Chen, Xianya
2018-12-01
A simultaneous wireless information and power transfer (SWIPT) system in multi-user interference channels is considered. In this system, each transmitter sends one data stream to its targeted receiver, which causes interference to the other receivers. Since all transmitter-receiver links want to maximize their own average transmission rate, a power allocation problem under transmit power constraints and energy-harvesting constraints is formulated. To solve this problem, we propose a game theory framework. We then convert the game into a variational inequality problem by establishing the connection between game theory and variational inequalities, and solve it. Through theoretical analysis, the existence and uniqueness of the Nash equilibrium are both guaranteed by the theory of variational inequalities. A distributed iterative alternating optimization water-filling algorithm is derived, which is proved to converge. Numerical results show that the proposed algorithm reaches fast convergence and achieves a higher sum rate than the unaided scheme.
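The authors' algorithm additionally enforces energy-harvesting constraints through variational inequalities; the sketch below shows only the core distributed iterative water-filling step over parallel subchannels, with each link treating the others' signals as noise. The channel gains G, noise power sigma2, and power budgets P are made-up illustrative quantities.

```python
import numpy as np

def waterfill(noise_over_gain, budget, iters=60):
    """Single-user water-filling: p_n = max(0, mu - noise_over_gain[n]), with the
    water level mu found by bisection so that sum(p) equals the power budget."""
    lo, hi = 0.0, noise_over_gain.max() + budget
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - noise_over_gain).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise_over_gain)

def iterative_waterfilling(G, sigma2, P, n_rounds=50):
    """G[k, j, n]: gain from transmitter j to receiver k on subchannel n.
    Each link water-fills against the interference produced by the others."""
    K, _, N = G.shape
    p = np.zeros((K, N))
    for _ in range(n_rounds):
        for k in range(K):
            interference = sigma2 + sum(G[k, j] * p[j] for j in range(K) if j != k)
            p[k] = waterfill(interference / G[k, k], P[k])
    return p

rng = np.random.default_rng(1)
K, N = 3, 8
G = 0.2 * rng.random((K, K, N)) + np.eye(K)[:, :, None]   # strong direct links
p = iterative_waterfilling(G, sigma2=0.1, P=np.ones(K))
print(p.sum(axis=1))   # each link uses (approximately) its full power budget
```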
Mache: No-Loss Trace Compaction
1988-09-15
Retrieved excerpt: "...a compression scheme which takes advantage of repeating patterns in the sequence of bytes. I have used the Lempel-Ziv compression algorithm [9,10,11]..." Cited reference fragments: "Data Compression," IEEE Computer (June 1984), 8-19; Ziv, J. and Lempel, A., "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory; Ziv, J. and Lempel, A., "Compression of Individual Sequences via Variable-..."
Remote sensing of suspended sediment water research: principles, methods, and progress
NASA Astrophysics Data System (ADS)
Shen, Ping; Zhang, Jing
2011-12-01
In this paper, we review the principles, data, methods, and steps of suspended sediment research using remote sensing, summarize representative models and methods, and analyze the deficiencies of existing approaches. Drawing on recent progress in remote sensing theory and its application to water suspended sediment research, we introduce data processing methods such as atmospheric correction and adjacent-effect correction, as well as intelligent algorithms such as neural networks, genetic algorithms, and support vector machines, into suspended sediment inversion research. Combined with other geographic information and based on Bayesian theory, we improve the inversion precision of suspended sediment and aim to provide a reference for related researchers.
The use of network theory to model disparate ship design information
NASA Astrophysics Data System (ADS)
Rigterink, Douglas; Piks, Rebecca; Singer, David J.
2014-06-01
This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms, both separately and as a multiplex network, to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate the unknown parameter under an unreliable network which allows new sensors to join and old sensors to leave, and can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid the interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's excellent characteristics for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism among bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
NASA Technical Reports Server (NTRS)
Buntine, Wray
1991-01-01
Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. How a tree learning algorithm can be derived from Bayesian decision theory is outlined. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule turns out to be similar to Quinlan's information gain splitting rule, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4, and Breiman et al.'s CART show that the full Bayesian algorithm is consistently as good as, or more accurate than, these other approaches, though at a computational price.
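A hedged sketch of the information gain splitting rule mentioned above (the Bayesian smoothing and tree-averaging machinery of the paper is not reproduced); the toy attributes are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Gain of splitting `labels` on a discrete attribute."""
    n = len(labels)
    split = {}
    for v, y in zip(attribute_values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# toy data: the attribute with the highest gain would be chosen as the split
outlook = ['sun', 'sun', 'rain', 'rain', 'sun', 'rain']
windy   = [True, False, True, False, False, True]
play    = ['no', 'yes', 'no', 'yes', 'yes', 'no']
print(information_gain(outlook, play), information_gain(windy, play))
```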
Quantum Algorithms for Fermionic Quantum Field Theories
2014-04-28
Retrieved excerpt: "...operators of momentum modes. (The choice between these forms of measurement depends on the application.) 2.3 Complexity: In this section we bound the..." (The remainder of the retrieved text is report front matter.)
Balouchestani, Mohammadreza; Krishnan, Sridhar
2014-01-01
Long-term recording of Electrocardiogram (ECG) signals plays an important role in health care systems for the diagnosis and treatment of heart diseases. Clustering and classification of the collected data are essential for detecting concealed information in the P-QRS-T waves of long-term ECG recordings. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from huge energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm that can easily scan large ECG datasets and enable low-power, long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), are then applied, followed by classification of the data with the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers. We show that the algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods. The proposed algorithm outperforms existing algorithms, increasing classification accuracy by 11%. In addition, the proposed algorithm achieves classification accuracies for the K-NN and PNN classifiers and a Receiver Operating Characteristic (ROC) area of 99.98%, 99.83%, and 99.75%, respectively.
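A hedged sketch of the processing chain described above, with a random projection standing in for compressed-sensing acquisition, followed by PCA features and a K-NN classifier; the synthetic beats and parameter choices are illustrative, not the authors' data or implementation.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for segmented ECG beats: 2 classes, 400 beats each
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
normal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=(400, 256))
abnorm = np.sin(2 * np.pi * 7 * t) + 0.3 * rng.normal(size=(400, 256))
X = np.vstack([normal, abnorm])
y = np.array([0] * 400 + [1] * 400)

# 1) CS-style random measurements, 2) PCA features, 3) K-NN classification
X_cs = GaussianRandomProjection(n_components=64, random_state=0).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_cs, y, test_size=0.3, random_state=0)
pca = PCA(n_components=10).fit(X_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
print("test accuracy:", knn.score(pca.transform(X_te), y_te))
```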
Using quantum theory to simplify input-output processes
NASA Astrophysics Data System (ADS)
Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile
2017-02-01
All natural things process and transform information. They receive environmental information as input, and transform it into appropriate output responses. Much of science is dedicated to building models of such systems-algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency-storing past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.
Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure
Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas
2015-01-01
Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
Davis, Thomas D
2017-01-01
Practice evaluation strategies range in style from the formal-analytic tools of single-subject designs, rapid assessment instruments, algorithmic steps in evidence-informed practice, and computer software applications, to the informal-interactive tools of clinical supervision, consultation with colleagues, use of client feedback, and clinical experience. The purpose of this article is to provide practice researchers in social work with an evidence-informed theory that is capable of explaining both how and why social workers use practice evaluation strategies to self-monitor the effectiveness of their interventions in terms of client change. The author delineates the theoretical contours and consequences of what is called dual-process theory. Drawing on evidence-informed advances in the cognitive and social neurosciences, the author identifies among everyday social workers a theoretically stable, informal-interactive tool preference that is a cognitively necessary, sufficient, and stand-alone preference that requires neither the supplementation nor balance of formal-analytic tools. The author's delineation of dual-process theory represents a theoretical contribution in the century-old attempt to understand how and why social workers evaluate their practice the way they do.
Number Theory and Public-Key Cryptography.
ERIC Educational Resources Information Center
Lefton, Phyllis
1991-01-01
Described are activities in the study of techniques used to conceal the meanings of messages and data. Some background information and two BASIC programs that illustrate the algorithms used in a new cryptographic system called "public-key cryptography" are included. (CW)
Astrophysical data analysis with information field theory
NASA Astrophysics Data System (ADS)
Enßlin, Torsten
2014-12-01
Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of the permeability and porosity distributions and on the theory of regularization, to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
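The actual algorithm couples a reservoir simulator with bi-cubic spline parameterizations; the sketch below only illustrates the regularization-plus-conjugate-gradient idea on a linear toy inverse problem (the forward operator G, roughness penalty L, and regularization weight lam are made up).

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for symmetric positive-definite A given as a function."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# ill-posed toy problem: recover a smooth parameter field m from noisy data d
rng = np.random.default_rng(0)
n = 60
G = rng.normal(size=(40, n)) / np.sqrt(n)          # toy forward operator
m_true = np.sin(np.linspace(0, np.pi, n))
d = G @ m_true + 0.01 * rng.normal(size=40)

L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # roughness penalty
lam = 1e-2
normal_op = lambda x: G.T @ (G @ x) + lam * (L.T @ (L @ x))   # regularized normal equations
m_est = conjugate_gradient(normal_op, G.T @ d)
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```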
Image watermarking capacity analysis based on Hopfield neural network
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhang, Hongbin
2004-11-01
In watermarking schemes, watermarking can be viewed as a form of communication problem. Almost all previous work on image watermarking capacity is based on information theory, using the Shannon formula to calculate the capacity of watermarking. In this paper, we present a blind watermarking algorithm using a Hopfield neural network and analyze the watermarking capacity based on the neural network. In our watermarking algorithm, the watermarking capacity is determined by the attraction basin of the associative memory.
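The capacity analysis itself is not reproduced here; the sketch below shows only the Hebbian storage and recall step whose attraction basins the abstract refers to, on illustrative random patterns.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+1/-1) patterns; zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    """Synchronous updates until a fixed point (or the step limit)."""
    s = probe.copy()
    for _ in range(steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 100))          # 3 patterns of 100 bits
W = train_hopfield(stored)
noisy = stored[0].copy()
flip = rng.choice(100, size=10, replace=False)       # corrupt 10 bits
noisy[flip] *= -1
print("recovered:", np.array_equal(recall(W, noisy), stored[0]))
```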
Quantum Information Theory - an Invitation
NASA Astrophysics Data System (ADS)
Werner, Reinhard F.
Quantum information and quantum computers have received a lot of public attention recently. Quantum computers have been advertised as a kind of warp drive for computing, and indeed the promise of the algorithms of Shor and Grover is to perform computations which are extremely hard or even provably impossible on any merely "classical" computer. In this article I give an account of the basic concepts of quantum information theory, staying as much as possible in the area of general agreement. The article is divided into two parts. The first (up to the end of Sect. 2.5) is mostly in plain English, centered around the exploration of what can or cannot be done with quantum systems as information carriers. The second part, Sect. 2.6, then gives a description of the mathematical structures and of some of the tools needed to develop the theory.
NASA Astrophysics Data System (ADS)
Bai, Wei; Yang, Hui; Yu, Ao; Xiao, Hongyun; He, Linkuan; Feng, Lei; Zhang, Jie
2018-01-01
The leakage of confidential information is one of the important issues in network security. Elastic Optical Networks (EON), a promising technology for the optical transport network, are under threat from eavesdropping attacks. There is a great demand to support confidential information services (CIS) and to design efficient security strategies against eavesdropping attacks. In this paper, we propose a solution to cope with eavesdropping attacks in routing and spectrum allocation. Firstly, we introduce probability theory to describe the eavesdropping issue and achieve awareness of eavesdropping attacks. Then we propose an eavesdropping-aware routing and spectrum allocation (ES-RSA) algorithm to guarantee information security. To further improve security and network performance, we employ multi-flow virtual concatenation (MFVC) and propose an eavesdropping-aware MFVC-based secure routing and spectrum allocation (MES-RSA) algorithm. The presented simulation results show that the two proposed RSA algorithms both achieve greater security against eavesdropping attacks, and that MES-RSA also improves network performance efficiently.
NASA Astrophysics Data System (ADS)
Pande, Saket; Sharma, Ashish
2014-05-01
This study is motivated by the need to robustly specify, identify, and forecast runoff generation processes for hydroelectricity production. At a minimum, it requires identifying significant predictors of runoff generation and the influence of each such predictor on the runoff response. To this end, we compare two non-parametric algorithms for predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) using the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other algorithm is based on a frequentist approach that uses the bounds on probability of error concept of Pande (2005); it assesses all possible predictor subsets on the go and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real-world case study of inflow forecasting. References: Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT-USA, 148p.
Adaptive, Distributed Control of Constrained Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MAS), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N Queens problem and the bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
Optimization of multicast optical networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng
2007-11-01
In this letter, aiming to obtain the best multicast performance of an optical network in which video conference information is carried on a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problems of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies obtained by a numerical solution designed around an effective genetic algorithm. The results show that the optimal solutions obtained with the genetic algorithm are in accordance with those of the traditional fictitious play method.
Cafaro, Carlo; Alsing, Paul M
2018-04-01
The relevance of the concept of Fisher information is increasing in both statistical physics and quantum computing. From a statistical mechanical standpoint, the application of Fisher information in the kinetic theory of gases is characterized by its decrease along the solutions of the Boltzmann equation for Maxwellian molecules in the two-dimensional case. From a quantum mechanical standpoint, the output state in Grover's quantum search algorithm follows a geodesic path obtained from the Fubini-Study metric on the manifold of Hilbert-space rays. Additionally, Grover's algorithm is specified by constant Fisher information. In this paper, we present an information geometric characterization of the oscillatory or monotonic behavior of statistically parametrized squared probability amplitudes originating from special functional forms of the Fisher information function: constant, exponential decay, and power-law decay. Furthermore, for each case, we compute both the computational speed and the availability loss of the corresponding physical processes by exploiting a convenient Riemannian geometrization of useful thermodynamical concepts. Finally, we briefly comment on the possibility of using the proposed methods of information geometry to help identify a suitable trade-off between speed and thermodynamic efficiency in quantum search algorithms.
NASA Astrophysics Data System (ADS)
Cafaro, Carlo; Alsing, Paul M.
2018-04-01
The relevance of the concept of Fisher information is increasing in both statistical physics and quantum computing. From a statistical mechanical standpoint, the application of Fisher information in the kinetic theory of gases is characterized by its decrease along the solutions of the Boltzmann equation for Maxwellian molecules in the two-dimensional case. From a quantum mechanical standpoint, the output state in Grover's quantum search algorithm follows a geodesic path obtained from the Fubini-Study metric on the manifold of Hilbert-space rays. Additionally, Grover's algorithm is specified by constant Fisher information. In this paper, we present an information geometric characterization of the oscillatory or monotonic behavior of statistically parametrized squared probability amplitudes originating from special functional forms of the Fisher information function: constant, exponential decay, and power-law decay. Furthermore, for each case, we compute both the computational speed and the availability loss of the corresponding physical processes by exploiting a convenient Riemannian geometrization of useful thermodynamical concepts. Finally, we briefly comment on the possibility of using the proposed methods of information geometry to help identify a suitable trade-off between speed and thermodynamic efficiency in quantum search algorithms.
Cooperative Localization for Multi-AUVs Based on GM-PHD Filters and Information Entropy Theory
Zhang, Lichuan; Wang, Tonghao; Xu, Demin
2017-01-01
Cooperative localization (CL) is considered a promising method for underwater localization with respect to multiple autonomous underwater vehicles (multi-AUVs). In this paper, we proposed a CL algorithm based on information entropy theory and the probability hypothesis density (PHD) filter, aiming to enhance the global localization accuracy of the follower. In the proposed framework, the follower carries lower cost navigation systems, whereas the leaders carry better ones. Meanwhile, the leaders acquire the followers’ observations, including both measurements and clutter. Then, the PHD filters are utilized on the leaders and the results are communicated to the followers. The followers then perform weighted summation based on all received messages and obtain a final positioning result. Based on the information entropy theory and the PHD filter, the follower is able to acquire a precise knowledge of its position. PMID:28991191
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information and the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgements on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.
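A hedged sketch of the l1-norm maximum fusion rule described above: per patch, the sparse code with the larger l1 norm (taken as the activity level) is kept. The group-structure handling of the full method is omitted, and the coefficient matrices are synthetic.

```python
import numpy as np

def l1_max_fuse(coeffs_a, coeffs_b):
    """Choose, patch by patch, the sparse code with the larger l1 norm.
    coeffs_*: (n_atoms, n_patches) sparse coefficient matrices."""
    act_a = np.abs(coeffs_a).sum(axis=0)       # activity level per patch
    act_b = np.abs(coeffs_b).sum(axis=0)
    pick_a = act_a >= act_b
    return np.where(pick_a[None, :], coeffs_a, coeffs_b)

# toy example: 128-atom dictionary, 500 patches per source image
rng = np.random.default_rng(0)
A = rng.laplace(scale=0.1, size=(128, 500))
B = rng.laplace(scale=0.1, size=(128, 500))
fused_codes = l1_max_fuse(A, B)
# fused patches would then be reconstructed as D @ fused_codes for a dictionary D
print(fused_codes.shape)
```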
Lattice field theory applications in high energy physics
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2016-10-01
Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.
Quantum load balancing in ad hoc networks
NASA Astrophysics Data System (ADS)
Hasanpour, M.; Shariat, S.; Barnaghi, P.; Hoseinitabatabaei, S. A.; Vahid, S.; Tafazolli, R.
2017-06-01
This paper presents a novel approach to load balancing in ad hoc networks utilizing the properties of quantum game theory. This approach benefits from the instantaneous and information-less capability of entangled particles to synchronize the load balancing strategies in ad hoc networks. The quantum load balancing (QLB) algorithm proposed by this work is implemented on top of OLSR as the baseline routing protocol; its performance is analyzed against the baseline OLSR, and considerable gain is reported for some of the main QoS metrics such as delay and jitter. Furthermore, it is shown that the QLB algorithm supports a solid stability gain in terms of throughput, which stands as a proof of concept for the load balancing properties of the proposed theory.
Python for Information Theoretic Analysis of Neural Data
Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano
2008-01-01
Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
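Not the toolbox's code, but a minimal illustration of the limited-sampling bias issue discussed above: a plug-in entropy estimate compared with the first-order Miller-Madow correction on synthetic response counts.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (maximum likelihood) entropy in bits from a count vector."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def miller_madow_entropy(counts):
    """Plug-in estimate plus the first-order (m-1)/(2N) bias correction (in bits)."""
    n = counts.sum()
    m = np.count_nonzero(counts)
    return plugin_entropy(counts) + (m - 1) / (2 * n * np.log(2))

rng = np.random.default_rng(0)
true_p = np.full(16, 1 / 16)            # uniform over 16 response bins: H = 4 bits
for n_trials in (50, 200, 1000):
    est_plugin, est_mm = [], []
    for _ in range(200):
        sample = rng.choice(16, size=n_trials, p=true_p)
        counts = np.bincount(sample, minlength=16)
        est_plugin.append(plugin_entropy(counts))
        est_mm.append(miller_madow_entropy(counts))
    print(n_trials, round(np.mean(est_plugin), 3), round(np.mean(est_mm), 3))
```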
Lin, Luan; McKerrow, Wilson H; Richards, Bryce; Phonsom, Chukiat; Lawrence, Charles E
2018-03-05
The nearest neighbor model and associated dynamic programming algorithms allow for the efficient estimation of the RNA secondary structure Boltzmann ensemble. However, because a given RNA secondary structure only contains a fraction of the possible helices that could form from a given sequence, the Boltzmann ensemble is multimodal. Several methods exist for clustering structures and finding those modes. However, less focus is given to exploring the underlying reasons for this multimodality: the presence of conflicting basepairs. Information theory, or more specifically mutual information, provides a method to identify those basepairs that are key to the secondary structure. To this end, we find the most informative basepairs and visualize the effect of these basepairs on the secondary structure. Knowing whether a most informative basepair is present not only tells us the status of that particular pair but also provides a large amount of information about which other pairs are present or absent. We find that a few basepairs account for a large amount of the structural uncertainty. The identification of these pairs indicates small changes to sequence or stability that will have a large effect on structure. We provide a novel algorithm that uses mutual information to identify the key basepairs that lead to a multimodal Boltzmann distribution. We then visualize the effect of these pairs on the overall Boltzmann ensemble.
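A hedged sketch of the mutual-information idea: given a sample of structures from the Boltzmann ensemble (here a toy ensemble with two mutually exclusive pairs), compute the MI between base-pair presence indicators and rank pairs by their total MI. The authors' actual algorithm and visualization are not reproduced.

```python
import numpy as np

def pairwise_mi(indicators):
    """indicators: (n_structures, n_pairs) binary matrix of base-pair presence.
    Returns the (n_pairs, n_pairs) matrix of mutual information in bits."""
    n, m = indicators.shape
    mi = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            joint = np.zeros((2, 2))
            for a in (0, 1):
                for b in (0, 1):
                    joint[a, b] = np.mean((indicators[:, i] == a) &
                                          (indicators[:, j] == b))
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            with np.errstate(divide='ignore', invalid='ignore'):
                terms = joint * np.log2(joint / np.outer(pi, pj))
            mi[i, j] = mi[j, i] = np.nansum(terms)
    return mi

# toy ensemble: pair 0 and pair 1 conflict (never co-occur), pair 2 is independent
rng = np.random.default_rng(0)
p0 = rng.integers(0, 2, 2000)
ind = np.column_stack([p0, 1 - p0, rng.integers(0, 2, 2000)])
mi = pairwise_mi(ind)
print("total MI per pair:", mi.sum(axis=1).round(3))   # pairs 0 and 1 dominate
```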
NASA Technical Reports Server (NTRS)
Lenoble, Jacqueline (Editor); Remer, Lorraine (Editor); Tanre, Didier (Editor)
2012-01-01
This book gives a much needed explanation of the basic physical principles of radiative transfer and remote sensing, and presents all the instruments and retrieval algorithms in a homogeneous manner. For the first time, an easy path from theory to practical algorithms is available in one easily accessible volume, making the connection between theoretical radiative transfer and individual practical solutions for retrieving aerosol information from remote sensing. In addition, the specifics and intercomparison of all current and historical methods are explained and clarified.
Detection of low-contrast images in film-grain noise.
Naderi, F; Sawchuk, A A
1978-09-15
When low-contrast photographic images are digitized by a very small aperture, extreme film-grain noise almost completely obliterates the image information. Using a large aperture to average out the noise destroys the fine details of the image. In these situations conventional statistical restoration techniques have little effect, and well-chosen heuristic algorithms have yielded better results. In this paper we analyze the noise-cheating algorithm of Zweig et al. [J. Opt. Soc. Am. 65, 1347 (1975)] and show that it can be justified by classical maximum-likelihood detection theory. A more general algorithm applicable to a broader class of images is then developed by considering the signal-dependent nature of film-grain noise. Finally, a Bayesian detection algorithm with improved performance is presented.
Fouks, J D; Besnard, S; Signac, L; Meurice, J C; Neau, J P; Paquereau, J
2004-04-01
The present paper presents algorithmic results that offer a view of sleep functions complementing biological theory and experiments. Derived from the algorithmic theory of information, the theory of adaptation aims at quantifying how an inherited or acquired piece of knowledge helps individuals to survive. It gives a scale of complexity for survival problems and proves that some of them can only be solved by a dynamical management of memory associating continuous learning and forgetting methods. In this paper we explain how a virtual robot, "Picota," has been designed to simulate the behavior of a living hen. In order to survive in its synthetic environment, our robot must distinguish good seeds from bad ones, and should take rest during night periods. Within this frame, and facing the rapid evolution of the forms to be recognized, the best way to balance the energetic needs of the robot and ensure survival is to use the nightly rest to reorganize the pieces of data acquired during the daily learning and to discard the less useful ones. Thanks to this time sharing, the same circuits can be used for both daily learning and nightly forgetting, and thus costs are lower; however, this also forces the system to "paralyse" the virtual robot, and therefore the night algorithm is reminiscent of paradoxical (REM) sleep. The algorithm of the robot takes advantage of the alternation between wakefulness or activity and the rest period. This scheme quite accurately recalls the REM period. In the future, the convergence between the neurophysiology of sleep and the theory of complexity may give us a new line of research to elucidate sleep functions.
Maneuver Algorithm for Bearings-Only Target Tracking with Acceleration and Field of View Constraints
NASA Astrophysics Data System (ADS)
Roh, Heekun; Shim, Sang-Wook; Tahk, Min-Jea
2018-05-01
This paper proposes a maneuver algorithm for the agent performing target tracking with bearing angle information only. The goal of the agent is to estimate the target position and velocity based only on the bearing angle data. The methods of bearings-only target state estimation are outlined. The nature of bearings-only target tracking problem is then addressed. Based on the insight from above-mentioned properties, the maneuver algorithm for the agent is suggested. The proposed algorithm is composed of a nonlinear, hysteresis guidance law and the estimation accuracy assessment criteria based on the theory of Cramer-Rao bound. The proposed guidance law generates lateral acceleration command based on current field of view angle. The accuracy criteria supply the expected estimation variance, which acts as a terminal criterion for the proposed algorithm. The aforementioned algorithm is verified with a two-dimensional simulation.
NASA Astrophysics Data System (ADS)
Xie, Yanan; Zhou, Mingliang; Pan, Dengke
2017-10-01
A forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation measured with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm is based on the modified regression empirical algorithm rather than a linear regression formula to retrieve the near-surface rainfall rate. Half of the empirical parameters in the weighted integration are eliminated, and a smaller average relative error is obtained when the rainfall rate is less than 100 mm/h. Therefore, the algorithm proposed in this paper can obtain high-precision rainfall information.
Quantum algorithms for quantum field theories.
Jordan, Stephen P; Lee, Keith S M; Preskill, John
2012-06-01
Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ(4) theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.
Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.
Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola
2017-06-06
Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image, which is then denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the changed area and the unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithms. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in the Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.
Devine, Sean D
2016-02-01
Replication can be envisaged as a computational process that is able to generate and maintain order far from equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far from equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside interventions. Both diversity in replicated structures and the coupling of different replicated systems increase the ability of the system (or systems) to self-regulate in a changing environment, as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
An overview of the essential differences and similarities of system identification techniques
NASA Technical Reports Server (NTRS)
Mehra, Raman K.
1991-01-01
Information is given in the form of outlines, graphs, tables and charts. Topics include system identification, Bayesian statistical decision theory, Maximum Likelihood Estimation, identification methods, structural mode identification using a stochastic realization algorithm, and identification results regarding membrane simulations and X-29 flutter flight test data.
A Linear Algebra Measure of Cluster Quality.
ERIC Educational Resources Information Center
Mather, Laura A.
2000-01-01
Discussion of models for information retrieval focuses on an application of linear algebra to text clustering, namely, a metric for measuring cluster quality based on the theory that cluster quality is proportional to the number of terms that are disjoint across the clusters. Explains term-document matrices and clustering algorithms. (Author/LRW)
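A small sketch of the metric as described in the abstract: from a binary term-document matrix and a clustering of the documents, count the fraction of terms whose occurrences are confined to a single cluster; the exact normalization used in the article may differ.

```python
import numpy as np

def disjoint_term_quality(td_matrix, labels):
    """td_matrix: (n_terms, n_docs) binary occurrence matrix.
    labels: cluster id per document.
    Returns the fraction of terms occurring in documents of only one cluster."""
    clusters = np.unique(labels)
    # for each term, record in which clusters it appears
    present = np.stack([(td_matrix[:, labels == c].sum(axis=1) > 0)
                        for c in clusters], axis=1)
    n_clusters_per_term = present.sum(axis=1)
    used = n_clusters_per_term > 0
    return np.mean(n_clusters_per_term[used] == 1)

# toy corpus: 8 terms, 6 documents, two clusters
td = np.array([
    [1, 1, 1, 0, 0, 0],   # term confined to cluster 0 -> disjoint
    [1, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],   # term confined to cluster 1 -> disjoint
    [0, 0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 0],   # terms below are shared across clusters
    [0, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
labels = np.array([0, 0, 0, 1, 1, 1])
print(disjoint_term_quality(td, labels))   # 4 of 8 terms are cluster-specific
```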
Traffic Analysis for Network Security using Learning Theory and Streaming Algorithms
2008-09-01
Retrieved fragments (acknowledgements and references only): acknowledgements naming David Brumley, Hubert Chan, Elena Nabieva, Vyas Sekar, and Runting Shi; cited references include Information Assurance and Security 2001, and Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, and J. D. Tygar, "Can machine learning be..."
ERIC Educational Resources Information Center
van der Linden, Wim J.; Boekkooi-Timminga, Ellen
A "maximin" model for item response theory based test design is proposed. In this model only the relative shape of the target test information function is specified. It serves as a constraint subject to which a linear programming algorithm maximizes the information in the test. In the practice of test construction there may be several…
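A hedged sketch of the maximin idea as a relaxed linear program: maximize the smallest ratio of test information to the target shape r(theta_k), with the 0/1 item-selection variables relaxed to [0,1]. The 2PL item parameters, ability points, and target shape below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_items, test_len = 60, 20
a = rng.uniform(0.8, 2.0, n_items)            # 2PL discriminations
b = rng.uniform(-2.0, 2.0, n_items)           # 2PL difficulties
thetas = np.array([-1.0, 0.0, 1.0])           # ability points of interest
r = np.array([1.0, 2.0, 1.0])                 # relative target shape

# item information of the 2PL model: I_i(theta) = a_i^2 * P * (1 - P)
P = 1.0 / (1.0 + np.exp(-a * (thetas[:, None] - b)))
info = (a ** 2) * P * (1 - P)                 # shape (n_thetas, n_items)

# variables: x_1..x_I (relaxed item selection) and y (common multiplier); maximize y
c = np.zeros(n_items + 1)
c[-1] = -1.0                                  # linprog minimizes, so minimize -y
A_ub = np.hstack([-info, r[:, None]])         # -sum_i I_i(th_k) x_i + r_k y <= 0
b_ub = np.zeros(len(thetas))
A_eq = np.zeros((1, n_items + 1))
A_eq[0, :n_items] = 1.0                       # test length constraint (relaxed)
b_eq = [test_len]
bounds = [(0, 1)] * n_items + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, y = res.x[:n_items], res.x[-1]
print("attained multiplier y:", round(y, 3))
print("items with selection weight > 0.5:", np.flatnonzero(x > 0.5))
```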
Ho, Derek; Drake, Tyler K.; Bentley, Rex C.; Valea, Fidel A.; Wax, Adam
2015-01-01
We evaluate a new hybrid algorithm for determining nuclear morphology using angle-resolved low coherence interferometry (a/LCI) measurements in ex vivo cervical tissue. The algorithm combines Mie theory based and continuous wavelet transform inverse light scattering analysis. The hybrid algorithm was validated and compared to traditional Mie theory based analysis using an ex vivo tissue data set. The hybrid algorithm achieved 100% agreement with pathology in distinguishing dysplastic and non-dysplastic biopsy sites in the pilot study. Significantly, the new algorithm performed over four times faster than traditional Mie theory based analysis. PMID:26309741
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the patterns of customer purchasing over time by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and to advanced scientific fields such as DNA analysis. The data involved in sequential pattern mining are characterized by massive volume and distributed storage. Most existing sequential pattern mining algorithms do not consider these characteristics together. In light of these traits and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task constructs frequent item sets by applying the frequency concept and search-space partitioning; the second task builds frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and several designed information structures, we simulated the SPP algorithm in a concrete parallel environment and also implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves excellent speedup and efficiency.
Learning Time-Varying Coverage Functions
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data. PMID:25960624
Learning Time-Varying Coverage Functions.
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2014-12-08
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data.
NASA Astrophysics Data System (ADS)
Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong
2014-09-01
In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are restrained simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of our proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.
A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings
Magri, Cesare; Whittingstall, Kevin; Singh, Vanessa; Logothetis, Nikos K; Panzeri, Stefano
2009-01-01
Background Information theory is an increasingly popular framework for studying how the brain encodes sensory information. Despite its widespread use for the analysis of spike trains of single neurons and of small neural populations, its application to the analysis of other types of neurophysiological signals (EEGs, LFPs, BOLD) has remained relatively limited so far. This is due to the limited-sampling bias which affects calculation of information, to the complexity of the techniques to eliminate the bias, and to the lack of publicly available fast routines for the information analysis of multi-dimensional responses. Results Here we introduce a new C- and Matlab-based information theoretic toolbox, specifically developed for neuroscience data. This toolbox implements a novel computationally-optimized algorithm for estimating many of the main information theoretic quantities and bias correction techniques used in neuroscience applications. We illustrate and test the toolbox in several ways. First, we verify that these algorithms provide accurate and unbiased estimates of the information carried by analog brain signals (i.e. LFPs, EEGs, or BOLD) even when using limited amounts of experimental data. This test is important since existing algorithms were so far tested primarily on spike trains. Second, we apply the toolbox to the analysis of EEGs recorded from a subject watching natural movies, and we characterize the electrodes locations, frequencies and signal features carrying the most visual information. Third, we explain how the toolbox can be used to break down the information carried by different features of the neural signal into distinct components reflecting different ways in which correlations between parts of the neural signal contribute to coding. We illustrate this breakdown by analyzing LFPs recorded from primary visual cortex during presentation of naturalistic movies. Conclusion The new toolbox presented here implements fast and data-robust computations of the most relevant quantities used in information theoretic analysis of neural data. The toolbox can be easily used within Matlab, the environment used by most neuroscience laboratories for the acquisition, preprocessing and plotting of neural data. It can therefore significantly enlarge the domain of application of information theory to neuroscience, and lead to new discoveries about the neural code. PMID:19607698
A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings.
Magri, Cesare; Whittingstall, Kevin; Singh, Vanessa; Logothetis, Nikos K; Panzeri, Stefano
2009-07-16
Information theory is an increasingly popular framework for studying how the brain encodes sensory information. Despite its widespread use for the analysis of spike trains of single neurons and of small neural populations, its application to the analysis of other types of neurophysiological signals (EEGs, LFPs, BOLD) has remained relatively limited so far. This is due to the limited-sampling bias which affects calculation of information, to the complexity of the techniques to eliminate the bias, and to the lack of publicly available fast routines for the information analysis of multi-dimensional responses. Here we introduce a new C- and Matlab-based information theoretic toolbox, specifically developed for neuroscience data. This toolbox implements a novel computationally-optimized algorithm for estimating many of the main information theoretic quantities and bias correction techniques used in neuroscience applications. We illustrate and test the toolbox in several ways. First, we verify that these algorithms provide accurate and unbiased estimates of the information carried by analog brain signals (i.e. LFPs, EEGs, or BOLD) even when using limited amounts of experimental data. This test is important since existing algorithms were so far tested primarily on spike trains. Second, we apply the toolbox to the analysis of EEGs recorded from a subject watching natural movies, and we characterize the electrode locations, frequencies and signal features carrying the most visual information. Third, we explain how the toolbox can be used to break down the information carried by different features of the neural signal into distinct components reflecting different ways in which correlations between parts of the neural signal contribute to coding. We illustrate this breakdown by analyzing LFPs recorded from primary visual cortex during presentation of naturalistic movies. The new toolbox presented here implements fast and data-robust computations of the most relevant quantities used in information theoretic analysis of neural data. The toolbox can be easily used within Matlab, the environment used by most neuroscience laboratories for the acquisition, preprocessing and plotting of neural data. It can therefore significantly enlarge the domain of application of information theory to neuroscience, and lead to new discoveries about the neural code.
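The toolbox itself is implemented in C and Matlab; purely as a hedged illustration of the kind of computation it performs (not the toolbox's actual routines), the following Python sketch estimates the mutual information between a discrete stimulus and a binned analog response with a plug-in estimator and the simple Miller-Madow bias correction. The function names, binning choices and toy data are illustrative assumptions.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (maximum-likelihood) entropy estimate in bits."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def miller_madow(counts, n):
    """Miller-Madow bias correction term (bits)."""
    k = np.count_nonzero(counts)              # number of occupied bins
    return (k - 1) / (2.0 * n * np.log(2))

def mutual_information(stimuli, responses, n_bins=8):
    """I(S;R) for a 1-D analog response, using equally populated response bins."""
    edges = np.quantile(responses, np.linspace(0, 1, n_bins + 1))
    r_binned = np.digitize(responses, edges[1:-1])
    n = len(responses)

    # H(R), bias-corrected
    c_r = np.bincount(r_binned, minlength=n_bins)
    h_r = plugin_entropy(c_r) + miller_madow(c_r, n)

    # H(R|S), each stimulus-conditional term bias-corrected
    h_r_given_s = 0.0
    for s in np.unique(stimuli):
        r_s = r_binned[stimuli == s]
        c = np.bincount(r_s, minlength=n_bins)
        h_r_given_s += (len(r_s) / n) * (plugin_entropy(c) + miller_madow(c, len(r_s)))

    return h_r - h_r_given_s

# Toy example: LFP-like responses whose mean depends on the stimulus
rng = np.random.default_rng(0)
stim = rng.integers(0, 4, size=2000)
resp = 0.5 * stim + rng.normal(0, 1, size=2000)
print("I(S;R) approx %.3f bits" % mutual_information(stim, resp))
```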
Rocchini, Duccio
2009-01-01
Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
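As a hedged sketch of the general idea (not necessarily the author's exact algorithmic formulation), the following Python snippet estimates a rarefaction curve over image pixel values by Monte Carlo subsampling: the expected number of distinct Digital Numbers found in progressively larger random pixel samples. The toy images and sample sizes are illustrative.

```python
import numpy as np

def rarefaction_curve(dns, sample_sizes, n_rep=200, seed=0):
    """Expected number of distinct DN values in random pixel samples of
    increasing size (Monte Carlo rarefaction)."""
    rng = np.random.default_rng(seed)
    dns = np.asarray(dns).ravel()
    curve = []
    for n in sample_sizes:
        richness = [len(np.unique(rng.choice(dns, size=n, replace=False)))
                    for _ in range(n_rep)]
        curve.append(np.mean(richness))
    return np.array(curve)

# Toy image "plots" with different spectral heterogeneity
rng = np.random.default_rng(1)
homogeneous = rng.integers(100, 110, size=(10, 10))   # few distinct DNs
heterogeneous = rng.integers(0, 256, size=(10, 10))   # many distinct DNs

sizes = [5, 10, 20, 50, 100]
print("homogeneous  :", rarefaction_curve(homogeneous, sizes))
print("heterogeneous:", rarefaction_curve(heterogeneous, sizes))
```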
Algorithmic information theory and the hidden variable question
NASA Technical Reports Server (NTRS)
Fuchs, Christopher
1992-01-01
The admissibility of certain nonlocal hidden-variable theories is explained via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exists a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l, r). From the associated probability measure, the Shannon entropies H_n and H̄_n for strings l_n and pairs (l_n, r_n) may be formed. It is shown that such a theory requires that |H̄_n - H_n| be bounded, contrasting the quantum mechanical prediction that it grows with n.
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme with configurable scheme parameters, which gives it significant navigation-augmentation advantages over legacy navigation signals in terms of spectral efficiency, tracking accuracy, multipath mitigation capability and anti-jamming performance. Meanwhile, the typical scheme characteristics can provide auxiliary information for the design of signal synchronization algorithms. Based on the characteristics of NSCC, this paper proposes a joint tracking method that uses a Weighted Least Squares (WLS) algorithm. In this method, the least-squares step is employed to jointly estimate each sub-carrier frequency shift through the linear frequency-Doppler relationship, by utilizing the known sub-carrier frequencies. Besides, the weighting matrix is set adaptively according to the sub-carrier power to ensure the estimation accuracy. Both theoretical analysis and simulation results illustrate that this method outperforms the single-carrier algorithm in tracking accuracy and sensitivity at low SNR.
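As a minimal sketch of the weighted least-squares step described above, assuming a linear model in which each sub-carrier's measured frequency shift scales with its known sub-carrier frequency through a common Doppler factor, and with weights taken from sub-carrier power (the variable names and toy numbers are illustrative, not taken from the paper):

```python
import numpy as np

def wls_doppler(f_sub, df_meas, power):
    """Jointly estimate a common Doppler scale factor from per-sub-carrier
    frequency-shift measurements, weighting by sub-carrier power.

    Model: df_i ~= alpha * f_i  (linear frequency-Doppler relationship),
    WLS solution: alpha = (A^T W A)^{-1} A^T W y  with A = f_sub, y = df_meas.
    """
    A = np.asarray(f_sub, dtype=float).reshape(-1, 1)
    y = np.asarray(df_meas, dtype=float)
    W = np.diag(np.asarray(power, dtype=float))       # adaptive power weights
    alpha = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return alpha.item()

# Toy example: 8 sub-carriers, true Doppler scale 1e-6, noisier weak carriers
rng = np.random.default_rng(0)
f_sub = np.linspace(1.0e6, 8.0e6, 8)                  # sub-carrier frequencies (Hz)
power = rng.uniform(0.2, 1.0, 8)                      # relative sub-carrier powers
noise = rng.normal(0, 0.05, 8) / np.sqrt(power)       # weaker carriers -> more noise
df = 1e-6 * f_sub + noise
print("estimated Doppler scale:", wls_doppler(f_sub, df, power))
```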
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is subject to an upper frequency. To reduce the upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency stays within the upper limit. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion shows the underlying reason for the high performance of the proposed approach.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-01-01
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is subject to an upper frequency. To reduce the upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency stays within the upper limit. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion shows the underlying reason for the high performance of the proposed approach. PMID:27589758
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of the resolution of multi-source satellite imagery in the visible, multi-spectral and hyperspectral bands, high-resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and environmental monitoring. In remote sensing images, the segmentation of ground targets, feature extraction and automatic recognition technology are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical vehicle object class generation, nonparametric density estimation, mean-shift segmentation, multi-scale corner detection, and template-based local shape matching. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
NASA Astrophysics Data System (ADS)
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems with a market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, there are few textbooks in science and technology which explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which remains at work even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them for designing the mechanisms of more complicated markets.
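To make the "Lagrange multipliers as market prices" interpretation concrete, here is a hedged toy sketch (not taken from the paper) of steepest ascent on the dual of a resource-allocation problem with quadratic utilities: the multiplier is updated in proportion to excess demand, exactly as a price would be.

```python
import numpy as np

# Toy market: agent i demands x_i to maximize  a_i*x_i - 0.5*b_i*x_i^2 - price*x_i,
# subject to a shared supply constraint  sum_i x_i = S.
a = np.array([10.0, 8.0, 6.0])
b = np.array([1.0, 2.0, 1.5])
S = 9.0

price = 0.0          # Lagrange multiplier of the supply constraint
step = 0.2           # dual (steepest ascent/descent) step size

for _ in range(200):
    demand = np.maximum((a - price) / b, 0.0)   # each agent's best response to the price
    excess = demand.sum() - S                   # aggregate excess demand
    price += step * excess                      # price rises if demand exceeds supply

demand = np.maximum((a - price) / b, 0.0)
print("market-clearing price (multiplier):", round(price, 3))
print("allocations:", np.round(demand, 3), "total:", round(demand.sum(), 3))
```

At convergence the multiplier clears the market (total demand equals supply), which is exactly the price interpretation the tutorial describes.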
Bees Algorithm for Construction of Multiple Test Forms in E-Testing
ERIC Educational Resources Information Center
Songmuang, Pokpong; Ueno, Maomi
2011-01-01
The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…
NASA Technical Reports Server (NTRS)
Lee, H. P.
1977-01-01
The NASTRAN Thermal Analyzer Manual describes the fundamental and theoretical treatment of the finite element method, with emphasis on the derivations of the constituent matrices of different elements and solution algorithms. Necessary information and data relating to the practical applications of engineering modeling are included.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1982-01-01
Numerical algorithms for large space structures were investigated with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory to lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second-order lambda matrices.
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim is to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
Integrated identification, modeling and control with applications
NASA Astrophysics Data System (ADS)
Shi, Guojun
This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed-loop stability. A closed-loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and the desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed-loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.
ERIC Educational Resources Information Center
Green, Gareth P.; Bean, John C.; Peterson, Dean J.
2013-01-01
Intermediate microeconomics is typically viewed as a theory and tools course that relies on algorithmic problems to help students learn and apply economic theory. However, the authors' assessment research suggests that algorithmic problems by themselves do not encourage students to think about where the theory comes from, why the theory is…
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling, or compressed sensing (CS), works on the assumption that the underlying signal is sparse or compressible, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
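As a hedged, generic illustration of the CS measurement-and-reconstruction loop analyzed here (not the authors' radar model), the following sketch takes sub-Nyquist random measurements of a sparse scene and reconstructs it with the iterative soft-thresholding algorithm (ISTA), a standard l1-based solver; the dimensions, regularization weight and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene": N-dimensional signal with K nonzero reflectors
N, K, M = 256, 8, 64                      # M < N  => sub-Nyquist sampling
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(0, 1, K)

# Random Gaussian measurement matrix and noisy measurements
Phi = rng.normal(0, 1.0 / np.sqrt(M), (M, N))
y = Phi @ x_true + 0.01 * rng.normal(0, 1, M)

def ista(Phi, y, lam=0.02, n_iter=500):
    """Iterative soft-thresholding for  min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

x_hat = ista(Phi, y)
print("relative reconstruction error:",
      round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```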
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge image approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating the adaptive characteristics of an algorithm (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune in order to achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu
2015-01-01
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
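The proposed MAP deconvolution combines a Gaussian-plus-Poisson noise model with a Laplace prior and is not reproduced here; as a point of reference, a minimal 1-D version of one of the conventional baselines it is compared against, the Richardson-Lucy iteration, can be sketched as follows (the beam width, iteration count and toy targets are illustrative assumptions).

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy deconvolution for a 1-D signal y blurred by psf."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    x = np.full_like(y, y.mean())                 # flat initial estimate
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / (blurred + eps)
        x = x * np.convolve(ratio, psf_flipped, mode="same")
    return x

# Toy scanning-radar-like profile: two close point targets blurred by a wide beam
n = 200
truth = np.zeros(n)
truth[[90, 110]] = [1.0, 0.7]
beam = np.exp(-0.5 * (np.arange(-25, 26) / 8.0) ** 2)   # Gaussian antenna pattern
echo = np.convolve(truth, beam / beam.sum(), mode="same")
echo = np.clip(echo + 0.001 * np.random.default_rng(0).normal(size=n), 0, None)

restored = richardson_lucy(echo, beam)
print("restored amplitude at the true target bins:", np.round(restored[[90, 110]], 3))
```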
A new theory of development: the generation of complexity in ontogenesis.
Barbieri, Marcello
2016-03-13
Today there is a very wide consensus on the idea that embryonic development is the result of a genetic programme and of epigenetic processes. Many models have been proposed in this theoretical framework to account for the various aspects of development, and virtually all of them have one thing in common: they do not acknowledge the presence of organic codes (codes between organic molecules) in ontogenesis. Here it is argued instead that embryonic development is a convergent increase in complexity that necessarily requires organic codes and organic memories, and a few examples of such codes are described. This is the code theory of development, a theory that was originally inspired by an algorithm that is capable of reconstructing structures from incomplete information, an algorithm that here is briefly summarized because it makes it intuitively appealing how a convergent increase in complexity can be achieved. The main thesis of the new theory is that the presence of organic codes in ontogenesis is not only a theoretical necessity but, first and foremost, an idea that can be tested and that has already been found to be in agreement with the evidence. © 2016 The Author(s).
Topography compensation for haptization of a mesh object and its stiffness distribution.
Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon
2015-01-01
This work was motivated by the need for perceptualizing nano-scale scientific data, e.g., those acquired by a scanning probe microscope, where collocated topography and stiffness distribution of a surface can be measured. Previous research showed that when the topography of a surface with spatially varying stiffness is rendered using the conventional penalty-based haptic rendering method, the topography perceived by the user could be significantly distorted from its original model. In the worst case, a higher region with a smaller stiffness value can be perceived to be lower than a lower region with a larger stiffness value. This problem was explained by the theory of force constancy: the user tends to maintain an invariant contact force when s/he strokes the surface to perceive its topography. In this paper, we present a haptization algorithm that can render the shape of a mesh surface and its stiffness distribution with high perceptual accuracy. Our algorithm adaptively changes the surface topography on the basis of the force constancy theory to deliver adequate shape information to the user while preserving the stiffness perception. We also evaluated the performance of the proposed haptization algorithm in comparison to the constraint-based algorithm by examining relevant proximal stimuli and carrying out a user experiment. Results demonstrated that our algorithm could improve the perceptual accuracy of shape and reduce the exploration time, thereby leading to more accurate and efficient haptization.
Distributed-Memory Fast Maximal Independent Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew
The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
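As a hedged, single-process sketch of the randomized round structure behind Luby-style MIS algorithms (simulated sequentially in Python rather than on distributed-memory hardware, and not the paper's Combinatorial BLAS implementation):

```python
import random

def luby_mis(adj, seed=0):
    """Round-based simulation of Luby's randomized Maximal Independent Set.

    adj: dict mapping vertex -> set of neighbour vertices.
    Each round, every remaining vertex draws a random value; vertices that are
    local minima among their remaining neighbours join the MIS, and they and
    their neighbours are removed from further consideration.
    """
    rng = random.Random(seed)
    remaining = set(adj)
    mis = set()
    while remaining:
        value = {v: rng.random() for v in remaining}
        winners = {v for v in remaining
                   if all(value[v] < value[u] for u in adj[v] if u in remaining)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & remaining
        remaining -= removed
    return mis

# Small example graph
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
result = luby_mis(adj)
print("maximal independent set:", result)
# sanity check: no two chosen vertices are adjacent
assert all(adj[v].isdisjoint(result - {v}) for v in result)
```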
An on-line modified least-mean-square algorithm for training neurofuzzy controllers.
Tan, Woei Wan
2007-04-01
The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
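For orientation, here is a minimal sketch of the standard normalized least-mean-square identification update that the proposed fuzzy-modified rule builds on; the fuzzy weighting against learning interference is not reproduced, and the toy FIR plant is an arbitrary assumption.

```python
import numpy as np

def nlms_identify(x, d, n_taps=4, mu=0.5, eps=1e-6):
    """Identify an FIR model w from input x and desired output d using NLMS.

    Update: w <- w + mu * e * u / (eps + ||u||^2), where u is the current
    regressor (the most recent n_taps input samples) and e is the prediction error.
    """
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-n_taps+1]]
        e = d[n] - w @ u
        w += mu * e * u / (eps + u @ u)
        errors.append(e)
    return w, np.array(errors)

# Toy plant: unknown FIR system driven by white noise
rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.normal(size=2000)
d = np.convolve(x, true_w, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))

w_hat, err = nlms_identify(x, d)
print("estimated taps:", np.round(w_hat, 3))
```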
Information Superiority via Formal Concept Analysis
NASA Astrophysics Data System (ADS)
Koester, Bjoern; Schmidt, Stefan E.
This chapter will show how to get more mileage out of information. To achieve that, we first start with an introduction to the fundamentals of Formal Concept Analysis (FCA). FCA is a highly versatile field of applied lattice theory, which allows hidden relationships to be uncovered in relational data. Moreover, FCA provides a distinguished supporting framework to subsequently find and fill information gaps in a systematic and rigorous way. In addition, we would like to build bridges via a universal approach to other communities which can be related to FCA in order for other research areas to benefit from a theory that has been elaborated for more than twenty years. Last but not least, the essential benefits of FCA will be presented algorithmically as well as theoretically by investigating a real data set from the MIPT Terrorism Knowledge Base and also by demonstrating an application in the field of Web Information Retrieval and Web Intelligence.
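As a minimal, hedged illustration of the FCA machinery referred to here, the derivation operators and the closure that yields formal concepts can be sketched for a tiny toy context; the context, object names and function names are illustrative and unrelated to the terrorism knowledge base application.

```python
from itertools import combinations

def derive_objects(objs, context):
    """Attributes shared by all objects in objs (prime operator on object sets)."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set().union(*context.values())

def derive_attributes(attrs, context):
    """Objects possessing all attributes in attrs (prime operator on attribute sets)."""
    return {o for o, a in context.items() if attrs <= a}

def concepts(context):
    """Enumerate all formal concepts (extent, intent): for any object set A,
    (A'', A') is a concept, so closing every subset finds them all."""
    found = set()
    objects = list(context)
    for r in range(len(objects) + 1):
        for subset in combinations(objects, r):
            intent = derive_objects(set(subset), context)
            extent = derive_attributes(intent, context)
            found.add((frozenset(extent), frozenset(intent)))
    return found

# Tiny formal context: objects -> sets of attributes
context = {
    "duck":    {"swims", "flies", "has_feathers"},
    "swan":    {"swims", "flies", "has_feathers"},
    "penguin": {"swims", "has_feathers"},
    "bat":     {"flies"},
}

for extent, intent in sorted(concepts(context), key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

The exponential subset enumeration is only for illustration; practical FCA algorithms (e.g., NextClosure-style approaches) avoid it.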
Basis for a neuronal version of Grover's quantum algorithm
Clark, Kevin B.
2014-01-01
Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church–Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical “subroutines” involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N^(1/2))) needed to find some “target” solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca2+ response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca2+-induced Ca2+ release and the search (or signaling) velocity of Ca2+ wave propagation. As chemical processes, such as the duration of Ca2+ mobilization, become rate-limiting over interstore distances, Ca2+ waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca2+ diffusion coefficient, D^(1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca2+ signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional response regulation choices. PMID:24860419
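For readers unfamiliar with the technological side of the analogy, a small state-vector simulation of Grover's search makes the quadratic speed-up concrete. This is a generic sketch of the standard algorithm, not code from the paper and not a model of the proposed neuronal version.

```python
import numpy as np

def grover_search(n_qubits, target):
    """Simulate Grover's algorithm on n_qubits, searching for basis state `target`."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition (Hadamards on |0...0>)
    n_iter = int(round(np.pi / 4 * np.sqrt(N)))   # ~O(N^(1/2)) Grover iterations

    for _ in range(n_iter):
        state[target] *= -1.0                     # oracle: phase flip on the target state
        mean = state.mean()
        state = 2 * mean - state                  # diffusion: inversion about the mean

    return state ** 2, n_iter                     # measurement probabilities, iteration count

probs, iters = grover_search(n_qubits=8, target=42)
print("Grover iterations:", iters,
      "(an unstructured classical search needs ~%d queries on average)" % (2 ** 8 // 2))
print("probability of measuring the target:", round(probs[42], 4))
```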
New Aspects of Probabilistic Forecast Verification Using Information Theory
NASA Astrophysics Data System (ADS)
Tödter, Julian; Ahrens, Bodo
2013-04-01
This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, then a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
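To make the relationship between the Brier Score and the Ignorance Score concrete, here is a hedged toy computation for categorical probabilities derived from an ensemble forecast; this is a generic sketch (the CRIGN decomposition algorithm itself is not reproduced), and the precipitation categories and ensemble values are illustrative assumptions.

```python
import numpy as np

def ensemble_probs(ensemble, thresholds):
    """Convert an ensemble forecast into category probabilities.
    Categories are defined by the thresholds (e.g. precipitation classes)."""
    cats = np.digitize(ensemble, thresholds)
    k = len(thresholds) + 1
    return np.bincount(cats, minlength=k) / len(ensemble)

def brier_score(p, obs_cat):
    """Multi-categorical Brier score for one forecast-observation pair."""
    o = np.zeros_like(p)
    o[obs_cat] = 1.0
    return np.sum((p - o) ** 2)

def ignorance_score(p, obs_cat, eps=1e-12):
    """Ignorance score: -log2 of the probability assigned to what occurred."""
    return -np.log2(p[obs_cat] + eps)

thresholds = [0.1, 5.0]                       # dry / light / heavy precipitation (mm)
ensemble = np.array([0.0, 0.0, 0.3, 1.2, 2.0, 4.0, 6.5, 0.8, 0.0, 3.3])
observed = 2.4                                # falls in the "light" category

p = ensemble_probs(ensemble, thresholds)
obs_cat = int(np.digitize(observed, thresholds))
print("forecast category probabilities:", p)
print("Brier score:     %.3f" % brier_score(p, obs_cat))
print("Ignorance score: %.3f bits" % ignorance_score(p, obs_cat))
```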
Fuzzy connectedness and object definition
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Samarasekera, Supun
1995-04-01
Approaches to object information extraction from images should attempt to use the fact that images are fuzzy. In past image segmentation research, the notion of `hanging togetherness' of image elements specified by their fuzzy connectedness has been lacking. We present a theory of fuzzy objects for n-dimensional digital spaces based on a notion of fuzzy connectedness of image elements. Although our definitions lead to problems of enormous combinatorial complexity, the theoretical results allow us to reduce this dramatically. We demonstrate the utility of the theory and algorithms in image segmentation based on several practical examples.
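A minimal sketch of the core computation in this framework, fuzzy connectedness as the strength of the strongest path between a seed and every other pixel (where a path's strength is the minimum affinity along it), can be written as a Dijkstra-like propagation. The intensity-based affinity below is a simple stand-in, not the authors' exact affinity definition, and the toy image and threshold are illustrative.

```python
import heapq
import numpy as np

def affinity(img, p, q):
    """Toy fuzzy affinity between 4-adjacent pixels: high when intensities are similar."""
    return float(np.exp(-abs(float(img[p]) - float(img[q])) / 20.0))

def fuzzy_connectedness(img, seed):
    """For each pixel, the strength of the strongest path from the seed,
    where a path's strength is the minimum affinity along it."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                       # max-heap via negated strengths
    while heap:
        neg_s, (r, c) = heapq.heappop(heap)
        s = -neg_s
        if s < conn[r, c]:
            continue                            # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                cand = min(s, affinity(img, (r, c), (nr, nc)))
                if cand > conn[nr, nc]:
                    conn[nr, nc] = cand
                    heapq.heappush(heap, (-cand, (nr, nc)))
    return conn

# Toy image: a bright fuzzy blob on a dark background
y, x = np.mgrid[0:32, 0:32]
img = 200 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 60.0) + \
      5 * np.random.default_rng(0).normal(size=(32, 32))

conn = fuzzy_connectedness(img, seed=(16, 16))
obj = conn > 0.5                                # thresholding defines one fuzzy object
print("object size (pixels):", int(obj.sum()))
```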
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists/engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision making tasks is the incomplete data sets required by the chosen decision making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. But in practice, isolated information items, e.g. some vaguely known probabilities (fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists/engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision making tasks is the incomplete data sets required by the chosen decision making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. But in practice, isolated information items, e.g. some vaguely known probabilities (fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.
A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution
NASA Astrophysics Data System (ADS)
Zuo, B.; Hu, X.; Li, H.
2011-12-01
A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. A method in which the inversion MRM (model resolution matrix) convolution is approximated by a PSF (Point Spread Function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularization blind deconvolution geophysical inversion model-enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered. It shows that the ratio of the convolution approximation error is only 0.15%. The 2D synthetic model enhancement experiment is presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, the artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more information and detailed structure of the actual model are recovered through the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of the inversions and help make better-informed decisions.
Optimal and robust control of transition
NASA Technical Reports Server (NTRS)
Bewley, T. R.; Agarwal, R.
1996-01-01
Optimal and robust control theories are used to determine feedback control rules that effectively stabilize a linearly unstable flow in a plane channel. Wall transpiration (unsteady blowing/suction) with zero net mass flux is used as the control. Control algorithms are considered that depend both on full flowfield information and on estimates of that flowfield based on wall skin-friction measurements only. The development of these control algorithms accounts for modeling errors and measurement noise in a rigorous fashion; these disturbances are considered in both a structured (Gaussian) and unstructured ('worst case') sense. The performance of these algorithms is analyzed in terms of the eigenmodes of the resulting controlled systems, and the sensitivity of individual eigenmodes to both control and observation is quantified.
Processing Digital Imagery to Enhance Perceptions of Realism
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2003-01-01
Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
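MSRCR itself adds scale weighting and a color-restoration step that are not reproduced here; as a hedged, simplified sketch of the underlying retinex operation (single channel, multiple scales, no color restoration), assuming SciPy is available and with illustrative scale choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Simplified multi-scale retinex for a single-channel image:
    average over scales of log(image) - log(Gaussian-smoothed surround)."""
    img = img.astype(float) + eps
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)
        out += np.log(img) - np.log(surround + eps)
    out /= len(sigmas)
    # stretch the result back to a display range
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)

# Toy scene: a reflectance pattern under a strong illumination gradient
x = np.linspace(0, 1, 256)
illumination = np.outer(0.2 + 0.8 * x, np.ones(256))      # dark-to-bright ramp
reflectance = np.where((np.indices((256, 256)).sum(0) // 32) % 2 == 0, 0.3, 0.8)
scene = 255 * illumination * reflectance

enhanced = multi_scale_retinex(scene)
print("input dynamic range :", scene.min().round(1), "-", scene.max().round(1))
print("output dynamic range:", enhanced.min(), "-", enhanced.max())
```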
Faster Parameterized Algorithms for Minor Containment
NASA Astrophysics Data System (ADS)
Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.
The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory has also had a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time $O(3^{k^2} \cdot (h+k-1)! \cdot m)$, decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time $O(2^{(2k+1) \cdot \log k} \cdot h^{2k} \cdot 2^{2h^2} \cdot m)$. Our approach is based on a combinatorial object called rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time $2^{O(k)} \cdot h^{2k} \cdot 2^{O(h)} \cdot n$, with $n = |V(G)|$. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.
ICON: 3D reconstruction with 'missing-information' restoration in biological electron tomography.
Deng, Yuchen; Chen, Yu; Zhang, Yan; Wang, Shengliu; Zhang, Fa; Sun, Fei
2016-07-01
Electron tomography (ET) plays an important role in revealing biological structures, ranging from the macromolecular to the subcellular scale. Due to limited tilt angles, ET reconstruction always suffers from 'missing wedge' artifacts, which severely weaken further biological interpretation. In this work, we developed an algorithm called Iterative Compressed-sensing Optimized Non-uniform fast Fourier transform reconstruction (ICON) based on the theory of compressed sensing and the assumption of sparsity of biological specimens. ICON can significantly restore the missing information in comparison with other reconstruction algorithms. More importantly, we used the leave-one-out method to verify the validity of the restored information for both simulated and experimental data. The significant improvement in sub-tomogram averaging by ICON indicates its great potential in the future application of high-resolution structural determination of macromolecules in situ.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian model come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performance of the algorithm is examined using several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID tuning and a GA.
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian model come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performance of the algorithm is examined using several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID tuning and a GA. PMID:24892059
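A hedged, generic sketch of the elitist Gaussian EDA loop described above (truncation selection, Gaussian model refitting, resampling with an elite survivor) follows; the specific fast learning rule of FEGEDA is not reproduced, and the benchmark and parameter values are illustrative.

```python
import numpy as np

def sphere(x):
    """Benchmark objective: minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

def gaussian_eda(objective, dim=10, pop_size=100, n_best=30, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.full(dim, 5.0)
    elite, elite_f = None, np.inf
    for _ in range(n_gen):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        if elite is not None:
            pop[0] = elite                          # elitism: keep the best-so-far individual
        fitness = objective(pop)
        order = np.argsort(fitness)
        if fitness[order[0]] < elite_f:
            elite, elite_f = pop[order[0]].copy(), fitness[order[0]]
        best = pop[order[:n_best]]                  # truncation selection
        mean = best.mean(axis=0)                    # refit the Gaussian model
        std = best.std(axis=0) + 1e-8
    return elite, elite_f

x_best, f_best = gaussian_eda(sphere)
print("best objective value found: %.3e" % f_best)
```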
Quantum Hamiltonian identification from measurement time traces.
Zhang, Jun; Sarovar, Mohan
2014-08-22
Precise identification of parameters governing quantum processes is a critical task for quantum information and communication technologies. In this Letter, we consider a setting where system evolution is determined by a parametrized Hamiltonian, and the task is to estimate these parameters from temporal records of a restricted set of system observables (time traces). Based on the notion of system realization from linear systems theory, we develop a constructive algorithm that provides estimates of the unknown parameters directly from these time traces. We illustrate the algorithm and its robustness to measurement noise by applying it to a one-dimensional spin chain model with variable couplings.
Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.
Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian
2015-03-01
We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.
Learning control system design based on 2-D theory - An application to parallel link manipulator
NASA Technical Reports Server (NTRS)
Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.
1990-01-01
An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
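As a minimal sketch of the iterative learning control update underlying such schemes, here is a generic P-type learning law applied trial-by-trial to a toy first-order discrete-time plant; this is not the paper's 2-D formulation or the parallel-link manipulator model, and the plant parameters and learning gain are illustrative assumptions.

```python
import numpy as np

# Simple stable discrete-time plant: y[k+1] = a*y[k] + b*u[k]
a, b = 0.8, 0.5
T = 50
reference = np.sin(np.linspace(0, 2 * np.pi, T))    # desired trajectory

def run_trial(u):
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

u = np.zeros(T)
gain = 1.2                                           # learning gain L, chosen so |1 - L*b| < 1
for trial in range(30):
    y = run_trial(u)
    e = reference - y
    # P-type ILC law: u_{j+1}[k] = u_j[k] + L * e_j[k+1]
    u[:-1] = u[:-1] + gain * e[1:]
    if trial % 10 == 0:
        print("trial %2d   tracking RMS error = %.4f" % (trial, np.sqrt(np.mean(e ** 2))))

print("final tracking RMS error = %.6f" % np.sqrt(np.mean((reference - run_trial(u)) ** 2)))
```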
A Danger-Theory-Based Immune Network Optimization Algorithm
Li, Tao; Xiao, Xin; Shi, Yuanquan
2013-01-01
Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853
Mining Hesitation Information by Vague Association Rules
NASA Astrophysics Data System (ADS)
Lu, An; Ng, Wilfred
In many online shopping applications, such as Amazon and eBay, traditional Association Rule (AR) mining has limitations as it only deals with the items that are sold but ignores the items that are almost sold (for example, those items that are put into the basket but not checked out). We say that those almost sold items carry hesitation information, since customers are hesitating to buy them. The hesitation information of items is valuable knowledge for the design of good selling strategies. However, there is no conceptual model that is able to capture different statuses of hesitation information. Herein, we apply and extend vague set theory in the context of AR mining. We define the concepts of attractiveness and hesitation of an item, which represent the overall information of a customer's intent on an item. Based on the two concepts, we propose the notion of Vague Association Rules (VARs). We devise an efficient algorithm to mine the VARs. Our experiments show that our algorithm is efficient and the VARs capture more specific and richer information than do the traditional ARs.
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called target-silhouette to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map. Then basic belief assignment is used to transform this map into a belief map. The detection results of sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by SE-WORKBENCH simulator, and compared with conventional algorithms. The proposed fusion scheme achieves higher detection rate and lower false alarm rate than the conventional algorithms.
Computational electromagnetics: the physics of smooth versus oscillatory fields.
Chew, W C
2004-03-15
This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application
Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola
2017-01-01
Traditional image change detection based on the non-subsampled contourlet transform ignores the relationship between neighborhood information and the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method in the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image, which is then denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the changed area and the unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithms. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection. PMID:28587299
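For reference, the mean-ratio operation used above to build the difference image is a standard construction; a minimal sketch is given below, assuming co-registered intensity images and a square local window (the NSCT-HMT denoising and FLICM clustering stages are not shown).

import numpy as np
from scipy.ndimage import uniform_filter

def mean_ratio_difference(img1, img2, win=3):
    """Mean-ratio difference image (sketch): unchanged areas tend toward 0.

    Local means are taken over a win x win neighbourhood of each co-registered
    image, and the pixelwise ratio of the smaller mean to the larger mean is
    subtracted from 1.
    """
    mu1 = uniform_filter(img1.astype(float), size=win)
    mu2 = uniform_filter(img2.astype(float), size=win)
    eps = 1e-12  # avoid division by zero in flat dark areas
    return 1.0 - np.minimum(mu1, mu2) / (np.maximum(mu1, mu2) + eps)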
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
A maximal multicast stream algorithm based on network coding (NC) can improve the throughput of wavelength-division multiplexing (WDM) networks, but the result is still far below the network's theoretical maximum throughput. Moreover, existing multicast stream algorithms do not specify the stream distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize the optical multicast throughput by NC and to determine the multicast stream distribution through a hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes, where the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. The simulation results show that the proposed method is far superior to typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.
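To make the hybrid encoding concrete, the sketch below builds one chromosome with a binary part marking link usage (routing) and an integer part giving the number of coded streams per destination; the sizes, bounds and field names are illustrative assumptions rather than the paper's exact encoding.

import random

def random_hybrid_chromosome(n_links, n_destinations, max_streams):
    """One hybrid chromosome for the NC-based multicast problem (sketch).

    The binary genes mark which WDM links the multicast routing uses; the
    integer genes give the number of coded streams sent toward each
    destination.
    """
    routing = [random.randint(0, 1) for _ in range(n_links)]
    streams = [random.randint(1, max_streams) for _ in range(n_destinations)]
    return {"routing": routing, "streams": streams}

# Example: a toy network with 10 candidate links and 3 destinations
chromosome = random_hybrid_chromosome(n_links=10, n_destinations=3, max_streams=4)
print(chromosome)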
Design of Learning Model of Logic and Algorithms Based on APOS Theory
ERIC Educational Resources Information Center
Hartati, Sulis Janu
2014-01-01
The research questions were "what are the characteristics of a learning model of logic and algorithms according to APOS theory" and "whether or not this learning model can improve students' learning outcomes". This research was conducted through exploration and a quantitative approach. Exploration was used in constructing a theory about the…
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
On-the-Fly Machine Learning of Atomic Potential in Density Functional Theory Structure Optimization
NASA Astrophysics Data System (ADS)
Jacobsen, T. L.; Jørgensen, M. S.; Hammer, B.
2018-01-01
Machine learning (ML) is used to derive local stability information for density functional theory calculations of systems in relation to the recently discovered SnO2(110)-(4×1) reconstruction. The ML model is trained on (structure, total energy) relations collected during global minimum energy search runs with an evolutionary algorithm (EA). While being built, the ML model is used to guide the EA, thereby speeding up the overall rate by which the EA succeeds. Inspection of the local atomic potentials emerging from the model further shows chemically intuitive patterns.
Fuzzy geometry, entropy, and image information
NASA Technical Reports Server (NTRS)
Pal, Sankar K.
1991-01-01
Presented here are various uncertainty measures arising from grayness ambiguity and spatial ambiguity in an image, and their possible applications as image information measures. Definitions are given of an image in the light of fuzzy set theory, and of information measures and tools relevant for processing/analysis e.g., fuzzy geometrical properties, correlation, bound functions and entropy measures. Also given is a formulation of algorithms along with management of uncertainties for segmentation and object extraction, and edge detection. The output obtained here is both fuzzy and nonfuzzy. Ambiguity in evaluation and assessment of membership function are also described.
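As one concrete instance of the grayness-ambiguity measures mentioned above, the sketch below computes a De Luca-Termini style fuzzy entropy of an image; the min-max normalization used as the membership function is an assumption made for illustration.

import numpy as np

def fuzzy_entropy(image):
    """Grayness-ambiguity entropy of an image (illustrative sketch).

    Intensities are mapped to memberships mu in [0, 1] by min-max scaling,
    and the De Luca-Termini entropy term is averaged over all pixels.
    """
    img = image.astype(float)
    mu = (img - img.min()) / max(np.ptp(img), 1e-12)
    mu = np.clip(mu, 1e-12, 1.0 - 1e-12)  # keep the logarithms finite
    s = -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))
    return s.mean()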
Pure field theories and MACSYMA algorithms
NASA Technical Reports Server (NTRS)
Ament, W. S.
1977-01-01
A pure field theory attempts to describe physical phenomena through singularity-free solutions of field equations resulting from an action principle. The physics goes into forming the action principle and interpreting specific results. Algorithms for the intervening mathematical steps are sketched. Vacuum general relativity is a pure field theory, serving as model and providing checks for generalizations. The fields of general relativity are the 10 components of a symmetric Riemannian metric tensor; those of the Einstein-Straus generalization are the 16 components of a nonsymmetric one. Algebraic properties are exploited in top level MACSYMA commands toward performing some of the algorithms of that generalization. The light cone for the theory as left by Einstein and Straus is found and simplifications of that theory are discussed.
Inverse scattering approach to improving pattern recognition
NASA Astrophysics Data System (ADS)
Chapline, George; Fu, Chi-Yung
2005-05-01
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
A Darwinian approach to control-structure design
NASA Technical Reports Server (NTRS)
Zimmerman, David C.
1993-01-01
Genetic algorithms (GA's), as introduced by Holland (1975), are one form of directed random search. The form of direction is based on Darwin's 'survival of the fittest' theories. GA's are radically different from the more traditional design optimization techniques. GA's work with a coding of the design variables, as opposed to working with the design variables directly. The search is conducted from a population of designs (i.e., from a large number of points in the design space), unlike the traditional algorithms which search from a single design point. The GA requires only objective function information, as opposed to gradient or other auxiliary information. Finally, the GA is based on probabilistic transition rules, as opposed to deterministic rules. These features allow the GA to attack problems with local-global minima, discontinuous design spaces and mixed variable problems, all in a single, consistent framework.
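The features listed above (a coding of the design variables, a population of candidate designs, reliance on objective-function values only, and probabilistic transition rules) are easiest to see in code. The sketch below is a generic bit-string GA applied to a toy objective, not the control-structure design formulation of this work.

import random

def genetic_search(fitness, n_bits=16, pop_size=30, gens=100,
                   p_cross=0.8, p_mut=0.02):
    """Minimal bit-string genetic algorithm (sketch).

    Works on a coding of the design variables (bit strings), keeps a
    population of designs, uses only fitness values, and applies
    probabilistic selection, crossover, and mutation.
    """
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        new_pop = scored[:2]  # keep the two best designs (elitism)
        while len(new_pop) < pop_size:
            a, b = random.sample(scored[:pop_size // 2], 2)  # pick fitter parents
            child = a[:]
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy objective: maximize the number of ones in the bit string
best = genetic_search(lambda bits: sum(bits))
print(best)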
Inverse Scattering Approach to Improving Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapline, G; Fu, C
2005-02-15
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
NASA Astrophysics Data System (ADS)
Gutowski, Marek W.
1992-12-01
Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial number. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study on image reconstruction on chords of two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
North American Fuzzy Logic Processing Society (NAFIPS 1992), volume 1
NASA Technical Reports Server (NTRS)
Villarreal, James A. (Compiler)
1992-01-01
This document contains papers presented at the NAFIPS '92 North American Fuzzy Information Processing Society Conference. More than 75 papers were presented at this Conference, which was sponsored by NAFIPS in cooperation with NASA, the Instituto Tecnologico de Morelia, the Indian Society for Fuzzy Mathematics and Information Processing (ISFUMIP), the Instituto Tecnologico de Estudios Superiores de Monterrey (ITESM), the International Fuzzy Systems Association (IFSA), the Japan Society for Fuzzy Theory and Systems, and the Microelectronics and Computer Technology Corporation (MCC). The fuzzy set theory has led to a large number of diverse applications. Recently, interesting applications have been developed which involve the integration of fuzzy systems with adaptive processes such as neural networks and genetic algorithms. NAFIPS '92 was directed toward the advancement, commercialization, and engineering development of these technologies.
North American Fuzzy Logic Processing Society (NAFIPS 1992), volume 2
NASA Technical Reports Server (NTRS)
Villarreal, James A. (Compiler)
1992-01-01
This document contains papers presented at the NAFIPS '92 North American Fuzzy Information Processing Society Conference. More than 75 papers were presented at this Conference, which was sponsored by NAFIPS in cooperation with NASA, the Instituto Tecnologico de Morelia, the Indian Society for Fuzzy Mathematics and Information Processing (ISFUMIP), the Instituto Tecnologico de Estudios Superiores de Monterrey (ITESM), the International Fuzzy Systems Association (IFSA), the Japan Society for Fuzzy Theory and Systems, and the Microelectronics and Computer Technology Corporation (MCC). The fuzzy set theory has led to a large number of diverse applications. Recently, interesting applications have been developed which involve the integration of fuzzy systems with adaptive processes such as neural networks and genetic algorithms. NAFIPS '92 was directed toward the advancement, commercialization, and engineering development of these technologies.
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
Platt, R D; Griggs, R A
1993-08-01
In four experiments with 760 subjects, the present study examined Cosmides' Darwinian algorithm theory of reasoning: specifically, its explanation of facilitation on the Wason selection task. The first experiment replicated Cosmides' finding of facilitation for social contract versions of the selection task, using both her multiple-problem format and a single-problem format. Experiment 2 examined performance on Cosmides' three main social contract problems while manipulating the perspective of the subject and the presence and absence of cost-benefit information. The presence of cost-benefit information improved performance in two of the three problems while the perspective manipulation had no effect. In Experiment 3, the cost-benefit effect was replicated; and performance on one of the three problems was enhanced by the presence of explicit negatives on the NOT-P and NOT-Q cards. Experiment 4 examined the role of the deontic term "must" in the facilitation observed for two of the social contract problems. The presence of "must" led to a significant improvement in performance. The results of these experiments are strongly supportive of social contract theory in that cost-benefit information is necessary for substantial facilitation to be observed in Cosmides' problems. These findings also suggest the presence of other cues that can help guide subjects to a deontic social contract interpretation when the social contract nature of the problem is not clear.
NASA Astrophysics Data System (ADS)
Ghommam, Jawhar; Saad, Maarouf
2014-05-01
In this paper, we investigate new implementable cooperative adaptive backstepping controllers for a group of underactuated autonomous vehicles that are communicating with their local neighbours to track a time-varying virtual leader of which the relative position may only be available to a portion of the team members. At the kinematic cooperative control level of the autonomous underwater vehicle, the virtual cooperative controller is basically designed on a proportional and derivative consensus algorithm presented in Ren (2010), which involves velocity information from local neighbours. In this paper, we propose a new design algorithm based on singular perturbation theory that precludes the use of the neighbours' velocity information in the cooperative design. At the dynamic cooperative control level, calculation of the partial derivatives of some stabilising functions which in turn will contain velocity information from the local neighbours is required. To facilitate the implementation of the cooperative controllers, we propose a command filter approach technique to avoid analytic differentiation of the virtual cooperative control laws. We show how Lyapunov-based techniques and graph theory can be combined together to yield a robust cooperative controller where the uncertain dynamics of the cooperating vehicles and the constraints on the communication topology which contains a directed spanning tree are explicitly taken into account. Simulation results with a dynamic model of underactuated autonomous underwater vehicles moving on the horizontal plane are presented and discussed.
Flexible methods for segmentation evaluation: Results from CT-based luggage screening
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2017-01-01
BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
NASA Astrophysics Data System (ADS)
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the more recent evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications are evaluated together with an analysis of these applications.
Sheng, Xi
2012-07-01
This thesis studies an automated replenishment algorithm for the hospital medical-supply chain. A mathematical model and algorithm for automated replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm, and a partition algorithm. The automated replenishment algorithm is shown to compute the distribution amounts of medical supplies automatically and to optimize the medical supplies distribution scheme. It can be concluded that the inventory-theory model and algorithm, if applied in the medical supplies circulation field, could provide theoretical and technological support for automated replenishment of medical supplies in the hospital supply chain.
Issues of Dynamic Coalition Formation Among Rational Agents
2002-04-01
...approaches of forming stable coalitions among rational agents. Issues and problems of dynamic coalition environments are discussed in section 3, while... Coalition Algorithm, Coalition Formation Environment and Model: rational agents which are involved in a co-operative game (A, v) are... A publicly available simulation environment for coalition formation among rational information agents, based on selected classic coalition theories, is...
Wireless, amphibious theory for reinforcement learning
NASA Astrophysics Data System (ADS)
Li, Jinci
2013-10-01
Cryptographers agree that heterogeneous information is an interesting new topic in the field of cryptography, and biologists concur. Given the current status of stochastic epistemologies, security experts clearly desire the construction of flip-flop gates [1, 2, 3]. Mungo, our new system for authenticated algorithms, is the solution to all of these challenges. Though such a hypothesis at first glance seems perverse, it has ample historical precedence.
A Holoinformational Model of the Physical Observer
NASA Astrophysics Data System (ADS)
di Biase, Francisco
2013-09-01
The author proposes a holoinformational view of the observer based on the holonomic theory of brain/mind function and quantum brain dynamics developed by Karl Pribram, Sir John Eccles, R.L. Amoroso, Hameroff, Jibu and Yasue, and on the quantum-holographic and holomovement theory of David Bohm. This conceptual framework is integrated with the nonlocal information properties of the Quantum Field Theory of Umezawa, with the concepts of negentropy, order, and organization developed by Shannon, Wiener, Szilard and Brillouin, and with the theories of self-organization and complexity of Prigogine, Atlan, Jantsch and Kauffman. Wheeler's "it from bit" concept of a participatory universe, and the developments in the physics of information made by Zurek and others, with the concepts of statistical entropy and algorithmic entropy related to the number of bits being processed in the mind of the observer, are also considered. This new synthesis gives a self-organizing quantum nonlocal informational basis for a new model of awareness in a participatory universe. In this synthesis, awareness is conceived as meaningful quantum nonlocal information interconnecting the brain and the cosmos through a holoinformational unified field integrating the nonlocal holistic (quantum) and local (Newtonian) levels. We propose that the cosmology of the physical observer is this unified nonlocal quantum-holographic cosmos manifesting itself through awareness, interconnecting the human mind-brain in a participatory, holistic and indivisible way to all levels of the self-organizing holographic anthropic multiverse.
Dieguez, Sebastian; Wagner-Egger, Pascal; Gauvrit, Nicolas
2015-11-01
Belief in conspiracy theories has often been associated with a biased perception of randomness, akin to a nothing-happens-by-accident heuristic. Indeed, a low prior for randomness (i.e., believing that randomness is a priori unlikely) could plausibly explain the tendency to believe that a planned deception lies behind many events, as well as the tendency to perceive meaningful information in scattered and irrelevant details; both of these tendencies are traits diagnostic of conspiracist ideation. In three studies, we investigated this hypothesis and failed to find the predicted association between low prior for randomness and conspiracist ideation, even when randomness was explicitly opposed to malevolent human intervention. Conspiracy believers' and nonbelievers' perceptions of randomness were not only indistinguishable from each other but also accurate compared with the normative view arising from the algorithmic information framework. Thus, the motto "nothing happens by accident," taken at face value, does not explain belief in conspiracy theories. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it from urban and agricultural features. Three classification algorithms have been tested in order to better recognize vegetation and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows on the images. The literature presents several algorithms to detect and remove shadows in the scene, most of them based on RGB-to-HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.
Genetic Networks and Anticipation of Gene Expression Patterns
NASA Astrophysics Data System (ADS)
Gebert, J.; Lätsch, M.; Pickl, S. W.; Radde, N.; Weber, G.-W.; Wünschiers, R.
2004-08-01
An interesting problem for computational biology is the analysis of time-series expression data. Here, the application of modern methods from dynamical systems, optimization theory, numerical algorithms and the utilization of implicit discrete information lead to a deeper understanding. In [1], we suggested to represent the behavior of time-series gene expression patterns by a system of ordinary differential equations, which we analytically and algorithmically investigated under the parametrical aspect of stability or instability. Our algorithm strongly exploited combinatorial information. In this paper, we deepen, extend and exemplify this study from the viewpoint of underlying mathematical modelling. This modelling consists in evaluating DNA-microarray measurements as the basis of anticipatory prediction, in the choice of a smooth model given by differential equations, in an approach of the right-hand side with parametric matrices, and in a discrete approximation which is a least squares optimization problem. We give a mathematical and biological discussion, and pay attention to the special case of a linear system, where the matrices do not depend on the state of expressions. Here, we present first numerical examples.
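For the linear special case mentioned at the end, where the expression dynamics are modelled as dx/dt = A x with a constant matrix A, a least-squares estimate of A from sampled time-series data can be written compactly; the finite-difference derivative approximation below is an illustrative choice, not necessarily the discretization used by the authors.

import numpy as np

def fit_linear_network(X, dt):
    """Least-squares fit of a linear ODE model dx/dt = A x (sketch).

    X  : array of shape (T, n) with expression levels at T time points.
    dt : sampling interval.
    Derivatives are approximated by forward finite differences and A is
    obtained from a linear least-squares problem.
    """
    dXdt = (X[1:] - X[:-1]) / dt  # (T-1, n) finite-difference derivatives
    Xmid = X[:-1]                 # states paired with those derivatives
    M, *_ = np.linalg.lstsq(Xmid, dXdt, rcond=None)  # solves Xmid @ M ≈ dXdt
    return M.T                    # so that dx/dt ≈ A @ x with A = M.T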
1982-10-01
...and time-to-go (T60) are provided from the Estimation Algorithm. The gimbal angle commands used in the first two phases are applied to the gimbal... Lighting techniques are also used to simplify image understanding or to extract additional information about position, range, or shape of objects in the... motion or firing disturbances. Since useful muzzle position and rate information is difficult to obtain, conventional feedback techniques cannot...
Representations of Complexity: How Nature Appears in Our Theories
2013-01-01
In science we study processes in the material world. The way these processes operate can be discovered by conducting experiments that activate them, and findings from such experiments can lead to functional complexity theories of how the material processes work. The results of a good functional theory will agree with experimental measurements, but the theory may not incorporate in its algorithmic workings a representation of the material processes themselves. Nevertheless, the algorithmic operation of a good functional theory may be said to make contact with material reality by incorporating the emergent computations the material processes carry out. These points are illustrated in the experimental analysis of behavior by considering an evolutionary theory of behavior dynamics, the algorithmic operation of which does not correspond to material features of the physical world, but the functional output of which agrees quantitatively and qualitatively with findings from a large body of research with live organisms. PMID:28018044
NASA Astrophysics Data System (ADS)
Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.
2018-06-01
Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.
Control Theory based Shape Design for the Incompressible Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Cowles, G.; Martinelli, L.
2003-12-01
A design method for shape optimization in incompressible turbulent viscous flow has been developed and validated for inverse design. The gradient information is determined using a control theory based algorithm. With such an approach, the cost of computing the gradient is negligible. An additional adjoint system must be solved which requires the cost of a single steady state flow solution. Thus, this method has an enormous advantage over traditional finite-difference based algorithms. The method of artificial compressibility is utilized to solve both the flow and adjoint systems. An algebraic turbulence model is used to compute the eddy viscosity. The method is validated using several inverse wing design test cases. In each case, the program must modify the shape of the initial wing such that its pressure distribution matches that of the target wing. Results are shown for the inversion of both finite thickness wings as well as zero thickness wings which can be considered a model of yacht sails.
Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment
NASA Astrophysics Data System (ADS)
Yao, Xi-Wei; Wang, Hengyan; Liao, Zeyang; Chen, Ming-Cheng; Pan, Jian; Li, Jun; Zhang, Kechao; Lin, Xingcheng; Wang, Zhehui; Luo, Zhihuang; Zheng, Wenqiang; Li, Jianzhong; Zhao, Meisheng; Peng, Xinhua; Suter, Dieter
2017-07-01
Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones and replacing classical with quantum information processing may alleviate some of these challenges. By encoding and processing the image information in quantum-mechanical systems, we here demonstrate the framework of quantum image processing, where a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
Derivation and experimental verification of clock synchronization theory
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1994-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst case conditions.
Evolutionary pattern search algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
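The core idea, a mutation step size that adapts to the success of previous steps, can be sketched as follows. This is a simplified success-rule variant for a single search point, not the exact adaptation rule or population scheme analyzed in the paper.

import random

def evolutionary_pattern_search(f, x0, sigma=1.0, iters=200):
    """Self-adapting mutation step size, in the spirit of EPSAs (sketch).

    Minimizes f by mutating the current point with step size sigma and
    growing or shrinking sigma according to the success of previous steps.
    """
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + sigma * random.gauss(0, 1) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5  # expand the step after a successful move
        else:
            sigma *= 0.9  # contract the step after a failure
    return x, fx

# Toy usage: minimize the sphere function in three dimensions
best, val = evolutionary_pattern_search(lambda v: sum(t * t for t in v), [5.0, -3.0, 2.0])
print(best, val)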
Estimation of road profile variability from measured vehicle responses
NASA Astrophysics Data System (ADS)
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
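The estimation core of such an approach is a linear Kalman filter in which the unknown road input is appended to the state (for example as a random walk). The predict/update step below is generic; the vehicle model matrices A, H, Q and R are application-specific and are left as assumptions.

import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update step of a linear Kalman filter (sketch).

    In the road-profile application the state x would stack the vehicle
    states with the unknown road input, A and H the model and measurement
    matrices, Q and R the process and measurement noise covariances, and z
    the measured vehicle responses.
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new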
Improving resolution of crosswell seismic section based on time-frequency analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, H.; Li, Y.
1994-12-31
According to signal theory, improving the resolution of a seismic section means extending the high-frequency band of the seismic signal. In a cross-well section, a sonic log can be regarded as a reliable source providing high-frequency information to the trace near the borehole. In such a case, the task is to introduce this high-frequency information into the whole section. However, neither traditional deconvolution algorithms nor some newer inversion methods such as BCI (Broad Constraint Inversion) are satisfactory, because of high-frequency noise and the nonuniqueness of inversion results, respectively. To overcome their disadvantages, this paper presents a new algorithm based on Time-Frequency Analysis (TFA) technology, which has increasingly received attention as a useful signal analysis tool. Practical applications show that the new method is a stable scheme to greatly improve the resolution of a cross-well seismic section without decreasing the Signal to Noise Ratio (SNR).
Mutual information, neural networks and the renormalization group
NASA Astrophysics Data System (ADS)
Koch-Janusz, Maciej; Ringel, Zohar
2018-06-01
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation that allows compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only is the required storage space reduced, but the demand on detector resolution is also greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance of the algorithm, this paper simulates and analyzes the reconstruction results of the TV-based algorithm under different coding modes in order to verify the stability of the algorithm, and compares typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is found by the alternating direction method. Experimental results show that, compared with the traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can recover the target image quickly and accurately even at low measurement rates.
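As a simplified stand-in for the TV-based reconstruction discussed above, and not the augmented-Lagrangian/alternating-direction solver of the paper, the sketch below runs plain gradient descent on a smoothed total-variation denoising objective; the parameter values are illustrative.

import numpy as np

def tv_denoise(y, lam=0.1, eps=0.1, iters=300):
    """Gradient descent on a smoothed total-variation objective (sketch).

    Minimizes 0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps^2)
    with Neumann boundary handling and a conservative fixed step size.
    """
    x = y.astype(float).copy()
    step = 1.0 / (1.0 + 8.0 * lam / eps)  # roughly 1/L for this smoothed objective
    for _ in range(iters):
        dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, zero at edge
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps**2)
        px, py = dx / mag, dy / mag
        # Discrete divergence (adjoint of the forward-difference gradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = (x - y) - lam * div
        x -= step * grad
    return x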
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
Possibility expectation and its decision making algorithm
NASA Technical Reports Server (NTRS)
Keller, James M.; Yan, Bolin
1992-01-01
The fuzzy integral has been shown to be an effective tool for the aggregation of evidence in decision making. Of primary importance in the development of a fuzzy integral pattern recognition algorithm is the choice (construction) of the measure which embodies the importance of subsets of sources of evidence. Sugeno fuzzy measures have received the most attention due to the recursive nature of the fabrication of the measure on nested sequences of subsets. Possibility measures exhibit an even simpler generation capability, but usually require that one of the sources of information possess complete credibility. In real applications, such normalization may not be possible, or even desirable. In this report, both the theory and a decision making algorithm for a variation of the fuzzy integral are presented. This integral is based on a possibility measure where it is not required that the measure of the universe be unity. A training algorithm for the possibility densities in a pattern recognition application is also presented with the results demonstrated on the shuttle-earth-space training and testing images.
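A minimal sketch of the aggregation step follows: a Sugeno-style fuzzy integral evaluated with a possibility measure whose subset measure is the maximum of the member densities and which, as in the variation described above, is not required to reach one on the whole universe. The numeric values are hypothetical.

def sugeno_possibility_integral(h, density):
    """Sugeno-style fuzzy integral with a possibility measure (sketch).

    h       : dict source -> partial evaluation (support for the class) in [0, 1].
    density : dict source -> possibility density; the measure of a subset is
              the maximum of its densities (not required to reach 1).
    Returns max over i of min(h_(i), g(A_i)), with sources sorted by h descending.
    """
    items = sorted(h.items(), key=lambda kv: kv[1], reverse=True)
    best, g = 0.0, 0.0
    for source, value in items:
        g = max(g, density[source])        # measure of the nested subset A_i
        best = max(best, min(value, g))
    return best

# Hypothetical evidence from three sensors for one class
h = {"s1": 0.9, "s2": 0.6, "s3": 0.4}
density = {"s1": 0.3, "s2": 0.8, "s3": 0.5}
print(sugeno_possibility_integral(h, density))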
Deformation and Breakup of a Stretching Liquid Bridge
NASA Astrophysics Data System (ADS)
Franses, Elias I.; Liao, Ying-Chih; Basaran, Osman
2004-11-01
Surfactants are routinely used to control the breakup of drops and jets in applications as diverse as ink jet printing, crop spraying, and microarraying. While highly accurate algorithms for studying the breakup of surfactant-free drops and jets are well documented and a great deal of information is now available in such situations, little is known about the closely related problem of interface rupture when surfactant effects cannot be neglected. Here we analyze the deformation and breakup of a stretching liquid bridge whose surface is covered with an insoluble surfactant monolayer by means of a two-dimensional (2-d) finite element algorithm using elliptic mesh generation. That the predictions made with the 2-d algorithm are faithful to the physics is confirmed by demonstrating that the computed results accord well with our new high-speed visualization experiments and existing scaling theories. Comparisons are also made to computations made with a one-dimensional (1-d) algorithm based on the slender-jet equations.
FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar
NASA Astrophysics Data System (ADS)
Azim, Noor ul; Jun, Wang
2016-11-01
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as the range, speed, and direction of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms that are used to improve target detection, improve range resolution, and estimate the speed of a target. Firstly, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into an implementation on hardware using a Xilinx FPGA, a Virtex-6 (XC6LVX75T). For the hardware implementation, pipeline optimization is adopted and other factors are considered for resource optimization. The algorithms this work focuses on for improving target detection, range resolution, and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
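Of the algorithms mentioned, pulse compression by matched filtering is the most self-contained to sketch; the version below correlates the received samples with a baseband LFM reference chirp. Doppler processing and the FPGA pipelining details are omitted, and the parameterization is an assumption.

import numpy as np

def lfm_pulse_compression(rx, fs, pulse_width, bandwidth):
    """Pulse compression of an LFM return by matched filtering (sketch).

    rx          : complex baseband received samples (1-D array).
    fs          : sampling rate in Hz.
    pulse_width : transmitted pulse duration in seconds.
    bandwidth   : chirp bandwidth in Hz.
    A reference chirp is generated and correlated with the received signal;
    the compressed output concentrates target energy, improving range
    resolution.
    """
    t = np.arange(0, pulse_width, 1.0 / fs)
    k = bandwidth / pulse_width                 # chirp rate
    ref = np.exp(1j * np.pi * k * t**2)         # baseband LFM reference
    # Correlation implemented as convolution with the conjugated, time-reversed reference
    return np.convolve(rx, np.conj(ref[::-1]), mode="same")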
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
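For readers unfamiliar with the method being compared, a basic (unmodified) Uzawa iteration for the saddle-point systems that arise from mixed finite-element formulations of incompressible materials can be sketched as follows; the direct solve for the primal block stands in for whatever inner solver a large-scale implementation would actually use.

import numpy as np

def uzawa(A, B, f, g, rho=1.0, iters=500, tol=1e-8):
    """Basic Uzawa iteration for the saddle-point system (sketch).

        [ A  B^T ] [u]   [f]
        [ B  0   ] [p] = [g]

    At each step u is obtained from A u = f - B^T p, and the Lagrange
    multiplier p is updated along the constraint residual B u - g.
    """
    m = B.shape[0]
    p = np.zeros(m)
    u = None
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)
        r = B @ u - g
        p = p + rho * r
        if np.linalg.norm(r) < tol:
            break
    return u, p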
Pre-Mrna Introns as a Model for Cryptographic Algorithm:. Theory and Experiments
NASA Astrophysics Data System (ADS)
Regoli, Massimo
2010-01-01
The RNA-Crypto System (RCS for short) is a symmetric-key algorithm to cipher data. The idea for this new algorithm starts from the observation of nature, in particular of RNA behavior and some of its properties. RNA sequences have sections called introns. Introns, derived from the term "intragenic regions", are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and the role of introns in the pre-mRNA are not yet clear and are the subject of intensive research by biologists; in our case, we use the presence of introns in the RNA-Crypto System output as a strong method to add chaotic non-coding information and to obscure access to the secret key used to code the messages. In the RNA-Crypto System algorithm the introns are sections of the ciphered message carrying non-coding information, just as in the precursor mRNA.
Towards a general theory of neural computation based on prediction by single neurons.
Fiorillo, Christopher D
2008-10-01
Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
Numerical algebraic geometry: a new perspective on gauge and string theories
NASA Astrophysics Data System (ADS)
Mehta, Dhagash; He, Yang-Hui; Hauensteine, Jonathan D.
2012-07-01
There is a rich interplay between algebraic geometry and string and gauge theories which has been recently aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called `embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murokh, A.; Pellegrini, C.; Rosenzweig, J.
The VISA (Visible to Infrared SASE Amplifier) project is designed to be a SASE-FEL driven to saturation in the sub-micron wavelength region. Its goal is to test various aspects of the existing theory of Self-Amplified Spontaneous Emission, as well as numerical codes. Measurements include: angular and spectral distribution of the FEL light at the exit and inside of the undulator; electron beam micro-bunching using CTR; single-shot time resolved measurements of the pulse profile, using auto-correlation technique and FROG algorithm. The diagnostics are designed to provide maximum information on the physics of the SASE-FEL process, to ensure a close comparison of the experimental results with theory and simulations.
The application of machine vision in fire protection system
NASA Astrophysics Data System (ADS)
Rong, Jiang
2018-04-01
Based on previous research, this paper introduces wavelet theory, acquires scene data through a video system, and computes the key information needed by a fire protection system. That is, the algorithm collects the information, extracts flame color features and smoke features, and processes them as the corresponding characteristic information. The alarm system sets a corresponding alarm threshold; when the threshold is exceeded, the system raises an alarm. This method, combining flame color features and smoke features of the fire, improves both the accuracy and the efficiency of the judgment. Experiments show that the scheme is feasible.
On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm
NASA Astrophysics Data System (ADS)
Budiman, M. A.; Rachmawati, D.
2017-12-01
The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theories, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, its factorization speed is much slower than that of Pollard’s rho algorithm.
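For reference, the deterministic baseline against which the hill-climbing approach is timed, Pollard's rho algorithm, fits in a few lines; the small modulus in the usage example is illustrative, and the sketch assumes n is composite.

from math import gcd
import random

def pollard_rho(n):
    """Pollard's rho factorization (sketch).

    Returns a non-trivial factor of a composite n using the classic
    x -> x^2 + c (mod n) iteration with Floyd cycle detection.
    A prime n would loop forever, so compositeness is assumed.
    """
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n          # tortoise-and-hare: y advances twice
            d = gcd(abs(x - y), n)
        if d != n:                       # retry with a new c on failure
            return d

# Toy RSA-like modulus: 10403 = 101 * 103
print(pollard_rho(10403))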
The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)
NASA Astrophysics Data System (ADS)
2017-09-01
The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) has aimed to provide a platform to discuss computer science and mathematics related issues including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure Of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agents Systems, All topics related Image/Signal Processing, Any topics related Computer Networks, Any topics related ISO SC-27 and SC- 17 standards, Any topics related PKI(Public Key Intrastructures), Artifial Intelligences(A.I.) & Pattern/Image Recognitions, Authentication/Authorization Issues, Biometric authentication and algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundation of Computer Security, Data Base(D.B.) Management & Information Retrievals, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Comerce, Ubiquitous, RFID, Applications, Fingerprint/Hand/Biometrics Recognitions and Technologies, Foundations of High-performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam mail, Anti-virus issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Services and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrievals, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the received papers from authors around the world from several countries which allows a highly multinational and multicultural idea and experience exchange. The accepted papers of ICCSCM 2017 are published in this Book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only become successful using a team effort, so herewith we want to thank the International Technical Committee and the Reviewers for their efforts in the review process as well as their valuable advices. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary
A Comparison of Approaches for Solving Hard Graph-Theoretic Problems
2015-05-01
Part of the collaborative effort "Adiabatic Quantum Computing Applications Research" (14-RI-CRADA-02) between the Information Directorate and Lock-… The methods explored consist of a parallel computing approach using Matlab, a quantum annealing approach using the D-Wave computer, and lastly an approach using satisfiability modulo theory (SMT) and corresponding SMT solvers.
ERIC Educational Resources Information Center
Boekkooi-Timminga, Ellen
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets
NASA Technical Reports Server (NTRS)
Braverman, Amy; Fetzer, Eric
2007-01-01
Empirical probability distributions derived from the data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data that preserve information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
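As a minimal illustration of comparing empirical distributions from two space-time windows (the entropy-constrained vector quantization step itself is not reproduced here), the following sketch builds histograms on a common binning and computes a symmetric Jensen-Shannon-style divergence between them; the variable names and data are invented:

```python
import numpy as np

def histogram_distribution(values, bins, value_range):
    """Empirical probability distribution of a data window on a fixed binning."""
    counts, _ = np.histogram(values, bins=bins, range=value_range)
    return counts / counts.sum()

def jensen_shannon(p, q, eps=1e-12):
    """Symmetric, bounded divergence between two distributions (in nats)."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(0)
window_a = rng.normal(280.0, 5.0, size=10000)   # e.g. a retrieved quantity, window A
window_b = rng.normal(283.0, 6.0, size=10000)   # same quantity, a different window
p = histogram_distribution(window_a, bins=64, value_range=(250, 310))
q = histogram_distribution(window_b, bins=64, value_range=(250, 310))
print(jensen_shannon(p, q))   # larger values indicate a change between windows
```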
Plenoptic layer-based modeling for image based rendering.
Pearson, James; Brookes, Mike; Dragotti, Pier Luigi
2013-09-01
Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
NASA Astrophysics Data System (ADS)
Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene
2018-03-01
In this paper a parametric, modular and scalable algorithm allowing a fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e., bending power losses), addressing both transmission performance and overall cost issues. Graph theory has been exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, first into N independent sub-circuits and then, recursively, into a limited number of loops that are easier to generate. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization makes it possible to identify the optimal fiber-optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm is then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.
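As a loose illustration of the graph-theoretic reduction (not the authors' exact decomposition), the sketch below builds the 8x8 full-mesh topology with networkx and extracts a cycle basis, i.e., a limited set of independent loops spanning all connections:

```python
import networkx as nx

# Build the 8x8 full-mesh (complete) interconnection topology.
G = nx.complete_graph(8)

# A cycle basis is one way (not necessarily the paper's way) to reduce the
# NxN mesh to a limited set of independent loops covering every connection.
loops = nx.cycle_basis(G)
print(G.number_of_edges(), "connections reduced to", len(loops), "independent loops")
for loop in loops[:3]:
    print(loop)
```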
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loparo, Kenneth; Kolacinski, Richard; Threeanaew, Wanchat
A central goal of the work was to enable both the extraction of all relevant information from sensor data, and the application of information gained from appropriate processing and fusion at the system level to operational control and decision-making at various levels of the control hierarchy through: 1. Exploiting the deep connection between information theory and the thermodynamic formalism, 2. Deployment using distributed intelligent agents with testing and validation in a hardware-in-the-loop simulation environment. Enterprise architectures are the organizing logic for key business processes and IT infrastructure and, while the generality of current definitions provides sufficient flexibility, the current architecture frameworks do not inherently provide the appropriate structure. Of particular concern is that existing architecture frameworks often do not make a distinction between "data" and "information." This work defines an enterprise architecture for health and condition monitoring of power plant equipment and further provides the appropriate foundation for addressing shortcomings in current architecture definition frameworks through the discovery of the information connectivity between the elements of a power generation plant; that is, to identify the correlative structure between available observation streams using informational measures. The principal focus here is on the implementation and testing of an emergent, agent-based algorithm, based on the foraging behavior of ants, for eliciting this structure, and on measures for characterizing differences between communication topologies. The elicitation algorithms are applied to data streams produced by a detailed numerical simulation of Alstom's 1000 MW ultra-super-critical boiler and steam plant. The elicitation algorithm and topology characterization can be based on different informational metrics for detecting connectivity, e.g., mutual information and linear correlation.
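The ant-foraging elicitation algorithm itself is not reproduced here; the underlying informational metric, pairwise mutual information between observation streams, can be estimated with a simple histogram approach, as in this illustrative sketch (all stream data are synthetic):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information (in nats) between two streams."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

# Toy "sensor streams": y is driven by x, z is independent noise.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + 0.2 * rng.normal(size=5000)
z = rng.normal(size=5000)
print(mutual_information(x, y), mutual_information(x, z))  # first value is much larger
```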
A comparison of SAR ATR performance with information theoretic predictions
NASA Astrophysics Data System (ADS)
Blacknell, David
2003-09-01
Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.
Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath
2009-08-01
An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) It was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted in the brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.
Xu, Jin; Xu, Zhao-Xia; Lu, Ping; Guo, Rui; Yan, Hai-Xia; Xu, Wen-Jie; Wang, Yi-Qin; Xia, Chun-Ming
2016-11-01
To develop an effective Chinese Medicine (CM) diagnostic model of coronary heart disease (CHD) and to confirm the scientific validity of the CM theoretical basis from an algorithmic viewpoint. Four types of objective diagnostic data were collected from 835 CHD patients by using a self-developed CM inquiry scale for the diagnosis of heart problems, a tongue diagnosis instrument, a ZBOX-I pulse digital collection instrument, and the sound of an attending acquisition system. These diagnostic data were analyzed and a CM diagnostic model was established using a multi-label learning algorithm (REAL). REAL was employed to establish a Xin (Heart) qi deficiency, Xin yang deficiency, Xin yin deficiency, blood stasis, and phlegm five-card CM diagnostic model, which had recognition rates of 80.32%, 89.77%, 84.93%, 85.37%, and 69.90%, respectively. The multi-label learning method established using four diagnostic models based on mutual information feature selection yielded good recognition results. The characteristic model parameters were selected by maximizing the mutual information for each card type. The four diagnostic methods used to obtain information in CM, i.e., observation, auscultation and olfaction, inquiry, and pulse diagnosis, can be characterized by these parameters, which is consistent with CM theory.
A Family of Algorithms for Computing Consensus about Node State from Network Data
Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.
2013-01-01
Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes–from ranking websites to determining critical species in ecosystems–yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus–through breadth or depth–impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes “form opinions” about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the collective opinions of others to make decisions. PMID:23874167
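A minimal sketch of the breadth-versus-depth distinction on a toy weighted network, using networkx; the graph, weights, and interpretation are illustrative only:

```python
import networkx as nx

# Small weighted directed network of "who endorses whom".
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("a", "c", 3.0), ("b", "c", 2.0), ("d", "c", 1.0),   # c: broad direct support
    ("a", "e", 6.0),                                      # e: one strong endorsement
    ("c", "a", 1.0), ("e", "b", 1.0), ("b", "d", 1.0),
])

# "Breadth": weighted in-degree summarizes direct support for each node.
breadth = dict(G.in_degree(weight="weight"))

# "Depth": eigenvector centrality lets support flow in through intermediaries.
depth = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

print(breadth)
print(depth)
```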
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has limited applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement techniques based on Retinex theory, a process that automatically restores visual realism to images, are studied. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm, and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that the Retinex-based algorithms are able to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
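Of the compared methods, single-scale Retinex is the simplest to sketch: the log of the image minus the log of its Gaussian-blurred surround. The following is an illustrative implementation (the sigma value and the synthetic frame are assumptions, not values from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=40.0):
    """Single-scale Retinex: log image minus log of its Gaussian-smoothed surround."""
    img = image.astype(np.float64) + 1.0                 # avoid log(0)
    retinex = np.log(img) - np.log(gaussian_filter(img, sigma))
    # stretch the result back to a displayable 0..255 range
    retinex = (retinex - retinex.min()) / (retinex.max() - retinex.min() + 1e-12)
    return (255 * retinex).astype(np.uint8)

# Example with a synthetic low-contrast "infrared" frame.
frame = (40 + 20 * np.random.rand(256, 256)).astype(np.uint8)
enhanced = single_scale_retinex(frame)
```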
Quantum cellular automata and free quantum field theory
NASA Astrophysics Data System (ADS)
D'Ariano, Giacomo Mauro; Perinotti, Paolo
2017-02-01
In a series of recent papers [1-4] it has been shown how free quantum field theory can be derived without using mechanical primitives (including space-time, special relativity, quantization rules, etc.), but only considering the easiest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the simple principles of unitarity, homogeneity, locality, and isotropy. This has opened the route to extending the axiomatic information-theoretic derivation of the quantum theory of abstract systems [5, 6] to include quantum field theory. The inherent discrete nature of the informational axiomatization leads to an extension of quantum field theory to a quantum cellular automata theory, where the usual field theory is recovered in a regime where the discrete structure of the automata cannot be probed. A simple heuristic argument sets the scale of discreteness to the Planck scale, and the customary physical regime where discreteness is not visible is the relativistic one of small wavevectors. In this paper we provide a thorough derivation from principles showing that in the most general case the graph of the quantum cellular automaton is the Cayley graph of a finitely presented group, and how for the case corresponding to Euclidean emergent space (where the group reduces to an Abelian one) the automaton leads to Weyl, Dirac and Maxwell field dynamics in the relativistic limit. We conclude with some perspectives towards the more general scenario of non-linear automata for interacting quantum field theory.
Nonsmooth Optimization Algorithms, System Theory, and Software Tools
1993-04-13
AFOSR-90-0068. Elijah Polak, Professor and Principal Investigator.
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
According to information optics theory, a novel watermarking method based on Fourier-transformed digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted to a digital hologram using the Fourier transform. The original image is then divided into many non-overlapping blocks, and all the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are embedded into the singular value components of each block using an addition principle. Finally, inverse SVD is carried out on the blocks and hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is recovered first, and an averaging operation is then carried out on the extracted information to generate the final watermark information. Finally, the algorithm is simulated, and various attack tests are carried out to evaluate the watermarked image's resistance to attacks. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
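The additive SVD embedding rule can be sketched as follows (the digital holography step is omitted, and the embedding strength alpha is an assumed parameter, not one from the paper):

```python
import numpy as np

def embed_svd(block, watermark_block, alpha=0.05):
    """Add scaled singular values of the watermark (hologram) to those of an image block."""
    Ub, Sb, Vb = np.linalg.svd(block, full_matrices=False)
    _,  Sw, _  = np.linalg.svd(watermark_block, full_matrices=False)
    return Ub @ np.diag(Sb + alpha * Sw) @ Vb          # watermarked block

def extract_svd(marked_block, original_block, alpha=0.05):
    """Recover the watermark singular values by inverting the addition rule."""
    _, Sm, _ = np.linalg.svd(marked_block, full_matrices=False)
    _, So, _ = np.linalg.svd(original_block, full_matrices=False)
    return (Sm - So) / alpha

block = np.random.rand(8, 8)
holo  = np.random.rand(8, 8)
marked = embed_svd(block, holo)
print(np.allclose(extract_svd(marked, block), np.linalg.svd(holo)[1], atol=1e-6))
```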
Formalizing Neurath's ship: Approximate algorithms for online causal learning.
Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A
2017-04-01
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis
1988-01-01
The applicability of H infinity control theory to the problems of large space structure (LSS) control was investigated. A complete evaluation of any technique as a candidate for large space structure control involves analytical evaluation, algorithmic evaluation, evaluation via simulation studies, and experimental evaluation. The results of the analytical and algorithmic evaluations are documented. The analytical evaluation involves determining the appropriateness of the underlying assumptions inherent in H infinity theory, determining the capability of H infinity theory to achieve the design goals likely to be imposed on an LSS control design, and identifying any LSS-specific simplifications or complications of the theory. The results of the analytical evaluation are presented in the form of a tutorial on H infinity control theory with the LSS control designer in mind. The algorithmic evaluation of H infinity for LSS control pertains to the identification of general, high-level algorithms for effecting the application of H infinity to LSS control problems, the identification of specific, numerically reliable algorithms necessary for a computer implementation of the general algorithms, the recommendation of a flexible software system for implementing the H infinity design steps, and ultimately the actual development of the necessary computer codes. Finally, the state of the art in H infinity applications is summarized with a brief outline of the most promising areas of current research.
Selectionist and Evolutionary Approaches to Brain Function: A Critical Appraisal
Fernando, Chrisantha; Szathmáry, Eörs; Husbands, Phil
2012-01-01
We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. All of them fit, however, a generalized selectionist framework conforming to the picture of Price’s covariance formulation, which deliberately was not specific even to selection in biology, and therefore does not imply an algorithmic picture of biological evolution. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Examples are given of cases where parallel competitive search with information transfer among the units is more efficient than search without information transfer between units. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. Although none of the proposed neuronal replicators include miraculous mechanisms, their identification remains a challenge but also a great promise. PMID:22557963
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
Fast half-sibling population reconstruction: theory and algorithms.
Dexter, Daniel; Brown, Daniel G
2013-07-12
Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.
The NIFTy way of Bayesian signal inference
NASA Astrophysics Data System (ADS)
Selig, Marco
2014-12-01
We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
NASA Astrophysics Data System (ADS)
Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele
2018-05-01
A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm, and the method is numerically stable in the presence of noise.
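A minimal sketch of single-material (Paganin-type) retrieval from one propagation-based image; the partial-coherence correction term described in the abstract is not included here, and all parameter names are assumptions:

```python
import numpy as np

def paganin_thickness(intensity, flat, pixel_size, dist, delta, mu):
    """Single-material thickness retrieval from one propagation-based image.

    intensity : measured image at propagation distance `dist`; flat : incident intensity.
    delta, mu : refractive index decrement and linear attenuation coefficient.
    """
    contact = intensity / flat
    ny, nx = contact.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    filt = 1.0 / (1.0 + dist * delta * k2 / mu)           # low-pass Paganin filter
    retrieved = np.real(np.fft.ifft2(np.fft.fft2(contact) * filt))
    return -np.log(np.clip(retrieved, 1e-12, None)) / mu  # projected thickness
```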
A taste of individualized medicine: physicians’ reactions to automated genetic interpretations
Lærum, Hallvard; Bremer, Sara; Bergan, Stein; Grünfeld, Thomas
2014-01-01
The potential of pharmacogenomics is well documented, and functionality exploiting this knowledge is about to be introduced into electronic medical records. To explore physicians’ reactions to automatic interpretations of genetic tests, we built a prototype with a simple interpretive algorithm. The algorithm was adapted to the needs of physicians handling immunosuppressive treatment during organ transplantation. Nine physicians were observed expressing their thoughts while using the prototype for two patient scenarios. The computer screen and audio were recorded, and the qualitative results triangulated with responses to a survey instrument. The physicians’ reactions to the prototype were very positive; they clearly trusted the results and the theory behind them. The explanation of the algorithm was prominently placed in the user interface for transparency, although this design led to considerable confusion. Background information and references should be available, but considerably less prominent than the result and recommendation. PMID:24001515
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm which combines Harris with the circular boundary theory of corners is proposed in this paper. After detecting accurate corner coordinates by using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically by using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
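A minimal sketch of the Harris detection step with OpenCV; the automatic circular-boundary elimination and corner-sorting steps of the paper are not reproduced, and the file name and parameter values are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("chessboard.png")                 # calibration-plate image (path assumed)
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Harris response map; block size, Sobel aperture and k are the usual tuning parameters.
response = cv2.cornerHarris(gray, 2, 3, 0.04)

# A fixed relative threshold stands in for the paper's automatic elimination step.
corners = np.argwhere(response > 0.01 * response.max())
print(len(corners), "candidate corners")
```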
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory defines potential environmental impact indexes that characterize the generation and t...
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking technology is currently one of the most active research topics in machine vision and is an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time performance and robustness. In video surveillance, targets need to be detected in real time and their positions calculated accurately in order to judge their motion. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional target tracking algorithms usually represent the tracking result by a simple geometric shape such as a rectangle or circle, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active-Contour model, to obtain object outlines during tracking and to handle topology changes automatically.
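A minimal sketch of the Mean-Shift half of such a tracker with OpenCV (the Active-Contour outline extraction is not reproduced; the video path and initial window are assumptions):

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # input video path is assumed
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 60                    # initial vehicle window (assumed)
roi = frame[y:y + h, x:x + w]

# Hue histogram of the target region serves as the appearance model.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Mean-shift moves the window toward the mode of the back-projection.
    _, window = cv2.meanShift(back_proj, window, criteria)
```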
A general theory known as the Waste Reduction (WAR) Algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. The theory defines indexes that characterize the generation and the output of potential environm...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Visco, Donald Patrick, Jr.; Faulon, Jean-Loup Michel; Roe, Diana C.
This report is a comprehensive review of the field of molecular enumeration, from early isomer-counting theories to evolutionary algorithms that design molecules in silico. The core of the review is a detailed account of how molecules are counted, enumerated, and sampled. The practical applications of molecular enumeration are also reviewed for chemical information, structure elucidation, molecular design, and combinatorial library design purposes. This review is to appear as a chapter in Reviews in Computational Chemistry, volume 21, edited by Kenny B. Lipkowitz.
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
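For comparison, a Gerchberg-Papoulis-style alternating projection (projection onto the band-limit set, then onto the data-consistency set) can be sketched in a few lines; the signal and parameters below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Band-limited 1-D signal, observed only on part of its support.
n, cutoff = 256, 10
spec = np.zeros(n, dtype=complex)
spec[:cutoff] = rng.normal(size=cutoff) + 1j * rng.normal(size=cutoff)
spec[-cutoff + 1:] = np.conj(spec[1:cutoff][::-1])       # Hermitian symmetry
signal = np.fft.ifft(spec).real
known = np.zeros(n, dtype=bool)
known[:128] = True                       # only the first half is measured

# Alternating projections: (1) enforce the band limit, (2) enforce the measured samples.
x = np.where(known, signal, 0.0)
for _ in range(500):
    X = np.fft.fft(x)
    X[cutoff:n - cutoff + 1] = 0.0       # projection onto the band-limit set
    x = np.fft.ifft(X).real
    x[known] = signal[known]             # projection onto the data-consistency set

print(np.max(np.abs(x[~known] - signal[~known])))   # extrapolation error decreases with iterations
```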
Design and Implementation of Multi-Input Adaptive Signal Extractions.
1982-09-01
A (deflected gradient) algorithm requiring only N+1 multiplications per adaptation step is described; additional quantization is introduced to eliminate all multiplications.
Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory
2016-05-12
Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample (a practical algorithm to compute the estimator is a work in progress). Third, finitely-valued spatial processes… Keywords: mathematical statistics; time series; Markov chains; random…
Placing User-Generated Content on the Map with Confidence
2014-11-03
Keywords: geographic information retrieval, geolocation. We describe a method that places short text documents on the map… The test data were collected using twitter4j, a Java library for the Twitter API; after filtering, there were 44,289 documents in the Twitter test set. We evaluate how…
NASA Astrophysics Data System (ADS)
Youn, J.; Kim, T.
2016-06-01
Visualization of disaster dispersion predictions enables decision makers and civilians to prepare for a disaster and to reduce the damage by showing realistic simulation results. With advances in GIS technology and in volcanic disaster prediction algorithms, predicted disaster dispersions can be displayed as spatial information. However, most volcanic ash dispersion predictions are displayed in 2D. 2D visualization limits understanding of the realistic dispersion prediction, since height can be represented only by colour. Especially for volcanic ash, 3D visualization of dispersion prediction is essential, since ash clouds can cause serious aircraft accidents. In this paper, we deal with 3D visualization techniques for volcanic ash dispersion prediction using a spatial information open platform in Korea. First, time-series 3D positions and concentrations of volcanic ash are calculated with the WRF (Weather Research and Forecasting) model and the Modified Fall3D algorithm. For 3D visualization, we propose three techniques: 'Cube in the Air', 'Cube in the Cube', and 'Semi-transparent Plane in the Air'. 'Cube in the Air' places semi-transparent cubes whose colours depend on the particle concentration. A big cube is not realistic when zoomed in, so it is divided into smaller cubes with an Octree algorithm; this is the 'Cube in the Cube' technique. For more realistic visualization, we apply the 'Semi-transparent Volcanic Ash Plane', which shows the ash as fog. The results are displayed in 'V-world', a spatial information open platform implemented by the Korean government. The proposed techniques were adopted in the Volcanic Disaster Response System implemented by the Korean Ministry of Public Safety and Security.
Analyzing milestoning networks for molecular kinetics: definitions, algorithms, and examples.
Viswanath, Shruthi; Kreuzer, Steven M; Cardenas, Alfredo E; Elber, Ron
2013-11-07
Network representations are becoming increasingly popular for analyzing kinetic data from techniques like Milestoning, Markov State Models, and Transition Path Theory. Mapping continuous phase space trajectories into a relatively small number of discrete states helps in visualization of the data and in dissecting complex dynamics to concrete mechanisms. However, not only are molecular networks derived from molecular dynamics simulations growing in number, they are also getting increasingly complex, owing partly to the growth in computer power that allows us to generate longer and better converged trajectories. The increased complexity of the networks makes simple interpretation and qualitative insight of the molecular systems more difficult to achieve. In this paper, we focus on various network representations of kinetic data and algorithms to identify important edges and pathways in these networks. The kinetic data can be local and partial (such as the value of rate coefficients between states) or an exact solution to kinetic equations for the entire system (such as the stationary flux between vertices). In particular, we focus on the Milestoning method that provides fluxes as the main output. We proposed Global Maximum Weight Pathways as a useful tool for analyzing molecular mechanism in Milestoning networks. A closely related definition was made in the context of Transition Path Theory. We consider three algorithms to find Global Maximum Weight Pathways: Recursive Dijkstra's, Edge-Elimination, and Edge-List Bisection. The asymptotic efficiency of the algorithms is analyzed and numerical tests on finite networks show that Edge-List Bisection and Recursive Dijkstra's algorithms are most efficient for sparse and dense networks, respectively. Pathways are illustrated for two examples: helix unfolding and membrane permeation. Finally, we illustrate that networks based on local kinetic information can lead to incorrect interpretation of molecular mechanisms.
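One simple way to extract a dominant pathway from a flux network (not necessarily the paper's exact definition of a Global Maximum Weight Pathway) is to maximize the product of edge fluxes, which maps to a shortest-path problem on -log weights; the toy network below is purely illustrative:

```python
import math
import networkx as nx

# Toy milestoning-style network: edge weights are fluxes between milestones.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("unfolded", "i1", 0.7), ("unfolded", "i2", 0.3),
    ("i1", "folded", 0.4), ("i1", "i2", 0.3),
    ("i2", "folded", 0.9),
])

# Maximizing the product of edge fluxes along a path is equivalent to
# minimizing the sum of -log(flux), so Dijkstra's algorithm applies.
for u, v, d in G.edges(data=True):
    d["cost"] = -math.log(d["weight"])

path = nx.dijkstra_path(G, "unfolded", "folded", weight="cost")
print(path)
```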
Game theory-based visual tracking approach focusing on color and texture features.
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Chen, Chuanhua; Wang, Xin
2017-07-20
It is difficult for a single-feature tracking algorithm to achieve strong robustness in a complex environment. To solve this problem, we propose a multifeature fusion tracking algorithm based on game theory. Treating the color and texture features as two players, the algorithm performs tracking by using a mean shift iterative formula to search for the Nash equilibrium of the game. The contributions of the different features are kept in balance throughout, so that the algorithm can take full advantage of feature fusion. Experimental results show that the algorithm performs well, especially under conditions of scene variation, target occlusion, and similar-object interference.
Sequential visibility-graph motifs
NASA Astrophysics Data System (ADS)
Iacovacci, Jacopo; Lacasa, Lucas
2016-04-01
Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between nonlinear dynamics and network science. In this work we introduce and study the concept of sequential visibility-graph motifs, smaller substructures of n consecutive nodes that appear with characteristic frequencies. We develop a theory to compute in an exact way the motif profiles associated with general classes of deterministic and stochastic dynamics. We find that this simple property is indeed a highly informative and computationally efficient feature capable of distinguishing among different dynamics and robust against noise contamination. We finally confirm that it can be used in practice to perform unsupervised learning, by extracting motif profiles from experimental heart-rate series and being able, accordingly, to disentangle meditative from other relaxation states. Applications of this general theory include the automatic classification and description of physical, biological, and financial time series.
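A minimal sketch of the natural visibility criterion that turns a time series into a graph; sequential motifs of n consecutive nodes would then be counted by sliding a window along the node ordering of this graph. The O(n^2) construction below is illustrative, not optimized:

```python
import numpy as np
import networkx as nx

def natural_visibility_graph(series):
    """Natural visibility graph: samples i and j are linked if the straight line
    between (i, series[i]) and (j, series[j]) clears every intermediate sample."""
    n = len(series)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                G.add_edge(i, j)
    return G

ts = np.random.default_rng(0).normal(size=200)
G = natural_visibility_graph(ts)
print(G.number_of_edges())
```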
The Structure-Mapping Engine: Algorithm and Examples.
ERIC Educational Resources Information Center
Falkenhainer, Brian; And Others
This description of the Structure-Mapping Engine (SME), a flexible, cognitive simulation program for studying analogical processing which is based on Gentner's Structure-Mapping theory of analogy, points out that the SME provides a "tool kit" for constructing matching algorithms consistent with this theory. This report provides: (1) a…
The solution of the sixth Hilbert problem: the ultimate Galilean revolution
NASA Astrophysics Data System (ADS)
D'Ariano, Giacomo Mauro
2018-04-01
I argue for a full mathematization of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: `physics from no physics'. Although this may seem an oxymoron, it is the royal road to keep complete logical coherence, hence falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only the interpretation of the mathematics, ranging from the axioms to the final theorems. On the contrary, the postulates of the two current major physical theories either do not have physical interpretation (as for von Neumann's axioms for quantum theory), or contain physical primitives as `clock', `rigid rod', `force', `inertial mass' (as for special relativity and mechanics). A purely mathematical theory as proposed here, though with limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which natural sciences can firmly rely. Such kind of theory is what I consider to be the solution of the sixth Hilbert problem. I argue that a prototype example of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretical derivation of quantum theory and free quantum field theory. This article is part of the theme issue `Hilbert's sixth problem'.
The solution of the sixth Hilbert problem: the ultimate Galilean revolution.
D'Ariano, Giacomo Mauro
2018-04-28
I argue for a full mathematization of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: 'physics from no physics'. Although this may seem an oxymoron, it is the royal road to keep complete logical coherence, hence falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only the interpretation of the mathematics, ranging from the axioms to the final theorems. On the contrary, the postulates of the two current major physical theories either do not have physical interpretation (as for von Neumann's axioms for quantum theory), or contain physical primitives as 'clock', 'rigid rod', 'force', 'inertial mass' (as for special relativity and mechanics). A purely mathematical theory as proposed here, though with limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which natural sciences can firmly rely. Such kind of theory is what I consider to be the solution of the sixth Hilbert problem. I argue that a prototype example of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretical derivation of quantum theory and free quantum field theory.This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
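The Hoeffding-bound split test at the heart of VFDT-style learners can be sketched as follows (the gain values, delta, and sample count are made-up numbers, not results from the paper):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound used by VFDT-style learners: with probability 1 - delta,
    the observed mean of n samples is within epsilon of the true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Split when the observed gain gap between the two best attributes exceeds epsilon.
best_gain, second_gain = 0.42, 0.35
n_examples, delta = 3000, 1e-6
epsilon = hoeffding_bound(value_range=math.log2(2), delta=delta, n=n_examples)  # range of info gain, 2 classes
print(best_gain - second_gain > epsilon)   # True -> safe to split on the best attribute
```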
NASA Astrophysics Data System (ADS)
Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore
2017-12-01
In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite System (GNSS) and Multiple Aperture Interferometry (MAI) measurements to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, since the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and then improve the GNSS data interpolation. This approach does not depend on other information such as in-situ investigations, tectonic studies, or knowledge of the data covariance matrix. We applied the method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
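A heavily simplified stand-in for the data integration step: per-pixel weighted least squares combining line-of-sight and GNSS-derived observations into an east-north-up displacement (all geometries, values, and errors below are invented for illustration):

```python
import numpy as np

# Per-pixel observations: each row is a unit vector whose dot product with the
# (east, north, up) motion models one measurement.
A = np.array([
    [-0.62, 0.11, 0.78],    # ascending InSAR line of sight (assumed geometry)
    [ 0.60, 0.12, 0.79],    # descending InSAR line of sight
    [ 0.10, 0.99, 0.05],    # along-track (MAI-like) component
    [ 1.00, 0.00, 0.00],    # interpolated GNSS east component
    [ 0.00, 1.00, 0.00],    # interpolated GNSS north component
])
d = np.array([0.031, -0.018, 0.009, 0.012, 0.007])        # observations (m)
sigma = np.array([0.005, 0.005, 0.020, 0.003, 0.003])     # assumed 1-sigma errors

# Weighted least squares: the maximum a posteriori solution under Gaussian
# errors and a flat prior (a stand-in for the full Bayesian minimization).
W = np.diag(1.0 / sigma ** 2)
east_north_up = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print(east_north_up)
```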
Knowledge discovery through games and game theory
NASA Astrophysics Data System (ADS)
Smith, James F., III; Rhyne, Robert D.
2001-03-01
A fuzzy logic based expert system has been developed that automatically allocates electronic attack (EA) resources in real time over many dissimilar platforms. The platforms can be very general, e.g., ships, planes, robots, land-based facilities, etc. The potential foes the platforms deal with can also be general. The initial version of the algorithm was optimized using a genetic algorithm employing fitness functions constructed from expertise. A new approach is being explored that involves embedding the resource manager in an electronic game environment. The game allows a human expert to play against the resource manager in a simulated battlespace, with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem: the game automatically creates a database reflecting the domain expert's knowledge, and it calls a data mining function, a genetic algorithm, to mine the database as required. The game allows easy evaluation of the information mined in the second step. The measure of effectiveness (MOE) for re-optimization is discussed. The mined information is extremely valuable, as shown through demanding scenarios.
Spectral-spatial classification of hyperspectral imagery with cooperative game
NASA Astrophysics Data System (ADS)
Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei
2018-01-01
Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.
Error field optimization in DIII-D using extremum seeking control
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...
2016-06-03
A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, which is monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices, as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
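A toy sketch of the extremum seeking loop (sinusoidal dither, demodulation, integration) on an invented static map from coil current to rotation; none of the numbers correspond to DIII-D:

```python
import numpy as np

def plant(u):
    """Unknown static map from control-coil current to toroidal rotation (toy stand-in)."""
    return 1.0 - (u - 0.3) ** 2          # rotation is maximal at u = 0.3

u, y_avg = 0.0, 0.0                       # coil-current estimate and low-pass state
gain, amp, omega, dt, tau = 2.0, 0.1, 5.0, 0.01, 1.0
for k in range(20000):
    t = k * dt
    dither = amp * np.sin(omega * t)
    y = plant(u + dither)                 # perturbed rotation "measurement"
    y_avg += (dt / tau) * (y - y_avg)     # washout (high-pass) filter removes the DC part
    gradient_estimate = (y - y_avg) * np.sin(omega * t)   # demodulation yields the slope
    u += gain * gradient_estimate * dt    # integrate: climb toward maximum rotation

print(u)   # ends up close to 0.3, the rotation-maximizing "coil current"
```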
Bong Seok Park; Jin Bae Park; Yoon Ho Choi
2011-08-01
We present a leader-follower-based adaptive formation control method for electrically driven nonholonomic mobile robots with limited information. First, an adaptive observer is developed for the case where velocity measurements are not available. With the proposed adaptive observer, the formation controller is designed to achieve the desired formation and guarantee collision avoidance. In addition, a neural network is employed to compensate for actuator saturation, and a projection algorithm is used to estimate the velocity information of the leader. It is shown, using Lyapunov theory, that all errors of the closed-loop system are uniformly ultimately bounded. Simulation results are presented to illustrate the performance of the proposed control system.
Survey on nonlocal games and operator space theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palazuelos, Carlos, E-mail: cpalazue@mat.ucm.es; Vidick, Thomas, E-mail: vidick@cms.caltech.edu
This review article is concerned with a recently uncovered connection between operator spaces, a noncommutative extension of Banach spaces, and quantum nonlocality, a striking phenomenon which underlies many of the applications of quantum mechanics to information theory, cryptography, and algorithms. Using the framework of nonlocal games, we relate measures of the nonlocality of quantum mechanics to certain norms in the Banach and operator space categories. We survey recent results that exploit this connection to derive large violations of Bell inequalities, study the complexity of the classical and quantum values of games and their relation to Grothendieck inequalities, and quantify the nonlocality of different classes of entangled states.
An Examination of Application of Artificial Neural Network in Cognitive Radios
NASA Astrophysics Data System (ADS)
Bello Salau, H.; Onwuka, E. N.; Aibinu, A. M.
2013-12-01
Recent advancements in software radio technology have led to the development of a smart device known as the cognitive radio. This type of radio fuses powerful techniques from artificial intelligence, game theory, wideband/multiple-antenna techniques, information theory, and statistical signal processing to create outstanding dynamic behavior. Cognitive radios are used to achieve a diverse set of applications such as spectrum sensing, radio parameter adaptation, and signal classification. This paper contributes by reviewing different cognitive radio implementations that use artificial intelligence techniques such as hidden Markov models, metaheuristic algorithms, and artificial neural networks (ANNs). Furthermore, different areas of application of ANNs, and approaches based on their performance metrics, are also examined.
Robust Measurement via A Fused Latent and Graphical Item Response Theory Model.
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang
2018-03-12
Item response theory (IRT) plays an important role in psychological and educational measurement. Unlike the classical testing theory, IRT models aggregate the item level information, yielding more accurate measurements. Most IRT models assume local independence, an assumption not likely to be satisfied in practice, especially when the number of items is large. Results in the literature and simulation studies in this paper reveal that misspecifying the local independence assumption may result in inaccurate measurements and differential item functioning. To provide more robust measurements, we propose an integrated approach by adding a graphical component to a multidimensional IRT model that can offset the effect of unknown local dependence. The new model contains a confirmatory latent variable component, which measures the targeted latent traits, and a graphical component, which captures the local dependence. An efficient proximal algorithm is proposed for the parameter estimation and structure learning of the local dependence. This approach can substantially improve the measurement, given no prior information on the local dependence structure. The model can be applied to measure both a unidimensional latent trait and multidimensional latent traits.
Dynamics of information diffusion and its applications on complex networks
NASA Astrophysics Data System (ADS)
Zhang, Zi-Ke; Liu, Chuang; Zhan, Xiu-Xiu; Lu, Xin; Zhang, Chu-Xu; Zhang, Yi-Cheng
2016-09-01
The ongoing rapid expansion of the World Wide Web (WWW) greatly increases the amount of information effectively transmitted from heterogeneous individuals to various systems. Extensive research on information diffusion has been carried out by a broad range of communities including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and empirical studies, unification and comparison of different theories and approaches are lacking, which impedes further advances. In this article, we review recent developments in information diffusion and discuss the major challenges. We compare and evaluate available models and algorithms to investigate their physical roles and optimization designs, respectively. Potential impacts and future directions are discussed. We emphasize that information diffusion has great scientific depth and combines diverse research fields, which makes it interesting for physicists as well as interdisciplinary researchers.
Thought waves remotely affect the performance (output voltage) of photoelectric cells
NASA Astrophysics Data System (ADS)
Cao, Dayong; Cao, Daqing
2012-02-01
In our experiments, thought waves have been shown to be capable of changing (affecting) the output voltage of photovoltaic cells located as far as 1-3 meters away. There are no wires between the brain and the photoelectric cell, so it is presumed that only the thought waves act on the photoelectric cell. In continual rotations, the experiments tested different solar cells, measuring devices and lamps, and the experiments were done in different labs. The first experiment was conducted in October 2002. Tests are ongoing. Conclusions and assumptions include: 1) the slow thought wave has the energy of space-time as defined by C1.00007: the mass, energy, space and time systemic theory (MEST); every process releases a field-effect electrical vibration which can be transmitted and focused along particular paths; 2) the thought wave carries the information of the order of the tester; 3) the brain (with the physical system of MEST) and consciousness (with the spirit system of the mind, consciousness, emotion and desire, MECD) can produce the information (a part of it as the genetic code); 4) through algorithms such as ACO (ant colony optimization) and EA (evolutionary algorithm, or genetic algorithm) working in RAM, humans can optimize the information; this optimization function is the intelligence; 5) in our experiments, not only can thought waves affect the voltage of the output photoelectric signals by their energy, but they can also selectively increase or decrease those photoelectric currents through a remote consciousness interface and a conscious-brain information technology.
Spot the match – wildlife photo-identification using information theory
Speed, Conrad W; Meekan, Mark G; Bradshaw, Corey JA
2007-01-01
Background Effective approaches for the management and conservation of wildlife populations require a sound knowledge of population demographics, and this is often only possible through mark-recapture studies. We applied an automated spot-recognition program (I3S) for matching natural markings of wildlife that is based on a novel information-theoretic approach to incorporate matching uncertainty. Using a photo-identification database of whale sharks (Rhincodon typus) as an example case, the information criterion (IC) algorithm we developed resulted in a parsimonious ranking of potential matches of individuals in an image library. Automated matches were compared to manual-matching results to test the performance of the software and algorithm. Results Validation of matched and non-matched images provided a threshold IC weight (approximately 0.2) below which match certainty was not assured. Most images tested were assigned correctly; however, scores for the by-eye comparison were lower than expected, possibly due to the low sample size. The effect of increasing horizontal angle of sharks in images reduced matching likelihood considerably. There was a negative linear relationship between the number of matching spot pairs and matching score, but this relationship disappeared when using the IC algorithm. Conclusion The software and use of easily applied information-theoretic scores of match parsimony provide a reliable and freely available method for individual identification of wildlife, with wide applications and the potential to improve mark-recapture studies without resorting to invasive marking techniques. PMID:17227581
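As an illustration of how information-criterion weights can rank candidate matches, the following Python sketch converts least-squares residuals from candidate spot-pattern comparisons into AICc-based weights; the residual values, sample size and parameter count are hypothetical, and the exact likelihood used by the authors' I3S workflow may differ.

```python
import numpy as np

def akaike_weights(rss_list, n_obs, k_params):
    """Convert residual sums of squares from candidate spot-pattern matches
    into AICc-based weights, giving a parsimonious ranking of potential
    matches. A threshold on these weights (the paper quotes roughly 0.2)
    would then separate confident matches from uncertain ones."""
    rss = np.asarray(rss_list, dtype=float)
    k = k_params
    aic = n_obs * np.log(rss / n_obs) + 2 * k
    aicc = aic + 2 * k * (k + 1) / max(n_obs - k - 1, 1)
    delta = aicc - aicc.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# hypothetical residuals from comparing one query shark against 4 library images
weights = akaike_weights([0.8, 2.5, 2.9, 3.1], n_obs=25, k_params=3)
print(np.round(weights, 3))   # the best-fitting candidate dominates the ranking
```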
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have been developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We apply the algorithm to simulated data to demonstrate the predictive capacity of the final MMIs and to real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified 4 metrics that combined to produce a -0.88 correlation with the human disturbance index. When compared to other methods, we found that this algorithmic approach resulted in MMIs that were more predictive and comprised fewer metrics.
Adaptive geodesic transform for segmentation of vertebrae on CT images
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang
2014-03-01
Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges, we present a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
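A minimal sketch of a geodesic distance transform with gradient-dependent edge costs is shown below, using a Dijkstra-style propagation and a simple per-pixel weight map standing in for the adaptive, anatomy-driven weighting described in the paper; it is illustrative only and not the authors' implementation.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, gamma=10.0, weight=None):
    """Geodesic distance transform on a 2-D image via Dijkstra's algorithm.
    Edge cost = 1 + gamma * w(q) * |I(q) - I(p)| between 4-neighbours p, q,
    where w is an optional per-pixel weight map (the 'adaptive' part)."""
    h, w_ = image.shape
    weight = np.ones_like(image, dtype=float) if weight is None else weight
    dist = np.full((h, w_), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w_:
                cost = 1.0 + gamma * weight[rr, cc] * abs(float(image[rr, cc]) - float(image[r, c]))
                if d + cost < dist[rr, cc]:
                    dist[rr, cc] = d + cost
                    heapq.heappush(heap, (d + cost, rr, cc))
    return dist

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0   # bright blob as a toy "vertebra"
d = geodesic_distance(img, seeds=[(30, 30)])
print(d[30, 45] > d[30, 35])   # crossing the intensity boundary is expensive: True
```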
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
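To make the information-theoretic criterion concrete, here is a short Python sketch that estimates pairwise mutual information from histograms and assembles an interaction matrix. It is a simplified stand-in for PREMER's entropy-reduction machinery (which is compiled FORTRAN 90 and additionally handles causality, prior knowledge, missing data and outliers); the bin count and toy data are assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mi_matrix(data):
    """Pairwise MI matrix for a (samples x nodes) data array."""
    n = data.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_information(data[:, i], data[:, j])
    return mi

rng = np.random.default_rng(4)
a = rng.normal(size=300)
data = np.column_stack([a, a + 0.1 * rng.normal(size=300), rng.normal(size=300)])
print(np.round(mi_matrix(data), 2))   # strong link between columns 0 and 1, none to column 2
```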
A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns
NASA Astrophysics Data System (ADS)
Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng
2009-11-01
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model, chance-constrained programming model and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large-size problems.
Using chemical organization theory for model checking
Kaleta, Christoph; Richter, Stephan; Dittrich, Peter
2009-01-01
Motivation: The increasing number and complexity of biomodels makes automatic procedures for checking the models' properties and quality necessary. Approaches like elementary mode analysis, flux balance analysis, deficiency analysis and chemical organization theory (OT) require only the stoichiometric structure of the reaction network for derivation of valuable information. In formalisms like Systems Biology Markup Language (SBML), however, information about the stoichiometric coefficients required for an analysis of chemical organizations can be hidden in kinetic laws. Results: First, we introduce an algorithm that uncovers stoichiometric information that might be hidden in the kinetic laws of a reaction network. This allows us to apply OT to SBML models using modifiers. Second, using the new algorithm, we performed a large-scale analysis of the 185 models contained in the manually curated BioModels Database. We found that for 41 models (22%) the set of organizations changes when modifiers are considered correctly. We discuss one of these models in detail (BIOMD149, a combined model of the ERK- and Wnt-signaling pathways), whose set of organizations drastically changes when modifiers are considered. Third, we found inconsistencies in 5 models (3%) and identified their characteristics. Compared with flux-based methods, OT is able to identify those species and reactions more accurately [in 26 cases (14%)] that can be present in a long-term simulation of the model. We conclude that our approach is a valuable tool that helps to improve the consistency of biomodels and their repositories. Availability: All data and a JAVA applet to check SBML-models is available from http://www.minet.uni-jena.de/csb/prj/ot/tools Contact: dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19468053
1983-08-01
Tailored Testing Theory and Practice: A Basic Model, Normal Ogive Submodels, and Tailored Testing Algorithms (NPRDC TR 83-32, August 1983). Starting from a single common-factor model, the author derives the two- and three-parameter normal ogive item response functions as submodels; tailored testing algorithms are developed for both submodels.
FASTSIM2: a second-order accurate frictional rolling contact algorithm
NASA Astrophysics Data System (ADS)
Vollebregt, E. A. H.; Wilders, P.
2011-01-01
In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times less grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights in the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.
Genetic algorithms as global random search methods
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
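For readers who want to see the operators under discussion, a minimal generational genetic algorithm with proportional (roulette-wheel) selection, one-point recombination and bit-flip mutation is sketched below on a toy one-max problem; it is a textbook illustration of the operators, not the sampling-distribution analysis of the paper.

```python
import random

def genetic_search(fitness, n_bits, pop_size=60, generations=200,
                   p_cross=0.9, p_mut=0.01, seed=0):
    """Minimal GA: proportional selection (global search operator) plus
    one-point recombination (exploits similarities) and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select(pop, fits):
        # roulette-wheel selection proportional to fitness
        total = sum(fits)
        r = rng.uniform(0, total)
        acc = 0.0
        for ind, f in zip(pop, fits):
            acc += f
            if acc >= r:
                return ind
        return pop[-1]

    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(pop, fits), select(pop, fits)
            if rng.random() < p_cross:                 # one-point crossover
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [[1 - g if rng.random() < p_mut else g for g in ind]
                    for ind in (a, b)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# one-max toy problem: maximize the number of ones in a 30-bit string
best = genetic_search(lambda ind: sum(ind) + 1e-9, n_bits=30)
print(sum(best))
```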
Theory, Design, and Algorithms for Optimal Control of wireless Networks
2010-06-09
The implementation of network-centric warfare technologies is an abiding, critical interest of Air Force Science and Technology efforts for the Warfighter. Wireless communications and strategic signaling are areas of critical Air Force mission need. Throughput enhancement and robust connectivity in autonomous, heterogeneous communications and sensor networks are critical factors in net-centric USAF operations. This research directly supports the Air Force vision of information dominance and the development of anywhere, anytime operational readiness.
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market; therefore, the need for secure communication over a mobile platform has increased exponentially. Steganography, the art of hiding critical data within an innocuous signal, provides an answer to this need. JPEG is one of the formats commonly used for storing and transmitting images on the web, and pictures captured by mobile cameras are mostly in JPEG format. In this article, we introduce a switching theory based steganographic system for JPEG images that is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but they become ineffective when employed over all the coefficients. Therefore, we propose an approach that treats each set of AC coefficients within a different framework, thus enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm
NASA Astrophysics Data System (ADS)
Selig, Marco; Enßlin, Torsten A.
2015-02-01
The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
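Since the article uses Horner's method as one of its two examples, a short sketch of the algorithm may help; the coefficients and test value below are illustrative.

```python
def horner(coeffs, x):
    """Evaluate a polynomial a_n*x^n + ... + a_1*x + a_0, with coefficients
    given from highest to lowest degree, using n multiplications and n
    additions instead of the naive power-by-power evaluation."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# (2x^3 - 6x^2 + 2x - 1) at x = 3  ->  2*27 - 6*9 + 6 - 1 = 5
assert horner([2, -6, 2, -1], 3) == 5
```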
Extracting atmospheric turbulence and aerosol characteristics from passive imagery
NASA Astrophysics Data System (ADS)
Reinhardt, Colin N.; Wayne, D.; McBryde, K.; Cauble, G.
2013-09-01
Obtaining accurate, precise and timely information about the local atmospheric turbulence and extinction conditions and aerosol/particulate content remains a difficult problem with incomplete solutions. It has important applications in areas such as optical and IR free-space communications, imaging system performance, and the propagation of directed energy. The capability to utilize passive imaging data to extract parameters characterizing atmospheric turbulence and aerosol/particulate conditions would represent a valuable addition to the current piecemeal toolset for atmospheric sensing. Our research investigates an application of fundamental results from optical turbulence theory and aerosol extinction theory combined with recent advances in image-quality-metrics (IQM) and image-quality-assessment (IQA) methods. We have developed an algorithm which extracts important parameters used for characterizing atmospheric turbulence and extinction along the propagation channel, such as the refractive-index structure parameter C_n^2, the Fried atmospheric coherence width r0, and the atmospheric extinction coefficient βext, from passive image data. We will analyze the algorithm performance using simulations based on modeling with turbulence modulation transfer functions. An experimental field campaign was organized and data were collected from passive imaging through turbulence of Siemens star resolution targets over several short littoral paths in Point Loma, San Diego, under conditions of various turbulence intensities. We present initial results of the algorithm's effectiveness using this field data and compare against measurements taken concurrently with other standard atmospheric characterization equipment. We also discuss some of the challenges encountered with the algorithm, tasks currently in progress, and approaches planned for improving the performance in the near future.
Product Distribution Theory for Control of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Lee, Chia Fan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
Greenwood, Nigel J C; Gunton, Jenny E
2014-07-01
This study demonstrated the novel application of a "machine-intelligent" mathematical structure, combining differential game theory and Lyapunov-based control theory, to the artificial pancreas to handle dynamic uncertainties. Realistic type 1 diabetes (T1D) models from the literature were combined into a composite system. Using a mixture of "black box" simulations and actual data from diabetic medical histories, realistic sets of diabetic time series were constructed for blood glucose (BG), interstitial fluid glucose, infused insulin, meal estimates, and sometimes plasma insulin assays. The problem of underdetermined parameters was sidestepped by applying a variant of a genetic algorithm to partial information, whereby multiple candidate-personalized models were constructed and then rigorously tested using further data. These formed a "dynamic envelope" of trajectories in state space, where each trajectory was generated by a hypothesis on the hidden T1D system dynamics. This dynamic envelope was then culled to a reduced form to cover observed dynamic behavior. A machine-intelligent autonomous algorithm then implemented game theory to construct real-time insulin infusion strategies, based on the flow of these trajectories through state space and their interactions with hypoglycemic or near-hyperglycemic states. This technique was tested on 2 simulated participants over a total of fifty-five 24-hour days, with no hypoglycemic or hyperglycemic events, despite significant uncertainties from using actual diabetic meal histories with 10-minute warnings. In the main case studies, BG was steered within the desired target set for 99.8% of a 16-hour daily assessment period. Tests confirmed algorithm robustness for ±25% carbohydrate error. For over 99% of the overall 55-day simulation period, either formal controller stability was achieved to the desired target or else the trajectory was within the desired target. These results suggest that this is a stable, high-confidence way to generate closed-loop insulin infusion strategies. © 2014 Diabetes Technology Society.
Solid Rocket Fuel Constitutive Theory and Polymer Cure
NASA Technical Reports Server (NTRS)
Ream, Robert
2006-01-01
Solid Rocket Fuel is a complex composite material for which no general constitutive theory, based on first principles, has been developed. One of the principles such a relation would depend on is the morphology of the binder. A theory of polymer curing is required to determine this morphology. During work on such a theory an algorithm was developed for counting the number of ways a polymer chain could assemble. The methods used to develop and check this algorithm led to an analytic solution to the problem. This solution is used in a probability distribution function which characterizes the morphology of the polymer.
Probabilistic peak detection for first-order chromatographic data.
Lopatka, M; Vivó-Truyols, G; Sjerps, M J
2014-03-19
We present a novel algorithm for probabilistic peak detection in first-order chromatographic data. Unlike conventional methods that deliver a binary answer pertaining to the expected presence or absence of a chromatographic peak, our method calculates the probability of a point being affected by such a peak. The algorithm makes use of chromatographic information (i.e. the expected width of a single peak and the standard deviation of baseline noise). As prior information of the existence of a peak in a chromatographic run, we make use of the statistical overlap theory. We formulate an exhaustive set of mutually exclusive hypotheses concerning presence or absence of different peak configurations. These models are evaluated by fitting a segment of chromatographic data by least-squares. The evaluation of these competing hypotheses can be performed as a Bayesian inferential task. We outline the potential advantages of adopting this approach for peak detection and provide several examples of both improved performance and increased flexibility afforded by our approach. Copyright © 2014 Elsevier B.V. All rights reserved.
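The hypothesis-comparison idea can be sketched as follows: fit a data segment with and without a single Gaussian peak by least squares and convert the fits into approximate posterior probabilities. The BIC approximation, flat priors and known peak width used here are simplifying assumptions; the authors additionally use statistical overlap theory as the prior on peak presence and consider a richer set of peak configurations.

```python
import numpy as np

def peak_posteriors(y, x, peak_width):
    """Compare H0 (baseline only) vs H1 (baseline + one Gaussian peak of
    known width) by least squares, then turn the fits into approximate
    posterior probabilities via BIC weights."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)                # H0: constant baseline
    best_rss1 = np.inf
    for mu in x:                                      # H1: scan candidate peak centres
        g = np.exp(-0.5 * ((x - mu) / peak_width) ** 2)
        A = np.column_stack([np.ones(n), g])          # baseline + peak amplitude
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        best_rss1 = min(best_rss1, np.sum((y - A @ coef) ** 2))
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(best_rss1 / n) + 2 * np.log(n)
    w = np.exp(-0.5 * np.array([bic0, bic1]))
    return w / w.sum()          # [P(no peak | y), P(peak | y)]

x = np.linspace(0, 1, 200)
rng = np.random.default_rng(3)
y = 0.05 * rng.normal(size=x.size) + np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)
print(peak_posteriors(y, x, peak_width=0.02))   # probability mass moves to "peak present"
```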
Gene Selection and Cancer Classification: A Rough Sets Based Approach
NASA Astrophysics Data System (ADS)
Sun, Lijun; Miao, Duoqian; Zhang, Hongyun
Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough sets theory, corresponding to a minimal set of essential genes for discerning samples, are an efficient tool for gene selection. Due to the computational complexity of the existing reduct algorithms, feature ranking is usually used to narrow down the gene space as a first step, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes, based on the expression-level difference between classes and the gene's contribution to classification, and present an algorithm for generating all possible reducts from informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful in selecting highly discriminative genes and that the classification accuracy is impressive.
Gaussian process tomography for soft x-ray spectroscopy at WEST without equilibrium information
NASA Astrophysics Data System (ADS)
Wang, T.; Mazon, D.; Svensson, J.; Li, D.; Jardin, A.; Verdoolaege, G.
2018-06-01
Gaussian process tomography (GPT) is a recently developed tomography method based on the Bayesian probability theory [J. Svensson, JET Internal Report EFDA-JET-PR(11)24, 2011 and Li et al., Rev. Sci. Instrum. 84, 083506 (2013)]. By modeling the soft X-ray (SXR) emissivity field in a poloidal cross section as a Gaussian process, the Bayesian SXR tomography can be carried out in a robust and extremely fast way. Owing to the short execution time of the algorithm, GPT is an important candidate for providing real-time reconstructions with a view to impurity transport and fast magnetohydrodynamic control. In addition, the Bayesian formalism allows quantifying uncertainty on the inferred parameters. In this paper, the GPT technique is validated using a synthetic data set expected from the WEST tokamak, and the results are shown of its application to the reconstruction of SXR emissivity profiles measured on Tore Supra. The method is compared with the standard algorithm based on minimization of the Fisher information.
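A minimal sketch of the Gaussian-process linear inversion underlying GPT is given below: with a Gaussian prior on the emissivity profile and a linear chord-response matrix, the posterior mean and covariance follow in closed form. The 1-D geometry, response matrix and hyperparameters are assumptions made for illustration, not the WEST/Tore Supra diagnostic geometry.

```python
import numpy as np

# Toy 1-D "tomography": line-integrated data d = R @ f + noise, with a
# squared-exponential Gaussian-process prior on the emissivity profile f.
n = 50                                    # pixels along the profile
x = np.linspace(0.0, 1.0, n)
true_f = np.exp(-0.5 * ((x - 0.4) / 0.08) ** 2)

m = 12                                    # hypothetical chords, each averaging a pixel band
R = np.zeros((m, n))
for i in range(m):
    lo = int(i * n / m)
    R[i, lo:lo + n // m] = 1.0 / (n // m)

sigma_d = 0.02
d = R @ true_f + sigma_d * np.random.default_rng(1).normal(size=m)

ell, sigma_f = 0.1, 1.0                   # assumed GP hyperparameters
K = sigma_f**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

# Posterior mean and covariance of f given d (standard Gaussian linear model)
S = R @ K @ R.T + sigma_d**2 * np.eye(m)
gain = K @ R.T @ np.linalg.solve(S, np.eye(m))
f_mean = gain @ d
f_cov = K - gain @ R @ K                  # quantifies uncertainty on the profile
print(np.max(np.abs(f_mean - true_f)))
```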
Systemic risk and spatiotemporal dynamics of the US housing market.
Meng, Hao; Xie, Wen-Jie; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H Eugene
2014-01-13
Housing markets play a crucial role in economies and the collapse of a real-estate bubble usually destabilizes the financial system and causes economic recessions. We investigate the systemic risk and spatiotemporal dynamics of the US housing market (1975-2011) at the state level based on the Random Matrix Theory (RMT). We identify richer economic information in the largest eigenvalues deviating from RMT predictions for the housing market than for stock markets and find that the component signs of the eigenvectors contain either geographical information or the extent of differences in house price growth rates or both. By looking at the evolution of different quantities such as eigenvalues and eigenvectors, we find that the US housing market experienced six different regimes, which is consistent with the evolution of state clusters identified by the box clustering algorithm and the consensus clustering algorithm on the partial correlation matrices. We find that dramatic increases in the systemic risk are usually accompanied by regime shifts, which provide a means of early detection of housing bubbles.
Non-contact detection of cardiac rate based on visible light imaging device
NASA Astrophysics Data System (ADS)
Zhu, Huishi; Zhao, Yuejin; Dong, Liquan
2012-10-01
We have developed a non-contact method to detect the human cardiac rate at a distance under general lighting conditions. Using the video signal of the human face region captured by a webcam, we acquire the cardiac rate based on photoplethysmography (PPG) theory. The detection method exploits the fact that blood absorbs light of different wavelengths differently. First, we decompose the video signal into the R, G and B color channels and choose the face region as the region of interest, over which we take the average gray value. Then, we draw three gray-mean curves, one per color channel, as functions of time. When the imaging device has good color fidelity, the green channel shows the photoplethysmographic information most clearly, while the red and blue channels can provide additional physiological information on account of the light-absorption characteristics of blood. We divide the red channel signal by the green channel signal to acquire the pulse wave. Filtering the pulse wave with a passband from 0.67 Hz to 3 Hz and applying a frequency-spectrum superposition algorithm, we extract the cardiac rate. Finally, we experimented with 30 volunteers of different genders and ages. The experimental results are all in reasonable agreement with reference measurements; the difference is about 2 bpm. From these experiments, we infer that visible-light photoplethysmography can also be used to detect other physiological information.
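A minimal sketch of the processing chain described above (R/G ratio, 0.67-3 Hz band-pass, dominant spectral peak) is shown below, using synthetic per-frame channel means; face tracking and the authors' exact spectrum-superposition step are not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(red_mean, green_mean, fs):
    """Estimate cardiac rate from per-frame mean R and G face-region values:
    pulse wave = R/G, band-pass 0.67-3 Hz, then take the dominant peak."""
    pulse = np.asarray(red_mean) / np.asarray(green_mean)
    pulse = pulse - pulse.mean()
    b, a = butter(3, [0.67 / (fs / 2), 3.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pulse)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    band = (freqs >= 0.67) & (freqs <= 3.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz

# synthetic 30 s clip at 30 fps with a 72 bpm (1.2 Hz) pulsatile component
fs = 30.0
t = np.arange(0, 30, 1 / fs)
green = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.default_rng(2).normal(0, 0.1, t.size)
red = 80 + 0.2 * np.sin(2 * np.pi * 1.2 * t + 0.3)
print(round(heart_rate_bpm(red, green, fs)))   # approximately 72
```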
Quantum algorithm for energy matching in hard optimization problems
NASA Astrophysics Data System (ADS)
Baldwin, C. L.; Laumann, C. R.
2018-06-01
We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p -spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2018-03-01
This paper draws on the "human reliability" concept as a structure for gaining insight into maintenance workforce assessment in a process industry. Human reliability hinges on developing the reliability of humans to a threshold that guides the maintenance workforce to execute accurate decisions within the limits of resources and time allocations. This concept offers a worthwhile point of departure, encompassing three adjustments to the literature model in terms of maintenance time, workforce performance and return on workforce investment; these adjustments frame the results of this work. The presented structure breaks new ground in maintenance workforce theory and practice from a number of perspectives. First, we have implemented fuzzy goal programming (FGP) and differential evolution (DE) techniques for the solution of an optimisation problem in the maintenance of a process plant for the first time. The results obtained in this work showed better solution quality from the DE algorithm compared with the genetic algorithm and particle swarm optimisation algorithm, demonstrating the superiority of the proposed procedure. Second, the analytical discourse, framed on stochastic theory and focused on a specific application to a process plant in Nigeria, is a novelty. The work provides more insight into maintenance workforce planning during overhaul rework and overtime maintenance activities in manufacturing systems and demonstrates the capacity to generate substantially helpful information for practice.
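For reference, a bare-bones DE/rand/1/bin minimizer of the kind credited in the abstract is sketched below on a toy objective; the fuzzy goal programming formulation and the workforce model itself are not reproduced.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover, greedy replacement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f = obj(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()

# toy objective: sphere function in 4 dimensions
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5)] * 4)
print(np.round(x_best, 3), round(f_best, 6))
```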
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
Much of the theory used to design and study relational databases exists in the form of published algorithms and theorems; hand simulating these algorithms can be a tedious and error-prone chore. Therefore, a toolbox of dependency theory algorithms for database design is developed.
NASA Astrophysics Data System (ADS)
Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.
2006-10-01
In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements on image matching. Firstly, shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetical matching measure. Secondly, the topological connecting relations of matching points in a Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the searching scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental result shows that the algorithm has a higher matching speed and accuracy than the pyramid image matching algorithm based on gray-scale correlation.
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
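To illustrate the two ingredients mentioned above (cache-aware blocking and materializing a permuted dense tensor before a matrix-multiply step), here is a small NumPy sketch; it mirrors the idea only, whereas the actual TAL-SH routines are heavily optimized native code for CPU, Xeon Phi and GPU.

```python
import numpy as np

def blocked_transpose_2d(a, block=64):
    """Transpose a 2-D array block by block. In compiled code this improves
    cache locality; in pure NumPy the benefit is modest, but the structure
    mirrors the cache optimization described in the abstract."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
    return out

def tensor_permute(t, perm):
    """General dense tensor transpose: permute axes and materialize the
    result contiguously, as needed before casting a tensor contraction
    as a matrix multiplication."""
    return np.ascontiguousarray(np.transpose(t, perm))

a = np.arange(12).reshape(3, 4)
assert np.array_equal(blocked_transpose_2d(a, block=2), a.T)
t = np.random.rand(2, 3, 4, 5)
assert tensor_permute(t, (3, 1, 0, 2)).shape == (5, 3, 2, 4)
```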
Time-reversal MUSIC imaging of extended targets.
Marengo, Edwin A; Gruber, Fred K; Simonetti, Francesco
2007-08-01
This paper develops, within a general framework that is applicable to rather arbitrary electromagnetic and acoustic remote sensing systems, a theory of time-reversal "MUltiple Signal Classification" (MUSIC)-based imaging of extended (nonpoint-like) scatterers (targets). The general analysis applies to arbitrary remote sensing geometry and sheds light onto how the singular system of the scattering matrix relates to the geometrical and propagation characteristics of the entire transmitter-target-receiver system and how to use this effect for imaging. All the developments are derived within exact scattering theory which includes multiple scattering effects. The derived time-reversal MUSIC methods include both interior sampling, as well as exterior sampling (or enclosure) approaches. For presentation simplicity, particular attention is given to the time-harmonic case where the informational wave modes employed for target interrogation are purely spatial, but the corresponding generalization to broadband fields is also given. This paper includes computer simulations illustrating the derived theory and algorithms.
NASA Astrophysics Data System (ADS)
Keum, Jongho; Coulibaly, Paulin
2017-07-01
Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
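The entropy objectives can be illustrated with a short sketch that discretizes synthetic station records and computes their joint entropy and total correlation; the bin choice and toy data are assumptions, and the evolutionary optimization over candidate station subsets is omitted.

```python
import numpy as np

def quantize(x, bins=5):
    """Discretize a continuous record into equal-width bins (one of several
    quantization choices whose effect the paper evaluates)."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    return np.digitize(x, edges[1:-1])

def joint_entropy(columns):
    """Shannon joint entropy (bits) of already-discretized records."""
    _, counts = np.unique(np.column_stack(columns), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def total_correlation(columns):
    """Sum of marginal entropies minus joint entropy (redundancy measure)."""
    return sum(joint_entropy([c]) for c in columns) - joint_entropy(columns)

# toy example: three synthetic station records with varying degrees of redundancy
rng = np.random.default_rng(0)
base = rng.normal(size=500)
stations = [quantize(base + rng.normal(scale=s, size=500)) for s in (0.5, 1.0, 2.0)]
print(joint_entropy(stations), total_correlation(stations))
```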
Human Purposive Movement Theory
2012-03-01
theory and provides examples of developmental and operational technologies that could use this theory in common settings. Subject terms: human activity, prediction of behavior, human algorithms, purposive movement theory.
NASA Astrophysics Data System (ADS)
Danilova, Olga; Semenova, Zinaida
2018-04-01
The objective of this study is a detailed analysis of the development of physical protection systems for information resources. The mathematical apparatus of optimization theory and decision making is used to correctly formulate, and to create an algorithm for, the selection procedure for the optimal configuration of a security system, considering the location of the secured object's access points and zones. The result of this study is a software implementation scheme of a decision-making system for optimal placement of the physical access control system's elements.
Distributed Control by Lagrangian Steepest Descent
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2004-01-01
Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian of the joint strategy. So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian-minimizing point, lower the expected value of the control objective function, and repeat. Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.
Optimal design of focused experiments and surveys
NASA Astrophysics Data System (ADS)
Curtis, Andrew
1999-10-01
Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ = λ + 2μ/3.
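One way such a focused quality measure can be computed in the linearized Gaussian setting is sketched below: project the posterior model covariance implied by a candidate design onto the subspace of interest and measure the variance reduction there. The projector, prior, noise level and toy sensitivity matrices are assumptions for illustration, not the paper's exact focused measure.

```python
import numpy as np

def focused_quality(G, subspace, sigma=1.0, prior_var=10.0):
    """Quality of a linearized design with sensitivity matrix G (data x model):
    reduction of posterior variance, relative to the prior, projected onto the
    model subspace of interest (columns of `subspace`)."""
    m = G.shape[1]
    prior = prior_var * np.eye(m)
    post = np.linalg.inv(np.linalg.inv(prior) + G.T @ G / sigma**2)
    P = subspace @ np.linalg.pinv(subspace)          # orthogonal projector onto the subspace
    return np.trace(P @ prior @ P) - np.trace(P @ post @ P)

# two candidate "surveys" constraining a 3-parameter model; we only care
# about the first parameter (subspace = first coordinate axis)
sub = np.array([[1.0], [0.0], [0.0]])
G_a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # measures parameter 1 directly
G_b = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # ignores parameter 1
print(focused_quality(G_a, sub) > focused_quality(G_b, sub))   # True
```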
System Engineering of Autonomous Space Vehicles
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis
2014-01-01
Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.
Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application
NASA Astrophysics Data System (ADS)
Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin
2007-12-01
We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
Feasibility of an anticipatory noncontact precrash restraint actuation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercel, S.W.; Dress, W.B.
1995-12-31
The problem of providing an electronic warning of an impending crash to a precrash restraint system a fraction of a second before physical contact differs from more widely explored problems, such as providing several seconds of crash warning to a driver. One approach to precrash restraint sensing is to apply anticipatory system theory. This consists of nested simplified models of the system to be controlled and of the system's environment. It requires sensory information to describe the "current state" of the system and the environment. The models use the sensory data to make a faster-than-real-time prediction about the near future. Anticipation theory is well founded but rarely used. A major problem is to extract real-time current-state information from inexpensive sensors. Providing current-state information to the nested models is the weakest element of the system. Therefore, sensors and real-time processing of sensor signals command the most attention in an assessment of system feasibility. This paper describes problem definition, potential "showstoppers," and ways to overcome them. It includes experiments showing that inexpensive radar is a practical sensing element. It considers fast and inexpensive algorithms to extract information from sensor data.
Quantifying uncertainty in climate change science through empirical information theory.
Majda, Andrew J; Gershgorin, Boris
2010-08-24
Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO(2). Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
Anticipated detection of favorable periods for wind energy production by means of information theory
NASA Astrophysics Data System (ADS)
Vogel, Eugenio; Saravia, Gonzalo; Kobe, Sigismund; Schumann, Rolf; Schuster, Rolf
Managing the electric power produced by different sources requires mixing the different response times they present. Thus, for instance, coal burning presents large time lags until operational conditions are reached, while hydroelectric generation can react in a matter of seconds or a few minutes to reach the desired productivity. Wind energy production (WEP) can be instantaneously fed to the network to save fuels with low thermal inertia (gas burning, for instance), but this source presents sudden variations within a few hours. We report here for the first time a method based on information theory to handle WEP. This method has been successful in detecting dynamical changes in magnetic transitions and variations of stock markets. An algorithm called wlzip, based on information recognition, is used to recognize the information content of a time series. We make use of publicly available energy data in Germany to simulate real applications. After a calibration process, the system can recognize, directly from the WEP data, the onset of favorable periods of a desired strength. Optimization can lead to a few hours of anticipation, which is enough to control the mixture of WEP with other energy sources, thus saving fuels.
An introduction to the theory of ptychographic phase retrieval methods
NASA Astrophysics Data System (ADS)
Konijnenberg, Sander
2017-12-01
An overview of several ptychographic phase retrieval methods and the theory behind them is presented. By looking into the theory behind more basic single-intensity pattern phase retrieval methods, a theoretical framework is provided for analyzing ptychographic algorithms. Extensions of ptychographic algorithms that deal with issues such as partial coherence, thick samples, or uncertainties of the probe or probe positions are also discussed. This introduction is intended for scientists and students without prior experience in the field of phase retrieval or ptychography to quickly get introduced to the theory, so that they can put the more specialized literature in context more easily.
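As a concrete anchor for the iterative methods being surveyed, a bare-bones ePIE-style object update with a known probe is sketched below; real ptychographic codes also refine the probe and handle noise, partial coherence and position errors, as discussed in the text, and the scan geometry and test object here are illustrative assumptions.

```python
import numpy as np

def epie(patterns, positions, probe, obj_shape, iters=100, alpha=1.0):
    """Basic ptychographic iteration with a known probe: form the exit wave,
    impose the measured far-field modulus, and apply a gradient-style update
    to the illuminated object patch."""
    obj = np.ones(obj_shape, dtype=complex)
    ph, pw = probe.shape
    for _ in range(iters):
        for (r, c), meas in zip(positions, patterns):
            patch = obj[r:r + ph, c:c + pw]           # view into the object estimate
            exit_wave = probe * patch
            F = np.fft.fft2(exit_wave)
            F = meas * np.exp(1j * np.angle(F))       # replace modulus, keep phase
            new_exit = np.fft.ifft2(F)
            patch += alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * (new_exit - exit_wave)
    return obj

# synthetic test: overlapping diffraction moduli from a small phase object
rng = np.random.default_rng(5)
obj_true = np.exp(1j * 0.5 * rng.random((32, 32)))
probe = np.zeros((16, 16), dtype=complex)
probe[4:12, 4:12] = 1.0                               # simple square illumination
positions = [(r, c) for r in range(0, 17, 4) for c in range(0, 17, 4)]
patterns = [np.abs(np.fft.fft2(probe * obj_true[r:r + 16, c:c + 16])) for r, c in positions]
rec = epie(patterns, positions, probe, obj_true.shape)   # recovered up to a global phase
```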
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2013-01-01
In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2014-01-01
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
Bao, Xu; Li, Haijian; Xu, Dongwei; Jia, Limin; Ran, Bin; Rong, Jian
2016-11-06
The jam flow condition is one of the main traffic states in traffic flow theory and the most difficult state for sectional traffic information acquisition. Since traffic information acquisition is the basis for the application of an intelligent transportation system, research on traffic vehicle counting methods for the jam flow conditions has been worthwhile. A low-cost and energy-efficient type of multi-function wireless traffic magnetic sensor was designed and developed. Several advantages of the traffic magnetic sensor are that it is suitable for large-scale deployment and time-sustainable detection for traffic information acquisition. Based on the traffic magnetic sensor, a basic vehicle detection algorithm (DWVDA) with less computational complexity was introduced for vehicle counting in low traffic volume conditions. To improve the detection performance in jam flow conditions with a "tailgating effect" between front vehicles and rear vehicles, an improved vehicle detection algorithm (SA-DWVDA) was proposed and applied in field traffic environments. By deploying traffic magnetic sensor nodes in field traffic scenarios, two field experiments were conducted to test and verify the DWVDA and the SA-DWVDA algorithms. The experimental results have shown that both DWVDA and the SA-DWVDA algorithms yield a satisfactory performance in low traffic volume conditions (scenario I) and both of their mean absolute percent errors are less than 1% in this scenario. However, for jam flow conditions with heavy traffic volumes (scenario II), the SA-DWVDA was proven to achieve better results, and the mean absolute percent error of the SA-DWVDA is 2.54% with corresponding results of the DWVDA 7.07%. The results conclude that the proposed SA-DWVDA can implement efficient and accurate vehicle detection in jam flow conditions and can be employed in field traffic environments.
NASA Astrophysics Data System (ADS)
Alexandrov, M. D.; Mishchenko, M. I.
2017-12-01
Accurate aerosol retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. We have suggested addressing this ill-posedness by flying a bistatic lidar system. Such a system would consist of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and an additional platform hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar. Thus, bistatic lidar observations would be free of deficiencies affecting both monostatic lidar measurements (caused by the highly limited information content) and passive photopolarimetric measurements (caused by vertical integration and surface reflection). We present a preliminary aerosol retrieval algorithm for a bistatic lidar system consisting of a high spectral resolution lidar (HSRL) and an additional receiver flown in formation with it at a scattering angle of 165 degrees. This algorithm was applied to synthetic data generated using Mie-theory computations. The model/retrieval parameters in our tests were the effective radius and variance of the aerosol size distribution, the complex refractive index of the particles, and their number concentration. Both mono- and bimodal aerosol mixtures were considered. Our algorithm allowed for definitive evaluation of error propagation from measurements to retrievals using a Monte Carlo technique, which involves random distortion of the observations and statistical characterization of the resulting retrieval errors. Our tests demonstrated that supplementing a conventional monostatic HSRL with an additional receiver dramatically increases the information content of the measurements and allows for a sufficiently accurate characterization of tropospheric aerosols.
Bao, Xu; Li, Haijian; Xu, Dongwei; Jia, Limin; Ran, Bin; Rong, Jian
2016-01-01
The jam flow condition is one of the main traffic states in traffic flow theory and the most difficult state for sectional traffic information acquisition. Since traffic information acquisition is the basis for intelligent transportation system applications, research on vehicle counting methods for jam flow conditions is worthwhile. A low-cost, energy-efficient, multi-function wireless traffic magnetic sensor was designed and developed; its advantages include suitability for large-scale deployment and sustained detection for traffic information acquisition. Based on this sensor, a basic vehicle detection algorithm (DWVDA) with low computational complexity was introduced for vehicle counting in low traffic volume conditions. To improve detection performance in jam flow conditions with a “tailgating effect” between leading and following vehicles, an improved vehicle detection algorithm (SA-DWVDA) was proposed and applied in field traffic environments. By deploying traffic magnetic sensor nodes in field traffic scenarios, two field experiments were conducted to test and verify the DWVDA and SA-DWVDA algorithms. The experimental results show that both algorithms yield satisfactory performance in low traffic volume conditions (scenario I), with mean absolute percent errors below 1%. For jam flow conditions with heavy traffic volumes (scenario II), however, the SA-DWVDA achieves better results: its mean absolute percent error is 2.54%, compared with 7.07% for the DWVDA. These results indicate that the proposed SA-DWVDA can implement efficient and accurate vehicle detection in jam flow conditions and can be employed in field traffic environments. PMID:27827974
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor with respect to both the quality (error) and the cost (complexity) of producing the solution. As its first step, the preprocessor makes use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already available), and the physical environment, as well as information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting to solve the problem straight away with a GA without using knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in this first step. We therefore advocate the use of such a preprocessor for real-world optimization problems, including NP-complete ones, before applying the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
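The record above lists the GA parameters such a preprocessor would tune. As a point of reference, the following is a minimal, hedged sketch of a real-coded GA for unconstrained minimization in which those parameters (population size, crossover and mutation probabilities, search bounds) appear explicitly; all names and default values are illustrative and are not taken from the report.

```python
import numpy as np

def minimal_ga(fitness, bounds, pop_size=40, generations=100,
               p_crossover=0.9, p_mutation=0.1, seed=None):
    """Minimal real-coded GA for unconstrained minimization (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        f = np.apply_along_axis(fitness, 1, pop)
        # Binary tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Uniform crossover with probability p_crossover
        mates = parents[rng.permutation(pop_size)]
        mask = (rng.random(pop.shape) < 0.5) & (rng.random((pop_size, 1)) < p_crossover)
        children = np.where(mask, mates, parents)
        # Gaussian mutation, clipped back into the search space
        mutate = rng.random(pop.shape) < p_mutation
        children = np.clip(children + mutate * rng.normal(0, 0.1 * (hi - lo), pop.shape), lo, hi)
        pop = children
    f = np.apply_along_axis(fitness, 1, pop)
    return pop[np.argmin(f)], f.min()

# Example: minimize the 2-D sphere function
best_x, best_f = minimal_ga(lambda x: np.sum(x**2), bounds=[(-5, 5), (-5, 5)], seed=0)
```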
A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1989-01-01
Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
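As context for the clutterlock idea, the sketch below implements the classical correlation (pulse-pair) Doppler centroid estimator, which recovers the fractional part of the centroid from the phase of the lag-one azimuth autocorrelation. It is the conventional baseline, not the QHS-optimized estimator of the report; the synthetic signal and parameter values are invented for the demonstration.

```python
import numpy as np

def doppler_centroid_estimate(signal, prf):
    """Pulse-pair / correlation Doppler centroid estimate (fractional part only).

    signal : complex azimuth samples for one range gate, shape (n_pulses,)
    prf    : pulse repetition frequency in Hz
    """
    # Lag-one autocorrelation along azimuth
    r1 = np.sum(signal[1:] * np.conj(signal[:-1]))
    # The phase of r1 maps to the Doppler centroid modulo the PRF
    return (prf / (2.0 * np.pi)) * np.angle(r1)

# Synthetic check: a noisy complex exponential at 123 Hz sampled at PRF = 1700 Hz
prf, fdc = 1700.0, 123.0
n = np.arange(4096)
rng = np.random.default_rng(0)
s = np.exp(2j * np.pi * fdc * n / prf) \
    + 0.5 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))
print(doppler_centroid_estimate(s, prf))  # close to 123 Hz
```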
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
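One common way to realize a mixture-density kernel is to average the posterior-membership agreement of an ensemble of mixture models; each term is a positive semi-definite Gram matrix, so the average is a valid Mercer kernel. The sketch below follows that reading with scikit-learn Gaussian mixtures as a stand-in and is not the AUTOBAYES-generated implementation described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, n_models=10, n_components=3, seed=0):
    """Ensemble kernel K(x, y) = mean_m sum_k p_m(k|x) p_m(k|y)."""
    K = np.zeros((len(X), len(X)))
    for m in range(n_models):
        gmm = GaussianMixture(n_components=n_components, random_state=seed + m).fit(X)
        P = gmm.predict_proba(X)          # (n_samples, n_components) posterior memberships
        K += P @ P.T                      # agreement of cluster memberships under model m
    return K / n_models

X = np.random.default_rng(1).normal(size=(100, 5))   # placeholder data
K = mixture_density_kernel(X)
```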
Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun
2014-01-01
In order to solve the problems of existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and an improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master station the amplitudes of the fault component currents of the transmission lines closest to the faulted element. The master substation then identifies suspicious faulty lines according to the distribution characteristics of the fault component current. After that, the master substation identifies the actual faulty line using the improved D-S evidence theory, based on the action states of traditional protections and the directional components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator 39-bus system show that the proposed WABP algorithm has excellent performance. The algorithm has low requirements for sampling synchronization, low wide-area communication traffic, and high fault tolerance. PMID:25050399
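For readers unfamiliar with D-S fusion, the sketch below implements the classical Dempster combination rule over a small frame of candidate faulty lines; the line labels and mass values are invented, and the paper's improved rule presumably modifies this baseline.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses that each sum to 1.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two hypothetical evidence sources over candidate faulty lines L1 and L2
L1, L2 = frozenset({"L1"}), frozenset({"L2"})
both = L1 | L2
m_protection = {L1: 0.7, L2: 0.2, both: 0.1}   # from protection action states
m_direction  = {L1: 0.6, L2: 0.3, both: 0.1}   # from directional components
print(dempster_combine(m_protection, m_direction))
```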
Decoding communities in networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.
Decoding communities in networks.
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.
Probing the space of toric quiver theories
NASA Astrophysics Data System (ADS)
Hewlett, Joseph; He, Yang-Hui
2010-03-01
We demonstrate a practical and efficient method for generating toric Calabi-Yau quiver theories, applicable to both D3 and M2 brane world-volume physics. A new analytic method is presented at low order in the parameters, and an algorithm for the general case is developed which has polynomial complexity in the number of edges in the quiver. Using this algorithm, carefully implemented, we classify the quiver diagrams and assign possible superpotentials for various small values of the number of edges and nodes. We examine some preliminary statistics on this space of toric quiver theories.
1992-08-01
limits of these topics will be included. Digital processing is indispensable for SAR. Theories and specific algorithms will be given, along with basic...digital processing is indispensable for SAR; specific theories and algorithms will be proposed, as well as processor configurations...If N independent pixel values are added, then it follows from the laws of probability theory that the mean value of the sum is identical with
Supercomputing resources empowering superstack with interactive and integrated systems
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2012-09-01
This paper presents results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, here called geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in seismic data processing that began on mainframes and have, in recent years, led to high-end scientific computing applications. Several stacking algorithms are known, but when seismic data have a low signal-to-noise ratio, iterative algorithms such as the Superstack can support analysis and interpretation. The new Superstack algorithms are used with wave theory and optical phenomena on high-performance computing resources for very large data sets as well as for sophisticated application scenarios in the geosciences and archaeology.
Search Algorithms as a Framework for the Optimization of Drug Combinations
Coquin, Laurence; Schofield, Jennifer; Feala, Jacob D.; Reed, John C.; McCulloch, Andrew D.; Paternostro, Giovanni
2008-01-01
Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but presently they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms—originally developed for digital communication—modified to optimize combinations of therapeutic interventions. In biological experiments measuring the restoration of the decline with age in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs using only one-third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6–9 interventions in 80–90% of tests, compared with 15–30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution. PMID:19112483
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Kiremidjian, Anne S.
2011-04-01
This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements of multiple locations, it is necessary to transmit long threads of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse vectors of the coefficients need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
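scikit-learn does not ship K-SVD itself, but the same segment / learn-dictionary / sparse-code / restore pipeline can be sketched with a mini-batch dictionary learner and orthogonal matching pursuit as stand-ins; the signal, segment length, and sparsity level below are illustrative only.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
signal = np.sin(0.07 * np.arange(20000)) + 0.1 * rng.normal(size=20000)  # stand-in vibration record

# 1. Segment the record into fixed-length windows (rows)
seg_len = 100
segments = signal[: len(signal) // seg_len * seg_len].reshape(-1, seg_len)

# 2. Learn an over-complete dictionary from the segments
dico = MiniBatchDictionaryLearning(n_components=2 * seg_len, alpha=1.0, random_state=0)
D = dico.fit(segments).components_

# 3. Sparse-code each segment with orthogonal matching pursuit
coder = SparseCoder(dictionary=D, transform_algorithm="omp", transform_n_nonzero_coefs=8)
codes = coder.transform(segments)   # mostly zeros: only D and these codes need transmission

# 4. Restore the data and check the reconstruction error
restored = codes @ D
print(np.linalg.norm(restored - segments) / np.linalg.norm(segments))
```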
Arnulf, Jan Ketil; Larsen, Kai Rune; Martinsen, Øyvind Lund; Bong, Chih How
2014-01-01
Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms which are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns from real subjects. Semantic algorithms explained 60–86% of the variance in the response patterns and allowed remarkably precise prediction of survey responses from humans, except in a personality test. Even the relationships between independent and their purported dependent variables were accurately predicted. This raises concern about the empirical nature of data collected through some surveys if results are already given a priori through the way subjects are being asked. Survey response patterns seem heavily determined by semantics. Language algorithms may suggest these prior to administering a survey. This study suggests that semantic algorithms are becoming new tools for the social sciences, opening perspectives on survey responses that prevalent psychometric theory cannot explain. PMID:25184672
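As a rough illustration of the idea (not the latent-semantic algorithms used in the study), the sketch below computes an item-by-item cosine-similarity matrix from TF-IDF vectors of invented survey items; in the study's logic such a matrix would be compared with the inter-item correlation matrix obtained from real responses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "My leader communicates a clear and positive vision of the future",
    "My leader encourages me to see problems in new ways",
    "I feel energised and motivated by my daily work tasks",
    "Overall I am satisfied with my job",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(items)
semantic_similarity = cosine_similarity(tfidf)   # item-by-item semantic similarity matrix
print(semantic_similarity.round(2))
```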
Algorithms, complexity, and the sciences
Papadimitriou, Christos
2014-01-01
Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
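The multiplicative weights update rule mentioned above is simple to state; the sketch below gives its standard "experts" form, in which, under the paper's interpretation, the actions play the role of alleles and the payoffs play the role of fitnesses. The payoff data here are random placeholders.

```python
import numpy as np

def mwu(payoffs, eta=0.1):
    """Multiplicative weights update over a payoff matrix of shape (rounds, n_actions)."""
    n = payoffs.shape[1]
    w = np.ones(n)
    for g in payoffs:                # g[i]: payoff (fitness) of action/allele i this round
        w = w * (1.0 + eta * g)      # multiplicative update of each weight
    return w / w.sum()               # final mixed strategy / allele frequencies

rng = np.random.default_rng(0)
print(mwu(rng.uniform(0, 1, size=(200, 4))))
```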
Arnulf, Jan Ketil; Larsen, Kai Rune; Martinsen, Øyvind Lund; Bong, Chih How
2014-01-01
Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms which are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns from real subjects. Semantic algorithms explained 60-86% of the variance in the response patterns and allowed remarkably precise prediction of survey responses from humans, except in a personality test. Even the relationships between independent and their purported dependent variables were accurately predicted. This raises concern about the empirical nature of data collected through some surveys if results are already given a priori through the way subjects are being asked. Survey response patterns seem heavily determined by semantics. Language algorithms may suggest these prior to administering a survey. This study suggests that semantic algorithms are becoming new tools for the social sciences, opening perspectives on survey responses that prevalent psychometric theory cannot explain.
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
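As a rough stand-in for the entropy-based XEF/JCE criteria (not the authors' method), mutual information between candidate lagged inputs and the target can rank time lags in the same spirit; the sketch below uses scikit-learn's nearest-neighbour mutual-information estimator on synthetic data with a known lag of 3.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def rank_lags(x, y, max_lag=10):
    """Score each lag of input x against output y with mutual information."""
    n = len(y) - max_lag
    # Column L holds x[t - L] aligned with y[t] for t = max_lag .. len(y) - 1
    lagged = np.column_stack([x[max_lag - L: max_lag - L + n] for L in range(1, max_lag + 1)])
    mi = mutual_info_regression(lagged, y[max_lag:], random_state=0)
    return dict(zip(range(1, max_lag + 1), mi))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.tanh(np.roll(x, 3)) + 0.1 * rng.normal(size=2000)   # y depends non-linearly on x lagged by 3
scores = rank_lags(x, y)
print(max(scores, key=scores.get))   # expected: 3
```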
Infrared Spectroscopic Imaging: The Next Generation
Bhargava, Rohit
2013-01-01
Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas— data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rishel, Jeremy P.; Keillor, Martin E.; Arrigo, Leah M.
2016-01-01
Atmospheric dispersion theory can be used to predict ground deposition of particulates downwind of a radionuclide release. This paper utilizes standard formulations found in Gaussian plume models to inform the design of an experimental release of short-lived radioactive particles into the atmosphere. Specifically, a source depletion algorithm is used to determine the optimum particle size and release height that maximizes the near-field deposition while minimizing both the required source activity and the fraction of activity lost to long-distance transport. The purpose of the release is to provide a realistic deposition pattern that might be observed downwind of a small-scale vent from an underground nuclear explosion. The deposition field will be used, in part, to investigate several techniques of gamma radiation survey and spectrometry that could be utilized by an On-Site Inspection team under the verification regime of the Comprehensive Nuclear-Test-Ban Treaty.
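A minimal sketch of the underlying plume arithmetic, assuming the textbook ground-level Gaussian plume with total reflection and a simple dry-deposition flux v_d * C; the dispersion-coefficient power laws and parameter values are illustrative (Briggs-type rural, neutral-stability forms), not those used in the study, and depletion of the source term with downwind distance is omitted.

```python
import numpy as np

def ground_deposition(x, y, Q, u, H, v_d):
    """Dry-deposition flux (Bq m^-2 s^-1) under a steady Gaussian plume.

    x, y : downwind / crosswind distance (m); Q : release rate (Bq/s)
    u    : wind speed (m/s); H : effective release height (m)
    v_d  : particle deposition velocity (m/s)
    """
    # Illustrative rural, neutral-stability dispersion coefficients
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    conc = (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))   # ground-level (z = 0) concentration
    return v_d * conc                             # deposition flux = v_d * C(x, y, 0)

# Centerline flux 500 m downwind of a 10 m release, assuming v_d ~ 0.02 m/s
print(ground_deposition(x=500.0, y=0.0, Q=1.0e6, u=3.0, H=10.0, v_d=0.02))
```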
Avoiding Braess' Paradox Through Collective Intelligence
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan
1999-01-01
In an Ideal Shortest Path Algorithm (ISPA), at each moment each router in a network sends all of its traffic down the path that will incur the lowest cost to that traffic. In the limit of an infinitesimally small amount of traffic for a particular router, its routing that traffic via an ISPA is optimal, as far as the cost incurred by that traffic is concerned. We demonstrate, though, that in many cases, due to the side-effects of one router's actions on another router's performance, having routers use ISPAs is suboptimal as far as global aggregate cost is concerned, even when only used to route infinitesimally small amounts of traffic. As a particular example of this we present an instance of Braess' paradox for ISPAs, in which adding new links to a network decreases overall throughput. We also demonstrate that load-balancing, in which the routing decisions are made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to "side-effects", in this case of current routing decisions on future traffic. The theory of COllective INtelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects. We present key concepts from that theory and use them to derive an idealized algorithm whose performance is better than that of the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this COIN algorithm avoids Braess' paradox.
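The Braess instance referred to above can be appreciated from the standard textbook example (not taken from the paper): adding a free link to a two-route network raises the selfish-equilibrium cost for every unit of traffic.

```python
# Classic Braess network: 4000 units of traffic from S to T.
# Route A: S -> a -> T with link costs (x/100, 45); Route B: S -> b -> T with costs (45, x/100).
n = 4000.0

# Before the shortcut: traffic splits evenly, each route costs x/100 + 45.
cost_before = (n / 2) / 100 + 45          # 20 + 45 = 65

# After adding a zero-cost link a -> b, the selfish equilibrium sends all
# traffic along S -> a -> b -> T, paying x/100 twice with x = 4000.
cost_after = n / 100 + 0 + n / 100        # 40 + 0 + 40 = 80

print(cost_before, cost_after)            # 65.0 80.0: the new link hurts everyone
```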
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
Lyakh, Dmitry I.
2015-01-05
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). Particular emphasis is placed on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on the NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). Furthermore, the tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
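The reason the transpose matters is that it lets a tensor contraction run as a single matrix multiplication. The NumPy sketch below shows that permute-reshape-GEMM pattern on small illustrative tensors; NumPy performs the data movement, whereas the library described above is about doing exactly that movement efficiently on CPUs, Xeon Phi, and GPUs.

```python
import numpy as np

# Contract C[a, d] = sum_{b, c} A[b, a, c] * B[d, b, c] by mapping it to one GEMM.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 8, 5))   # indices (b, a, c)
B = rng.normal(size=(7, 6, 5))   # indices (d, b, c)

# Tensor transposes bring the contracted indices (b, c) together, then reshape to matrices.
A_mat = np.ascontiguousarray(A.transpose(1, 0, 2)).reshape(8, 6 * 5)   # (a, bc)
B_mat = np.ascontiguousarray(B.transpose(1, 2, 0)).reshape(6 * 5, 7)   # (bc, d)
C = A_mat @ B_mat                                                      # (a, d)

# Check against a direct einsum contraction
assert np.allclose(C, np.einsum("bac,dbc->ad", A, B))
```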
Klaczynski, Paul A.
2014-01-01
In Stanovich's (2009a, 2011) dual-process theory, analytic processing occurs in the algorithmic and reflective minds. Thinking dispositions, indexes of reflective mind functioning, are believed to regulate operations at the algorithmic level, indexed by general cognitive ability. General limitations at the algorithmic level impose constraints on, and affect the adequacy of, specific strategies and abilities (e.g., numeracy). In a study of 216 undergraduates, the hypothesis that thinking dispositions and general ability moderate the relationship between numeracy (understanding of mathematical concepts and attention to numerical information) and normative responses on probabilistic heuristics and biases (HB) problems was tested. Although all three individual difference measures predicted normative responses, the numeracy-normative response association depended on thinking dispositions and general ability. Specifically, numeracy directly affected normative responding only at relatively high levels of thinking dispositions and general ability. At low levels of thinking dispositions, neither general ability nor numeric skills related to normative responses. Discussion focuses on the consistency of these findings with the hypothesis that the implementation of specific skills is constrained by limitations at both the reflective level and the algorithmic level, methodological limitations that prohibit definitive conclusions, and alternative explanations. PMID:25071639
Klaczynski, Paul A
2014-01-01
In Stanovich's (2009a, 2011) dual-process theory, analytic processing occurs in the algorithmic and reflective minds. Thinking dispositions, indexes of reflective mind functioning, are believed to regulate operations at the algorithmic level, indexed by general cognitive ability. General limitations at the algorithmic level impose constraints on, and affect the adequacy of, specific strategies and abilities (e.g., numeracy). In a study of 216 undergraduates, the hypothesis that thinking dispositions and general ability moderate the relationship between numeracy (understanding of mathematical concepts and attention to numerical information) and normative responses on probabilistic heuristics and biases (HB) problems was tested. Although all three individual difference measures predicted normative responses, the numeracy-normative response association depended on thinking dispositions and general ability. Specifically, numeracy directly affected normative responding only at relatively high levels of thinking dispositions and general ability. At low levels of thinking dispositions, neither general ability nor numeric skills related to normative responses. Discussion focuses on the consistency of these findings with the hypothesis that the implementation of specific skills is constrained by limitations at both the reflective level and the algorithmic level, methodological limitations that prohibit definitive conclusions, and alternative explanations.
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.
Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun
2017-11-29
As a key element of the Internet of Things (IoT), sensor networks are widely investigated by the IoT community. When massive numbers of devices are connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower network survivability in belt-type situations. However, most existing positioning solutions focus only on algorithm accuracy and do not consider security aspects. In this paper, we propose a comprehensive scheme for node localization protection that aims to improve energy efficiency, reliability, and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented that satisfies the energy-balancing strategy in resource-constrained scenarios. Based on cooperative localization theory and the network connection properties, a parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced using the steepest descent method. Experimental simulations illustrate that the performance of the new scheme meets the stated targets. The results also demonstrate that it improves belt-type sensor networks' survivability in terms of anti-interference, network energy saving, etc.
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks
Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek
2017-01-01
As a key element of the Internet of Things (IoT), sensor networks are widely investigated by the IoT community. When massive numbers of devices are connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower network survivability in belt-type situations. However, most existing positioning solutions focus only on algorithm accuracy and do not consider security aspects. In this paper, we propose a comprehensive scheme for node localization protection that aims to improve energy efficiency, reliability, and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented that satisfies the energy-balancing strategy in resource-constrained scenarios. Based on cooperative localization theory and the network connection properties, a parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced using the steepest descent method. Experimental simulations illustrate that the performance of the new scheme meets the stated targets. The results also demonstrate that it improves belt-type sensor networks' survivability in terms of anti-interference, network energy saving, etc. PMID:29186072
Deng, Yue; Bao, Feng; Yang, Yang; Ji, Xiangyang; Du, Mulong; Zhang, Zhengdong
2017-01-01
The automated transcript discovery and quantification of high-throughput RNA sequencing (RNA-seq) data are important tasks of next-generation sequencing (NGS) research. However, these tasks are challenging due to the uncertainties that arise in the inference of complete splicing isoform variants from partially observed short reads. Here, we address this problem by explicitly reducing the inherent uncertainties in a biological system caused by missing information. In our approach, the RNA-seq procedure for transforming transcripts into short reads is considered an information transmission process. Consequently, the data uncertainties are substantially reduced by exploiting the information transduction capacity of information theory. The experimental results obtained from the analyses of simulated datasets and RNA-seq datasets from cell lines and tissues demonstrate the advantages of our method over state-of-the-art competitors. Our algorithm is an open-source implementation of MaxInfo. PMID:28911101
[A new information technology for system diagnosis of functional activity of human organs].
Avshalumov, A Sh; Sudakov, K V; Filaretov, G F
2006-01-01
The goal of this work was to consider a new diagnostic technology based on analysis of objective information parameters of functional activity and interaction of normal and pathologically changed human organs. The technology is based on the use of very low power millimeter (EHF) radiation emitted by human body and other biological objects in the process of vital activity. The importance of consideration of the information aspect of vital activity from the standpoint of the theory of functional systems suggested by P. K. Anokhin is emphasized. The suggested information technology is theoretically substantiated. The capabilities of the suggested technology for diagnosis, as well as the difficulties of its practical implementation caused by very low power of electromagnetic fields generated by human body, are discussed. It is noted that only use of modern radiophysical equipment together with new software based on specially developed algorithms made it possible to construct a medical EHF diagnostic system for effective implementation of the suggested technology. The system structure, functions of its components, the examination procedure, and the form of representation of diagnostic information are described together with the specific features of applied software based on the principle of maximal objectivity of analysis and interpretation of the results of diagnosis on the basis of artificial intelligence algorithms. The diagnostic capabilities of the system are illustrated by several examples.
A theory of local learning, the learning channel, and the optimality of backpropagation.
Baldi, Pierre; Sadowski, Peter
2016-11-01
In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework enables also the systematic discovery of new learning rules and exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far.
Ensemble method: Community detection based on game theory
NASA Astrophysics Data System (ADS)
Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.
2014-08-01
Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many business and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. The problem of selecting a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network: different algorithms have different advantages and depend on tuning specific parameters, so the choice of methodology can affect the outcome of the experiments. In this paper, we propose a community division model based on game theory, which can effectively combine the advantages of previous algorithms to obtain a better community classification. Experiments on standard datasets verify that our game-theoretic community detection model is valid and performs better.
Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces
2011-02-28
Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30...focused on developing high-level general-purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these...algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
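The step this approach builds on, expressing a log-sampled power spectrum as a superposition of complex power laws via an FFT, can be sketched in a few lines; this shows only the decomposition idea with an invented input spectrum, not the public FAST-PT code, and the subsequent mode-coupling convolutions are omitted.

```python
import numpy as np

# Decompose a power spectrum sampled on a log-spaced grid into complex power laws,
# P(k) ~ sum_m C_m k^(nu + i*eta_m).
N, nu = 256, -2.0
k = np.logspace(-3, 1, N)
P = k / (1.0 + k**2) ** 2                       # stand-in smooth power spectrum

delta = np.log(k[1] / k[0])                     # uniform spacing in ln k
biased = P * k**(-nu)                           # remove the real power-law bias nu
c = np.fft.fft(biased) / N                      # Fourier coefficients along ln k
eta = 2.0 * np.pi * np.fft.fftfreq(N, d=delta)  # imaginary exponents eta_m
C = c * k[0] ** (-1j * eta)                     # absorb the grid offset ln k_0

# Reconstruct P(k) from the power-law expansion and compare with the input
P_rec = np.real(np.sum(C[None, :] * k[:, None] ** (nu + 1j * eta[None, :]), axis=1))
print(np.max(np.abs(P_rec - P) / P))            # tiny (machine-precision level) at the sample points
```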
Adaptive Peircean decision aid project summary assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senglaub, Michael E.
2007-01-01
This effort's objective was to identify and hybridize a suite of technologies enabling the development of predictive decision aids for use principally in combat environments but also in any complex information terrain. The required technologies included formal concept analysis for knowledge representation and information operations, Peircean reasoning to support hypothesis generation, Mill's canons to begin defining information operators that support the first two technologies, and co-evolutionary game theory to provide the environment/domain for assessing predictions from the reasoning engines. The intended application domain is the IED problem because of its inherent evolutionary nature. While a fully functioning integrated algorithm was not achieved, the hybridization and demonstration of the technologies was accomplished and utility was demonstrated for a number of ancillary queries.
Fast algorithms for chiral fermions in 2 dimensions
NASA Astrophysics Data System (ADS)
Hyka (Xhako), Dafina; Osmanaj (Zeqirllari), Rudina
2018-03-01
In lattice QCD simulations, the formulation of the theory on the lattice should be chiral so that symmetry breaking happens dynamically from interactions. To guarantee this symmetry on the lattice, one uses overlap and domain wall fermions. On the other hand, the high computational cost of lattice QCD simulations with overlap or domain wall fermions remains a major obstacle for research in elementary particle physics. We have developed the preconditioned GMRESR algorithm as a fast inversion algorithm for chiral fermions in U(1) lattice gauge theory. In this algorithm we used the geometric multigrid idea along the extra dimension. The main result of this work is that the preconditioned GMRESR accelerates convergence 2 to 12 times relative to the other optimal algorithm (SHUMR) for different coupling constants on a 32x32 lattice. In this paper we also tested it for a larger lattice size, 64x64; the simulation results show that our algorithm is faster than SHUMR. This is a very promising indication that the algorithm can also be adapted to 4 dimensions.
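As a small illustration of the mechanism (not the lattice Dirac operator or the geometric-multigrid preconditioner of the paper, and using restarted GMRES rather than GMRESR), SciPy's GMRES accepts an arbitrary preconditioner through its M argument; here an incomplete-LU factorization of a generic sparse Laplacian plays that role.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system: a 2-D Laplacian, not a lattice Dirac operator.
n = 64
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(sp.identity(n), lap1d) + sp.kron(lap1d, sp.identity(n))
b = np.ones(A.shape[0])

# Preconditioner: the paper uses geometric multigrid along the extra dimension;
# an incomplete-LU factorization is used here purely to illustrate the mechanism.
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x_plain, info_plain = spla.gmres(A, b, restart=30, maxiter=2000)
x_prec, info_prec = spla.gmres(A, b, restart=30, maxiter=2000, M=M)

# info == 0 means converged; the preconditioned solve typically needs far fewer iterations.
print(info_plain, np.linalg.norm(A @ x_plain - b))
print(info_prec, np.linalg.norm(A @ x_prec - b))
```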
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
An uncertainty-based distributed fault detection mechanism for wireless sensor networks.
Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong
2014-04-25
Exchanging too many messages for fault detection not only degrades the network quality of service but also places a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection mechanism for wireless sensor networks that relies on the aided judgment of neighbors. The algorithm considers the serious influence of lost sensing measurements and therefore uses Markov decision processes to fill in missing data. Most importantly, fault misjudgments caused by uncertainty are the main drawback of traditional distributed fault detection mechanisms; we draw on evidence fusion rules based on information entropy theory and the degree-of-disagreement function to increase the accuracy of fault detection. Simulation results demonstrate that our algorithm can effectively reduce the communication energy overhead due to message exchanges and provide a higher detection accuracy ratio.
2016-05-01
Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games, Yat Tin...subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere. Mathematics...differential dynamic games, control theory problems, and dynamical systems coming from the physical world, e.g. [11]. An important application is to
Network Data: Statistical Theory and New Models
2016-02-17
During this period of review, Bin Yu worked on many thrusts of high-dimensional statistical theory and methodologies. Her...research covered a wide range of topics in statistics including analysis and methods for spectral clustering for sparse and structured networks...2,7,8,21], sparse modeling (e.g. Lasso) [4,10,11,17,18,19], statistical guarantees for the EM algorithm [3], statistical analysis of algorithm leveraging
Applications and development of new algorithms for displacement analysis using InSAR time series
NASA Astrophysics Data System (ADS)
Osmanoglu, Batuhan
Time series analysis of Synthetic Aperture Radar Interferometry (InSAR) data has become an important scientific tool for monitoring and measuring the displacement of Earth's surface due to a wide range of phenomena, including earthquakes, volcanoes, landslides, changes in ground water levels, and wetlands. Time series analysis is a product of interferometric phase measurements, which become ambiguous when the observed motion is larger than half of the radar wavelength. Thus, phase observations must first be unwrapped in order to obtain physically meaningful results. Persistent Scatterer Interferometry (PSI), Stanford Method for Persistent Scatterers (StaMPS), Short Baselines Interferometry (SBAS) and Small Temporal Baseline Subset (STBAS) algorithms solve for this ambiguity using a series of spatio-temporal unwrapping algorithms and filters. In this dissertation, I improve upon current phase unwrapping algorithms, and apply the PSI method to study subsidence in Mexico City. PSI was used to obtain unwrapped deformation rates in Mexico City (Chapter 3),where ground water withdrawal in excess of natural recharge causes subsurface, clay-rich sediments to compact. This study is based on 23 satellite SAR scenes acquired between January 2004 and July 2006. Time series analysis of the data reveals a maximum line-of-sight subsidence rate of 300mm/yr at a high enough resolution that individual subsidence rates for large buildings can be determined. Differential motion and related structural damage along an elevated metro rail was evident from the results. Comparison of PSI subsidence rates with data from permanent GPS stations indicate root mean square (RMS) agreement of 6.9 mm/yr, about the level expected based on joint data uncertainty. The Mexico City results suggest negligible recharge, implying continuing degradation and loss of the aquifer in the third largest metropolitan area in the world. Chapters 4 and 5 illustrate the link between time series analysis and three-dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. 
Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform solution, which is becoming an important feature with continuously growing datasets.
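A minimal 1-D illustration of the ambiguity discussed above, assuming a C-band wavelength and a steady synthetic displacement history: interferometric phase is observed only modulo 2*pi, and simple temporal unwrapping (NumPy's nearest-cycle unwrap, far simpler than the dissertation's 3-D Kalman-filter approach) recovers the motion as long as successive epochs differ by less than half a cycle.

```python
import numpy as np

wavelength = 0.056                 # metres (C-band SAR, roughly 5.6 cm)
t = np.arange(24)                  # 24 acquisitions
displacement = 0.012 * t           # 12 mm per epoch of line-of-sight motion

# Interferometric phase is observed only modulo 2*pi
true_phase = 4.0 * np.pi * displacement / wavelength
wrapped = np.angle(np.exp(1j * true_phase))

# Simple temporal unwrapping: assume successive epochs differ by less than pi
unwrapped = np.unwrap(wrapped)
recovered = unwrapped * wavelength / (4.0 * np.pi)
print(np.allclose(recovered, displacement))   # True while per-epoch motion stays under lambda/4
```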
Application of ant colony Algorithm and particle swarm optimization in architectural design
NASA Astrophysics Data System (ADS)
Song, Ziyi; Wu, Yunfa; Song, Jianhua
2018-02-01
By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core ideas of these algorithms, explores how they can be combined with architectural design, and summarizes the rules for applying intelligent algorithms in architectural design. Combining the characteristics of the two algorithms, it derives a research route and a way of realizing intelligent algorithms in architectural design, establishing algorithmic rules to assist the design process. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of applications of intelligent algorithms in architectural design, and offer a new idea for architects.
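A minimal global-best particle swarm optimizer of the kind such studies couple with design rules is sketched below; the objective function and all parameter values are placeholders, not rules taken from the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # keep particles in bounds
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Placeholder objective, e.g. deviation from a target daylight / floor-area ratio
best, score = pso(lambda p: (p[0] - 0.4) ** 2 + (p[1] - 2.5) ** 2, bounds=[(0, 1), (0, 5)])
```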
Logsdon, Benjamin A.; Mezey, Jason
2010-01-01
Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data. PMID:21152011
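The adaptive lasso itself can be implemented as a weighted lasso: obtain pilot coefficient estimates, rescale each predictor by the corresponding |beta|^gamma, run an ordinary lasso, and map the coefficients back. The sketch below shows that generic two-step estimator on synthetic data; it is not the authors' full eQTL-based network reconstruction procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, alpha=0.05, gamma=1.0):
    """Adaptive lasso via feature rescaling (generic two-step estimator)."""
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_        # pilot coefficient estimates
    w = np.abs(beta_init) ** gamma + 1e-8               # adaptive weights
    lasso = Lasso(alpha=alpha).fit(X * w, y)            # ordinary lasso on rescaled predictors
    return lasso.coef_ * w                              # map coefficients back to original scale

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)   # only two true regulators
print(np.round(adaptive_lasso(X, y), 2))                          # nonzero mainly at positions 0 and 3
```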
Generating Global Leaf Area Index from Landsat: Algorithm Formulation and Demonstration
NASA Technical Reports Server (NTRS)
Ganguly, Sangram; Nemani, Ramakrishna R.; Zhang, Gong; Hashimoto, Hirofumi; Milesi, Cristina; Michaelis, Andrew; Wang, Weile; Votava, Petr; Samanta, Arindam; Melton, Forrest;
2012-01-01
This paper summarizes the implementation of a physically based algorithm for the retrieval of vegetation green Leaf Area Index (LAI) from Landsat surface reflectance data. The algorithm is based on the canopy spectral invariants theory and provides a computationally efficient way of parameterizing the Bidirectional Reflectance Factor (BRF) as a function of spatial resolution and wavelength. LAI retrievals from the application of this algorithm to aggregated Landsat surface reflectances are consistent with those of MODIS for homogeneous sites represented by different herbaceous and forest cover types. Example results illustrating the physics and performance of the algorithm suggest three key factors that influence the LAI retrieval process: 1) the atmospheric correction procedures to estimate surface reflectances; 2) the proximity of Landsat-observed surface reflectance and corresponding reflectances as characterized by the model simulation; and 3) the quality of the input land cover type in accurately delineating pure vegetated components as opposed to mixed pixels. Accounting for these factors, a pilot implementation of the LAI retrieval algorithm was demonstrated for the state of California utilizing the Global Land Survey (GLS) 2005 Landsat data archive. In a separate exercise, the performance of the LAI algorithm over California was evaluated by using the short-wave infrared band in addition to the red and near-infrared bands. Results show that the algorithm, while ingesting the short-wave infrared band, has the ability to delineate open canopies with understory effects and may provide useful information compared to a more traditional two-band retrieval. Future research will involve implementation of this algorithm at continental scales, and a validation exercise will be performed to evaluate the accuracy of the 30-m LAI products at several field sites.
Physics Without Physics. The Power of Information-theoretical Principles
NASA Astrophysics Data System (ADS)
D'Ariano, Giacomo Mauro
2017-01-01
David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, has the paradigm concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole of physics on logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which holds for all energies ever tested (and even much larger), where the usual free quantum field theory is perfectly recovered. In the present quantum discrete theory Einstein's relativity principle can be restated without using space-time, in terms of invariance of the eigenvalue equation of the automaton/walk under change of representations. Distortions of the Poincaré group emerge at the Planck scale, whereas special relativity is perfectly recovered in the relativistic regime. Discreteness, on the other hand, has some advantages compared to the continuum theory: 1) it contains the latter as a special regime; 2) it leads to some additional features with a GR flavor: the existence of an upper bound for the particle mass (with physical interpretation as the Planck mass), and a global De Sitter invariance; 3) it provides its own physical standards for space, time, and mass within a purely mathematical adimensional context. The paper ends with the future perspectives of this project, and with an Appendix containing biographical notes about my friendship with David Finkelstein, to whom this paper is dedicated.
QCCM Center for Quantum Algorithms
2008-10-17
Quantum algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations (e.g., fault-tolerant error correction). Subject terms: quantum algorithms, quantum computing, fault-tolerant error correction. Representative output includes "Algebraic results on quantum automata", A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, and D. Thérien, Theory of Computing Systems 39 (2006).
The implement of Talmud property allocation algorithm based on graphic point-segment way
NASA Astrophysics Data System (ADS)
Cen, Haifeng
2017-04-01
Under the guidance of the theory of the Talmud allocation scheme, this paper analyzes the algorithm implementation process from the perspective of a graphic point-segment representation and designs a point-segment-based Talmud property allocation algorithm. The core of the allocation algorithm is then implemented in Java, and Android programming is used to build a visual interface.
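For context, the allocation rule the paper builds on, the Aumann-Maschler (Talmud) division of a contested estate, can be sketched as below. This is a minimal Python illustration of the rule in its standard formulation, not the paper's point-segment graphic construction or its Java/Android implementation.

```python
def talmud_allocation(estate, claims, tol=1e-9):
    """Aumann-Maschler (Talmud) division of an estate among claimants."""
    total = sum(claims)
    half = [c / 2.0 for c in claims]

    def cea(amount, caps):
        # Constrained equal awards: each gets min(cap_i, lam), lam chosen so awards sum to `amount`.
        lo, hi = 0.0, max(caps)
        while hi - lo > tol:
            lam = (lo + hi) / 2.0
            if sum(min(c, lam) for c in caps) < amount:
                lo = lam
            else:
                hi = lam
        return [min(c, (lo + hi) / 2.0) for c in caps]

    if estate <= total / 2.0:
        # Small estate: constrained equal awards on the half-claims.
        return cea(estate, half)
    # Large estate (self-duality): award = claim minus CEA of the shortfall on half-claims.
    losses = cea(total - estate, half)
    return [c - l for c, l in zip(claims, losses)]

# Classic Talmud example: estate 200, claims 100/200/300 -> awards 50/75/75.
print(talmud_allocation(200, [100, 200, 300]))
```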
Classifying 50 years of Bell inequalities
NASA Astrophysics Data System (ADS)
Rosset, Denis; Bancal, Jean-Daniel; Gisin, Nicolas
2014-10-01
Since John S Bell demonstrated the interest of studying linear combinations of probabilities in relation with the EPR paradox in 1964, Bell inequalities have led to numerous developments. Unfortunately, the description of Bell inequalities is subject to several degeneracies, which make any exchange of information about them unnecessarily hard. Here, we analyze these degeneracies and propose a decomposition for Bell-like inequalities based on a set of reference expressions which is not affected by them. These reference expressions set a common ground for comparing Bell inequalities. We provide algorithms based on finite group theory to compute this decomposition. Implementing these algorithms allows us to set up a compendium of reference Bell-like inequalities, available online at www.faacets.com. This website constitutes a platform where registered Bell-like inequalities can be explored, new inequalities can be compared to previously-known ones and relevant information on Bell inequalities can be added in a collaborative manner. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘50 years of Bell’s theorem’.
3D refractive index measurements of special optical fibers
NASA Astrophysics Data System (ADS)
Yan, Cheng; Huang, Su-Juan; Miao, Zhuang; Chang, Zheng; Zeng, Jun-Zhang; Wang, Ting-Yun
2016-09-01
A digital holographic microscopic tomography-based approach with considerably improved accuracy, simplified configuration and performance stability is proposed to measure the three-dimensional refractive index of special optical fibers. Based on this approach, a measurement system is established incorporating a modified Mach-Zehnder interferometer and lab-developed supporting software for data processing. In the system, a phase projection distribution of an optical fiber is utilized to obtain an optimal digital hologram recorded by a CCD, and then an angular spectrum theory-based algorithm is adopted to extract the phase distribution information of the object wave. The rotation of the optical fiber enables the experimental measurement of multi-angle phase information. Based on the filtered back projection algorithm, a 3D refractive index of the optical fiber is thus obtained at high accuracy. To evaluate the proposed approach, both PANDA fibers and a special elliptical optical fiber are considered in the system. The results measured in PANDA fibers agree well with those measured using the S14 Refractive Index Profiler, which is, however, not suitable for measuring the properties of a special elliptical fiber.
Spatial partitioning algorithms for data visualization
NASA Astrophysics Data System (ADS)
Devulapalli, Raghuveer; Quist, Mikael; Carlsson, John Gunnar
2013-12-01
Spatial partitions of an information space are frequently used for data visualization. Weighted Voronoi diagrams are among the most popular ways of dividing a space into partitions. However, the problem of computing such a partition efficiently can be challenging. For example, a natural objective is to select the weights so as to force each Voronoi region to take on a pre-defined area, which might represent the relevance or market share of an informational object. In this paper, we present a simple and fast algorithm to compute the weights of these Voronoi diagrams. Unlike previous approaches whose convergence properties are not well-understood, we give a formulation of the problem based on convex optimization with excellent performance guarantees in theory and practice. We also show how our technique can be used to control the shape of these partitions. More specifically, we show how to convert undesirable skinny and long regions into fat regions while maintaining the areas of the partitions. As an application, we use these to visualize the amount of website traffic for the top 101 websites.
Systemic risk and spatiotemporal dynamics of the US housing market
Meng, Hao; Xie, Wen-Jie; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2014-01-01
Housing markets play a crucial role in economies and the collapse of a real-estate bubble usually destabilizes the financial system and causes economic recessions. We investigate the systemic risk and spatiotemporal dynamics of the US housing market (1975–2011) at the state level based on the Random Matrix Theory (RMT). We identify richer economic information in the largest eigenvalues deviating from RMT predictions for the housing market than for stock markets and find that the component signs of the eigenvectors contain either geographical information or the extent of differences in house price growth rates or both. By looking at the evolution of different quantities such as eigenvalues and eigenvectors, we find that the US housing market experienced six different regimes, which is consistent with the evolution of state clusters identified by the box clustering algorithm and the consensus clustering algorithm on the partial correlation matrices. We find that dramatic increases in the systemic risk are usually accompanied by regime shifts, which provide a means of early detection of housing bubbles. PMID:24413626
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Allen Douglas; Rojas, Wudmir Y.; Williams, Adrienne D.
The promise from graphene to produce devices with high mobilities and detectors with fast response times is truncated in practice by strain and deformation originating during growth and subsequent processing. This work describes effects from graphene growth, multiple layer transfer, and substrate termination on out-of-plane deformation, critical to device performance. Synchrotron spectroscopy data were acquired with a state-of-the-art hyperspectral large-area detector to describe growth and processing with molecular sensitivity at wafer length scales. A study of methodologies used in data analysis discouraged dichroic ratio approaches in favor of orbital vector approximations and data mining algorithms. Orbital vector methods provide physical insight into mobility-detrimental rippling by identifying ripple frequency, rather than intensity, as the main actor; this was confirmed by data mining algorithms and is in good agreement with electron scattering theories of corrugation in graphene. This work paves the way to efficient information on mechanical properties in graphene in a high throughput mode throughout growth and processing in a materials-by-design approach.
Normalized Cut Algorithm for Automated Assignment of Protein Domains
NASA Technical Reports Server (NTRS)
Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful for image-partitioning applications. This graph-theory based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly-connected components. Computer implementation of our method tested on the standard comparison set of proteins from the literature shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein and reduced complexity compared to other graph-theory based algorithms, would make it an attractive tool for structural biologists.
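The normalized-cut relaxation behind such a partitioning can be sketched generically: threshold the second generalized eigenvector of (D - W, D) for an affinity matrix W. The Python sketch below is only that textbook spectral step on a toy affinity matrix, not the authors' protein-domain pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut_bipartition(W):
    """Split the nodes of a weighted undirected graph (affinity matrix W) into two groups."""
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                               # graph Laplacian
    # Generalized eigenproblem L v = lam D v; the 2nd-smallest eigenvector relaxes the normalized cut.
    vals, vecs = eigh(L, D)
    fiedler = vecs[:, 1]
    return fiedler >= np.median(fiedler)    # boolean partition labels

# Two weakly coupled cliques as a toy "two-domain" example.
W = np.ones((6, 6)) - np.eye(6)
W[:3, 3:] = W[3:, :3] = 0.05
print(normalized_cut_bipartition(W))
```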
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it on the basis of AFSA theory. Experiment results show that HAFSA is a fast and efficient algorithm for the winner determination problem. Compared with the ant colony optimization algorithm, it shows good performance and has broad application prospects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyakh, Dmitry I.
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). Furthermore, the tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
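The cache-blocking idea behind such transpose kernels is easy to illustrate with a tiled two-dimensional transpose. The NumPy sketch below conveys only the blocking pattern and is in no way the optimized TAL-SH code; the block size and names are arbitrary choices.

```python
import numpy as np

def tiled_transpose(a, block=64):
    """Transpose a 2-D array tile by tile to improve cache locality."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            # Each small tile fits in cache, so reads and writes stay local.
            out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
    return out

a = np.arange(12).reshape(3, 4)
assert np.array_equal(tiled_transpose(a, block=2), a.T)
```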
NLSE: Parameter-Based Inversion Algorithm
NASA Astrophysics Data System (ADS)
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
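For reference, the Gauss-Newton update that underlies such a nonlinear least-squares estimator can be stated generically (this is the textbook form, not the specific NLSE formulation developed in the chapter):

```latex
\min_{\mathbf{p}} \; \Phi(\mathbf{p}) = \tfrac{1}{2}\,\lVert \mathbf{r}(\mathbf{p}) \rVert^2 ,
\qquad
\mathbf{p}_{k+1} = \mathbf{p}_k - \bigl( \mathbf{J}_k^{\mathsf T} \mathbf{J}_k \bigr)^{-1}
\mathbf{J}_k^{\mathsf T}\, \mathbf{r}(\mathbf{p}_k),
\qquad
\mathbf{J}_k = \left. \frac{\partial \mathbf{r}}{\partial \mathbf{p}} \right|_{\mathbf{p}_k},
```

where r(p) is the vector of residuals between measured and modeled data and J is its Jacobian with respect to the parameters p.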
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang
2016-04-01
Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which leads to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in the feature extraction of fault information, the theory also confronts inevitable performance degradation due to the fact that relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithm framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work adequately exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, the adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule of the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments and its superiority with respect to the state-of-the-art method in the field is demonstrated through the vibration signals of an experimental rig of aircraft engine bearings.
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara
2016-01-01
The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model, explain the SMAP calibration algorithm, including approximations, errors, and biases, provide all necessary equations for implementing the calibration algorithm and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
Induction as Knowledge Integration
NASA Technical Reports Server (NTRS)
Smith, Benjamin D.; Rosenbloom, Paul S.
1996-01-01
Two key issues for induction algorithms are the accuracy of the learned hypothesis and the computational resources consumed in inducing that hypothesis. One of the most promising ways to improve performance along both dimensions is to make use of additional knowledge. Multi-strategy learning algorithms tackle this problem by employing several strategies for handling different kinds of knowledge in different ways. However, integrating knowledge into an induction algorithm can be difficult when the new knowledge differs significantly from the knowledge the algorithm already uses. In many cases the algorithm must be rewritten. This paper presents the Knowledge Integration framework for Induction (KII), which provides a uniform mechanism for integrating knowledge into induction. In theory, arbitrary knowledge can be integrated with this mechanism, but in practice the knowledge representation language determines both the knowledge that can be integrated, and the costs of integration and induction. By instantiating KII with various set representations, algorithms can be generated at different trade-off points along these dimensions. One instantiation of KII, called RS-KII, is presented that can implement hybrid induction algorithms, depending on which knowledge it utilizes. RS-KII is demonstrated to implement AQ-11, as well as a hybrid algorithm that utilizes a domain theory and noisy examples. Other algorithms are also possible.
Hamiltonian lattice field theory: Computer calculations using variational methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zako, Robert L.
1991-12-03
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
A discrete Fourier transform for virtual memory machines
NASA Technical Reports Server (NTRS)
Galant, David C.
1992-01-01
An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.
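For comparison, the familiar radix-2 Cooley-Tukey recursion for a power-of-two length is sketched below in Python; the virtual-memory-oriented algorithm of the report reorganizes the data access pattern, which this simple sketch does not attempt to reproduce.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey DFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor times odd half
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print([round(abs(v), 3) for v in fft([1, 0, 0, 0, 0, 0, 0, 0])])  # impulse -> flat spectrum
```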
Model-based reconfiguration: Diagnosis and recovery
NASA Technical Reports Server (NTRS)
Crow, Judy; Rushby, John
1994-01-01
We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
Feedback control for unsteady flow and its application to the stochastic Burgers equation
NASA Technical Reports Server (NTRS)
Choi, Haecheon; Temam, Roger; Moin, Parviz; Kim, John
1993-01-01
The study applies mathematical methods of control theory to the problem of control of fluid flow with the long-range objective of developing effective methods for the control of turbulent flows. Model problems are employed through the formalism and language of control theory to present the procedure of how to cast the problem of controlling turbulence into a problem in optimal control theory. Methods of calculus of variations through the adjoint state and gradient algorithms are used to present a suboptimal control and feedback procedure for stationary and time-dependent problems. Two types of controls are investigated: distributed and boundary controls. Several cases of both controls are numerically simulated to investigate the performance of the control algorithm. Most cases considered show significant reductions of the costs to be minimized. The dependence of the control algorithm on the time-discretization method is discussed.
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
Algorithms and architectures for superconducting quantum computers
NASA Astrophysics Data System (ADS)
Blais, Alexandre
Since its formulation, information theory has been implicitly based on the laws of classical physics. Such a formulation is, however, incomplete, since it does not take quantum reality into account. Over the last twenty years, the extension of information theory to encompass purely quantum effects has attracted growing interest. The realization of a quantum information processing system, a quantum computer, nevertheless presents numerous challenges. In this document, we consider different aspects of these challenges. We begin by presenting algorithmic concepts such as the optimization of quantum computations and geometric quantum computation. We then turn to the design and to various aspects of the use of qubits based on Josephson junctions. In particular, a new superconducting qubit design is suggested. We also present an original approach to the interaction between qubits. This approach is very general, since it can be applied to different qubit designs. Finally, we consider the readout of superconducting flux qubits. The detector suggested here has the advantage that it can be decoupled from the qubit when no measurement is in progress.
Highly-optimized TWSM software package for seismic diffraction modeling adapted for GPU-cluster
NASA Astrophysics Data System (ADS)
Zyatkov, Nikolay; Ayzenberg, Alena; Aizenberg, Arkady
2015-04-01
Oil-producing companies are concerned with increasing the resolution of seismic data for complex oil-and-gas-bearing deposits connected with salt domes, basalt traps, reefs, lenses, etc. Known methods of seismic wave theory define the shape of hydrocarbon accumulations with insufficient resolution, since they do not account for multiple diffractions explicitly. We elaborate an alternative seismic wave theory in terms of operators of propagation in layers and reflection-transmission at curved interfaces. An approximation of this theory is realized in the seismic frequency range as the Tip-Wave Superposition Method (TWSM). TWSM, based on the operator theory, allows evaluation of the wavefield in bounded domains/layers with geometrical shadow zones (in nature these can be salt domes, basalt traps, reefs, lenses, etc.), accounting for so-called cascade diffraction. Cascade diffraction includes edge waves from sharp edges, creeping waves near concave parts of interfaces, waves of the whispering galleries near convex parts of interfaces, etc. The basic algorithm of the TWSM package is based on multiplication of large matrices (reaching hundreds of terabytes in size). We use advanced information technologies for effective realization of the numerical procedures of TWSM. In particular, we actively use NVIDIA CUDA technology and GPU accelerators, which significantly improve the performance of the TWSM software package; this is important when using it for direct and inverse problems. The accuracy, stability and efficiency of the algorithm are justified by numerical examples with curved interfaces. The TWSM package and its separate components can be used in different modeling tasks, such as planning of acquisition systems, physical interpretation of laboratory modeling, and modeling of individual waves of different types, and in some inverse tasks, such as imaging in the case of a laterally inhomogeneous overburden and AVO inversion.
Research on the lesion segmentation of breast tumor MR images based on FCM-DS theory
NASA Astrophysics Data System (ADS)
Zhang, Liangbin; Ma, Wenjun; Shen, Xing; Li, Yuehua; Zhu, Yuemin; Chen, Li; Zhang, Su
2017-03-01
Magnetic resonance imaging (MRI) plays an important role in the treatment of breast tumors by high intensity focused ultrasound (HIFU). Doctors evaluate the scale, the distribution and the benign or malignant status of a breast tumor by analyzing multiple MRI modalities, such as T2, DWI and DCE images, in order to make an accurate preoperative treatment plan and to evaluate the effect of the operation. This paper presents a method for lesion segmentation of breast tumors based on FCM-DS theory. The fuzzy c-means (FCM) clustering algorithm combined with Dempster-Shafer (DS) theory is used to handle the uncertainty of the information, segmenting the lesion areas on the DWI and DCE modalities of MRI and reducing the scale of the uncertain parts. Experiment results show that FCM-DS can fuse the DWI and DCE images to achieve accurate segmentation and can display the benign or malignant status of the lesion area via the Time-Intensity Curve (TIC), which could be beneficial in making a preoperative treatment plan and evaluating the effect of the therapy.
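The fuzzy c-means core of such a pipeline is the standard alternating update of memberships and centroids. The Python sketch below covers only that FCM step on toy one-dimensional data; the Dempster-Shafer fusion of the DWI and DCE evidence is not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard FCM: X is (n_samples, n_features); returns memberships and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)         # normalized inverse-distance membership update
    return U, centers

# Toy 1-D "intensity" data with two clusters (e.g. lesion vs. background).
X = np.concatenate([np.random.default_rng(1).normal(0.2, 0.05, 50),
                    np.random.default_rng(2).normal(0.8, 0.05, 50)])[:, None]
U, centers = fuzzy_c_means(X)
print(np.round(centers.ravel(), 2))
```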
A sensorimotor theory of temporal tracking and beat induction.
Todd, N P McAngus; Lee, C S; O'Boyle, D J
2002-02-01
In this paper, we develop a theory of the neurobiological basis of temporal tracking and beat induction as a form of sensory-guided action. We propose three principal components for the neurological architecture of temporal tracking: (1) the central auditory system, which represents the temporal information in the input signal in the form of a modulation power spectrum; (2) the musculoskeletal system, which carries out the action and (3) a controller, in the form of a parieto-cerebellar-frontal loop, which carries out the synchronisation between input and output by means of an internal model of the musculoskeletal dynamics. The theory is implemented in the form of a computational algorithm which takes sound samples as input and synchronises a simple linear mass-spring-damper system to simulate audio-motor synchronisation. The model may be applied to both the tracking of isochronous click sequences and beat induction in rhythmic music or speech, and also accounts for the approximate Weberian property of timing.
NASA Astrophysics Data System (ADS)
Lanzalaco, Felix; Pissanetzky, Sergio
2013-12-01
A recent theory of physical information based on the fundamental principles of causality and thermodynamics has proposed that a large number of observable life and intelligence signals can be described in terms of the Causal Mathematical Logic (CML), which is proposed to encode the natural principles of intelligence across any physical domain and substrate. We attempt to expound the current definition of CML, the "Action functional", as a theory in terms of its ability to possess superior explanatory power for the current neuroscientific data we use to measure the mammalian brain's "intelligence" processes at their most general biophysical level. Brain simulation projects define their success partly in terms of the emergence of "non-explicitly programmed" complex biophysical signals such as self-oscillation and spreading cortical waves. Here we propose to extend the causal theory to predict and guide the understanding of these more complex emergent "intelligence signals". To achieve this, we review whether causal logic is consistent with, can explain, and can predict the function of complete perceptual processes associated with intelligence, defined primarily as the range of Event Related Potentials (ERP), including their primary subcomponents, Event Related Desynchronization (ERD) and Event Related Synchronization (ERS). This approach aims for a universal and predictive logic for neurosimulation and AGI. The result of this investigation is a general "Information Engine" model derived from translation of the ERD and ERS. The CML algorithm, run in terms of action cost, predicts ERP signal contents and is consistent with the fundamental laws of thermodynamics. A working substrate-independent natural information logic would be a major asset. An information theory consistent with fundamental physics can be an AGI; it can also operate within genetic information space and provides a roadmap to understanding the live biophysical operation of the phenotype.
A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement
NASA Astrophysics Data System (ADS)
Koner, P.; Battaglia, A.; Simmer, C.
2009-04-01
The retrieval of rain rate from an attenuated radar (e.g. the Cloud Profiling Radar on board CloudSAT, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (like path-integrated attenuations (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. It is generally argued, on the basis of optimal estimation theory, that there are no solutions without constraining the problem in cases of visible attenuation, because there is not enough information content to solve the problem. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the spontaneous question: is all the information enclosed in this additional measurement? This also seems to contradict information theory, because one measurement can introduce only one degree of freedom into the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a-priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a-priori and error covariance matrices. The regularization is required for the reduction of the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces to the state space in an ill-posed inversion. In this work, the above mentioned question will be discussed based on regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
Djioua, Moussa; Plamondon, Réjean
2009-11-01
In this paper, we present a new analytical method for estimating the parameters of Delta-Lognormal functions and characterizing handwriting strokes. According to the Kinematic Theory of rapid human movements, these parameters contain information on both the motor commands and the timing properties of a neuromuscular system. The new algorithm, called XZERO, exploits relationships between the zero crossings of the first and second time derivatives of a lognormal function and its four basic parameters. The methodology is described and then evaluated under various testing conditions. The new tool allows a greater variety of stroke patterns to be processed automatically. Furthermore, for the first time, the extraction accuracy is quantified empirically, taking advantage of the exponential relationships that link the dispersion of the extraction errors with its signal-to-noise ratio. A new extraction system which combines this algorithm with two other previously published methods is also described and evaluated. This system provides researchers involved in various domains of pattern analysis and artificial intelligence with new tools for the basic study of single strokes as primitives for understanding rapid human movements.
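For context, the Delta-Lognormal velocity profile whose derivative zero crossings XZERO exploits is conventionally written as the difference of two lognormal components (this is the standard Kinematic Theory form, restated here rather than quoted from the paper):

```latex
v(t) = D_1\,\Lambda(t;\,t_0,\mu_1,\sigma_1) - D_2\,\Lambda(t;\,t_0,\mu_2,\sigma_2),
\qquad
\Lambda(t;\,t_0,\mu,\sigma) =
\frac{1}{\sigma\sqrt{2\pi}\,(t-t_0)}
\exp\!\left[-\frac{\bigl(\ln(t-t_0)-\mu\bigr)^2}{2\sigma^2}\right],
```

with each lognormal component characterized by its four basic parameters (t_0, D, mu, sigma); XZERO relates these parameters to the zero crossings of the first and second time derivatives of the lognormal.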
An analytical algorithm for 3D magnetic field mapping of a watt balance magnet
NASA Astrophysics Data System (ADS)
Fu, Zhuang; Zhang, Zhonghua; Li, Zhengkun; Zhao, Wei; Han, Bing; Lu, Yunfeng; Li, Shisong
2016-04-01
A yoke-based permanent magnet, which has been employed in many watt balances at national metrology institutes, is supposed to generate a strong and uniform magnetic field in the air gap in the radial direction. However, in reality the fringe effect due to the finite height of the air gap introduces an undesired vertical magnetic component into the air gap, which should either be measured or modeled towards some optimizations of the watt balance. A recent publication, i.e. Li et al (2015 Metrologia 52 445), presented a full field mapping method, which in theory supplies useful information for profile characterization and misalignment analysis. This article is additional material to Li et al (2015 Metrologia 52 445); it develops a different analytical algorithm to represent the 3D magnetic field of a watt balance magnet based on only one measurement of the radial magnetic flux density along the vertical direction, B_r(z). The new algorithm is based on the electromagnetic nature of the magnet and has a much better accuracy.
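Such a reconstruction is possible because the air gap is current-free, so the field there is both curl- and divergence-free; in cylindrical coordinates with azimuthal symmetry this gives the coupled relations below (a sketch of the governing magnetostatics, not the paper's full algorithm):

```latex
\nabla\times\mathbf{B}=0 \;\Rightarrow\; \frac{\partial B_z}{\partial r}=\frac{\partial B_r}{\partial z},
\qquad
\nabla\cdot\mathbf{B}=0 \;\Rightarrow\; \frac{1}{r}\,\frac{\partial (r B_r)}{\partial r}+\frac{\partial B_z}{\partial z}=0,
```

so a single measured profile B_r(z) along a vertical line constrains the vertical component B_z(r, z) through these relations.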
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.
2001-03-01
Historical information regarding the appearance and creation of the fundamentals of the algebra-logical apparatus of 'equivalental algebra' for the description of neural-network paradigms and algorithms is considered; this apparatus is a unification of the theory of neural networks (NN), linear algebra and generalized neuro-biology, extended to the matrix case. A survey is given of 'equivalental models' of neural networks and associative memory, and new, modified matrix-tensor neuro-logical equivalental models (MTNLEMs) are offered with double adaptive-equivalental weighing (DAEW) for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that the MTNLEMs with DAEW are the most generalized: they can describe the processes in NN both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type, and the computing process in NN using the offered MTNLEMs with DAEW is reduced to two-step and multi-step algorithms and step-by-step matrix-tensor procedures (for SNIR) and to procedures for defining space-dependent equivalental functions from two images (for SIR).
Optimal aperture synthesis radar imaging
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Chau, J. L.
2006-03-01
Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.
Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin
How to analyze and identify the characteristics of radiation sources and estimate the threat level by means of detection, interception and localization has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the key points in solving this issue. Aiming at accurately extracting the individual characteristics of radiation sources in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for the individual characteristics of communication radiation sources based on the fractal complexity of the signal is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension characteristics of different complexity are used to depict the subtle characteristics of the signal and to establish the characteristic database, and different broadcasting stations are then identified by a grey relational analysis system. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even in an environment with an SNR of -10 dB, and this provides an important theoretical basis for the accurate identification of the subtle features of signals at low SNR in the field of information confrontation.
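Fractal-dimension features of this kind are often estimated with Higuchi's method. The Python sketch below implements that generic estimator; it is only one plausible choice, the complexity measure actually used by the authors may differ, and k_max is an arbitrary setting.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # subsampled curve x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)    # Higuchi normalization factor
            lengths.append(diff * norm / k)
        lk.append(np.mean(lengths))
    # Slope of log curve length versus log(1/k) estimates the fractal dimension.
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
print(round(higuchi_fd(rng.normal(size=1000)), 2))   # white noise gives a value close to 2
```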
Salehifar, Mehdi; Moreno-Equilaz, Manuel
2016-01-01
Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the presented theory, simulation and experimental results are obtained on a five-phase two-level VSI supplying a five-phase BLDC motor.
Liu, L L; Liu, M J; Ma, M
2015-09-28
The central task of this study was to mine the gene-to-medium relationship. Adequate knowledge of this relationship could potentially improve the accuracy of differentially expressed gene mining. One approach to differentially expressed gene mining uses conventional clustering algorithms to identify the gene-to-medium relationship. Compared to conventional clustering algorithms, self-organizing maps (SOMs) identify the nonlinear aspects of the gene-to-medium relationship by mapping the input space into a higher-dimensional feature space. However, SOMs are not suitable for huge datasets consisting of millions of samples. Therefore, a new computational model, the Function Clustering Self-Organization Maps (FCSOMs), was developed. FCSOMs take advantage of the theory of granular computing as well as advanced statistical learning methodologies, and are built specifically for each information granule (a function cluster of genes), which is intelligently partitioned by the clustering algorithm provided by the DAVID_6.7 software platform. However, only the gene functions, and not their expression values, are considered in the fuzzy clustering algorithm of DAVID. Compared to the clustering algorithm of DAVID, the experimental results show a marked improvement in classification accuracy with the application of FCSOMs. FCSOMs can handle huge datasets and their complex classification problems, as each FCSOM (modeled for each function cluster) can easily be parallelized.
A localization algorithm of adaptively determining the ROI of the reference circle in image
NASA Astrophysics Data System (ADS)
Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
Aiming at the problem of accurately positioning detection probes underwater, this paper proposes a method based on computer vision which can effectively solve this problem. The idea of the method is as follows. First, because the shape of a heat tube appears similar to a circle in the image, we can find a circle in the image whose physical location is well known, and set this circle as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture, and adjust the steering gear according to this offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, how to choose the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that can adaptively confirm the area of the reference circle: in this area there will be only one circle, and that circle is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture without using the ROI (region of interest) of the reference circle is only 58.76%, while that of the proposed algorithm is 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
Norman, Kenneth A; Newman, Ehren L; Perotte, Adler J
2005-11-01
The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world, while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: Finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories, and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.
NASA Astrophysics Data System (ADS)
Singh, Swatee; Tourassi, Georgia D.; Lo, Joseph Y.
2007-03-01
The purpose of this project is to study Computer Aided Detection (CADe) of breast masses for digital tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study used the 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a Difference of Gaussian (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an algorithm based on information theory where similarity metrics were calculated using knowledge databases consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated 5 similarity metrics to test the false positive reduction performance of our algorithm, specifically joint entropy, mutual information, Jensen difference divergence, symmetric Kullback-Liebler divergence, and conditional entropy. The best performance was achieved using the joint entropy similarity metric, resulting in an ROC A_z of 0.87 ± 0.01. As a whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone, and 91% sensitivity when using tomosynthesis alone.
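The Difference-of-Gaussians stage is straightforward to reproduce in generic form. The sketch below shows the band-pass filtering and thresholding that yield initial candidate hits; the scales and threshold are illustrative assumptions, not the values tuned for the tomosynthesis data described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_candidates(image, sigma_small=4.0, sigma_large=8.0, percentile=99.5):
    """Difference-of-Gaussians band-pass filter followed by a simple percentile threshold."""
    dog = gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)
    return dog > np.percentile(dog, percentile)   # boolean mask of suspicious pixels

# Toy projection image: noisy background plus one blob-like "mass".
rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))
yy, xx = np.mgrid[:256, :256]
img += 3.0 * np.exp(-((yy - 128) ** 2 + (xx - 128) ** 2) / (2 * 10.0 ** 2))
mask = dog_candidates(img)
print(mask.sum(), "candidate pixels")
```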
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of a high quality recovery from the sparse sampling data. Recently, the algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with the L2-norm regularization term, which leads to reconstruction quality deteriorating while the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand of more dose reduction. In this paper, we replaced the L2-norm regularization term with the L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and reserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through the numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and other two typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR with the L1-norm one and solving the L1-minimization problem by IRLS strategy, L1-DL could reconstruct the image more exactly.
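The reweighting strategy referred to above can be illustrated on a generic L1-regularized least-squares problem: each IRLS pass replaces the L1 term with a weighted L2 term built from the current estimate. The Python sketch below shows only this idea on a toy sparse-recovery problem (all names and values are illustrative) and is not the dictionary-learning CT reconstruction itself.

```python
import numpy as np

def irls_l1(A, b, lam=0.05, iters=50, eps=1e-6):
    """Solve min ||Ax - b||^2 + lam*||x||_1 via iteratively reweighted least squares."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)   # ridge warm start
    for _ in range(iters):
        # |x_i| is approximated by x_i^2 / (|x_i| + eps), giving a weighted L2 penalty.
        w = lam / (np.abs(x) + eps)
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
    return x

# Toy sparse recovery: 3 nonzeros out of 50 unknowns from 30 measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=30)
print(np.round(irls_l1(A, b)[[3, 17, 40]], 2))
```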
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity of the process of creating digital holograms. An object is considered as a set of point sources, using the mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
SMD-based numerical stochastic perturbation theory
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Lüscher, Martin
2017-05-01
The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.
A Functional Programming Approach to AI Search Algorithms
ERIC Educational Resources Information Center
Panovics, Janos
2012-01-01
The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
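As a small, hedged illustration of the structural trick described above (not the report's Davidson or LOBPCG solvers): for a 2x2 block linear-response matrix with symmetric blocks A and B such that A+B and A-B are positive definite, the positive eigenvalues omega satisfy the product eigenvalue problem (A-B)(A+B)z = omega^2 z, which is self-adjoint with respect to the (A+B)-inner product and can be symmetrized with a Cholesky factor.

# Dense toy example (assumed small matrices, not the report's iterative solvers):
# the eigenvalues of the nonsymmetric block matrix [[A, B], [-B, -A]] come in
# +/-omega pairs, and omega^2 solves the product problem (A-B)(A+B) z = omega^2 z,
# which is made explicitly symmetric via a Cholesky factor of K = A+B.
import numpy as np
from scipy.linalg import cholesky, eig, eigh

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # symmetric, well conditioned
B = rng.standard_normal((n, n)); B = 0.1 * (B + B.T)          # small symmetric coupling

# Reference: positive eigenvalues of the full nonsymmetric 2n x 2n problem.
H = np.block([[A, B], [-B, -A]])
evals = eig(H)[0].real
omega_full = np.sort(evals[evals > 0])

# Product eigenvalue problem via Cholesky of K = A+B:  L^T M L w = omega^2 w.
M, K = A - B, A + B
L = cholesky(K, lower=True)
omega_prod = np.sqrt(np.sort(eigh(L.T @ M @ L, eigvals_only=True)))

print(np.allclose(omega_full, omega_prod))   # True: same positive spectrum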
Optical implementation of the synthetic discriminant function
NASA Astrophysics Data System (ADS)
Butler, S.; Riggins, J.
1984-10-01
Much attention is focused on the use of coherent optical pattern recognition (OPR) using matched spatial filters for robotics and intelligent systems. The OPR problem consists of three aspects -- information input, information processing, and information output. This paper discusses the information processing aspect which consists of choosing a filter to provide robust correlation with high efficiency. The filter should ideally be invariant to image shift, rotation and scale, provide a reasonable signal-to-noise (S/N) ratio and allow high throughput efficiency. The physical implementation of a spatial matched filter involves many choices. These include the use of conventional holograms or computer-generated holograms (CGH) and utilizing absorption or phase materials. Conventional holograms inherently modify the reference image by non-uniform emphasis of spatial frequencies. Proper use of film nonlinearity provides improved filter performance by emphasizing frequency ranges crucial to target discrimination. In the case of a CGH, the emphasis of the reference magnitude and phase can be controlled independently of the continuous tone or binary writing processes. This paper describes computer simulation and optical implementation of a geometrical shape and a Synthetic Discriminant Function (SDF) matched filter. The authors chose the binary Allebach-Keegan (AK) CGH algorithm to produce actual filters. The performances of these filters were measured to verify the simulation results. This paper provides a brief summary of the matched filter theory, the SDF, CGH algorithms, Phase-Only-Filtering, simulation procedures, and results.
Finding regions of interest in pathological images: an attentional model approach
NASA Astrophysics Data System (ADS)
Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo
2009-02-01
This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues - bottom-up information - and on acquired clinical medicine knowledge - top-down mechanisms. Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB when compared to a classical bottom-up model of attention.
Dynamic Grover search: applications in recommendation systems and optimization problems
NASA Astrophysics Data System (ADS)
Chakrabarty, Indranil; Khan, Shahzor; Singh, Vanshdeep
2017-06-01
In recent years, the Grover search algorithm (Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996), by using quantum parallelism, has revolutionized the solving of a huge class of NP problems in comparison to classical systems. In this work, we explore the idea of extending the Grover search algorithm to approximate algorithms. Here we analyze the applicability of Grover search to process an unstructured database with a dynamic selection function, in contrast to the static selection function used in the original work (Grover in Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996). We show that this alteration allows us to extend the application of Grover search to the field of randomized search algorithms. Further, we use the dynamic Grover search algorithm to define the goals for a recommendation system, based on which we propose a recommendation algorithm that uses a binomial similarity distribution space, giving us a quadratic speedup over traditional classical unstructured recommendation systems. Finally, we see how dynamic Grover search can be used to tackle a wide range of optimization problems where we improve complexity over existing optimization algorithms.
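For readers unfamiliar with the primitive being extended, the following is a minimal state-vector simulation of the standard (static-oracle) Grover iteration; it illustrates amplitude amplification only, and is not the authors' dynamic selection function or recommendation algorithm.

# Minimal state-vector sketch of the standard Grover iteration (static oracle),
# the primitive that the dynamic-selection-function variant above builds on.
import numpy as np

def grover(n_qubits, marked):
    N = 2 ** n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition |s>
    n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(n_iter):
        psi[marked] *= -1.0                      # oracle: flip sign of marked item
        mean = psi.mean()
        psi = 2 * mean - psi                     # diffusion: reflect about the mean
    return psi

if __name__ == "__main__":
    probs = np.abs(grover(n_qubits=8, marked=42)) ** 2
    print("P(marked) =", probs[42])              # close to 1 after ~pi/4*sqrt(N) steps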
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
An algorithm of opinion leaders mining based on signed network
NASA Astrophysics Data System (ADS)
Cao, Linlin; Zheng, Mingchun; Zhang, Yuanyuan; Zhang, Fuming
2018-04-01
With the rapid development of the mobile Internet, users have gradually become the leaders of social media, and the abrupt rise of new media has changed the traditional pattern and regularity of information dissemination. Opinion leaders, the gatekeepers of classical mass communication theory, take on new significance in this era, and the concept has been further expanded and extended to a certain extent. Existing mining of opinion leaders mainly focuses on network structure and user behavior without considering an important attribute: whether the user has a real impact. In this paper, we take the signed network as the research tool. By assigning to each link between users a sign that represents support for or opposition to the expressed point of view, and by combining traditional mining algorithms with this sign information, which can describe the change of views between users, we obtain opinion leaders who have a real impact on users, so that the result is more accurate and effective.
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used for optimization of the optimization algorithms itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
An Uncertainty-Based Distributed Fault Detection Mechanism for Wireless Sensor Networks
Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong
2014-01-01
Exchanging too many messages for fault detection will cause not only a degradation of the network quality of service, but also represents a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection through aided judgment of neighbors for wireless sensor networks. The algorithm considers the serious influence of sensing measurement loss and therefore uses Markov decision processes for filling in missing data. Most important of all, fault misjudgments caused by uncertainty conditions are the main drawbacks of traditional distributed fault detection mechanisms. We draw on the experience of evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate our algorithm can effectively reduce communication energy overhead due to message exchanges and provide a higher detection accuracy ratio. PMID:24776937
Goshvarpour, Ateke; Goshvarpour, Atefeh
2018-04-30
Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature level fusion approach was proposed. First, using the theory of information, two similarity indicators of the signal were extracted, including correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbor (kNN), the performance of each index in classifying the HRV signals of meditators and non-meditators was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weights of each feature based on the statistical p-values. The performance of HRV classification using combined features was compared with that of the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of division and weighted sum rules in the improvement of the classifier accuracies.
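A hedged sketch of one plausible reading of the weighted-sum fusion step follows: per-feature p-values are converted to weights (here w = 1 - p, an illustrative choice, since the paper's exact mapping is not given), the weighted features are summed, and a kNN classifier is evaluated on the fused feature. The two similarity indices (e.g., correntropy and Cauchy-Schwarz divergence) are assumed to be precomputed per recording and supplied as feature columns.

# Hedged sketch of p-value-weighted sum fusion followed by kNN classification.
# The mapping w = 1 - p is an illustrative assumption, not the paper's rule.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def pvalue_weights(X, y):
    """One weight per feature column, larger when the feature separates the classes."""
    pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                      for j in range(X.shape[1])])
    return 1.0 - pvals                     # assumption: smaller p-value -> larger weight

def weighted_sum_fusion(X, w):
    """Fuse the feature columns into a single combined feature."""
    return (X * w).sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in data: 40 meditator / 40 non-meditator recordings, 2 features each.
    X = np.vstack([rng.normal(0.0, 1.0, (40, 2)), rng.normal(1.5, 1.0, (40, 2))])
    y = np.array([0] * 40 + [1] * 40)
    w = pvalue_weights(X, y)
    X_fused = weighted_sum_fusion(X, w)
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_fused, y, cv=5).mean()
    print("kNN accuracy on fused feature: %.2f" % acc)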
A computational intelligent approach to multi-factor analysis of violent crime information system
NASA Astrophysics Data System (ADS)
Liu, Hongbo; Yang, Chao; Zhang, Meng; McLoone, Seán; Sun, Yeqing
2017-02-01
Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests, in particular, applied to the analysis of crime factors. The relationship between bi-factors has also been extensively studied including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour and as such there is a need to have a greater level of insight into its complex nature. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic algorithm and dynamic reduct-based techniques for reduct identification and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). Identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting the existence of hidden and intangible relationships between crime factors.
Testing for a slope-based decoupling algorithm in a woofer-tweeter adaptive optics system.
Cheng, Tao; Liu, WenJin; Yang, KangJian; He, Xin; Yang, Ping; Xu, Bing
2018-05-01
It is well known that using two or more deformable mirrors (DMs) can improve the compensation ability of an adaptive optics (AO) system. However, to keep the stability of an AO system, the correlation between the multiple DMs must be suppressed during the correction. In this paper, we propose a slope-based decoupling algorithm to simultaneously control the multiple DMs. In order to examine the validity and practicality of this algorithm, a typical woofer-tweeter (W-T) AO system was set up. For the W-T system, a theoretical model was simulated, and the results indicated that the proposed algorithm can selectively make the woofer and tweeter correct aberrations of different spatial frequencies and suppress the cross coupling between the dual DMs. At the same time, the experimental results for the W-T AO system were consistent with the results of the simulation, which demonstrated in practice that this algorithm is practical for an AO system with dual DMs.
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
NASA Technical Reports Server (NTRS)
Stewart, Eric C.
1991-01-01
An analysis of flight measurements made near a wake vortex was conducted to explore the feasibility of providing a pilot with useful wake avoidance information. The measurements were made with relatively low cost flow and motion sensors on a light airplane flying near the wake vortex of a turboprop airplane weighing approximately 90000 lbs. Algorithms were developed which removed the response of the airplane to control inputs from the total airplane response and produced parameters which were due solely to the flow field of the vortex. These parameters were compared with values predicted by potential theory. The results indicated that the presence of the vortex could be detected by a combination of parameters derived from the simple sensors. However, the location and strength of the vortex cannot be determined without additional and more accurate sensors.
Decryption of pure-position permutation algorithms.
Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang
2004-07-01
Pure position permutation image encryption algorithms, which are commonly used for image encryption and are investigated in this work, are unfortunately fragile under known-plaintext attack. In view of this weakness, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of the pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, by using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; and then, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown.
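The known-plaintext weakness can be made concrete with a toy sketch (this is not the paper's fuzzy-ergodic-matrix method): given one known plaintext/ciphertext pair, every ciphertext position can only have come from plaintext positions holding the same pixel value, so pixels with rare gray levels pin down most of the permutation.

# Toy illustration of the known-plaintext weakness of pure position permutation
# ciphers: a known plaintext/ciphertext pair reveals, for each ciphertext
# position, the set of plaintext positions that could have been mapped there.
import numpy as np

def permute(img, perm):
    """Encrypt by pure position permutation of the flattened image."""
    return img.flatten()[perm].reshape(img.shape)

def candidate_positions(plain, cipher):
    """For each ciphertext position, the plaintext positions sharing its value."""
    p, c = plain.flatten(), cipher.flatten()
    return [np.flatnonzero(p == v) for v in c]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    plain = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
    perm = rng.permutation(plain.size)
    cipher = permute(plain, perm)

    cands = candidate_positions(plain, cipher)
    exact = sum(len(c) == 1 for c in cands)
    print("positions recovered exactly from one known pair: %d / %d"
          % (exact, plain.size))          # most of the permutation is exposed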
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal area based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
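For reference, the following sketch shows the two-parameter logistic IRF and a numerical area-between-curves computation of the kind an equal-area criterion compares; the parameter values and integration range are arbitrary, and this is not the estimation algorithm proposed in the paper.

# Illustrative sketch (not the paper's estimation algorithm): the two-parameter
# logistic item response function and a numerical area-between-curves measure.
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic IRF: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def area_between(params1, params2, lo=-4.0, hi=4.0, n=2001):
    """Unsigned area between two 2PL curves over [lo, hi] (trapezoidal rule)."""
    theta = np.linspace(lo, hi, n)
    diff = np.abs(irf_2pl(theta, *params1) - irf_2pl(theta, *params2))
    return np.trapz(diff, theta)

if __name__ == "__main__":
    print(area_between((1.2, 0.0), (1.2, 0.0)))   # 0.0: identical items
    print(area_between((1.2, 0.0), (0.8, 0.5)))   # > 0: the curves differ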
A Band Selection Method for High Precision Registration of Hyperspectral Image
NASA Astrophysics Data System (ADS)
Yang, H.; Li, X.
2018-04-01
During the registration of hyperspectral images and high spatial resolution images, too many bands in a hyperspectral image make it difficult to select bands with good registration performance. Poor bands may reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér-Rao lower bound theory is proposed in this paper to select good matching bands. The algorithm applies the Cramér-Rao lower bound (CRLB) theory to the study of registration accuracy and selects good matching bands by CRLB parameters. Experiments show that the proposed algorithm can choose good matching bands and provide better data for the registration of hyperspectral images and high spatial resolution images.
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David
2017-01-01
Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. Results There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous one and remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong
2017-02-17
As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. According to the principle of GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous one and remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.
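A hedged sketch of the general idea described in the two records above follows: only one-way hashes of PII field combinations are compared, never the PII itself, and set operations over the matched and unmatched combination patterns hint at which entered field is questionable. The field names, the SHA-256 choice, and the combination patterns are illustrative assumptions, not the actual GUID system's matching rules.

# Hedged sketch: hash codes of PII field combinations are compared, and set
# operations over matched/unmatched patterns localize the questionable field.
# Field names, SHA-256 and the patterns are illustrative assumptions.
import hashlib
from itertools import combinations

FIELDS = ["first_name", "last_name", "birth_date", "sex"]        # illustrative PII fields
PATTERNS = [c for r in (2, 3) for c in combinations(FIELDS, r)]   # allowed combinations

def hash_codes(record):
    """One hash per field-combination pattern; only these leave the client."""
    codes = {}
    for pattern in PATTERNS:
        payload = "|".join(record[f].strip().lower() for f in pattern)
        codes[pattern] = hashlib.sha256(payload.encode()).hexdigest()
    return codes

def questionable_fields(stored, entered):
    """Fields that appear only in non-matching patterns are the likely culprits."""
    s, e = hash_codes(stored), hash_codes(entered)
    matched = {p for p in PATTERNS if s[p] == e[p]}
    unmatched = set(PATTERNS) - matched
    ok = set().union(*matched) if matched else set()
    suspect = set().union(*unmatched) if unmatched else set()
    return suspect - ok

if __name__ == "__main__":
    stored = {"first_name": "Ann", "last_name": "Lee", "birth_date": "1980-01-02", "sex": "F"}
    entered = dict(stored, birth_date="1980-01-20")               # one planted typo
    print(questionable_fields(stored, entered))                   # {'birth_date'}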
NASA Astrophysics Data System (ADS)
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve some problems that are considered inefficient in image information concealing, and research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography, built on clustering of uniform image blocks, is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, we used the Con-Steg algorithm to conceal the clustered image blocks. Information concealing located in the Fourier domain of an image can achieve the security of image information; thus we further discuss the Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to realize the aim of concealing the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
NASA Astrophysics Data System (ADS)
Zhu, Maohu; Jie, Nanfeng; Jiang, Tianzi
2014-03-01
A reliable and precise classification of schizophrenia is significant for its diagnosis and treatment. Functional magnetic resonance imaging (fMRI) is a novel tool increasingly used in schizophrenia research. Recent advances in statistical learning theory have led to applying pattern classification algorithms to assess the diagnostic value of functional brain networks discovered from resting-state fMRI data. The aim of this study was to propose an adaptive learning algorithm to distinguish schizophrenia patients from normal controls using the resting-state functional language network. Furthermore, here the classification of schizophrenia was regarded as a sample selection problem where a sparse subset of samples was chosen from the labeled training set. Using these selected samples, which we call informative vectors, a classifier for the clinical diagnosis of schizophrenia was established. We experimentally demonstrated that the proposed algorithm incorporating the resting-state functional language network achieved 83.6% leave-one-out accuracy on resting-state fMRI data of 27 schizophrenia patients and 28 normal controls. In contrast with K-Nearest-Neighbor (KNN), Support Vector Machine (SVM), and l1-norm classifiers, our method yielded better classification performance. Moreover, our results suggested that a dysfunction of the resting-state functional language network plays an important role in the clinical diagnosis of schizophrenia.
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can compute Bayes optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
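The decision rule can be summarized in a compact stand-in (synthetic data, not polarimetric measurements): class-conditional densities are estimated nonparametrically, here with Gaussian kernel density estimation playing the role of the PNN's Parzen windows, priors come from class frequencies, and each sample is assigned to the maximum a posteriori class.

# Compact stand-in for the Bayesian decision rule described above: Gaussian
# kernel density estimates serve as class-conditional likelihoods, priors come
# from class frequencies, and samples are assigned to the MAP class.
import numpy as np
from scipy.stats import gaussian_kde

def fit_map_classifier(X, y):
    classes = np.unique(y)
    kdes = {c: gaussian_kde(X[y == c].T) for c in classes}      # p(x | c)
    priors = {c: np.mean(y == c) for c in classes}              # p(c)
    def predict(Xnew):
        post = np.column_stack([kdes[c](Xnew.T) * priors[c] for c in classes])
        return classes[np.argmax(post, axis=1)]                 # argmax_c p(x|c) p(c)
    return predict

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    predict = fit_map_classifier(X, y)
    print("training accuracy:", np.mean(predict(X) == y))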
NASA Astrophysics Data System (ADS)
Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.
2017-11-01
We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm. This improved algorithm proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster; this step (running the DFTB/MBH optimization for a given number of Monte-Carlo steps) is meant to efficiently bring the Au cluster near the region of the lowest energy minimum, since the cluster as a whole has explicitly considered the interactions of valence electrons with ions, albeit semi-quantitatively. Then, in the second step, the energy-minimum search continues after replacing the energy function calculated by the DFTB theory in the first step with one calculated in full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy also with the MBH strategy and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure for the Au40 cluster and also correctly confirms the linear shape of C8, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Aun: it changes from planar (n = 3-11) → an oblate-like cage (n = 12-15) → a hollow-shape cage (n = 16-18) and finally a pyramidal-like cage (n = 19, 20). These varied forms of the cluster's shapes are consistent with those reported in the literature.
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Pande, S.
2009-12-01
Pattern analysis deals with the automatic detection of patterns in the data and there are a variety of algorithms available for the purpose. These algorithms are commonly called Artificial Intelligence (AI) or data driven algorithms, and have been applied lately to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question of which one is the “best” arises. Some algorithms may be preferred because of the lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam’s razor principle that a simple classifier performs better. Popper has argued, however, that Occam’s razor is without operational value because there is no clear measure or criterion for simplicity. An example of measures that can be used for this purpose are: the so called algorithmic complexity - also known as Kolmogorov complexity or Kolmogorov (algorithmic) entropy; the Bayesian information criterion; or the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should be noted also that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with it. Thus, there is compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML) of which support vector machines and relevant vector machines are the most commonly known algorithms. We present some applications of such algorithms for distributed hydrologic modeling; and introduce an example of how the complexity measure can be applied for appropriate model choice within the context of applications in hydrologic modeling intended for use in studies about water resources and water resources management and their direct relation to extreme conditions or natural hazards.
NASA Astrophysics Data System (ADS)
Katsoulakis, Markos A.; Vlachos, Dionisios G.
2003-11-01
We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies a detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q^3 for short potentials to q^4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on the Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
Information-theoretic approach to interactive learning
NASA Astrophysics Data System (ADS)
Still, S.
2009-01-01
The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.
Is the local linearity of space-time inherited from the linearity of probabilities?
NASA Astrophysics Data System (ADS)
Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.
2017-02-01
The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.
Fiddick, L; Cosmides, L; Tooby, J
2000-10-16
The Wason selection task is a tool used to study reasoning about conditional rules. Performance on this task changes systematically when one varies its content, and these content effects have been used to argue that the human cognitive architecture contains a number of domain-specific representation and inference systems, such as social contract algorithms and hazard management systems. Recently, however, Sperber, Cara & Girotto (Sperber, D., Cara, F., & Girotto, V. (1995). Relevance theory explains the selection task. Cognition, 57, 31-95) have proposed that relevance theory can explain performance on the selection task - including all content effects - without invoking inference systems that are content-specialized. Herein, we show that relevance theory alone cannot explain a variety of content effects - effects that were predicted in advance and are parsimoniously explained by theories that invoke domain-specific algorithms for representing and making inferences about (i) social contracts and (ii) reducing risk in hazardous situations. Moreover, although Sperber et al. (1995) were able to use relevance theory to produce some new content effects in other domains, they conducted no experiments involving social exchanges or precautions, and so were unable to determine which - content-specialized algorithms or relevance effects - dominate reasoning when the two conflict. When experiments, reported herein, are constructed so that the different theories predict divergent outcomes, the results support the predictions of social contract theory and hazard management theory, indicating that these inference systems override content-general relevance factors. The fact that social contract and hazard management algorithms provide better explanations for performance in their respective domains does not mean that the content-general logical procedures posited by relevance theory do not exist, or that relevance effects never occur. It does mean, however, that one needs a principled way of explaining which effects will dominate when a set of inputs activate more than one reasoning system. We propose the principle of pre-emptive specificity - that the human cognitive architecture should be designed so that more specialized inference systems pre-empt more general ones whenever the stimuli centrally fit the input conditions of the more specialized system. This principle follows from evolutionary and computational considerations that are common to both relevance theory and the ecological rationality approach.
Harmony search method: theory and applications.
Gao, X Z; Govindasamy, V; Xu, H; Wang, X; Zenger, K
2015-01-01
The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
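A minimal sketch of the basic HS loop on a continuous test function is given below, using the usual parameters: harmony memory size (HMS), harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and bandwidth. The parameter values are illustrative defaults, not the tuned settings of any of the reviewed variants.

# Minimal Harmony Search sketch on the sphere function; parameter values are
# illustrative defaults rather than tuned settings.
import numpy as np

def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, n_iter=5000):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    memory = rng.uniform(lo, hi, size=(hms, dim))          # harmony memory
    costs = np.apply_along_axis(f, 1, memory)
    for _ in range(n_iter):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                        # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                     # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1) * (hi - lo)
            else:                                          # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = np.argmax(costs)
        if c < costs[worst]:                               # replace the worst harmony
            memory[worst], costs[worst] = new, c
    best = np.argmin(costs)
    return memory[best], costs[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = harmony_search(sphere, dim=5, bounds=(-5.0, 5.0))
    print("best cost:", f_best)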
ERIC Educational Resources Information Center
Kim, Seonghoon
2013-01-01
With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
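The classical dichotomous form of the Lord-Wingersky recursion is easy to state in code: given each item's conditional probability of a correct response at a fixed proficiency, the conditional number-correct distribution is built up one item at a time. The generalized summed-score version with real-number item scores described in the abstract is not reproduced here.

# Sketch of the classical Lord-Wingersky recursion for dichotomous items:
# dist[s] = P(number-correct score = s | theta) at a fixed proficiency theta.
import numpy as np

def lord_wingersky(p):
    """p: array of item correct-response probabilities at a fixed theta."""
    dist = np.array([1.0 - p[0], p[0]])                # distribution after item 1
    for pi in p[1:]:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1.0 - pi)                  # item answered incorrectly
        new[1:] += dist * pi                           # item answered correctly
        dist = new
    return dist

if __name__ == "__main__":
    # Example: three 2PL items evaluated at theta = 0 (a and b chosen arbitrarily).
    theta, a, b = 0.0, np.array([1.0, 1.5, 0.8]), np.array([-0.5, 0.0, 1.0])
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    print(lord_wingersky(p), lord_wingersky(p).sum())  # distribution sums to 1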
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity achieving codes. [This is a shortened version of the actual abstract in the thesis.]
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to computer tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.
An Online Algorithm for Maximizing Submodular Functions
2007-12-20
Streeter, Matthew; Golovin, Daniel
CMU-CS-07-171, School of Computer Science
[Abstract fragment] ... dynamics of the social network are known. In theory, our online algorithms could be used to adapt a marketing campaign to unknown or time-varying social ...
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
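The block-match step can be illustrated with a plain full-search routine using a sum-of-absolute-differences criterion; block size and search range are illustrative values, and this is not the paper's CS video acquisition or recovery pipeline.

# Plain full-search block matching with a sum-of-absolute-differences (SAD)
# criterion, illustrating the block-match step mentioned above.
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Return per-block (dy, dx) motion vectors from cur back to ref."""
    H, W = cur.shape
    mv = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            target = cur[y0:y0 + block, x0:x0 + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue          # candidate block falls outside the frame
                    sad = np.abs(ref[y:y + block, x:x + block].astype(float)
                                 - target.astype(float)).sum()
                    if sad < best_sad:
                        best, best_sad = (dy, dx), sad
            mv[by, bx] = best
    return mv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    cur = np.roll(ref, shift=(2, -1), axis=(0, 1))      # globally shifted frame
    print(np.median(block_match(ref, cur).reshape(-1, 2), axis=0))  # ~[-2, 1], inverse of the roll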
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research on and practical applications of ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
2016-01-01
Motivation: Gene tree represents the evolutionary history of gene lineages that originate from multiple related populations. Under the multispecies coalescent model, lineages may coalesce outside the species (population) boundary. Given a species tree (with branch lengths), the gene tree probability is the probability of observing a specific gene tree topology under the multispecies coalescent model. There are two existing algorithms for computing the exact gene tree probability. The first algorithm is due to Degnan and Salter, where they enumerate all the so-called coalescent histories for the given species tree and the gene tree topology. Their algorithm runs in exponential time in the number of gene lineages in general. The second algorithm is the STELLS algorithm (2012), which is usually faster but also runs in exponential time in almost all the cases. Results: In this article, we present a new algorithm, called CompactCH, for computing the exact gene tree probability. This new algorithm is based on the notion of compact coalescent histories: multiple coalescent histories are represented by a single compact coalescent history. The key advantage of our new algorithm is that it runs in polynomial time in the number of gene lineages if the number of populations is fixed to be a constant. The new algorithm is more efficient than the STELLS algorithm both in theory and in practice when the number of populations is small and there are multiple gene lineages from each population. As an application, we show that CompactCH can be applied in the inference of population tree (i.e. the population divergence history) from population haplotypes. Simulation results show that the CompactCH algorithm enables efficient and accurate inference of population trees with much more haplotypes than a previous approach. Availability: The CompactCH algorithm is implemented in the STELLS software package, which is available for download at http://www.engr.uconn.edu/ywu/STELLS.html. Contact: ywu@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307621
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Yousef
2014-03-19
The master project under which this work is funded had as its main objective to develop computational methods for modeling electronic excited-state and optical properties of various nanostructures. The specific goals of the computer science group were primarily to develop effective numerical algorithms in Density Functional Theory (DFT) and Time Dependent Density Functional Theory (TDDFT). There were essentially four distinct stated objectives. The first objective was to study and develop effective numerical algorithms for solving large eigenvalue problems such as those that arise in Density Functional Theory (DFT) methods. The second objective was to explore so-called linear scaling methods, or methods that avoid diagonalization. The third was to develop effective approaches for Time-Dependent DFT (TDDFT). Our fourth and final objective was to examine effective solution strategies for other problems in electronic excitations, such as the GW/Bethe-Salpeter method, and quantum transport problems.
Dehazed Image Quality Assessment by Haze-Line Theory
NASA Astrophysics Data System (ADS)
Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai
2017-06-01
Images captured in bad weather suffer from low contrast and faint color. Recently, plenty of dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or rate them. In this paper, an indicator of contrast enhancement is proposed based on the recently introduced haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. By using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast obtained in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.
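A minimal sketch of the inter-cluster deviation idea is shown below: the RGB colors of a dehazed image are clustered and the mean pairwise distance between cluster centers is reported, so that washed-out (hazy) images score lower. The use of k-means and the cluster count are illustrative choices, not the paper's exact CC definition.

# Minimal sketch of the inter-cluster color deviation idea; k-means and the
# cluster count are illustrative choices, not the paper's exact CC index.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist

def color_contrast(image_rgb, n_clusters=16, seed=0):
    """image_rgb: H x W x 3 array in [0, 1]; larger return value = more contrast."""
    pixels = image_rgb.reshape(-1, 3)
    centers = KMeans(n_clusters=n_clusters, n_init=4,
                     random_state=seed).fit(pixels).cluster_centers_
    return pdist(centers).mean()          # mean pairwise distance between color clusters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hazy = 0.2 * rng.random((64, 64, 3)) + 0.7      # low-contrast, washed-out colors
    clear = rng.random((64, 64, 3))                  # well-spread colors
    print(color_contrast(hazy) < color_contrast(clear))   # True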
Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints
NASA Astrophysics Data System (ADS)
Sun, Chao; Hu, Guoqiang
2018-05-01
In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.
Evolutionary tree reconstruction
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Kanefsky, Bob
1990-01-01
It is described how Minimum Description Length (MDL) can be applied to the problem of DNA and protein evolutionary tree reconstruction. If there is a set of mutations that transform a common ancestor into a set of the known sequences, and this description is shorter than the information to encode the known sequences directly, then strong evidence for an evolutionary relationship has been found. A heuristic algorithm is described that searches for the simplest tree (smallest MDL) that finds close to optimal trees on the test data. Various ways of extending the MDL theory to more complex evolutionary relationships are discussed.
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
A phase space approach to imaging from limited data
NASA Astrophysics Data System (ADS)
Testorf, Markus E.
2015-09-01
The optical instrument function is used as the basis to develop optical system theory for imaging applications. The detection of optical signals is conveniently described as the overlap integral of the Wigner distribution functions of instrument and optical signal. Based on this framework various optical imaging systems, including plenoptic cameras, phase-retrieval algorithms, and Shack-Hartman sensors are shown to acquire information about a domain in phase-space, with finite extension and finite resolution. It is demonstrated how phase space optics can be used both to analyze imaging systems, as well as for designing methods for image reconstruction.
Theory of post-block 2 VLBI observable extraction
NASA Technical Reports Server (NTRS)
Lowe, Stephen T.
1992-01-01
The algorithms used in the post-Block II fringe-fitting software called 'Fit' are described. The steps needed to derive the very long baseline interferometry (VLBI) charged-particle corrected group delay, phase delay rate, and phase delay (the latter without resolving cycle ambiguities) are presented beginning with the set of complex fringe phasors as a function of observation frequency and time. The set of complex phasors is obtained from the JPL/CIT Block II correlator. The output of Fit is the set of charged-particle corrected observables (along with ancillary information) in a form amenable to the software program 'Modest.'
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
NASA Astrophysics Data System (ADS)
Shaw, Gordon L.; Silverman, Dennis J.; Pearson, John C.
1985-04-01
Motivated by V. B. Mountcastle's organizational principle for neocortical function, and by M. E. Fisher's model of physical spin systems, we introduce a cooperative model of the cortical column incorporating an idealized substructure, the trion, which represents a localized group of neurons. Computer studies reveal that typical networks composed of a small number of trions (with symmetric interactions) exhibit striking behavior--e.g., hundreds to thousands of quasi-stable, periodic firing patterns, any of which can be selected out and enhanced with only small changes in interaction strengths by using a Hebb-type algorithm.
Theory Learning as Stochastic Search in the Language of Thought
ERIC Educational Resources Information Center
Ullman, Tomer D.; Goodman, Noah D.; Tenenbaum, Joshua B.
2012-01-01
We present an algorithmic model for the development of children's intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. We contrast our approach with connectionist and other emergentist approaches to modeling cognitive development. While…
Blind restoration method of three-dimensional microscope image based on RL algorithm
NASA Astrophysics Data System (ADS)
Yao, Jin-li; Tian, Si; Wang, Xiang-rong; Wang, Jing-li
2013-08-01
Thin specimens of biological tissue appear three-dimensionally transparent under a microscope. Optical slice images can be captured by moving the focal plane to different locations in the specimen. The captured images have low resolution due to out-of-focus information contributed by the planes adjacent to the focal plane. Traditional methods can remove the blur in the images to a certain degree, but they require accurate knowledge of the point spread function (PSF) of the imaging system, and the accuracy of the PSF strongly influences the restoration result. In practice, it is difficult to obtain an accurate PSF of the imaging system. In order to restore the original appearance of the specimen when the imaging system parameters are unknown or when noise and spherical aberration are present, a blind restoration method for three-dimensional microscopy based on the R-L algorithm is proposed in this paper. On the basis of a detailed study of the two-dimensional R-L algorithm, and drawing on microscopy imaging theory and a wavelet-transform denoising pretreatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear restoration method with a maximum-entropy constraint. The method does not need precise knowledge of the PSF of the microscopy imaging system to recover the blurred image: the image and the PSF converge to their optimal solutions through alternating iterations and corrections. MATLAB simulations and experimental results show that the extended algorithm performs better in visual quality, peak signal-to-noise ratio, and improved signal-to-noise ratio than the PML algorithm, and that the proposed algorithm can suppress noise, restore more details of the target, and increase image resolution.
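The two-dimensional R-L update that the method builds on can be sketched as below; this is the standard non-blind iteration only, and the paper's blind, three-dimensional extension with wavelet pre-denoising and the maximum-entropy constraint is not reproduced.

```python
# Minimal sketch of the standard (non-blind) 2-D Richardson-Lucy update.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    observed = np.asarray(observed, dtype=float)
    psf = np.asarray(psf, dtype=float)
    estimate = np.full(observed.shape, observed.mean())   # flat initial guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        # multiplicative R-L correction: correlate the ratio image with the PSF
        estimate = estimate * fftconvolve(observed / blurred, psf_mirror, mode="same")
    return estimate
```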
NASA Technical Reports Server (NTRS)
Rediniotis, Othon K.
1999-01-01
Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield to generate very accurate predictions of the velocity vector, flow direction, total pressure and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (a polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters all the way down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes) were fabricated, calibrated and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes, demonstrating excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized in the following fashion. It consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles that were produced from this project, and ends with two manuals for the two probe calibration algorithms developed.
Kulasiri, Don
2011-01-01
We discuss the quantification of molecular fluctuations in biochemical reaction systems within the context of intracellular processes associated with gene expression, using the molecular reactions pertaining to circadian rhythms to develop the models in this chapter. There are a significant number of studies on stochastic fluctuations in intracellular genetic regulatory networks based on single-cell-level experiments. In order to understand the fluctuations associated with gene expression in circadian rhythm networks, it is important to model the interactions of transcription factors with the E-boxes in the promoter regions of some of the genes. The pertinent aspects of a near-equilibrium theory that integrates the thermodynamic and particle-dynamic characteristics of intracellular molecular fluctuations are discussed, and the theory is extended using the theory of stochastic differential equations. We then model the fluctuations associated with the promoter regions in a general mathematical setting. We implemented the widely used Gillespie algorithm, a standard method for simulating stochasticity in biochemical networks, for each of the motifs. Both the theory and the Gillespie simulations gave the same results in terms of the time evolution of the means and variances of molecular numbers. Since biochemical reactions occur far from equilibrium (hence the use of the Gillespie algorithm), these results suggest that the near-equilibrium theory is a good approximation for some of the biochemical reactions. © 2011 Elsevier Inc. All rights reserved.
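For reference, a minimal Gillespie stochastic simulation of a single birth-death gene-expression motif is sketched below; the rate constants are illustrative and are not taken from the circadian models discussed in the chapter.

```python
# Minimal Gillespie SSA sketch: production at rate k, degradation at rate g*n.
import numpy as np

def gillespie_birth_death(k=10.0, g=0.1, n0=0, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        rates = (k, g * n)                      # production, degradation propensities
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)       # exponential waiting time to the next event
        n += 1 if rng.random() < rates[0] / total else -1
        times.append(t); counts.append(n)
    return np.array(times), np.array(counts)

t, n = gillespie_birth_death()
print(n.mean(), n.var())                        # both approach k/g at stationarity for this motif
```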
What Density Functional Theory could do for Quantum Information
NASA Astrophysics Data System (ADS)
Mattsson, Ann
2015-03-01
The Hohenberg-Kohn theorem of Density Functional Theory (DFT), and extensions thereof, tells us that all properties of a system of electrons can be determined through their density, which uniquely determines the many-body wave-function. Given access to the appropriate, universal, functionals of the density we would, in theory, be able to determine all observables of any electronic system, without explicit reference to the wave-function. On the other hand, the wave-function is at the core of Quantum Information (QI), with the wave-function of a set of qubits being the central computational resource in a quantum computer. While there is seemingly little overlap between DFT and QI, reliance upon observables forms a key connection. Though the time-evolution of the wave-function and associated phase information is fundamental to quantum computation, the initial and final states of a quantum computer are characterized by observables of the system. While observables can be extracted directly from a system's wave-function, DFT tells us that we may be able to intuit a method for extracting them from its density. In this talk, I will review the fundamentals of DFT and how these principles connect to the world of QI. This will range from DFT's utility in the engineering of physical qubits, to the possibility of using it to efficiently (but approximately) simulate Hamiltonians at the logical level. The apparent paradox of describing algorithms based on the quantum mechanical many-body wave-function with a DFT-like theory based on observables will remain a focus throughout. The ultimate goal of this talk is to initiate a dialog about what DFT could do for QI, in theory and in practice. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Optimizing an experimental design for an electromagnetic experiment
NASA Astrophysics Data System (ADS)
Roux, Estelle; Garcia, Xavier
2013-04-01
Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
An introduction to quantum machine learning
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2015-04-01
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.
Studies in the Theory of Quantum Games
NASA Astrophysics Data System (ADS)
Iqbal, Azhar
2005-03-01
Theory of quantum games is a new area of investigation that has gone through rapid development during the last few years. Initial motivation for playing games, in the quantum world, comes from the possibility of re-formulating quantum communication protocols, and algorithms, in terms of games between quantum and classical players. The possibility led to the view that quantum games have a potential to provide helpful insight into working of quantum algorithms, and even in finding new ones. This thesis analyzes and compares some interesting games when played classically and quantum mechanically. A large part of the thesis concerns investigations into a refinement notion of the Nash equilibrium concept. The refinement, called an evolutionarily stable strategy (ESS), was originally introduced in 1970s by mathematical biologists to model an evolving population using techniques borrowed from game theory. Analysis is developed around a situation when quantization changes ESSs without affecting corresponding Nash equilibria. Effects of quantization on solution-concepts other than Nash equilibrium are presented and discussed. For this purpose the notions of value of coalition, backwards-induction outcome, and subgame-perfect outcome are selected. Repeated games are known to have different information structure than one-shot games. Investigation is presented into a possible way where quantization changes the outcome of a repeated game. Lastly, two new suggestions are put forward to play quantum versions of classical matrix games. The first one uses the association of De Broglie's waves, with travelling material objects, as a resource for playing a quantum game. The second suggestion concerns an EPR type setting exploiting directly the correlations in Bell's inequalities to play a bi-matrix game.
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
Fuzzy Algorithm for the Detection of Incidents in the Transport System
ERIC Educational Resources Information Center
Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Stroganov, Victor Yu.
2016-01-01
In this paper, an algorithm is proposed for the management of traffic incidents, aimed at minimizing the impact of incidents on road traffic in general. The proposed algorithm is based on the theory of fuzzy sets and provides identification of accidents, as well as the adoption of appropriate measures to address them as soon as possible. A…
ERIC Educational Resources Information Center
Cai, Li
2013-01-01
Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
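For dichotomous items the recursion is short enough to sketch directly: starting from a single zero-score state, each item either leaves the summed score unchanged or increments it by one. The item probabilities in the example are illustrative, and the multidimensional extension discussed in the entry is not shown.

```python
# Sketch of the Lord-Wingersky (1984) recursion for dichotomous items at a fixed theta.
import numpy as np

def summed_score_likelihood(p_correct):
    """p_correct: array of P(item i correct | theta) for dichotomous items."""
    lik = np.array([1.0])                      # score distribution with zero items administered
    for p in p_correct:
        new = np.zeros(len(lik) + 1)
        new[:-1] += lik * (1.0 - p)            # item answered incorrectly: score unchanged
        new[1:]  += lik * p                    # item answered correctly: score + 1
        lik = new
    return lik                                 # lik[s] = P(summed score = s | theta)

print(summed_score_likelihood([0.7, 0.5, 0.9]))
```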
Mass preserving registration for heart MR images.
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2005-01-01
This paper presents a new algorithm for non-rigid registration between two doubly-connected regions. Our algorithm is based on harmonic analysis and the theory of optimal mass transport. It assumes an underlying continuum model, in which the total amount of mass is exactly preserved during the transformation of tissues. We use a finite element approach to numerically implement the algorithm.
Mass Preserving Registration for Heart MR Images
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2013-01-01
This paper presents a new algorithm for non-rigid registration between two doubly-connected regions. Our algorithm is based on harmonic analysis and the theory of optimal mass transport. It assumes an underlying continuum model, in which the total amount of mass is exactly preserved during the transformation of tissues. We use a finite element approach to numerically implement the algorithm. PMID:16685954
A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alex, Arne; Delft, Jan von; Kalus, Matthias
2011-02-15
We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
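Since familiarity with SU(2) representation theory is the stated prerequisite, a quick symbolic cross-check of SU(2) Clebsch-Gordan coefficients can be done with SymPy, as sketched below; the paper's own Gelfand-Tsetlin-based SU(N) routine is not reproduced here.

```python
# SU(2) Clebsch-Gordan coefficients via SymPy, useful as a reference to test a
# numerical SU(N) implementation against in the N = 2 special case.
from sympy import S
from sympy.physics.quantum.cg import CG

# <j1 m1; j2 m2 | j3 m3> for two spin-1/2 states coupling to the triplet m = 0 state
coeff = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0).doit()
print(coeff)   # sqrt(2)/2
```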
Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database
2017-01-01
Image encryption technology is one of the main means to ensure the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this paper, digital image encryption transforms the image pixel gray values by using a chaotic sequence to scramble pixel locations and by establishing a hyperchaotic mapping between quaternary sequences and DNA sequences, combined with the logic of transformations between DNA sequences. The bases are replaced under displacement rules using DNA coding over a number of iterations driven by the enhanced quaternary hyperchaotic sequence, which is generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption but also effectively resists chosen-plaintext attack, statistical attack, and differential attack. PMID:28392799
An Algorithm for Integrated Subsystem Embodiment and System Synthesis
NASA Technical Reports Server (NTRS)
Lewis, Kemper
1997-01-01
Consider the statement,'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation does not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.
Visual to Parametric Interaction (V2PI)
Maiti, Dipayan; Endert, Alex; North, Chris
2013-01-01
Typical data visualizations result from linear pipelines that start by characterizing data using a model or algorithm to reduce the dimension and summarize structure, and end by displaying the data in a reduced dimensional form. Sensemaking may take place at the end of the pipeline when users have an opportunity to observe, digest, and internalize any information displayed. However, some visualizations mask meaningful data structures when model or algorithm constraints (e.g., parameter specifications) contradict information in the data. Yet, due to the linearity of the pipeline, users do not have a natural means to adjust the displays. In this paper, we present a framework for creating dynamic data displays that rely on both mechanistic data summaries and expert judgement. The key is that we develop both the theory and methods of a new human-data interaction which we refer to as “Visual to Parametric Interaction” (V2PI). With V2PI, the pipeline becomes bi-directional in that users are embedded in the pipeline; users learn from visualizations and the visualizations adjust to expert judgement. We demonstrate the utility of V2PI and a bi-directional pipeline with two examples. PMID:23555552
Nonadiabatic Dynamics for Electrons at Second-Order: Real-Time TDDFT and OSCF2.
Nguyen, Triet S; Parkhill, John
2015-07-14
We develop a new model to simulate nonradiative relaxation and dephasing by combining real-time Hartree-Fock and density functional theory (DFT) with our recent open-systems theory of electronic dynamics. The approach has some key advantages: it has been systematically derived and properly relaxes noninteracting electrons to a Fermi-Dirac distribution. This paper combines the new dissipation theory with an atomistic, all-electron quantum chemistry code and an atom-centered model of the thermal environment. The environment is represented nonempirically and is dependent on molecular structure in a nonlocal way. A production quality, O(N^3) closed-shell implementation of our theory applicable to realistic molecular systems is presented, including timing information. This scaling implies that the added cost of our nonadiabatic relaxation model, time-dependent open self-consistent field at second order (OSCF2), is computationally inexpensive, relative to adiabatic propagation of real-time time-dependent Hartree-Fock (TDHF) or time-dependent density functional theory (TDDFT). Details of the implementation and numerical algorithm, including factorization and efficiency, are discussed. We demonstrate that OSCF2 approaches the stationary self-consistent field (SCF) ground state when the gap is large relative to k_BT. The code is used to calculate linear-response spectra including the effects of bath dynamics. Finally, we show how our theory of finite-temperature relaxation can be used to correct ground-state DFT calculations.
A Brightness-Referenced Star Identification Algorithm for APS Star Trackers
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-01-01
Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and adds imaged stars' intensities to narrow the search scope and therefore increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If the reference bright stars (stars brighter than third magnitude) show up, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4∼5 times that of the pyramid method and 35∼37 times that of the geometric method. PMID:25299950
A brightness-referenced star identification algorithm for APS star trackers.
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-10-08
Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and adds imaged stars' intensities to narrow the search scope and therefore increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If the reference bright stars (stars brighter than third magnitude) show up, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4~5 times that of the pyramid method and 35~37 times that of the geometric method.
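The range-search idea underlying a k-vector-style lookup can be illustrated with a sorted catalog of star-pair angles and a binary search for all pairs within the measurement tolerance; brightness information, as used in the paper, would further narrow this candidate set. The catalog values below are synthetic and the helper names are illustrative.

```python
# Illustrative sketch of the range search behind k-vector-style star matching:
# pre-sort the catalog's inter-star angles once, then retrieve every candidate
# pair whose angle lies within the measurement tolerance. Catalog data are made up.
import numpy as np

rng = np.random.default_rng(1)
pair_angles = np.sort(rng.uniform(0.0, 20.0, size=10000))   # catalog pair angles, degrees
pair_ids = np.arange(pair_angles.size)                      # would index a star-pair table in practice

def candidate_pairs(measured_angle, tol=0.01):
    lo = np.searchsorted(pair_angles, measured_angle - tol, side="left")
    hi = np.searchsorted(pair_angles, measured_angle + tol, side="right")
    return pair_ids[lo:hi]                                   # candidates to verify geometrically

print(candidate_pairs(12.345).size)
```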
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Kim, Wangdo; Espanha, Margarida M.; Veloso, António P.; Araújo, Duarte; João, Filipa; Carrão, Luis; Kohles, Sean S.
2013-01-01
Traditional locomotion studies emphasize an optimization of the desired movement trajectories while ignoring sensory feedback. We propose an information based theory that locomotion is neither triggered nor commanded but controlled. The basis for this control is the information derived from perceiving oneself in the world. Control therefore lies in the human-environment system. In order to test this hypothesis, we derived a mathematical foundation characterizing the energy that is required to perform a rotational twist, with small amplitude, of the instantaneous axes of the knee (IAK). We have found that the joint’s perception of the ground reaction force may be replaced by the co-perception of muscle activation with appropriate intensities. This approach generated an accurate comparison with known joint forces and appears appropriate in so far as predicting the effect on the knee when it is free to twist about the IAK. PMID:24932433
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed that considers both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the spatial constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better than those of static algorithms such as the least-squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
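As a point of comparison, the standard Tikhonov baseline mentioned above reduces to a single regularized least-squares solve; a minimal sketch with a made-up projection matrix is given below (the dynamic, robust algorithm itself is not reproduced).

```python
# Sketch of the Tikhonov baseline: solve min ||A x - b||^2 + lam ||x||^2 for the
# discretized temperature field x from time-of-flight data b. A and b are synthetic.
import numpy as np

def tikhonov(A, b, lam=1e-2):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.random.default_rng(0).normal(size=(30, 50))   # stand-in for the path/projection matrix
x_true = np.ones(50)
b = A @ x_true + 0.01 * np.random.default_rng(1).normal(size=30)
print(np.linalg.norm(tikhonov(A, b) - x_true))       # reconstruction error of the baseline
```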
Nakajima, Toshiyuki
2015-12-01
Higher animals act in the world using their external reality models to cope with the uncertain environment. Organisms that have not developed such information-processing organs may also have external reality models built in the form of their biochemical, physiological, and behavioral structures, acquired by natural selection through successful models constructed internally. Organisms subject to illusions would fail to survive in the material universe. How can organisms, or living systems in general, determine the external reality from within? This paper starts with a phenomenological model, in which the self constitutes a reality model developed through the mental processing of phenomena. Then, the it-from-bit concept is formalized using a simple mathematical model. For this formalization, my previous work on an algorithmic process is employed to constitute symbols referring to the external reality, called the inverse causality, with additional improvements to the previous work. Finally, as an extension of this model, the cognizers system model is employed to describe the self as one of many material entities in a world, each of which acts as a subject by responding to the surrounding entities. This model is used to propose a conceptual framework of information theory that can deal with both the qualitative (semantic) and quantitative aspects of the information involved in biological processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giacometti, Achille, E-mail: achille.giacometti@unive.it; Gögelein, Christoph, E-mail: christoph.goegelein@ds.mpg.de; Lado, Fred, E-mail: lado@ncsu.edu
2014-03-07
Building upon past work on the phase diagram of Janus fluids [F. Sciortino, A. Giacometti, and G. Pastore, Phys. Rev. Lett. 103, 237801 (2009)], we perform a detailed study of integral equation theory of the Kern-Frenkel potential with coverage that is tuned from the isotropic square-well fluid to the Janus limit. An improved algorithm for the reference hypernetted-chain (RHNC) equation for this problem is implemented that significantly extends the range of applicability of RHNC. Results for both structure and thermodynamics are presented and compared with numerical simulations. Unlike previous attempts, this algorithm is shown to be stable down to the Janus limit, thus paving the way for analyzing the frustration mechanism characteristic of the gas-liquid transition in the Janus system. The results are also compared with Barker-Henderson thermodynamic perturbation theory on the same model. We then discuss the pros and cons of both approaches within a unified treatment. On balance, RHNC integral equation theory, even with an isotropic hard-sphere reference system, is found to be a good compromise between accuracy of the results, computational effort, and uniform quality to tackle self-assembly processes in patchy colloids of complex nature. Further improvement in RHNC however clearly requires an anisotropic reference bridge function.
GTRAC: fast retrieval from compressed collections of genomic variants
Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy
2016-01-01
Motivation: The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of database lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. Results: We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. Availability and Implementation: The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC Contact: kedart@stanford.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587665
Moisan, Frédéric; Gonzalez, Cleotilde
2017-01-01
Game Theory is a common approach used to understand attacker and defender motives, strategies, and allocation of limited security resources. For example, many defense algorithms are based on game-theoretic solutions that conclude that randomization of defense actions assures unpredictability, creating difficulties for a human attacker. However, many game-theoretic solutions often rely on idealized assumptions of decision making that underplay the role of human cognition and information uncertainty. The consequence is that we know little about how effective these algorithms are against human players. Using a simplified security game, we study the type of attack strategy and the uncertainty about an attacker's strategy in a laboratory experiment where participants play the role of defenders against a simulated attacker. Our goal is to compare a human defender's behavior in three levels of uncertainty (Information Level: Certain, Risky, Uncertain) and three types of attacker's strategy (Attacker's strategy: Minimax, Random, Adaptive) in a between-subjects experimental design. Best defense performance is achieved when defenders play against a minimax and a random attack strategy compared to an adaptive strategy. Furthermore, when payoffs are certain, defenders are as efficient against random attack strategy as they are against an adaptive strategy, but when payoffs are uncertain, defenders have most difficulties defending against an adaptive attacker compared to a random attacker. We conclude that given conditions of uncertainty in many security problems, defense algorithms would be more efficient if they are adaptive to the attacker actions, taking advantage of the attacker's human inefficiencies. PMID:28690557
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576
Applied optics and optical engineering. Volume 9
NASA Astrophysics Data System (ADS)
Shannon, R. R.; Wyant, J. C.
This volume of the series continues the variety of articles which have appeared in the past two volumes. The range of topics runs from the most applied to some fairly complex theory. Basic algorithms for optical engineering are considered along with diffraction gratings, recording and reading of information on optical disks, and the perfect point spread function. An atlas is provided of the imagery which can be expected from telescopes, or other optical systems with nonconventional aperture shapes. A chapter on simulators collects descriptions of the principal types of visual simulators into one article. The optics of the eye is discussed with special emphasis on modern concepts such as contact lenses and automatic refraction. Attention is given to the theory which is of great interest to the applied optical engineer who needs to understand some of the fundamental relations and limitations on image formation by lenses.
Information Complexity and Biology
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Bignone, Franco A.; Cecconi, Fabio; Politi, Antonio
Kolmogorov contributed directly to Biology in essentially three problems: the analysis of population dynamics (Lotka-Volterra equations), the reaction-diffusion formulation of gene spreading (FKPP equation), and some discussions about Mendel's laws. However, the widely recognized importance of his contribution arises from his work on algorithmic complexity. In fact, the limited direct intervention in Biology reflects the generally slow growth of interest of mathematicians towards biological issues. From the early work of Vito Volterra on species competition, to the slow growth of dynamical systems theory, to contributions to the study of matter and the physiology of the nervous system, the first 50-60 years witnessed important contributions, but as scattered, apparently uncorrelated pieces, often in branches far away from Biology. Up to the 1940s it is hard to see the initial, loose build-up of a convergence toward those theories that would become mainstream research by the end of the century, connected by the study of biological systems per se.
The theory of variational hybrid quantum-classical algorithms
NASA Astrophysics Data System (ADS)
McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán
2016-02-01
Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
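The variational loop itself can be illustrated classically on a toy single-qubit Hamiltonian: a parameterized state is prepared, its energy expectation is evaluated, and a classical optimizer updates the parameter. The Hamiltonian and Ry ansatz below are illustrative choices for the sketch, not the unitary coupled-cluster ansatz developed in the paper.

```python
# Toy classical simulation of the variational quantum-classical loop:
# minimize <psi(theta)| H |psi(theta)> for H = Z + 0.5 X with an Ry(theta)|0> ansatz.
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X

def energy(theta):
    psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])   # Ry(theta)|0>, real amplitudes
    return psi @ H @ psi

res = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
print(res.fun, np.linalg.eigvalsh(H)[0])   # variational minimum vs exact ground energy
```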
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Kawakami, Hiroshi; Tsujimura, Yasuhiro; Handa, Hisashi; Lin, Lin; Okamoto, Azuma
As the efficient utilization of computational resources gains importance, evolutionary technology based on the Genetic Algorithm (GA), Genetic Programming (GP), Evolution Strategy (ES), and other Evolutionary Computation (EC) methods is making rapid progress, and its social recognition and its need as an applied technology are increasing. This is explained by the facts that EC offers higher robustness for knowledge information processing systems, intelligent production and logistics systems, state-of-the-art production scheduling, and various other real-world problems compared with approaches based on conventional theories, and that EC ensures flexible applicability and usefulness in any unknown system environment, even in cases where accurate mathematical modeling fails in the formulation. In this paper, we provide a comprehensive survey of the current state of the art in the fundamentals and applications of evolutionary technologies.
A Petri net synthesis theory for modeling flexible manufacturing systems.
Jeng, M D
1997-01-01
A theory that synthesizes Petri nets for modeling flexible manufacturing systems is presented. The theory adopts a bottom-up or modular-composition approach to construct net models. Each module is modeled as a resource control net (RCN), which represents a subsystem that controls a resource type in a flexible manufacturing system. Interactions among the modules are described as the common transition and transition subnets. The net obtained by merging the modules with two minimal restrictions is shown to be conservative and thus bounded. An algorithm is developed to detect two sufficient conditions for structural liveness of the net. The algorithm examines only the net's structure and the initial marking, and appears to be more efficient than state enumeration techniques such as the reachability tree method. In this paper, the sufficient conditions for liveness are shown to be related to some structural objects called siphons. To demonstrate the applicability of the theory, a flexible manufacturing system of a moderate size is modeled and analyzed using the proposed theory.
Computerized Adaptive Test (CAT) Applications and Item Response Theory Models for Polytomous Items
ERIC Educational Resources Information Center
Aybek, Eren Can; Demirtasli, R. Nukhet
2017-01-01
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. Besides that, it aims to introduce the simulation and live CAT software to the related researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Topics in Computational Learning Theory and Graph Algorithms.
ERIC Educational Resources Information Center
Board, Raymond Acton
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
NASA Technical Reports Server (NTRS)
Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.
1993-01-01
This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.
Impelluso, Thomas J
2003-06-01
An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective strain based analysis to redistribute density. General re-distribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
Harmony Search Method: Theory and Applications
Gao, X. Z.; Govindasamy, V.; Xu, H.; Wang, X.; Zenger, K.
2015-01-01
The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem. PMID:25945083
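A minimal sketch of the original HS loop (harmony memory consideration, pitch adjustment, random selection, and replacement of the worst harmony) is given below; the parameter values are typical textbook settings for the basic algorithm, not the paper's Pareto-ranking variant or its wind generator application.

```python
# Minimal Harmony Search sketch for minimizing a function on a box.
import numpy as np

def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    hm = rng.uniform(lb, ub, size=(hms, dim))           # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                     # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                  # pitch adjustment
                    new[d] += bw * (ub[d] - lb[d]) * rng.uniform(-1, 1)
            else:                                       # random selection
                new[d] = rng.uniform(lb[d], ub[d])
        new = np.clip(new, lb, ub)
        worst = np.argmax(cost)
        if f(new) < cost[worst]:                        # replace the worst harmony
            hm[worst], cost[worst] = new, f(new)
    return hm[np.argmin(cost)], cost.min()

sphere = lambda x: float(np.sum(x ** 2))
print(harmony_search(sphere, lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0])))
```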
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rates. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits) or 101.5 MFLOPS.
NASA Technical Reports Server (NTRS)
Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2000-01-01
Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
Evidence for an inhibitory-control theory of the reasoning brain.
Houdé, Olivier; Borst, Grégoire
2015-01-01
In this article, we first describe our general inhibitory-control theory and, then, we describe how we have tested its specific hypotheses on reasoning with brain imaging techniques in adults and children. The innovative part of this perspective lies in its attempt to come up with a brain-based synthesis of Jean Piaget's theory on logical algorithms and Daniel Kahneman's theory on intuitive heuristics.
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding the global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm can achieve 2.0 dB gains over the sum-product algorithm at a BER of 4×10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor than the sum-product algorithm once Eb/No is higher than 1.8 dB.
Analysis of image thresholding segmentation algorithms based on swarm intelligence
NASA Astrophysics Data System (ADS)
Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo
2013-03-01
Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence including fish swarm algorithm, artificial bee colony, bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences of the segmentation accuracy, time consumption, convergence and robustness for Salt & Pepper noise and Gaussian noise of these four algorithms. Through these comparisons, this paper gives qualitative analyses for the performance variance of the four algorithms. The conclusions in this paper would give a significant guide for the actual image segmentation.
Searching Information Sources in Networks
2017-06-14
During the course of this project, we made significant progress in multiple directions of the information detection problem: … result on information source detection on non-tree networks; (2) the development of information source localization algorithms to detect multiple … information sources. The algorithms have provable performance guarantees and outperform existing algorithms in …
Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.
Cao, Xiang; Zhu, Daqi; Yang, Simon X
2016-11-01
Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicle (multi-AUV) exploration. This paper focuses on an effective strategy for multi-AUV target search in 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract information about the environment from the sonar data and build a grid map of the underwater environment. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search paths to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static target search, dynamic target search, and the breakdown of one or several AUVs in 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multi-AUV systems to achieve search tasks of multiple targets with higher efficiency and adaptability compared with other algorithms.
[Improved euler algorithm for trend forecast model and its application to oil spectrum analysis].
Zheng, Chang-song; Ma, Biao
2009-04-01
The oil atomic spectrometric analysis technology is one of the most important methods for fault diagnosis and state monitoring of large machine equipment, and the gray method is well suited to trend forecasting. Using oil atomic spectrometric analysis results and combining them with gray forecast theory, the present paper establishes a gray forecast model of the Fe/Cu concentration trend in a power-shift steering transmission. To address the shortcomings of the gray method in trend forecasting, an improved Euler algorithm is put forward for the first time to resolve the problem of the gray model and avoid the imprecision caused by the old gray model's forecast value depending on the first test value. This new method makes the forecast value more precise, as shown in the example. Combined with the threshold value of the oil atomic spectrometric analysis, the new method was applied to the Fe/Cu concentration forecast and early warning of fault information was obtained, so steps can be taken to prevent the fault; the algorithm can be extended to state monitoring in industry.
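For context, the standard GM(1,1) gray model that the improved Euler algorithm refines can be sketched as follows; the readings are made-up stand-ins for successive spectrometric Fe concentrations, and the improved-Euler correction itself is not reproduced.

```python
# Sketch of the standard GM(1,1) gray forecasting model (the baseline being improved).
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # gray development / control coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # response of dx1/dt + a*x1 = b
    return np.diff(x1_hat, prepend=0.0)[len(x0):]        # inverse AGO -> future forecasts

print(gm11_forecast([2.1, 2.4, 2.8, 3.3, 3.9]))          # illustrative concentration readings
```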
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks
Salim, Shelly; Moh, Sangman
2016-01-01
A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead. PMID:27376290
NASA Astrophysics Data System (ADS)
Liu, Zhijian; Yin, Donghui; Yan, Jun
2017-05-01
Low frequency oscillation still occurs frequently in power systems and directly affects their safety and stability. With the continuous expansion of the interconnection scale of the power grid, the risk of low frequency oscillation becomes increasingly noticeable. First, the basic theory of the port-controlled Hamiltonian (PCH) model and its applications are analyzed. Second, based on the PCH theory and the dynamic model of the system, a nonlinear stability controller for the power system is designed from the viewpoint of energy. The parameters of the PCH model are optimized by an improved genetic algorithm. Finally, a simulation model with PCH is built to verify the effectiveness of the method proposed in this paper.
An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks.
Salim, Shelly; Moh, Sangman
2016-06-30
A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead.
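A sketch of what "spectrum characterization with a Markov chain" could look like in practice: channel occupancy is modelled as a two-state (idle/busy) chain whose transition probabilities are estimated from sensing history, and the stationary idle probability is then used when ranking channels. The state definitions, smoothing, and selection rule below are generic assumptions for illustration, not the EGSD scheme's exact decision logic.

    def estimate_transitions(history):
        # Estimate 2-state (0 = idle, 1 = busy) transition probabilities from a sensing history.
        counts = [[1.0, 1.0], [1.0, 1.0]]            # Laplace smoothing
        for prev, curr in zip(history, history[1:]):
            counts[prev][curr] += 1.0
        return [[c / sum(row) for c in row] for row in counts]

    def stationary_idle_probability(P):
        # Stationary probability of the idle state for a 2-state chain.
        p01, p10 = P[0][1], P[1][0]                  # idle->busy and busy->idle rates
        return p10 / (p01 + p10)

    # Example: prefer the channel with the highest long-run idle probability.
    histories = {
        "ch1": [0, 0, 1, 0, 0, 0, 1, 1, 0, 0],
        "ch2": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    }
    idle = {ch: stationary_idle_probability(estimate_transitions(h)) for ch, h in histories.items()}
    print(max(idle, key=idle.get), idle)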
Modeling Sediment Detention Ponds Using Reactor Theory and Advection-Diffusion Concepts
NASA Astrophysics Data System (ADS)
Wilson, Bruce N.; Barfield, Billy J.
1985-04-01
An algorithm is presented to model the sedimentation process in detention ponds. This algorithm is based on a mass balance for an infinitesimal layer that couples reactor theory concepts with advection-diffusion processes. Reactor theory concepts are used to (1) determine residence time of sediment particles and to (2) mix influent sediment with previously stored flow. Advection-diffusion processes are used to model the (1) settling characteristics of sediment and the (2) vertical diffusion of sediment due to turbulence. Predicted results of the model are compared to those observed on two pilot scale ponds for a total of 12 runs. The average percent error between predicted and observed trap efficiency was 5.2%. Overall, the observed sedimentology values were predicted with reasonable accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weigend, Florian, E-mail: florian.weigend@kit.edu
2014-10-07
Energy surfaces of metal clusters usually show a large variety of local minima. For homo-metallic species the energetically lowest can be found reliably with genetic algorithms, in combination with density functional theory without system-specific parameters. For mixed-metallic clusters this is much more difficult, as for a given arrangement of nuclei one has to find additionally the best of many possibilities of assigning different metal types to the individual positions. In the framework of electronic structure methods this second issue is treatable at comparably low cost at least for elements with similar atomic number by means of first-order perturbation theory, as shown previously [F. Weigend, C. Schrodt, and R. Ahlrichs, J. Chem. Phys. 121, 10380 (2004)]. In the present contribution the extension of a genetic algorithm with the re-assignment of atom types to atom sites is proposed and tested for the search of the global minima of PtHf12 and [LaPb7Bi7]4−. For both cases the (putative) global minimum is reliably found with the extended technique, which is not the case for the “pure” genetic algorithm.
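A stripped-down sketch of the atom-type re-assignment idea: for fixed atomic positions, the genome is the assignment of element types to sites and a mutation swaps two unlike types. For simplicity this is a mutation-only (1+1) evolutionary search rather than the paper's full genetic algorithm, and the toy pair potential stands in for the DFT or first-order perturbation-theory energies; composition and weights are hypothetical.

    import random
    import math

    def toy_energy(positions, types):
        # Placeholder pair energy; a real application would call an electronic structure code here.
        e = 0.0
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                r = math.dist(positions[i], positions[j])
                w = 1.0 if types[i] == types[j] else 0.8   # hypothetical like/unlike weights
                e += w * ((1.0 / r) ** 12 - 2.0 * (1.0 / r) ** 6)
        return e

    def swap_mutation(types):
        # Re-assign atom types by swapping two sites that hold different elements.
        t = list(types)
        i, j = random.sample(range(len(t)), 2)
        while t[i] == t[j]:
            i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
        return t

    def optimise_types(positions, types, generations=200):
        best, best_e = list(types), toy_energy(positions, types)
        for _ in range(generations):
            cand = swap_mutation(best)
            e = toy_energy(positions, cand)
            if e < best_e:
                best, best_e = cand, e
        return best, best_e

    random.seed(0)
    positions = [(random.random() * 3, random.random() * 3, random.random() * 3) for _ in range(12)]
    types = ["Pt"] * 1 + ["Hf"] * 11          # a PtHf12-like composition, positions are random here
    print(optimise_types(positions, types))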
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
NASA Astrophysics Data System (ADS)
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of real-time localization and SLAM for robots based on the ROS system. It proposes an improved particle filter localization algorithm that effectively reduces the time needed to match the laser radar scan against the map; in addition, ultra-wideband technology directly accelerates the global convergence of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by about 5/6, which directly eliminates the corresponding matching operations in the robot's algorithm.
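One standard way to cut down re-sampling in a particle filter is to re-sample only when the effective sample size drops below a threshold; the sketch below assumes that criterion together with systematic (low-variance) resampling. The abstract does not spell out the paper's exact rule behind its roughly 5/6 reduction, so the threshold and state layout here are illustrative.

    import numpy as np

    def effective_sample_size(weights):
        w = weights / weights.sum()
        return 1.0 / np.sum(w ** 2)

    def low_variance_resample(particles, weights, rng):
        # Systematic (low-variance) resampling.
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        cumulative = np.cumsum(weights / weights.sum())
        cumulative[-1] = 1.0
        return particles[np.searchsorted(cumulative, positions)]

    def maybe_resample(particles, weights, rng, threshold_ratio=0.5):
        # Re-sample only when the effective sample size falls below a fraction of N.
        if effective_sample_size(weights) < threshold_ratio * len(weights):
            particles = low_variance_resample(particles, weights, rng)
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights

    rng = np.random.default_rng(1)
    particles = rng.normal(size=(1000, 3))     # x, y, heading
    weights = rng.random(1000)
    particles, weights = maybe_resample(particles, weights, rng)
    print(effective_sample_size(weights))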
Remainder Wheels and Group Theory
ERIC Educational Resources Information Center
Brenton, Lawrence
2008-01-01
Why should prospective elementary and high school teachers study group theory in college? This paper examines applications of abstract algebra to the familiar algorithm for converting fractions to repeating decimals, revealing ideas of surprising substance beneath an innocent facade.
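The familiar algorithm referred to here is ordinary long division, where the sequence of remainders (the "remainder wheel") determines the repeating block; a short sketch, assuming the fraction is given in lowest terms:

    def repeating_decimal(numerator, denominator):
        # Return (integer part, non-repeating digits, repeating digits) of numerator/denominator.
        integer_part, remainder = divmod(numerator, denominator)
        digits, seen = [], {}
        while remainder and remainder not in seen:
            seen[remainder] = len(digits)          # position where this remainder first appeared
            remainder *= 10
            digit, remainder = divmod(remainder, denominator)
            digits.append(str(digit))
        if not remainder:                          # terminating decimal
            return integer_part, "".join(digits), ""
        start = seen[remainder]                    # the remainder wheel closes here
        return integer_part, "".join(digits[:start]), "".join(digits[start:])

    print(repeating_decimal(1, 7))    # (0, '', '142857')
    print(repeating_decimal(5, 12))   # (0, '41', '6')

For a prime denominator p not dividing 10, the length of the repeating block equals the multiplicative order of 10 modulo p, which is where the group theory enters.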
Lee, William H K.
2016-01-01
A complex system consists of many interacting parts, generates new collective behavior through self organization, and adaptively evolves through time. Many theories have been developed to study complex systems, including chaos, fractals, cellular automata, self organization, stochastic processes, turbulence, and genetic algorithms.
Oximetry using multispectral imaging: theory and application
NASA Astrophysics Data System (ADS)
MacKenzie, Lewis E.; Harvey, Andrew R.
2018-06-01
Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms are described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering: the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
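As a concrete illustration of the two-wavelength case, under a simple Beer-Lambert model the optical density satisfies OD(λ) ∝ SO2·εHbO2(λ) + (1 − SO2)·εHb(λ), so the OD ratio at two wavelengths can be inverted for SO2. The sketch below uses placeholder extinction coefficients; real values, and the path-length and scattering corrections discussed in the tutorial, must come from the literature.

    def so2_from_two_wavelengths(od1, od2, eps_hb1, eps_hbo2_1, eps_hb2, eps_hbo2_2):
        # Invert the Beer-Lambert OD ratio R = OD(l1)/OD(l2) for oxygen saturation SO2.
        r = od1 / od2
        d_eps1 = eps_hbo2_1 - eps_hb1
        d_eps2 = eps_hbo2_2 - eps_hb2
        return (r * eps_hb2 - eps_hb1) / (d_eps1 - r * d_eps2)

    # Hypothetical extinction coefficients for an oxygen-sensitive wavelength (1)
    # and a near-isosbestic wavelength (2); not literature values.
    so2 = so2_from_two_wavelengths(od1=0.42, od2=0.50,
                                   eps_hb1=3.0, eps_hbo2_1=1.0,
                                   eps_hb2=2.0, eps_hbo2_2=2.0)
    print(so2)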
Niu, Renjie; Fu, Chenyu; Xu, Zhiyong; Huang, Jianyuan
2016-04-29
Doctors who practice Traditional Chinese Medicine (TCM) diagnose using four methods: inspection, auscultation and olfaction, interrogation, and pulse feeling/palpation. The shape and shape changes of the moon marks on the nails are an important indication when judging the patient's health. There is a series of classical and experiential theories about moon marks in TCM, but they lack support from statistical data. The aim is to verify some experiential theories on moon marks in TCM using automatic data-processing equipment. This paper proposes equipment that utilizes image processing technology to collect moon mark data from different target groups conveniently and quickly, building a database that combines this information with that gathered from the health and mental status questionnaire in each test. This equipment has a simple design, a low cost, and an optimized algorithm. In practice it has been shown to quickly complete automatic acquisition and preservation of key data about moon marks. In the future, some conclusions will likely be obtained from these data, and some changes of moon marks related to specific pathological changes will be established with statistical methods.
Combinatorial and Algorithmic Rigidity: Beyond Two Dimensions
2012-12-01
This project was funded in 2008 under the DARPA solicitation "Mathematical Challenges, BAA 07-68" and addressed Mathematical Challenge Ten: Algorithmic Origami and ... It produced a number of optimal algorithms and provided critical complexity analysis; the topic of algorithmic origami was successfully engaged. Cited work includes G. Panina and I. Streinu, "Flattening single-vertex origami: the non-expansive case," Computational Geometry: Theory and Applications.
Machine Learning for Biological Trajectory Classification Applications
NASA Technical Reports Server (NTRS)
Sbalzarini, Ivo F.; Theriot, Julie; Koumoutsakos, Petros
2002-01-01
Machine-learning techniques, including clustering algorithms, support vector machines and hidden Markov models, are applied to the task of classifying trajectories of moving keratocyte cells. The different algorithms are compared to each other as well as to expert and non-expert test persons, using concepts from signal-detection theory. The algorithms performed very well as compared to humans, suggesting a robust tool for trajectory classification in biological applications.
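A minimal sketch of one of the listed approaches, support vector classification of trajectories, using a few hand-crafted features (mean speed, speed variance, net-to-gross displacement ratio). The features and the synthetic random-walk versus directed-motion data are illustrative assumptions, not those of the study.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def trajectory_features(track):
        # track: (T, 2) array of x, y positions sampled at constant intervals.
        steps = np.diff(track, axis=0)
        speeds = np.linalg.norm(steps, axis=1)
        net = np.linalg.norm(track[-1] - track[0])
        gross = speeds.sum() + 1e-9
        return [speeds.mean(), speeds.var(), net / gross]

    rng = np.random.default_rng(0)
    tracks, labels = [], []
    for _ in range(100):
        drift = rng.normal(size=2) * 0.5                                  # class 1: directed motion
        tracks.append(np.cumsum(rng.normal(size=(50, 2)) + drift, axis=0))
        labels.append(1)
        tracks.append(np.cumsum(rng.normal(size=(50, 2)), axis=0))        # class 0: random walk
        labels.append(0)

    X = np.array([trajectory_features(t) for t in tracks])
    y = np.array(labels)
    print(cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5).mean())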
Santos, José; Monteagudo, Ángel
2017-03-27
The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but how it evolved towards its current form has not been clearly determined. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the nature of the fitness landscape considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not a localized deep minimum of the huge fitness landscape.
A Learning-Style Theory for Understanding Autistic Behaviors
Qian, Ning; Lipkin, Richard M.
2011-01-01
Understanding autism's ever-expanding array of behaviors, from sensation to cognition, is a major challenge. We posit that autistic and typically developing brains implement different algorithms that are better suited to learn, represent, and process different tasks; consequently, they develop different interests and behaviors. Computationally, a continuum of algorithms exists, from lookup table (LUT) learning, which aims to store experiences precisely, to interpolation (INT) learning, which focuses on extracting underlying statistical structure (regularities) from experiences. We hypothesize that autistic and typical brains, respectively, are biased toward LUT and INT learning, in low- and high-dimensional feature spaces, possibly because of their narrow and broad tuning functions. The LUT style is good at learning relationships that are local, precise, rigid, and contain little regularity for generalization (e.g., the name–number association in a phonebook). However, it is poor at learning relationships that are context dependent, noisy, flexible, and do contain regularities for generalization (e.g., associations between gaze direction and intention, language and meaning, sensory input and interpretation, motor-control signal and movement, and social situation and proper response). The LUT style poorly compresses information, resulting in inefficiency, sensory overload (overwhelm), restricted interests, and resistance to change. It also leads to poor prediction and anticipation, frequent surprises and over-reaction (hyper-sensitivity), impaired attentional selection and switching, concreteness, strong local focus, weak adaptation, and superior and inferior performances on simple and complex tasks. The spectrum nature of autism can be explained by different degrees of LUT learning among different individuals, and in different systems of the same individual. Our theory suggests that therapy should focus on training autistic LUT algorithm to learn regularities. PMID:21886617
NASA Astrophysics Data System (ADS)
Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri
2016-04-01
The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, to improve hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send is useful to improve modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of Joint Entropy as a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously obtaining the minimum number of informative and non-redundant sensors needed for a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have a great potential to complement the existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528. [2] Alfonso, L., A. Lobbrecht, and R. Price (2010b), Optimization of water level monitoring network in polder systems using information theory, Water Resour. Res., 46(12), W12553, 10.1029/2009wr008953. [3] Alfonso, L., L. He, A. Lobbrecht, and R. Price (2013), Information theory applied to evaluate the discharge monitoring network of the Magdalena River, Journal of Hydroinformatics, 15(1), 211-228. [4] Alfonso, L., E. Ridolfi, S. Gaytan-Aguilar, F. Napolitano, and F. Russo (2014), Ensemble Entropy for Monitoring Network Design, Entropy, 16(3), 1365-1375.
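A compact sketch of the joint-entropy idea behind the methodology: time series at candidate locations are quantized, and sensors are added greedily so that each new sensor maximizes the joint entropy of the selected set, i.e. adds the most non-redundant information. The 10-bin quantization, the greedy (rather than exact multiobjective) search, and the synthetic series are simplifying assumptions for illustration.

    import numpy as np
    from collections import Counter

    def joint_entropy(series_list, bins=10):
        # Joint entropy (in bits) of quantized time series of equal length.
        quantized = [np.digitize(s, np.histogram_bin_edges(s, bins=bins)) for s in series_list]
        symbols = list(zip(*quantized))                 # one tuple of bin indices per time step
        counts = np.array(list(Counter(symbols).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def greedy_selection(series, k):
        # Pick k series indices that greedily maximize joint entropy.
        selected = []
        for _ in range(k):
            remaining = [i for i in range(len(series)) if i not in selected]
            best = max(remaining, key=lambda i: joint_entropy([series[j] for j in selected + [i]]))
            selected.append(best)
        return selected

    rng = np.random.default_rng(42)
    base = rng.normal(size=500)
    candidates = [base + rng.normal(scale=s, size=500) for s in (0.1, 0.1, 1.0, 2.0)]
    print(greedy_selection(candidates, k=2))   # tends to pick a non-redundant pair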
Dwell time algorithm based on the optimization theory for magnetorheological finishing
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen
2010-10-01
Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time spent dwelling at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on a matrix equation and optimization theory is presented in this paper. The conventional mathematical model of the dwell time is transformed into a matrix equation containing the initial surface error, the removal function and the dwell time function; the dwell time to be calculated is simply the solution to this large, sparse matrix equation. A new mathematical model of the dwell time based on optimization theory is established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because the optimization model takes certain polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment has been performed on the MRF machine developed by ourselves. After 4.7 minutes of polishing, the figure error of a flat workpiece with a 50 mm diameter is improved from PV 0.191λ (λ = 632.8 nm) to 0.087λ and from RMS 0.041λ to 0.010λ. This algorithm can be applied to workpieces of all shapes including flats, spheres, aspheres, and prisms, and it is capable of improving the polishing figures dramatically.
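In matrix form the dwell-time problem is A·t ≈ e with t ≥ 0, where the columns of A hold the removal function shifted to each dwell position, e is the initial surface error, and t is the dwell-time vector. A least-squares sketch using a non-negative solver follows; the Gaussian removal function and sinusoidal error profile are toy stand-ins, and the paper's additional CNC constraints are omitted.

    import numpy as np
    from scipy.optimize import nnls

    n_positions = 60
    x = np.arange(n_positions)

    # Toy Gaussian removal function (material removed per unit dwell time).
    def removal_column(center, width=3.0):
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    A = np.column_stack([removal_column(c) for c in x])    # removal matrix
    e = 0.5 + 0.3 * np.sin(2 * np.pi * x / n_positions)    # initial surface error profile

    t, residual_norm = nnls(A, e)                          # minimize ||A t - e||_2 with t >= 0
    print(t.round(3), residual_norm)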
A lane line segmentation algorithm based on adaptive threshold and connected domain theory
NASA Astrophysics Data System (ADS)
Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang
2018-04-01
Before detecting cracks and repairs on road lanes, it's necessary to eliminate the influence of lane lines on the recognition result in road lane images. Aiming at the problems caused by lane lines, an image segmentation algorithm based on adaptive threshold and connected domain is proposed. First, by analyzing features like grey level distribution and the illumination of the images, the algorithm uses Hough transform to divide the images into different sections and convert them into binary images separately. It then uses the connected domain theory to amend the outcome of segmentation, remove noises and fill the interior zone of lane lines. Experiments have proved that this method could eliminate the influence of illumination and lane line abrasion, removing noises thoroughly while maintaining high segmentation precision.
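A minimal OpenCV sketch of the two stages described above: adaptive thresholding on each region, then connected-component analysis to discard small noise blobs and keep elongated, lane-line-like regions. The block size, constant, and area/aspect criteria are illustrative assumptions rather than the paper's tuned values, and the Hough-based sectioning step is omitted.

    import cv2
    import numpy as np

    def segment_lane_lines(gray, block_size=31, c=-5, min_area=200):
        # gray: single-channel road image. Returns a binary mask of candidate lane lines.
        binary = cv2.adaptiveThreshold(gray, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, block_size, c)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        mask = np.zeros_like(binary)
        for i in range(1, n):                       # label 0 is the background
            area = stats[i, cv2.CC_STAT_AREA]
            w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
            if area >= min_area and max(w, h) / (min(w, h) + 1e-9) > 3:   # keep elongated blobs
                mask[labels == i] = 255
        return mask

    # Usage (the file path is illustrative):
    # gray = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)
    # mask = segment_lane_lines(gray)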
Algorithmes et architectures pour ordinateurs quantiques supraconducteurs
NASA Astrophysics Data System (ADS)
Blais, A.
2003-09-01
Algorithms and architectures for superconducting quantum computers. Since its formulation, information theory has been based, implicitly, on the laws of classical physics. Such a formulation is incomplete, however, because it does not take quantum reality into account. During the last twenty years, the extension of information theory to include quantum effects has attracted growing interest. The practical realization of a system for quantum information processing, a quantum computer, nevertheless presents many challenges. In this work, we are interested in various aspects of these challenges. We start by presenting algorithmic concepts such as the optimization of quantum computations and geometric quantum computation. We then consider various designs and aspects of qubits based on Josephson junctions. In particular, an original approach to the interaction between superconducting qubits is presented. This approach is very general, since it can be applied to various qubit designs. Finally, we are interested in the read-out of superconducting flux qubits. The detector suggested here has the advantage that it can be decoupled from the qubit when no measurement is in progress.
High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
2012-01-01
The difficulties and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characteristic model of information perception and exploits the fly visual system's advantages of fast and accurate small target detection in complex and varied natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion of the fly visual system, the function of the pool cell, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.
An Image Encryption Algorithm Based on Information Hiding
NASA Astrophysics Data System (ADS)
Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu
Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, following the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, giving the scheme the character of a “one-time pad” and markedly improving the security of the algorithm without a significant increase in its complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids any negotiation for its transport and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
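The "one-time pad" idea can be illustrated generically: derive a fresh keystream from the secret key plus a random per-encryption parameter (a nonce), encrypt by XOR, and hide the nonce in the ciphered image itself so that no separate transmission is needed. This sketch uses SHA-256 as a stand-in keystream generator and returns the nonce instead of embedding it; it is not the chaotic cipher or the hiding scheme of the paper.

    import hashlib
    import os

    def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Counter-mode keystream from SHA-256 (illustrative, not the paper's chaotic map).
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt_pixels(pixels: bytes, key: bytes):
        nonce = os.urandom(16)                       # fresh random parameter per encryption
        ks = keystream(key, nonce, len(pixels))
        cipher = bytes(p ^ k for p, k in zip(pixels, ks))
        # In the information-hiding step the nonce would be embedded into the least
        # significant bits of selected cipher pixels; here it is simply returned.
        return nonce, cipher

    key = b"sixteen byte key"
    nonce, cipher = encrypt_pixels(bytes(range(256)), key)
    plain = bytes(c ^ k for c, k in zip(cipher, keystream(key, nonce, len(cipher))))
    print(plain == bytes(range(256)))                # True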
Rausch, Tobias; Thomas, Alun; Camp, Nicola J.; Cannon-Albright, Lisa A.; Facelli, Julio C.
2008-01-01
This paper describes a novel algorithm to analyze genetic linkage data using pattern recognition techniques and genetic algorithms (GA). The method allows a search for regions of the chromosome that may contain genetic variations that jointly predispose individuals for a particular disease. The method uses correlation analysis, filtering theory and genetic algorithms (GA) to achieve this goal. Because current genome scans use from hundreds to hundreds of thousands of markers, two versions of the method have been implemented. The first is an exhaustive analysis version that can be used to visualize, explore, and analyze small genetic data sets for two marker correlations; the second is a GA version, which uses a parallel implementation allowing searches of higher-order correlations in large data sets. Results on simulated data sets indicate that the method can be informative in the identification of major disease loci and gene-gene interactions in genome-wide linkage data and that further exploration of these techniques is justified. The results presented for both variants of the method show that it can help genetic epidemiologists to identify promising combinations of genetic factors that might predispose to complex disorders. In particular, the correlation analysis of IBD expression patterns might hint to possible gene-gene interactions and the filtering might be a fruitful approach to distinguish true correlation signals from noise. PMID:18547558
The Theory of Everything and the future of life
NASA Astrophysics Data System (ADS)
Karthik, Trishank
2004-10-01
This paper is a philosophical essay on metaphysics, in which we develop a justification for Algorithmic Communication with Extraterrestrial Intelligence by considering the relationship between the Theory of Everything and the future of life or physical eschatology.
Development of a teaching system for an industrial robot using stereo vision
NASA Astrophysics Data System (ADS)
Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki
1997-12-01
The teaching and playback method is the main teaching technique for industrial robots; however, it takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and the test data have confirmed the usefulness of our design.
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun
2017-06-01
Considering the time dependence of emergency logistics networks and the complexity of the environment in which such networks operate, this paper combines time-dependent network optimization theory and robust discrete optimization theory to build a robust dynamic network optimization model for emergency logistics that maximizes its timeliness. On this basis, considering the complexity of the dynamic network and the time dependence of the edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the network's time dependence and robustness. Finally, a case study was carried out to verify the validity of this robust optimization model and its algorithm, and the values of different regulation factors were analyzed, given the importance of the control factor in solving for the optimal path. The analysis results show that the model and algorithm have good timeliness and strong robustness.
Ravichandran, M; Kulanthaivel, G; Chellatamilan, T
2015-01-01
Every day, huge numbers of instant tweets (messages) are published on Twitter, one of the major social media platforms for e-learner interaction. Learners and teachers discuss various topics of interest to be studied, and these discussions can be captured from suitable sources on Twitter. The common sentiment behaviour towards these topics is conveyed through the massive number of instant messages about them. In this paper, rather than using the opinion polarity of each message relevant to the topic, the authors focus on sentence-level opinion classification using the unsupervised algorithm named bigram item response theory (BIRT). It differs from traditional classification and document-level classification algorithms. The investigation illustrated in this paper is threefold: (1) lexicon-based sentiment polarity of tweet messages; (2) the bigram co-occurrence relationship using naïve Bayes; (3) the bigram item response theory (BIRT) on various topics. A model using item response theory is constructed for topical classification inference. The performance has been improved remarkably using this bigram item response theory when compared with other supervised algorithms. The experiment has been conducted on a real-life dataset containing different sets of tweets and topics.
Evidence for an inhibitory-control theory of the reasoning brain
Houdé, Olivier; Borst, Grégoire
2015-01-01
In this article, we first describe our general inhibitory-control theory and, then, we describe how we have tested its specific hypotheses on reasoning with brain imaging techniques in adults and children. The innovative part of this perspective lies in its attempt to come up with a brain-based synthesis of Jean Piaget’s theory on logical algorithms and Daniel Kahneman’s theory on intuitive heuristics. PMID:25852528
Information Clustering Based on Fuzzy Multisets.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki
2003-01-01
Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…
Do All Roads Lead to Rome? ("or" Reductions for Dummy Travelers)
ERIC Educational Resources Information Center
Kilpelainen, Pekka
2010-01-01
Reduction is a central ingredient of computational thinking, and an important tool in algorithm design, in computability theory, and in complexity theory. Reduction has been recognized to be a difficult topic for students to learn. Previous studies on teaching reduction have concentrated on its use in special courses on the theory of computing. As…
Local Higher-Order Graph Clustering
Yin, Hao; Benson, Austin R.; Leskovec, Jure; Gleich, David F.
2018-01-01
Local graph clustering methods aim to find a cluster of nodes by exploring a small region of the graph. These methods are attractive because they enable targeted clustering around a given seed node and are faster than traditional global graph clustering methods because their runtime does not depend on the size of the input graph. However, current local graph partitioning methods are not designed to account for the higher-order structures crucial to the network, nor can they effectively handle directed networks. Here we introduce a new class of local graph clustering methods that address these issues by incorporating higher-order network information captured by small subgraphs, also called network motifs. We develop the Motif-based Approximate Personalized PageRank (MAPPR) algorithm that finds clusters containing a seed node with minimal motif conductance, a generalization of the conductance metric for network motifs. We generalize existing theory to prove the fast running time (independent of the size of the graph) and obtain theoretical guarantees on the cluster quality (in terms of motif conductance). We also develop a theory of node neighborhoods for finding sets that have small motif conductance, and apply these results to the case of finding good seed nodes to use as input to the MAPPR algorithm. Experimental validation on community detection tasks in both synthetic and real-world networks, shows that our new framework MAPPR outperforms the current edge-based personalized PageRank methodology. PMID:29770258
Beyond the "c" and the "x": Learning with algorithms in massive open online courses (MOOCs)
NASA Astrophysics Data System (ADS)
Knox, Jeremy
2018-02-01
This article examines how algorithms are shaping student learning in massive open online courses (MOOCs). Following the dramatic rise of MOOC platform organisations in 2012, over 4,500 MOOCs have been offered to date, in increasingly diverse languages, and with a growing requirement for fees. However, discussions of learning in MOOCs remain polarised around the "xMOOC" and "cMOOC" designations. In this narrative, the more recent extended or platform MOOC ("xMOOC") adopts a broadcast pedagogy, assuming a direct transmission of information to its largely passive audience (i.e. a teacher-centred approach), while the slightly older connectivist model ("cMOOC") offers only a simplistic reversal of the hierarchy, posing students as highly motivated, self-directed and collaborative learners (i.e. a learner-centred approach). The online nature of both models generates data (e.g. on how many times a particular resource was viewed, or the ways in which participants communicated with each other) which MOOC providers use for analysis, albeit only after these data have been selectively processed. Central to many learning analytics approaches is the desire to predict students' future behaviour. Educators need to be aware that MOOC learning is not just about teachers and students, but that it also involves algorithms: instructions which perform automated calculations on data. Education is becoming embroiled in an "algorithmic culture" that defines educational roles, forecasts attainment, and influences pedagogy. Established theories of learning appear wholly inadequate in addressing the agential role of algorithms in the educational domain of the MOOC. This article identifies and examines four key areas where algorithms influence the activities of the MOOC: (1) data capture and discrimination; (2) calculated learners; (3) feedback and entanglement; and (4) learning with algorithms. The article concludes with a call for further research in these areas to surface a critical discourse around the use of algorithms in MOOC education and beyond.
2010-01-01
Background Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and WWW visualization, among other applications, because of the harmonized appearance of its results. For pathway drawing, location information is essential for its comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. Results We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, the mapping of nodes to grid points that minimizes a cost function is searched. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. Conclusions The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways: an endothelial cell model, a Fas-induced apoptosis model, and a C. elegans cell fate simulation model. Owing to the positional constraints, all the results of our algorithm satisfy the location information, and hence more comprehensible layouts are obtained as compared to the spring embedder algorithm. From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to the positional constraints. For a fair comparison, we also apply our proposed method without positional constraints. This comparison shows that these results contain fewer crossings than those of the spring embedder algorithm. We also compared layouts of the proposed algorithm with and without compartment update and verified that the latter can reach better local optima. PMID:20565884
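For readers unfamiliar with the spring embedder component, a minimal force-directed update is sketched below: adjacent nodes attract like springs, all pairs repel, and positions are nudged along the net force each iteration. The grid mapping, positional (compartment) constraints, and crossing handling of the proposed algorithm are not included, and the constants and toy graph are arbitrary.

    import numpy as np

    def spring_embedder_step(pos, edges, spring_len=1.0, k_attract=0.05, k_repel=0.02):
        # One force-directed update of 2-D node positions (pos: dict node -> np.array of length 2).
        force = {v: np.zeros(2) for v in pos}
        nodes = list(pos)
        for i, u in enumerate(nodes):                      # pairwise repulsion
            for v in nodes[i + 1:]:
                d = pos[u] - pos[v]
                dist = np.linalg.norm(d) + 1e-9
                f = k_repel * d / dist ** 2
                force[u] += f
                force[v] -= f
        for u, v in edges:                                 # spring attraction along edges
            d = pos[v] - pos[u]
            dist = np.linalg.norm(d) + 1e-9
            f = k_attract * (dist - spring_len) * d / dist
            force[u] += f
            force[v] -= f
        return {v: pos[v] + np.clip(force[v], -0.1, 0.1) for v in pos}   # capped step

    rng = np.random.default_rng(0)
    pos = {v: rng.random(2) for v in "abcd"}
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
    for _ in range(200):
        pos = spring_embedder_step(pos, edges)
    print({v: p.round(2) for v, p in pos.items()})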
Online Performance-Improvement Algorithms
1994-08-01
fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel data compression scheme (J. Ziv and A. Lempel, Compression of individual sequences via variable-rate coding, IEEE Trans. Inf. Theory, 24:530-536, 1978). Deferred data structuring: recall that the incremental multi-trip algorithm spreads the building of the fence-tree over several trips.
Adaptive Two Dimensional RLS (Recursive Least Squares) Algorithms
1989-03-01
Adaptive algorithms have been used successfully for many years in a wide range of digital signal processing applications. The 2-D FRLS algorithm was tested both on computer-generated data and on digitized images; the 2-D LMS algorithm served as a baseline reference.
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
The objective of this project was to develop a general CFD framework for multifidelity simulations targeting multiscale problems as well as resilience. Subject terms include: simulation, domain decomposition, CFD, gappy data, estimation theory, and gap-tooth algorithm. Related publication: Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamics Research, vol. 47, 051402, 2015.
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
Novel Methods in Disease Biogeography: A Case Study with Heterosporosis
Escobar, Luis E.; Qiao, Huijie; Lee, Christine; Phelps, Nicholas B. D.
2017-01-01
Disease biogeography is currently a promising field to complement epidemiology, and ecological niche modeling theory and methods are a key component. Therefore, applying the concepts and tools from ecological niche modeling to disease biogeography and epidemiology will provide biologically sound and analytically robust descriptive and predictive analyses of disease distributions. As a case study, we explored the ecologically important fish disease Heterosporosis, a relatively poorly understood disease caused by the intracellular microsporidian parasite Heterosporis sutherlandae. We explored two novel ecological niche modeling methods, the minimum-volume ellipsoid (MVE) and the Marble algorithm, which were used to reconstruct the fundamental and the realized ecological niche of H. sutherlandae, respectively. Additionally, we assessed how the management of occurrence reports can impact the output of the models. Ecological niche models were able to reconstruct a proxy of the fundamental and realized niche for this aquatic parasite, identifying specific areas suitable for Heterosporosis. We found that the conceptual and methodological advances in ecological niche modeling provide accessible tools to update the current practices of spatial epidemiology. However, careful data curation and a detailed understanding of the algorithm employed are critical for a clear definition of the assumptions implicit in the modeling process and to ensure biologically sound forecasts. In this paper, we show how sensitive MVE is to the input data, while Marble algorithm may provide detailed forecasts with a minimum of parameters. We showed that exploring algorithms of different natures such as environmental clusters, climatic envelopes, and logistic regressions (e.g., Marble, MVE, and Maxent) provide different scenarios of potential distribution. Thus, no single algorithm should be used for disease mapping. Instead, different algorithms should be employed for a more informed and complete understanding of the pathogen or parasite in question. PMID:28770215
Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing
NASA Astrophysics Data System (ADS)
Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan
2018-04-01
Compressed sensing (CS) techniques allow for faster imaging when combined with scanning architectures, which typically suffer from low acquisition speed. When implemented with a sub-terahertz (sub-THz) single-detector scanning imaging system, this technique provides images whose resolution is limited only by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates, especially if it is done in two dimensions across the image plane. We show that by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected oversampled image of a target placed in a standoff configuration from the source. The experiments are done in a reflection-mode configuration where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D, coupled with a modification of the masks in CS theory, allowed oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated signals and real signals. It is found that the modified CS theory approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid, precise reconstruction of the oversampled target.
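A generic sketch of the CS reconstruction step (not TVAL3 or ℓ1-MAGIC themselves): with measurements y = Φx of a sparse image x, iterative soft-thresholding (ISTA) recovers x from far fewer measurements than pixels. The ±1 mask matrix, sparsity level, regularization weight, and step size below are illustrative assumptions.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(phi, y, lam=0.05, iters=300):
        # Minimize 0.5*||phi @ x - y||^2 + lam*||x||_1 by iterative soft-thresholding.
        step = 1.0 / np.linalg.norm(phi, 2) ** 2           # 1 / Lipschitz constant
        x = np.zeros(phi.shape[1])
        for _ in range(iters):
            x = soft_threshold(x - step * phi.T @ (phi @ x - y), lam * step)
        return x

    rng = np.random.default_rng(3)
    n, m, k = 400, 120, 8                                   # pixels, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m) # random +/-1 measurement patterns
    y = phi @ x_true
    x_hat = ista(phi, y)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative reconstruction error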