An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
Teaching Non-Recursive Binary Searching: Establishing a Conceptual Framework.
ERIC Educational Resources Information Center
Magel, E. Terry
1989-01-01
Discusses problems associated with teaching non-recursive binary searching in computer language classes, and describes a teacher-directed dialog based on dictionary use that helps students use their previous searching experiences to conceptualize the binary search process. Algorithmic development is discussed and appropriate classroom discussion…
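The non-recursive search being taught can be written directly, for instance in Python (a generic illustration, not material from the article):

```python
def binary_search(sorted_items, target):
    """Iterative (non-recursive) binary search.

    Returns the index of target in sorted_items, or -1 if absent.
    Mirrors the dictionary-lookup intuition: repeatedly halve the
    region that could still contain the word being looked up.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only lie in the upper half
        else:
            hi = mid - 1   # target can only lie in the lower half
    return -1
```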
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helinski, Ryan
This Python package provides high-performance implementations of the functions and examples presented in "BiEntropy - The Approximate Entropy of a Finite Binary String" by Grenville J. Croll, presented at ANPA 34 in 2013 (https://arxiv.org/abs/1305.0954). According to the paper, BiEntropy is "a simple algorithm which computes the approximate entropy of a finite binary string of arbitrary length" using "a weighted average of the Shannon Entropies of the string and all but the last binary derivative of the string."
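A minimal sketch of the construction described, under our reading of the definition: the binary derivative is the XOR of adjacent bits, and the Shannon entropy of each derivative level is averaged with weights 2**k. The weighting and normalisation are our assumptions; the reference implementation is the package itself.

```python
import math

def shannon(p):
    """Shannon entropy (bits) of a Bernoulli(p) source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binary_derivative(bits):
    """XOR of each adjacent pair of bits; one bit shorter than the input."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

def bientropy(bits):
    """Weighted average of the Shannon entropies of the string and all
    but the last of its binary derivatives (assumed 2**k weights)."""
    n = len(bits)
    total, weight_sum = 0.0, 0.0
    s = list(bits)
    for k in range(n - 1):       # the string itself, then derivatives
        p = sum(s) / len(s)
        w = 2.0 ** k
        total += w * shannon(p)
        weight_sum += w
        s = binary_derivative(s)
    return total / weight_sum
```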
Rényi entropy measure of noise-aided information transmission in a binary channel.
Chapeau-Blondeau, François; Rousseau, David; Delahaies, Agnès
2010-05-01
This paper analyzes a binary channel by means of information measures based on the Rényi entropy. The analysis extends, and contains as a special case, the classic reference model of binary information transmission based on the Shannon entropy measure. The extended model is used to investigate further possibilities and properties of stochastic resonance or noise-aided information transmission. The results demonstrate that stochastic resonance occurs in the information channel and is registered by the Rényi entropy measures at any finite order, including the Shannon order. Furthermore, under definite conditions, when seeking the Rényi information measures that best exploit stochastic resonance, nontrivial orders differing from the Shannon case usually emerge. In this way, through binary information transmission, stochastic resonance identifies optimal Rényi measures of information differing from the classic Shannon measure. A comparison of the quantitative information measures with visual perception is also proposed in an experiment of noise-aided binary image transmission.
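For reference, the order-α Rényi entropy of a binary source, which reduces to the Shannon entropy in the limit α → 1, can be computed directly (a generic formula, not the paper's channel model; the function name is ours):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy (bits) of order alpha for a binary source with
    symbol probabilities (p, 1 - p).  alpha = 1 gives the Shannon
    entropy, recovered here as the limiting case."""
    if p in (0.0, 1.0):
        return 0.0
    q = 1.0 - p
    if alpha == 1.0:              # Shannon entropy as the alpha -> 1 limit
        return -p * math.log2(p) - q * math.log2(q)
    return math.log2(p ** alpha + q ** alpha) / (1.0 - alpha)
```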
Binary Disassembly Block Coverage by Symbolic Execution vs. Recursive Descent
2012-03-01
Explores the effectiveness of symbolic execution on packed or obfuscated samples of the same binaries to generate a model-based evaluation of success. (The remaining text of this record is table-of-contents and figure-caption residue covering packing techniques and the inner workings of UPX, the Universal Packer for eXecutables, a common packing tool, on a Windows binary.)
Binary tree eigen solver in finite element analysis
NASA Technical Reports Server (NTRS)
Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.
1993-01-01
This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of a number of associative operations on an arbitrary set of N elements takes on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed with the transputer to address parallel programming constructs and to provide communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of the least-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem is in close agreement with the theoretical prediction given by the method of recursive doubling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
Exact analytical solution of irreversible binary dynamics on networks.
Laurence, Edward; Young, Jean-Gabriel; Melnik, Sergey; Dubé, Louis J
2018-03-01
In binary cascade dynamics, the nodes of a graph are in one of two possible states (inactive, active), and nodes in the inactive state make an irreversible transition to the active state as soon as their precursors satisfy a predetermined condition. We introduce a set of recursive equations to compute the probability of reaching any final state, given an initial state and a specification of the transition probability function of each node. Because the naive recursive approach for solving these equations takes factorial time in the number of nodes, we also introduce an accelerated algorithm, built around a breadth-first search procedure. This algorithm solves the equations as efficiently as possible, in exponential time.
Exact analytical solution of irreversible binary dynamics on networks
NASA Astrophysics Data System (ADS)
Laurence, Edward; Young, Jean-Gabriel; Melnik, Sergey; Dubé, Louis J.
2018-03-01
In binary cascade dynamics, the nodes of a graph are in one of two possible states (inactive, active), and nodes in the inactive state make an irreversible transition to the active state as soon as their precursors satisfy a predetermined condition. We introduce a set of recursive equations to compute the probability of reaching any final state, given an initial state and a specification of the transition probability function of each node. Because the naive recursive approach for solving these equations takes factorial time in the number of nodes, we also introduce an accelerated algorithm, built around a breadth-first search procedure. This algorithm solves the equations as efficiently as possible, in exponential time.
Binary recursive partitioning: background, methods, and application to psychology.
Merkle, Edgar C; Shaffer, Victoria A
2011-02-01
Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
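The heart of BRP, greedily choosing the split that best predicts the response, can be sketched in a few lines (a didactic toy for one numeric predictor with a squared-error criterion; real BRP software such as the CART family handles multiple predictors, classification, and pruning):

```python
def best_split(x, y):
    """Find the binary split point of predictor x that minimises the
    summed squared error of the two resulting mean predictions.
    Applying this recursively to each side grows the decision tree."""
    best_sse, best_cut = float("inf"), None
    for cut in sorted(set(x))[1:]:            # candidate split points
        left = [yi for xi, yi in zip(x, y) if xi < cut]
        right = [yi for xi, yi in zip(x, y) if xi >= cut]
        sse = 0.0
        for side in (left, right):
            m = sum(side) / len(side)         # mean prediction per side
            sse += sum((yi - m) ** 2 for yi in side)
        if sse < best_sse:
            best_sse, best_cut = sse, cut
    return best_cut
```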
Entropy coders for image compression based on binary forward classification
NASA Astrophysics Data System (ADS)
Yoo, Hoon; Jeong, Jechang
2000-12-01
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
Entropy production and optimization of geothermal power plants
NASA Astrophysics Data System (ADS)
Michaelides, Efstathios E.
2012-09-01
Geothermal power plants are currently producing reliable and low-cost, base load electricity. Three basic types of geothermal power plants are currently in operation: single-flashing, dual-flashing, and binary power plants. Typically, the single-flashing and dual-flashing geothermal power plants utilize geothermal water (brine) at temperatures in the range of 550-430 K. Binary units utilize geothermal resources at lower temperatures, typically 450-380 K. The entropy production in the various components of the three types of geothermal power plants determines the efficiency of the plants. It is axiomatic that a lower entropy production would improve significantly the energy utilization factor of the corresponding power plant. For this reason, the entropy production in the major components of the three types of geothermal power plants has been calculated. It was observed that binary power plants generate the lowest amount of entropy and, thus, convert the highest rate of geothermal energy into mechanical energy. The single-flashing units generate the highest amount of entropy, primarily because they re-inject fluid at relatively high temperature. The calculations for entropy production provide information on the equipment where the highest irreversibilities occur, and may be used to optimize the design of geothermal processes in future geothermal power plants and thermal cycles used for the harnessing of geothermal energy.
Algorithmic information theory and the hidden variable question
NASA Technical Reports Server (NTRS)
Fuchs, Christopher
1992-01-01
The admissibility of certain nonlocal hidden-variable theories is examined via information theory. Consider a pair of Stern-Gerlach devices with fixed nonparallel orientations that periodically perform spin measurements on identically prepared pairs of electrons in the singlet spin state. Suppose the outcomes are recorded as binary strings l and r (with l_n and r_n denoting their n-length prefixes). The hidden-variable theories considered here require that there exist a recursive function which may be used to transform l_n into r_n for any n. This note demonstrates that such a theory cannot reproduce all the statistical predictions of quantum mechanics. Specifically, consider an ensemble of outcome pairs (l, r). From the associated probability measure, the Shannon entropies H_n and H̄_n for strings l_n and pairs (l_n, r_n) may be formed. It is shown that such a theory requires |H̄_n - H_n| to be bounded, contrasting with the quantum mechanical prediction that it grows with n.
Entropy of finite random binary sequences with weak long-range correlations.
Melnik, S S; Usatenko, O V
2014-11-01
We study the N-step binary stationary ergodic Markov chain and analyze its differential entropy. Supposing that the correlations are weak we express the conditional probability function of the chain through the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses the two-point correlators instead of the block probability, it makes it possible to calculate the entropy of strings at much longer distances than using standard methods. A fluctuation contribution to the entropy due to finiteness of random chains is examined. This contribution can be of the same order as its regular part even at the relatively short lengths of subsequences. A self-similar structure of entropy with respect to the decimation transformations is revealed for some specific forms of the pair correlation function. Application of the theory to the DNA sequence of the R3 chromosome of Drosophila melanogaster is presented.
Entropy of finite random binary sequences with weak long-range correlations
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2014-11-01
We study the N -step binary stationary ergodic Markov chain and analyze its differential entropy. Supposing that the correlations are weak we express the conditional probability function of the chain through the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses the two-point correlators instead of the block probability, it makes it possible to calculate the entropy of strings at much longer distances than using standard methods. A fluctuation contribution to the entropy due to finiteness of random chains is examined. This contribution can be of the same order as its regular part even at the relatively short lengths of subsequences. A self-similar structure of entropy with respect to the decimation transformations is revealed for some specific forms of the pair correlation function. Application of the theory to the DNA sequence of the R3 chromosome of Drosophila melanogaster is presented.
A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2013-02-01
Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
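The BRGC under examination is easy to generate with the standard bit trick, and its separability weakness is directly visible: adjacent interval labels always differ in one bit, yet labels of intervals far apart can also sit at Hamming distance 1 (a generic illustration; the function names are ours, not the paper's code):

```python
def brgc(i):
    """Binary Reflected Gray Code of index i (standard bit trick)."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Hamming distance between the binary forms of two integers."""
    return bin(a ^ b).count("1")
```

For example, indices 0 and 3 are three quantization intervals apart, but their Gray codes 0b00 and 0b10 differ in only one bit, so the discrete distance is not preserved in the Hamming space, which is the inadequacy the paper targets.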
Experimental evidence for excess entropy discontinuities in glass-forming solutions.
Lienhard, Daniel M; Zobrist, Bernhard; Zuend, Andreas; Krieger, Ulrich K; Peter, Thomas
2012-02-21
Glass transition temperatures Tg are investigated in aqueous binary and multi-component solutions consisting of citric acid, calcium nitrate (Ca(NO3)2), malonic acid, raffinose, and ammonium bisulfate (NH4HSO4) using a differential scanning calorimeter. Based on measured glass transition temperatures of binary aqueous mixtures and fitted binary coefficients, the Tg of multi-component systems can be predicted using mixing rules. However, the experimentally observed Tg in multi-component solutions show considerable deviations from the two theoretical approaches considered. The deviations from these predictions are explained in terms of the molar excess mixing entropy difference between the supercooled liquid and glassy state at Tg. The multi-component mixtures involve contributions to these excess mixing entropies that the mixing rules do not take into account. © 2012 American Institute of Physics.
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. While a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling and the restricted set of input variables available for selection. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
Binarized cross-approximate entropy in crowdsensing environment.
Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana
2017-01-01
Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of a local analysis methodology, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery-operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
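The construction can be sketched as an ApEn-style statistic on binary vectors under the Hamming distance. This is our simplified reading of the procedure, not the authors' implementation; the differential encoding rule and parameter choices here are illustrative assumptions.

```python
import math

def binarize(series):
    """Differential binary encoding: 1 for an increase, 0 otherwise
    (an assumed encoding rule for illustration)."""
    return [1 if b > a else 0 for a, b in zip(series, series[1:])]

def bin_apen(bits, m, r):
    """ApEn-style statistic on binary m-vectors, counting vectors
    within Hamming distance r, in the spirit of (X)BinEn."""
    def phi(k):
        vecs = [tuple(bits[i:i + k]) for i in range(len(bits) - k + 1)]
        logsum = 0.0
        for v in vecs:
            near = sum(1 for w in vecs
                       if sum(a != b for a, b in zip(v, w)) <= r)
            logsum += math.log(near / len(vecs))
        return logsum / len(vecs)
    return phi(m) - phi(m + 1)
```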
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract the target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae for Arimoto entropy dual-threshold selection were calculated by recursion to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on tent mapping. The fast search for two optimal thresholds was achieved using the improved bee colony optimization algorithm, noticeably accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as the multi-threshold segmentation method using maximum Shannon entropy, the two-dimensional Shannon entropy segmentation method, the two-dimensional Tsallis gray entropy segmentation method, and the multi-threshold segmentation method using reciprocal gray entropy, the proposed method segments the target more quickly and accurately, with superior segmentation effect. It proves to be a fast and effective method for image segmentation.
Kevlar: Transitioning Helix from Research to Practice
2015-04-01
Protective transformations are applied to application binaries before they are deployed. Salient features of Kevlar include applying high-entropy randomization techniques, automated program repairs, and leveraging highly-optimized virtual machine technology. Kevlar uses novel, fine-grained, high-entropy diversification transformations to prevent an attacker from successfully exploiting vulnerabilities across a variety of classes.
A measurement of disorder in binary sequences
NASA Astrophysics Data System (ADS)
Gong, Longyan; Wang, Haihong; Cheng, Weiwen; Zhao, Shengmei
2015-03-01
We propose a complex quantity, AL, to characterize the degree of disorder of L-length binary symbolic sequences. As examples, we apply it to typical random and deterministic sequences. One kind of random sequence is generated from a periodic binary sequence and the other from the logistic map. The deterministic sequences are the Fibonacci and Thue-Morse sequences. In these analyzed sequences, we find that the modulus of AL, denoted by |AL|, is a (statistically) equivalent quantity to the Boltzmann entropy, the metric entropy, the conditional block entropy, and/or other quantities, so it is a useful quantitative measure of disorder. It can serve as a fruitful index to discern which sequence is more disordered. Moreover, there is one and only one value of |AL| for the overall disorder characteristics. It requires extremely low computational cost and can be easily realized experimentally. For all these reasons, we believe that the proposed measure of disorder is a valuable complement to existing ones for symbolic sequences.
Cascade control of superheated steam temperature with neuro-PID controller.
Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang
2012-11-01
In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller in addition to the sequence of tracking errors in the outer-loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed. The implementation procedures of the proposed cascade control approach are summarized. Compared with a neuro-PID controller minimizing a squared-error criterion, the proposed neuro-PID controller minimizing the error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
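The density-estimation step underlying such error-entropy criteria can be illustrated with a plain Parzen-window entropy estimate (a generic estimator in the spirit of the approach, not the paper's recursive control law; the function name and bandwidth are our assumptions):

```python
import math

def parzen_entropy(errors, sigma=0.5):
    """Sample estimate of the entropy of a tracking-error sample:
        H ~= -(1/N) * sum_i log p_hat(e_i),
    where p_hat is a Parzen-window density with Gaussian kernel
    of bandwidth sigma."""
    n = len(errors)
    norm = 1.0 / (math.sqrt(2 * math.pi) * sigma)

    def p_hat(e):
        # Gaussian kernel density estimate at point e
        return sum(norm * math.exp(-0.5 * ((e - ei) / sigma) ** 2)
                   for ei in errors) / n

    return -sum(math.log(p_hat(e)) for e in errors) / n
```

A controller trained under this criterion would adjust its parameters to push this estimate down, concentrating the error distribution rather than merely shrinking its variance.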
2012-01-01
Background: Detecting the borders between coding and non-coding regions is an essential step in genome annotation, and information entropy measures are useful for describing the signals in genome sequences. However, the accuracy of previous methods for finding borders based on entropy segmentation still needs to be improved. Methods: In this study, we first applied a new recursive entropic segmentation method to DNA sequences to obtain preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop codon patterns along the three phases in both DNA strands. This process requires no prior training datasets. Results: Compared with previous segmentation methods, the experimental results on three bacterial genomes, Rickettsia prowazekii, Borrelia burgdorferi and E. coli, show that our approach improves the accuracy of finding the borders between coding and non-coding regions in DNA sequences. Conclusions: This paper presents a new segmentation method for prokaryotes based on Jensen-Rényi divergence with a 22-symbol alphabet. For three bacterial genomes, compared to the A12_JR method, our method raised the accuracy of finding the borders between protein-coding and non-coding regions in DNA sequences. PMID:23282225
Time-series analysis of sleep wake stage of rat EEG using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Shinba, Toshikazu; Mugishima, Go; Haraguchi, Hikaru; Inoue, Masayoshi
2008-05-01
We performed electroencephalography (EEG) for six male Wistar rats to clarify temporal behaviors at different levels of consciousness. Levels were identified both by conventional sleep analysis methods and by our novel entropy method. In our method, time-dependent pattern entropy is introduced, by which EEG is reduced to binary symbolic dynamics and the pattern of symbols in a sliding temporal window is considered. A high correlation was obtained between level of consciousness as measured by the conventional method and mean entropy in our entropy method. Mean entropy was maximal while awake (stage W) and decreased as sleep deepened. These results suggest that time-dependent pattern entropy may offer a promising method for future sleep research.
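The binarise-then-window idea can be sketched as follows (an illustrative reconstruction from the abstract; the median threshold, window size, and pattern length are our assumptions, not the authors' settings):

```python
import math
from collections import Counter

def pattern_entropy(signal, window, m=3):
    """Time-dependent pattern entropy: binarise the signal around its
    median, then compute the Shannon entropy of m-bit patterns inside
    each sliding window.  Returns one entropy value per window."""
    med = sorted(signal)[len(signal) // 2]
    bits = [1 if x >= med else 0 for x in signal]
    entropies = []
    for start in range(len(bits) - window + 1):
        chunk = bits[start:start + window]
        pats = Counter(tuple(chunk[i:i + m])
                       for i in range(window - m + 1))
        total = sum(pats.values())
        h = -sum(c / total * math.log2(c / total) for c in pats.values())
        entropies.append(h)
    return entropies
```

Tracking this entropy over time is the analogue of the paper's observation: higher values while awake, lower values as sleep deepens.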
NASA Technical Reports Server (NTRS)
Owre, Sam; Shankar, Natarajan
1997-01-01
PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
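The insert and search operations verified in the report have familiar executable counterparts; a plain Python rendering for orientation (the actual development is in the PVS specification language, where these are defined over the abstract datatype and proved correct):

```python
class Node:
    """An ordered (binary search) tree node; None is the empty tree."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def insert(tree, value):
    """Insert value, preserving the ordering invariant."""
    if tree is None:
        return Node(value)
    if value < tree.value:
        tree.left = insert(tree.left, value)
    elif value > tree.value:
        tree.right = insert(tree.right, value)
    return tree

def member(tree, value):
    """Search for value in an ordered binary tree."""
    while tree is not None:
        if value == tree.value:
            return True
        tree = tree.left if value < tree.value else tree.right
    return False
```

The properties proved in PVS, such as "member(insert(t, v), v) holds for every ordered t", are exactly the kind of statements one would otherwise only spot-check with tests on code like this.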
NASA Astrophysics Data System (ADS)
Hinze, Ralf
Programmers happily use induction to prove properties of recursive programs. To show properties of corecursive programs they employ coinduction, but perhaps less enthusiastically. Coinduction is often considered a rather low-level proof method, in particular, as it departs quite radically from equational reasoning. Corecursive programs are conveniently defined using recursion equations. Suitably restricted, these equations possess unique solutions. Uniqueness gives rise to a simple and attractive proof technique, which essentially brings equational reasoning to the coworld. We illustrate the approach using two major examples: streams and infinite binary trees. Both coinductive types exhibit a rich structure: they are applicative functors or idioms, and they can be seen as memo-tables or tabulations. We show that definitions and calculations benefit immensely from this additional structure.
NASA Astrophysics Data System (ADS)
Goradia, Shantilal
2015-10-01
We modify Newtonian gravity to a probabilistic quantum mechanical gravity to derive strong coupling. If this approach is valid, we should be able to extend it to the physical body (life) as follows. Using the Boltzmann equation, we get the entropy of the universe (137), as if its reciprocal, the fine structure constant (ALPHA), is the hidden candidate representing the negative entropy of the universe, which is indicative of the binary information as its basis (http://www.arXiv.org/pdf/physics0210040v5). Since ALPHA relates to cosmology, it must relate to molecular biology too, with the binary system as the fundamental source of information for the nucleotides of the DNA, as implicit in the book by the author: ``Quantum Consciousness - The Road to Reality.'' We debate claims of the anthropic principle based on the negligible variation of ALPHA and throw light on thermodynamics. We question the constancy of G in multiple ways.
Study of thermodynamic properties of liquid binary alloys by a pseudopotential method
NASA Astrophysics Data System (ADS)
Vora, Aditya M.
2010-11-01
On the basis of the Percus-Yevick hard-sphere model as a reference system and the Gibbs-Bogoliubov inequality, a thermodynamic perturbation method is applied with the use of a well-known model potential. By applying a variational method, the hard-core diameters that correspond to a minimum free energy are found. With this procedure, thermodynamic properties such as the internal energy, entropy, Helmholtz free energy, entropy of mixing, and heat of mixing are computed for liquid NaK binary systems. The influence of the local-field correction functions of Hartree, Taylor, Ichimaru-Utsumi, Farid-Heine-Engel-Robertson, and Sarkar-Sen-Haldar-Roy is also investigated. The computed excess entropy is in agreement with available experimental data in the case of liquid alloys, whereas the agreement for the heat of mixing is poor. This may be due to the sensitivity of the latter to the potential parameters and the dielectric function.
Modeling the Overalternating Bias with an Asymmetric Entropy Measure
Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea
2016-01-01
Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Rényi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Rényi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418
Thermodynamics of Liquid Alkali Metals and Their Binary Alloys
NASA Astrophysics Data System (ADS)
Thakor, P. B.; Patel, Minal H.; Gajjar, P. N.; Jani, A. R.
2009-07-01
The theoretical investigation of thermodynamic properties such as internal energy, entropy, Helmholtz free energy, heat of mixing (ΔE) and entropy of mixing (ΔS) of liquid alkali metals and their binary alloys is reported in the present paper. The effect of concentration on the thermodynamic properties of an Ac1Bc2 alloy of the alkali-alkali elements is investigated and reported for the first time using our well-established local pseudopotential. To investigate the influence of exchange and correlation effects, we have used five different local field correction functions, viz. Hartree (H), Taylor (T), Ichimaru and Utsumi (IU), Farid et al. (F) and Sarkar et al. (S). Increasing the concentration c2 increases the internal energy and Helmholtz free energy of the liquid alloy Ac1Bc2. The present computation shows no abnormality in the outcome and hence confirms the applicability of our model potential in explaining the thermodynamics of liquid binary alloys.
A new complexity measure for time series analysis and classification
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth
2013-07-01
Complexity measures are used in a number of applications including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition and authorship attribution. Different complexity measures proposed in the literature, such as Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to correlate better with the Lyapunov exponent than Shannon entropy, even for relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
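The ETC measure described above is concrete enough to sketch. Below is a minimal NSRPS implementation, assuming sequences of non-negative integer symbols: at each step the most frequent adjacent pair (ties broken by first occurrence) is replaced by a fresh symbol, and ETC is the number of steps until the sequence becomes constant.

```python
from collections import Counter

def nsrps_step(seq, new_symbol):
    """One NSRPS pass: replace the most frequent adjacent pair with a new symbol."""
    pairs = Counter(zip(seq, seq[1:]))
    top = pairs.most_common(1)[0][0]
    out, i = [], 0
    while i < len(seq):
        if i < len(seq) - 1 and (seq[i], seq[i + 1]) == top:
            out.append(new_symbol)
            i += 2                      # consume the pair, no overlap
        else:
            out.append(seq[i])
            i += 1
    return out

def etc(seq):
    """Effort To Compress: NSRPS iterations until the sequence is constant."""
    seq = list(seq)                     # symbols assumed to be non-negative ints
    steps, sym = 0, max(seq) + 1
    while len(set(seq)) > 1:
        seq = nsrps_step(seq, sym)
        sym += 1
        steps += 1
    return steps
```

A constant sequence needs no effort (ETC = 0), while the fully alternating sequence 010101 compresses to a constant in a single pair substitution (ETC = 1).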
NASA Astrophysics Data System (ADS)
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
Detecting Genetic Interactions for Quantitative Traits Using m-Spacing Entropy Measure
Yee, Jaeyong; Kwon, Min-Seok; Park, Taesung; Park, Mira
2015-01-01
A number of statistical methods for detecting gene-gene interactions have been developed in genetic association studies with binary traits. However, many phenotype measures are intrinsically quantitative and categorizing continuous traits may not always be straightforward and meaningful. Association of gene-gene interactions with an observed distribution of such phenotypes needs to be investigated directly without categorization. Information gain based on entropy measure has previously been successful in identifying genetic associations with binary traits. We extend the usefulness of this information gain by proposing a nonparametric evaluation method of conditional entropy of a quantitative phenotype associated with a given genotype. Hence, the information gain can be obtained for any phenotype distribution. Because any functional form, such as Gaussian, is not assumed for the entire distribution of a trait or a given genotype, this method is expected to be robust enough to be applied to any phenotypic association data. Here, we show its use to successfully identify the main effect, as well as the genetic interactions, associated with a quantitative trait. PMID:26339620
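The nonparametric conditional-entropy evaluation above rests on m-spacing estimation. A minimal sketch of the underlying Vasicek-style m-spacing estimator of differential entropy (my own simplified form, not the authors' full genotype-conditional procedure):

```python
import numpy as np

def mspacing_entropy(x, m):
    """Vasicek-style m-spacing estimate of differential entropy (in nats)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    idx = np.arange(n)
    upper = x[np.minimum(idx + m, n - 1)]   # X_(i+m), clamped at the maximum
    lower = x[np.maximum(idx - m, 0)]       # X_(i-m), clamped at the minimum
    return float(np.mean(np.log(n / (2.0 * m) * (upper - lower))))
```

For a standard normal sample the estimate should approach the true differential entropy 0.5 ln(2*pi*e) ~ 1.419 nats.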
Time-series analysis of foreign exchange rates using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Inoue, Masayoshi
2013-08-01
Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
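The windowed pattern-entropy idea can be sketched as follows, assuming a sign-of-change binary symbolization and the Shannon entropy of m-symbol patterns within each window; the paper's exact symbolization and normalization may differ.

```python
import numpy as np
from collections import Counter

def pattern_entropy(series, m=4, window=50):
    """Shannon entropy (bits) of m-symbol binary patterns in each sliding
    window -- a sketch of time-dependent pattern entropy, not the authors'
    exact normalization."""
    bits = (np.asarray(series) > 0).astype(int)   # 1 if the variation is up
    out = []
    for start in range(len(bits) - window + 1):
        w = bits[start:start + window]
        counts = Counter(tuple(w[i:i + m]) for i in range(window - m + 1))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        out.append(float(-(p * np.log2(p)).sum()))
    return np.array(out)
```

A monotone series yields entropy 0 (one pattern), while a strictly alternating series yields about 1 bit (two patterns).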
Optimization of binary thermodynamic and phase diagram data
NASA Astrophysics Data System (ADS)
Bale, Christopher W.; Pelton, A. D.
1983-03-01
An optimization technique based upon least squares regression is presented to permit the simultaneous analysis of diverse experimental binary thermodynamic and phase diagram data. Coefficients of polynomial expansions for the enthalpy and excess entropy of binary solutions are obtained which can subsequently be used to calculate the thermodynamic properties or the phase diagram. In an interactive computer-assisted analysis employing this technique, one can critically analyze a large number of diverse data in a binary system rapidly, in a manner which is fully self-consistent thermodynamically. Examples of applications to the Bi-Zn, Cd-Pb, PbCl2-KCl, LiCl-FeCl2, and Au-Ni binary systems are given.
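A minimal sketch of the least-squares step, fitting a Redlich-Kister-type polynomial to excess enthalpy data for a hypothetical A-B system (the paper's method additionally fits phase-diagram data simultaneously and enforces thermodynamic self-consistency):

```python
import numpy as np

def rk_design(x, order):
    """Redlich-Kister basis: h_ex(x) = x(1-x) * sum_k L_k (1-2x)^k."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([x * (1 - x) * (1 - 2 * x) ** k for k in range(order)])

# Hypothetical interaction parameters (J/mol) for a made-up A-B system.
true_L = np.array([-8000.0, 1500.0, 300.0])
x = np.linspace(0.05, 0.95, 19)
h_ex = rk_design(x, 3) @ true_L          # synthetic "measured" excess enthalpies
L_fit, *_ = np.linalg.lstsq(rk_design(x, 3), h_ex, rcond=None)
```

With noise-free synthetic data the regression recovers the interaction parameters exactly; with real bracketing data it returns the least-squares compromise.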
Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
The report presents an extension of the theory of redundancy of binary prefix codes of the Huffman type, including the derivation of a variety of bounds expressed in terms of the entropy of the source and the size of the alphabet. Recent developments yielded bounds on the redundancy of a Huffman code in terms of the probabilities of various components in the source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
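The redundancy in question is the gap between the Huffman code's average codeword length and the source entropy; it is easy to compute directly:

```python
import heapq, math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given distribution."""
    # heap items: (probability, tiebreak id, list of symbol indices)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, t, s2 = heapq.heappop(heap)
        for s in s1 + s2:               # merging deepens every affected symbol
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, t, s1 + s2))
    return lengths

def redundancy(probs):
    """Average Huffman codeword length minus the Shannon entropy (bits)."""
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    L = sum(p, l) if False else sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
    return L - H
```

For a dyadic source such as (1/2, 1/4, 1/4) the redundancy is exactly 0; for a skewed binary source like (0.9, 0.1) it approaches, but stays below, 1 bit.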
2017-08-21
distributions, and we discuss some applications for engineered and biological information transmission systems. Keywords: information theory; minimum … of its interpretation as a measure of the amount of information communicable by a neural system to groups of downstream neurons. Previous authors … of the maximum entropy approach. Our results also have relevance for engineered information transmission systems. We show that empirically measured
Time-series analysis of multiple foreign exchange rates using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Inoue, Masayoshi
2018-01-01
Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in multiple foreign exchange rates. The time-dependent pattern entropy of 7 foreign exchange rates (AUD/USD, CAD/USD, CHF/USD, EUR/USD, GBP/USD, JPY/USD, and NZD/USD) was found to be high in the long period after the Lehman shock and low in the long period after March 2012. We compared the correlation matrices between exchange rates in periods of high and low time-dependent pattern entropy.
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. 
(iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
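The plug-in method's word-length trade-off noted in (iv) is easy to reproduce: estimate the entropy rate as the empirical Shannon entropy of overlapping k-blocks divided by k. The sketch below is the basic plug-in estimator, not any of the paper's LZ or CTW estimators.

```python
import math, random
from collections import Counter

def plug_in_rate(bits, k):
    """Plug-in entropy-rate estimate (bits/symbol): empirical entropy of
    overlapping k-blocks, divided by k."""
    blocks = Counter(tuple(bits[i:i + k]) for i in range(len(bits) - k + 1))
    n = sum(blocks.values())
    h_k = -sum(c / n * math.log2(c / n) for c in blocks.values())
    return h_k / k

# Demo: fair-coin bits vs. a deterministic alternating sequence.
random.seed(1)
coin = [random.getrandbits(1) for _ in range(10000)]
alt = [0, 1] * 500
rate_coin = plug_in_rate(coin, 3)   # close to the true rate of 1 bit/symbol
rate_alt = plug_in_rate(alt, 2)     # true rate is 0; two patterns give ~0.5
```

The alternating example shows the bias: with word length k the estimator cannot report a rate below (log2 of the number of observed patterns)/k, so short words miss long-range structure, while long words are undersampled.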
Discrimination of isotrigon textures using the Rényi entropy of Allan variances.
Gabarda, Salvador; Cristóbal, Gabriel
2008-09-01
We present a computational algorithm for isotrigon texture discrimination. The aim of the method is to discriminate isotrigon textures against a binary random background. The extension of the method to the problem of multitexture discrimination is considered as well. The method relies on the fact that the information content of time- or space-frequency representations of signals, including images, can be readily analyzed by means of generalized entropy measures. In such a scenario, the Rényi entropy appears as an effective tool, given that Rényi measures can be used to provide information about a local neighborhood within an image. Localization is essential for comparing images on a pixel-by-pixel basis. Discrimination is performed through a local Rényi entropy measurement applied on a spatially oriented 1-D pseudo-Wigner distribution (PWD) of the test image. The PWD is normalized so that it may be interpreted as a probability distribution. Prior to the calculation of the texture's PWD, a preprocessing filtering step replaces the original texture with its localized spatially oriented Allan variances. The anisotropic structure of the textures, as revealed by the Allan variances, turns out to be crucial later to attain a high discrimination by the extraction of Rényi entropy measures. The method has been empirically evaluated with a family of isotrigon textures embedded in a binary random background. The extension to the case of multiple isotrigon mosaics has also been considered. Discrimination results are compared with other existing methods.
Renyi entropy for local quenches in 2D CFT from numerical conformal blocks
NASA Astrophysics Data System (ADS)
Kusuki, Yuya; Takayanagi, Tadashi
2018-01-01
We study the time evolution of Renyi entanglement entropy for locally excited states in two-dimensional large-central-charge CFTs. It generically shows logarithmic growth, and we compute the coefficient of the log t term. Our analysis covers the entire parameter region with respect to the replica number n and the conformal dimension h_O of the primary operator which creates the excitation. We numerically analyse the relevant vacuum conformal blocks by using Zamolodchikov's recursion relation. We find that the behavior of the conformal blocks in two-dimensional CFTs with central charge c changes drastically when the dimensions of external primary states reach the value c/32. In particular, when h_O ≥ c/32 and n ≥ 2, we find a new universal formula ΔS_A^(n) ≃ nc/(24(n−1)) log t. Our numerical results also confirm existing analytical results obtained in the HHLL approximation.
Criteria for predicting the formation of single-phase high-entropy alloys
Troparevsky, M. Claudia; Morris, James R.; Kent, Paul R.; ...
2015-03-15
High entropy alloys constitute a new class of materials whose very existence poses fundamental questions. Originally thought to be stabilized by the large entropy of mixing, these alloys have attracted attention due to their potential applications, yet no model capable of robustly predicting which combinations of elements will form a single-phase currently exists. Here we propose a model that, through the use of high-throughput computation of the enthalpies of formation of binary compounds, is able to confirm all known high-entropy alloys while rejecting similar alloys that are known to form multiple phases. Despite the increasing entropy, our model predicts that the number of potential single-phase multicomponent alloys decreases with an increasing number of components: out of more than two million possible 7-component alloys considered, fewer than twenty single-phase alloys are likely.
Entropy of level-cut random Gaussian structures at different volume fractions
NASA Astrophysics Data System (ADS)
Marčelja, Stjepan
2017-10-01
Cutting random Gaussian fields at a given level can create a variety of morphologically different two- or several-phase structures that have often been used to describe physical systems. The entropy of such structures depends on the covariance function of the generating Gaussian random field, which in turn depends on its spectral density. But the entropy of level-cut structures also depends on the volume fractions of the different phases, which are determined by the selection of the cutting level. This dependence has been neglected in earlier work. We evaluate the entropy of several lattice models to show that, even in the cases of strongly coupled systems, the dependence of the entropy of level-cut structures on the molar fractions of the constituents scales with the simple ideal noninteracting system formula. In the last section, we discuss the application of the results to binary or ternary fluids and microemulsions.
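The two quantities involved are easy to write down: for a standard Gaussian field, the volume fraction of the phase above cut level t is p = 1 − Φ(t), and the ideal noninteracting mixing formula the authors find to govern the scaling is the usual binary expression. A minimal sketch:

```python
import math

def volume_fraction(level):
    """Volume fraction of the phase above the cut level, for a standard
    Gaussian random field: p = 1 - Phi(level)."""
    return 0.5 * math.erfc(level / math.sqrt(2.0))

def ideal_mixing_entropy(p):
    """Ideal (noninteracting) binary mixing entropy per site, in nats:
    -[p ln p + (1-p) ln(1-p)]."""
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
```

Cutting at level 0 gives equal volume fractions and the maximal mixing entropy ln 2 per site.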
NASA Astrophysics Data System (ADS)
Gagatsos, Christos N.; Karanikas, Alexandros I.; Kordas, Georgios; Cerf, Nicolas J.
2016-02-01
In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modelling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the quantum entropy generated by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution, while it is vital to the characterisation of quantum communication via bosonic channels. Here we overcome this difficulty by adapting the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to continuous-variable quantum information theory, where it enables accessing entropies in an optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and a Fock state, which yields a surprisingly simple, yet unknown analytical expression.
Transportable Maps Software. Volume I.
1982-07-01
being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the … flow is either simple sequential, simple conditional (the equivalent of 'if-then-else'), simple iteration ('DO-loop'), or the non-linear recursion … input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records
Klein, M D; Rabbani, A B; Rood, K D; Durham, T; Rosenberg, N M; Bahr, M J; Thomas, R L; Langenburg, S E; Kuhns, L R
2001-09-01
The authors compared 3 quantitative methods for assisting clinicians in the differential diagnosis of abdominal pain in children, where the most common important endpoint is whether the patient has appendicitis. Pretest probabilities in different age and sex groups were determined to perform Bayesian analysis, binary logistic regression was used to determine which variables were statistically significantly likely to contribute to a diagnosis, and recursive partitioning was used to build decision trees with quantitative endpoints. The records of all children (1,208) seen at a large urban emergency department (ED) with a chief complaint of abdominal pain were immediately reviewed retrospectively (24 to 72 hours after the encounter). Attempts were made to contact all the patients' families to determine an accurate final diagnosis. A total of 1,008 (83%) families were contacted. Data were analyzed by calculation of the posttest probability, recursive partitioning, and binary logistic regression. In all groups the most common diagnosis was abdominal pain (ICD-9 Code 789). After this, however, the order of the most common final diagnoses for abdominal pain varied significantly. The entire group had a pretest probability of appendicitis of 0.06. This varied with age and sex from 0.02 in boys 2 to 5 years old to 0.16 in boys older than 12 years. In boys age 5 to 12, recursive partitioning and binary logistic regression agreed on guarding and anorexia as important variables. Guarding and tenderness were important in girls age 5 to 12. In boys age greater than 12, both agreed on guarding and anorexia. Using sensitivities and specificities from the literature, computed tomography improved the posttest probability for the group from 0.06 to 0.33; ultrasound improved it from 0.06 to 0.48; and barium enema improved it from 0.06 to 0.58. Knowing the pretest probabilities in a specific population allows the physician to evaluate the likely diagnoses first.
Other quantitative methods can help judge how much importance a certain criterion should have in the decision making and how much a particular test is likely to influence the probability of a correct diagnosis. It now should be possible to make these sophisticated quantitative methods readily available to clinicians via the computer. Copyright 2001 by W.B. Saunders Company.
Practical low-cost visual communication using binary images for deaf sign language.
Manoranjan, M D; Robinson, J A
2000-03-01
Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low bandwidth telephone lines, provide very low temporal resolution of pictures, of the order of less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points, an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Merging Clusters, Cluster Outskirts, and Large Scale Filaments
NASA Astrophysics Data System (ADS)
Randall, Scott; Alvarez, Gabriella; Bulbul, Esra; Jones, Christine; Forman, William; Su, Yuanyuan; Miller, Eric D.; Bourdin, Herve
2018-01-01
Recent X-ray observations of the outskirts of clusters show that entropy profiles of the intracluster medium (ICM) generally flatten and lie below what is expected from purely gravitational structure formation near the cluster's virial radius. Possible explanations include electron/ion non-equilibrium, accretion shocks that weaken during cluster formation, and the presence of unresolved cool gas clumps. Some of these mechanisms are expected to correlate with large scale structure (LSS), such that the entropy is lower in regions where the ICM interfaces with LSS filaments and, presumably, the warm-hot intergalactic medium (WHIM). Major, binary cluster mergers are expected to take place at the intersection of LSS filaments, with the merger axis initially oriented along a filament. We present results from deep X-ray observations of the virialization regions of binary, early-stage merging clusters, including a possible detection of the dense end of the WHIM along a LSS filament.
Mixing and electronic entropy contributions to thermal energy storage in low melting point alloys
NASA Astrophysics Data System (ADS)
Shamberger, Patrick J.; Mizuno, Yasushi; Talapatra, Anjana A.
2017-07-01
Melting of crystalline solids is associated with an increase in entropy due to an increase in configurational, rotational, and other degrees of freedom of a system. However, the magnitude of chemical mixing and electronic degrees of freedom, two significant contributions to the entropy of fusion, remain poorly constrained, even in simple 2 and 3 component systems. Here, we present experimentally measured entropies of fusion in the Sn-Pb-Bi and In-Sn-Bi ternary systems, and decouple mixing and electronic contributions. We demonstrate that electronic effects remain the dominant contribution to the entropy of fusion in multi-component post-transition metal and metalloid systems, and that excess entropy of mixing terms can be equal in magnitude to ideal mixing terms, causing regular solution approximations to be inadequate in the general case. Finally, we explore binary eutectic systems using mature thermodynamic databases, identifying eutectics containing at least one semiconducting intermetallic phase as promising candidates to exceed the entropy of fusion of monatomic endmembers, while simultaneously maintaining low melting points. These results have significant implications for engineering high-thermal conductivity metallic phase change materials to store thermal energy.
A Method to Predict the Structure and Stability of RNA/RNA Complexes.
Xu, Xiaojun; Chen, Shi-Jie
2016-01-01
RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and a two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.
Phase retrieval from intensity-only data by relative entropy minimization.
Deming, Ross W
2007-11-01
A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
A particle filter for multi-target tracking in track before detect context
NASA Astrophysics Data System (ADS)
Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud
2016-10-01
The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene, because it makes use of unthresholded observations and incorporates a binary target existence variable into its target state estimation process when implemented as a particle filter (PF). This paper proposes a recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNRs). The algorithm's successful performance is demonstrated using a simulated two-target example.
Monotonic entropy growth for a nonlinear model of random exchanges.
Apenko, S M
2013-02-01
We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
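A quick simulation of the random-exchange (Ulam redistribution) dynamics shows the entropy growth numerically; the histogram-based entropy estimate below is an illustration of the statement, not the paper's information-theoretic proof.

```python
import numpy as np

rng = np.random.default_rng(42)

def collide(e, n_collisions, rng):
    """Ulam-type random binary exchanges: a random pair of particles pools
    its energy and redistributes it by a uniform random fraction."""
    e = e.copy()
    for _ in range(n_collisions):
        i, j = rng.choice(len(e), size=2, replace=False)
        u = rng.random()
        total = e[i] + e[j]
        e[i], e[j] = u * total, (1.0 - u) * total
    return e

def hist_entropy(e, bins=40, top=8.0):
    """Shannon entropy (nats) of a fixed-bin histogram of the energies."""
    counts, _ = np.histogram(e, bins=bins, range=(0.0, top))
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

initial = np.ones(2000)                 # sharply peaked initial distribution
final = collide(initial, 20000, rng)    # relaxes toward an exponential law
```

Total energy is conserved by every collision, while the histogram entropy grows from 0 (all particles at the same energy) toward that of the exponential equilibrium distribution.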
Spatial-dependence recurrence sample entropy
NASA Astrophysics Data System (ADS)
Pham, Tuan D.; Yan, Hong
2018-03-01
Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, the sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
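For reference, a minimal implementation of the classical sample entropy that the proposed method builds on (SampEn itself, without the recurrence-plot co-occurrence extension):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Classical SampEn(m, r): Chebyshev distance, tolerance r given as a
    fraction of the series standard deviation, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    N = len(x)

    def match_count(mm):
        # Use N - m templates for both lengths so the counts are comparable.
        t = np.array([x[i:i + mm] for i in range(N - m)])
        count = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    A, B = match_count(m + 1), match_count(m)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

# Demo: a regular signal should score lower than white noise.
rng = np.random.default_rng(0)
se_sine = sample_entropy(np.sin(np.linspace(0.0, 20.0 * np.pi, 500)))
se_noise = sample_entropy(rng.standard_normal(500))
```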
Calibration of short rate term structure models from bid-ask coupon bond prices
NASA Astrophysics Data System (ADS)
Gomes-Gonçalves, Erika; Gzyl, Henryk; Mayoral, Silvia
2018-02-01
In this work we use the method of maximum entropy in the mean to provide a model-free, non-parametric methodology that uses only market data to obtain the prices of zero coupon bonds and, from them, a term structure of the short rates. The data used consist of the bid-ask price ranges of a few coupon bonds quoted in the market. The zero coupon bond prices obtained in the first stage are then used as input to solve a recursive set of equations that determine a binomial recombinant model of the short-rate term structure.
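For context, the identities linking coupon bond prices to zero-coupon prices can be sketched as the standard bootstrap recursion below, using single (mid) prices; the paper's contribution is handling bid-ask ranges via maximum entropy in the mean, which this sketch does not attempt.

```python
import numpy as np

def bootstrap_zeros(prices, coupons):
    """Recover zero-coupon prices Z_1..Z_n from annual-coupon bond prices,
    assuming bond i matures in i+1 years with face value 1 and annual coupon
    rate coupons[i]. A standard recursion, not the paper's entropy method."""
    zeros = []
    for price, c in zip(prices, coupons):
        pv_coupons = c * sum(zeros)                     # coupons before maturity
        zeros.append((price - pv_coupons) / (1.0 + c))  # final coupon + principal
    return zeros
```

Each step solves one new unknown because the earlier zero-coupon prices already discount all the earlier coupon payments.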
NASA Astrophysics Data System (ADS)
Lalneihpuii, R.; Shrivastava, Ruchi; Mishra, Raj Kumar
2018-05-01
Using a statistical mechanical model with a square-well (SW) interatomic potential within the framework of the mean spherical approximation, we determine the composition-dependent microscopic correlation functions, interdiffusion coefficients, surface tension and chemical ordering in Ag-Cu melts. Further, the Dzugutov universal scaling law of normalized diffusion is verified with the SW potential in binary mixtures. We find that the excess entropy scaling law is valid for SW binary melts. The partial and total structure factors in the attractive and repulsive regions of the interacting potential are evaluated and then Fourier transformed to obtain partial and total radial distribution functions. A good agreement between theoretical and experimental values for the total structure factor and the reduced radial distribution function is observed, which consolidates our model calculations. The well-known Bhatia-Thornton correlation functions are also computed for Ag-Cu melts. The concentration-concentration correlations in the long wavelength limit in liquid Ag-Cu alloys have been analytically derived through the long wavelength limit of the partial correlation functions and applied to demonstrate the chemical ordering and interdiffusion coefficients in binary liquid alloys. We also investigate the concentration-dependent viscosity coefficients and surface tension using the computed diffusion data in these alloys. Our computed results for the structure, transport and surface properties of liquid Ag-Cu alloys obtained with the square-well interatomic interaction are fully consistent with the corresponding experimental values.
Organic alloy systems suitable for the investigation of regular binary and ternary eutectic growth
NASA Astrophysics Data System (ADS)
Sturz, L.; Witusiewicz, V. T.; Hecht, U.; Rex, S.
2004-09-01
Transparent organic alloys showing a plastic crystal phase were investigated experimentally using differential scanning calorimetry and directional solidification, with the aim of finding a suitable model system for regular ternary eutectic growth. The temperature, enthalpy and entropy of phase transitions have been determined for a number of pure substances. Substances with and without plastic crystal phases were distinguished by their entropy of melting. Binary phase diagrams were determined for selected plastic crystal alloys in order to identify eutectic reactions. Examples of lamellar and rod-like eutectic solidification microstructures in binary systems are given. The system (D)Camphor-Neopentylglycol-Succinonitrile is identified as a system that exhibits, among others, univariant and nonvariant eutectic reactions. The ternary eutectic alloy close to the nonvariant eutectic composition solidifies with a partially faceted solid-liquid interface. However, by adding a small amount of Amino-Methyl-Propanediol (AMPD), the temperature of the nonvariant eutectic reaction and of the solid-state transformation from the plastic to the crystalline state are shifted such that regular eutectic growth with three distinct nonfaceted phases is observed in a univariant eutectic reaction for the first time. The ternary phase diagram and examples of eutectic microstructures in the ternary and quaternary eutectic alloys are given.
Prediction of A2 to B2 Phase Transition in the High Entropy Alloy Mo-Nb-Ta-W
NASA Astrophysics Data System (ADS)
Huhn, William; Widom, Michael
2014-03-01
In this talk we show that an effective Hamiltonian fit with first-principles calculations predicts that an order/disorder transition occurs in the high entropy alloy Mo-Nb-Ta-W. Using the Alloy Theoretic Automated Toolset, we find T=0K enthalpies of formation for all binaries containing Mo, Nb, Ta, and W, and in particular we find that the stable structures for binaries at equiatomic concentrations are close in energy to the associated B2 structure, suggesting that at intermediate temperatures a B2 phase is stabilized in Mo-Nb-Ta-W. Our ``hybrid Monte Carlo/molecular dynamics'' results for the Mo-Nb-Ta-W system are analyzed to identify certain preferred chemical bonding types. A mean field free energy model incorporating nearest-neighbor bonds will be presented, allowing us to predict the mechanism of the order/disorder transition. We find the temperature evolution of the system is driven by strong Mo-Ta bonding. Comparison of the free energy model with our MC/MD results suggests the existence of additional low-temperature phase transitions in the system, likely ending with phase segregation into binary phases. We would like to thank DOD-DTRA for funding this research under contract number DTRA-11-1-0064.
Dynamical complexity of short and noisy time series. Compression-Complexity vs. Shannon entropy
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi
2017-07-01
Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such Compression-Complexity Measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady-state value faster than LZ. Compression-Complexity measures are promising for applications which involve short and noisy time series.
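As an illustration of the kind of comparison the abstract describes, the sketch below computes the Lempel-Ziv (LZ76) phrase count and the empirical Shannon entropy of a short binary string (a minimal illustration; the paper's ETC measure and its evaluation protocol are not reproduced here):

```python
from math import log2

def lz76_complexity(s):
    """Number of phrases in the LZ76 parsing of string s (Kaspar-Schuster scheme)."""
    i, k, l = 0, 1, 1
    k_max, c, n = 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                    # no copyable prefix found: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def shannon_entropy(s):
    """Empirical Shannon entropy (bits/symbol) of a string."""
    return sum(-p * log2(p)
               for ch in set(s)
               for p in [s.count(ch) / len(s)])

# A constant string is maximally compressible; an alternating one is
# still highly structured even though its symbol entropy is maximal.
print(lz76_complexity("0000"), shannon_entropy("0000"))  # 2 0.0
print(lz76_complexity("0101"), shannon_entropy("0101"))  # 3 1.0
```

This is exactly the effect the abstract exploits: compression-based measures see the regularity of `0101...` that single-symbol entropy misses.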
Dynamics and computation in functional shifts
NASA Astrophysics Data System (ADS)
Namikawa, Jun; Hashimoto, Takashi
2004-07-01
We introduce a new type of shift dynamics as an extended model of symbolic dynamics, and investigate the characteristics of shift spaces from the viewpoints of both dynamics and computation. This shift dynamics is called a functional shift, which is defined by a set of bi-infinite sequences of some functions on a set of symbols. To analyse the complexity of functional shifts, we measure them in terms of topological entropy, and locate their languages in the Chomsky hierarchy. Through this study, we argue that considering functional shifts from the viewpoints of both dynamics and computation gives us opposite results about the complexity of systems. We also describe a new class of shift spaces whose languages are not recursively enumerable.
Hierarchically self-assembled hexagonal honeycomb and kagome superlattices of binary 1D colloids.
Lim, Sung-Hwan; Lee, Taehoon; Oh, Younghoon; Narayanan, Theyencheri; Sung, Bong June; Choi, Sung-Min
2017-08-25
Synthesis of binary nanoparticle superlattices has attracted attention for a broad spectrum of potential applications. However, this has remained challenging for one-dimensional nanoparticle systems. In this study, we investigate the packing behavior of one-dimensional nanoparticles of different diameters into a hexagonally packed cylindrical micellar system and demonstrate that binary one-dimensional nanoparticle superlattices of two different symmetries can be obtained by tuning particle diameter and mixing ratios. The hexagonal arrays of one-dimensional nanoparticles are embedded in the honeycomb lattices (for AB 2 type) or kagome lattices (for AB 3 type) of micellar cylinders. The maximization of free volume entropy is considered as the main driving force for the formation of superlattices, which is well supported by our theoretical free energy calculations. Our approach provides a route for fabricating binary one-dimensional nanoparticle superlattices and may be applicable for inorganic one-dimensional nanoparticle systems. Binary mixtures of 1D particles are rarely observed to cooperatively self-assemble into binary superlattices, as the particle types separate into phases. Here, the authors design a system that avoids phase separation, obtaining binary superlattices with different symmetries by simply tuning the particle diameter and mixture composition.
NASA Astrophysics Data System (ADS)
Sekiguchi, Yuichiro; Kiuchi, Kenta; Kyutoku, Koutarou; Shibata, Masaru; Taniguchi, Keisuke
2016-06-01
We perform neutrino radiation-hydrodynamics simulations for the merger of asymmetric binary neutron stars in numerical relativity. The neutron stars are modeled by soft and moderately stiff finite-temperature equations of state (EOS). We find that the properties of the dynamical ejecta, such as the total mass, neutron richness profile, and specific entropy profile, depend on the mass ratio of the binary system for a given EOS in a unique manner. For a soft EOS (SFHo), the total ejecta mass depends weakly on the mass ratio, but the average electron number per baryon (Ye) and specific entropy (s) of the ejecta decrease significantly with increasing mass asymmetry. For a stiff EOS (DD2), as the mass asymmetry increases, the total ejecta mass increases significantly while the averages of Ye and s decrease moderately. Only for SFHo does the total ejecta mass exceed 0.01 M⊙, irrespective of the mass ratios chosen in this paper. The ejecta exhibit a range of electron number per baryon, with an average approximately between Ye ~ 0.2 and ~ 0.3 irrespective of the EOS employed, which is well suited to the production of the rapid neutron-capture heavy elements (second and third peaks), although the averaged value decreases with increasing mass asymmetry.
Photometric Mapping of Two Kepler Eclipsing Binaries: KIC11560447 and KIC8868650
NASA Astrophysics Data System (ADS)
Senavci, Hakan Volkan; Özavci, I.; Isik, E.; Hussain, G. A. J.; O'Neal, D. O.; Yilmaz, M.; Selam, S. O.
2018-04-01
We present surface maps of two eclipsing binary systems, KIC11560447 and KIC8868650, using Kepler light curves covering approximately 4 years. We use the code DoTS, which is based on the maximum entropy method, to reconstruct the surface maps. We also perform numerical tests of DoTS to check the ability of the code to track the phase migration of spot clusters. The resulting latitudinally averaged maps of KIC11560447 show that spots drift towards increasing orbital longitudes, while spots on KIC8868650 generally drift towards decreasing latitudes.
The design of dual-mode complex signal processors based on quadratic modular number codes
NASA Astrophysics Data System (ADS)
Jenkins, W. K.; Krogmeier, J. V.
1987-04-01
It has been known for a long time that quadratic modular number codes admit an unusual representation of complex numbers which leads to complete decoupling of the real and imaginary channels, thereby simplifying complex multiplication and providing error isolation between the real and imaginary channels. This paper first presents a tutorial review of the theory behind the different types of complex modular rings (fields) that result from particular parameter selections, and then presents a theory for a 'dual-mode' complex signal processor based on the choice of augmented power-of-2 moduli. It is shown how a diminished-1 binary code, used by previous designers for the realization of Fermat number transforms, also leads to efficient realizations for dual-mode complex arithmetic for certain augmented power-of-2 moduli. Then a design is presented for a recursive complex filter based on a ROM/ACCUMULATOR architecture and realized in an augmented power-of-2 quadratic code, and a computer-generated example of a complex recursive filter is shown to illustrate the principles of the theory.
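The decoupling the abstract refers to can be seen in a toy quadratic-residue example: for a prime p ≡ 1 (mod 4) there exists j with j² ≡ −1 (mod p), and mapping a + bi to the pair (a + jb, a − jb) mod p turns complex multiplication into two independent channel-wise products. This is a hand-rolled sketch of the principle only; the paper's augmented power-of-2 moduli and diminished-1 code are not shown:

```python
def encode(a, b, j, p):
    """Map a + bi (mod p) to the decoupled channel pair (A, B)."""
    return ((a + j * b) % p, (a - j * b) % p)

def decode(A, B, j, p):
    """Recover (a, b) from the channel pair."""
    inv2 = pow(2, -1, p)            # modular inverse of 2 (Python 3.8+)
    inv2j = pow(2 * j % p, -1, p)   # modular inverse of 2j
    return ((A + B) * inv2 % p, (A - B) * inv2j % p)

def channel_mul(x, y, p):
    """Complex multiplication becomes two independent real multiplies."""
    return (x[0] * y[0] % p, x[1] * y[1] % p)

# p = 13, j = 5, since 5*5 = 25 ≡ -1 (mod 13)
p, j = 13, 5
x = encode(2, 3, j, p)                        # 2 + 3i
y = encode(1, 4, j, p)                        # 1 + 4i
print(decode(*channel_mul(x, y, p), j, p))    # (3, 11): (2+3i)(1+4i) = -10+11i mod 13
```

Because the two channels never interact, an error in one leaves the other intact, which is the error-isolation property the abstract mentions.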
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
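The recursive least-squares update at the heart of such predictors can be sketched as follows (a generic exponentially weighted RLS linear predictor; the paper's blind-equalization formulation, codebook search, and variable-frame blocking are not shown):

```python
import numpy as np

def rls_predictor(x, order=2, lam=0.99, delta=100.0):
    """Predict x[n] from the previous `order` samples, updating recursively."""
    w = np.zeros(order)               # predictor coefficients
    P = np.eye(order) * delta         # inverse-correlation matrix estimate
    preds = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]      # regressor, most recent sample first
        y_hat = w @ u                 # one-step prediction
        e = x[n] - y_hat              # prediction error
        k = P @ u / (lam + u @ P @ u)        # gain vector
        w = w + k * e                         # coefficient update
        P = (P - np.outer(k, u @ P)) / lam    # covariance update
        preds.append(y_hat)
    return w, np.array(preds)

# A first-order AR signal x[n] = 0.9 x[n-1] is recovered almost exactly.
x = 0.9 ** np.arange(60)
w, _ = rls_predictor(x, order=1)
print(round(float(w[0]), 3))   # ≈ 0.9
```

In the speech-coding setting described above, the residual e would then be matched against codebook entries rather than discarded.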
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are applied to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and suitability for parameter identification without using the open circuit voltage.
Logistic Map for Cancellable Biometrics
NASA Astrophysics Data System (ADS)
Supriya, V. G., Dr; Manjunatha, Ramachandra, Dr
2017-08-01
This paper presents the design and implementation of a secure biometric template protection system that transforms the biometric template using binary chaotic signals and three different key streams to obtain another form of the template. Its efficiency is demonstrated by the results, and its security is investigated through analyses including key space analysis, information entropy, and key sensitivity analysis.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. 
PMID:23966987
Unbiased All-Optical Random-Number Generator
NASA Astrophysics Data System (ADS)
Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja
2017-10-01
The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.
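The syndrome computation underlying this family of decoders can be illustrated with a toy (2,1) convolutional code over GF(2): for generator polynomials g1(D), g2(D) and received polynomials r1(D), r2(D), the syndrome s(D) = r1·g2 + r2·g1 vanishes exactly when the received pair is a codeword (a minimal sketch; the Diophantine general solution and the Viterbi-like coset search are not reproduced):

```python
def gf2_poly_mul(a, b):
    """Multiply GF(2)[D] polynomials stored as bitmasks (0b101 = 1 + D^2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a      # addition in GF(2) is XOR
        a <<= 1
        b >>= 1
    return r

def syndrome(r1, r2, g1, g2):
    """s(D) = r1*g2 + r2*g1 over GF(2); zero iff (r1, r2) is a codeword."""
    return gf2_poly_mul(r1, g2) ^ gf2_poly_mul(r2, g1)

g1, g2 = 0b111, 0b101      # g1 = 1 + D + D^2, g2 = 1 + D^2
u = 0b1101                 # information polynomial
r1, r2 = gf2_poly_mul(u, g1), gf2_poly_mul(u, g2)
print(syndrome(r1, r2, g1, g2))                 # 0: error-free codeword
print(syndrome(r1 ^ 0b100, r2, g1, g2) != 0)    # True: a bit error is detected
```

Any pair with the same syndrome differs from (r1, r2) by a codeword, which is exactly the coset structure the algorithm searches for a minimum-weight error vector.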
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector circumflex E(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
Hierarchical colorant-based direct binary search halftoning.
He, Zhen
2010-07-01
Colorant-based direct binary search (CB-DBS) halftoning provides an image quality benchmark for dispersed-dot halftoning algorithms. The objective of this paper is to push that image quality limit further. An algorithm called hierarchical colorant-based direct binary search (HCB-DBS) is developed. By appropriately integrating the yellow colorant into dot-overlapping and dot-positioning controls, it is demonstrated that HCB-DBS can achieve better halftone texture in both individual and joint dot-color planes, without compromising the dot distribution of the more visible halftone of the cyan and magenta colorants. The input color specification is first converted from colorant space to dot-color space with the minimum brightness variation principle for full dot-overlapping control. The dot-colors are then split into groups based upon dot visibility. Hierarchical monochrome DBS halftoning is applied to make dot-positioning decisions for each group, constrained on the already generated halftone of the groups with higher priority. Dot-coloring is decided recursively with joint monochrome DBS halftoning constrained on the related total dot distribution. Experiments show that HCB-DBS improves halftone texture for both individual and joint dot-color planes, and that it reduces halftone graininess and is free of color mottle artifacts compared to CB-DBS.
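A stripped-down monochrome DBS loop conveys the core idea behind (H)CB-DBS: greedily toggle halftone dots whenever doing so reduces a low-pass-filtered (perceptually weighted) squared error. This is a toy sketch with a circular Gaussian filter; the paper's colorant-space conversion, dot-overlapping control, and hierarchical grouping are not shown:

```python
import numpy as np

def _lowpass(img, sigma=1.5):
    """Circular Gaussian low-pass via FFT (stand-in for an HVS filter)."""
    n, m = img.shape
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(m)[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def perceived_error(b, g, sigma=1.5):
    """Squared error between halftone b and gray image g after filtering."""
    return float(np.sum(_lowpass(b - g, sigma) ** 2))

def dbs_halftone(g, sweeps=2):
    """Greedy toggle-only DBS: accept any single-pixel flip that lowers error."""
    b = (g >= 0.5).astype(float)      # initial halftone by thresholding
    err = perceived_error(b, g)
    for _ in range(sweeps):
        for i in range(g.shape[0]):
            for j in range(g.shape[1]):
                b[i, j] = 1.0 - b[i, j]       # trial toggle
                e = perceived_error(b, g)
                if e < err:
                    err = e                   # keep the improvement
                else:
                    b[i, j] = 1.0 - b[i, j]   # revert
    return b, err
```

Full DBS also considers pairwise swaps and uses incremental error updates for speed; the toggle-only loop above is enough to see the error decrease monotonically.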
High-entropy fireballs and jets in gamma-ray burst sources
NASA Technical Reports Server (NTRS)
Meszaros, P.; Rees, M. J.
1992-01-01
Two mechanisms whereby compact coalescing binaries can produce relatively 'clean' fireballs via neutrino-antineutrino annihilation are proposed. Preejected mass due to tidal heating will collimate the fireball into jets. The resulting anisotropic gamma-ray emission can be efficient and intense enough to provide an acceptable model for gamma-ray bursts, if these originate at cosmological distances.
Pfeiffer, Keram; French, Andrew S.
2015-01-01
Naturalistic signals were created from vibrations made by locusts walking on a Sansevieria plant. Both naturalistic and Gaussian noise signals were used to mechanically stimulate VS-3 slit-sense mechanoreceptor neurons of the spider, Cupiennius salei, with stimulus amplitudes adjusted to give similar firing rates for either stimulus. Intracellular microelectrodes recorded action potentials, receptor potential, and receptor current, using current clamp and voltage clamp. Frequency response analysis showed that naturalistic stimulation contained relatively more power at low frequencies, and caused increased neuronal sensitivity to higher frequencies. In contrast, varying the amplitude of Gaussian stimulation did not change neuronal dynamics. Naturalistic stimulation contained less entropy than Gaussian, but signal entropy was higher than stimulus in the resultant receptor current, indicating addition of uncorrelated noise during transduction. The presence of added noise was supported by measuring linear information capacity in the receptor current. Total entropy and information capacity in action potentials produced by either stimulus were much lower than in earlier stages, and limited to the maximum entropy of binary signals. We conclude that the dynamics of action potential encoding in VS-3 neurons are sensitive to the form of stimulation, but entropy and information capacity of action potentials are limited by firing rate. PMID:26578975
Klein, Lauren R; Money, Joel; Maharaj, Kaveesh; Robinson, Aaron; Lai, Tarissa; Driver, Brian E
2017-11-01
Assessing the likelihood of a variceal versus nonvariceal source of upper gastrointestinal bleeding (UGIB) guides therapy, but can be difficult to determine on clinical grounds. The objective of this study was to determine if there are easily ascertainable clinical and laboratory findings that can identify a patient as low risk for a variceal source of hemorrhage. This was a retrospective cohort study of adult ED patients with UGIB between January 2008 and December 2014 who had upper endoscopy performed during hospitalization. Clinical and laboratory data were abstracted from the medical record. The source of the UGIB was defined as variceal or nonvariceal based on endoscopic reports. Binary recursive partitioning was utilized to create a clinical decision rule. The rule was internally validated and test characteristics were calculated with 1,000 bootstrap replications. A total of 719 patients were identified; mean age was 55 years and 61% were male. There were 71 (10%) patients with a variceal UGIB identified on endoscopy. Binary recursive partitioning yielded a two-step decision rule (platelet count > 200 × 10⁹/L and an international normalized ratio [INR] < 1.3), which identified patients who were low risk for a variceal source of hemorrhage. For the bootstrapped samples, the rule performed with 97% sensitivity (95% confidence interval [CI] = 91%-100%) and 49% specificity (95% CI = 44%-53%). Although this derivation study must be externally validated before widespread use, patients presenting to the ED with an acute UGIB with a platelet count of >200 × 10⁹/L and an INR of <1.3 may be at very low risk for a variceal source of their upper gastrointestinal hemorrhage. © 2017 by the Society for Academic Emergency Medicine.
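The derived two-step rule is simple enough to state directly in code (a restatement of the rule reported above for illustration, not a validated clinical tool):

```python
def low_risk_variceal(platelets_1e9_per_L, inr):
    """Two-step rule from binary recursive partitioning:
    platelets > 200 x 10^9/L AND INR < 1.3  =>  low risk of a variceal source.
    Reported bootstrap performance: 97% sensitivity, 49% specificity."""
    return platelets_1e9_per_L > 200 and inr < 1.3

print(low_risk_variceal(250, 1.1))   # True  (both criteria met)
print(low_risk_variceal(150, 1.1))   # False (platelet count too low)
print(low_risk_variceal(250, 1.5))   # False (INR too high)
```

Note the rule is conjunctive: failing either branch of the partition removes the low-risk designation, which is what gives it high sensitivity at modest specificity.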
Relations between Shannon entropy and genome order index in segmenting DNA sequences.
Zhang, Yi
2009-04-01
Shannon entropy H and genome order index S are used in segmenting DNA sequences. Zhang [Phys. Rev. E 72, 041917 (2005)] found that the two schemes are equivalent when a DNA sequence is converted to a binary sequence of S (strong H bond) and W (weak H bond), leaving the mathematical proof to mathematicians interested in this issue. In this paper, a possible mathematical explanation is given. Moreover, we find that Chargaff parity rule 2 is a necessary condition for the equivalence, and the equivalence disappears when a DNA sequence is regarded as a four-symbol sequence. Finally, we propose that S - 2^(-H) may be related to species evolution.
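For a binary S/W sequence the two quantities are easy to compute side by side. The sketch below evaluates H, the binary analogue of the genome order index S (the sum of squared symbol frequencies), and the quantity S − 2^(−H) mentioned at the end (an illustration of the definitions, not the paper's proof):

```python
from math import log2

def shannon_H(seq):
    """Empirical Shannon entropy in bits/symbol."""
    return sum(-p * log2(p)
               for ch in set(seq)
               for p in [seq.count(ch) / len(seq)])

def order_index_S(seq):
    """Sum of squared symbol frequencies (genome order index analogue)."""
    return sum((seq.count(ch) / len(seq)) ** 2 for ch in set(seq))

for seq in ["SW" * 8, "S" * 14 + "WW"]:
    H, S = shannon_H(seq), order_index_S(seq)
    print(round(S - 2 ** (-H), 4))   # 0.0 for the balanced sequence
```

At the balanced composition (p = 1/2), H = 1 and S = 1/2, so S − 2^(−H) vanishes exactly; skewed compositions give a positive value.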
A novel asynchronous access method with binary interfaces
2008-01-01
Background: Traditionally, synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods: We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. A recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is then used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results: When compared to a random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion: This novel access method offers a variety of advantages over traditionally synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
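One way to picture the recursive intent estimate is as belief elimination: the device executes its current best guess, and a switch activation (which by design only occurs on error) rules that guess out and renormalizes over the rest. This is a deliberately simplified toy with a uniform prior; the paper's contextual assumptions are richer than this:

```python
def eliminate(belief, wrong_command):
    """User activated the switch: the command being executed is wrong."""
    b = {cmd: (0.0 if cmd == wrong_command else p) for cmd, p in belief.items()}
    total = sum(b.values())
    return {cmd: p / total for cmd, p in b.items()}

# Four possible commands, uniform prior; device tries 'stop', user objects.
belief = {c: 0.25 for c in ["stop", "left", "right", "forward"]}
belief = eliminate(belief, "stop")
print(belief["stop"])              # 0.0
print(round(belief["left"], 3))    # 0.333
```

Each activation carries information precisely because it is sporadic: silence confirms the current command, so no timing constraint is imposed on the user.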
Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin
2018-04-14
In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO (4-2)) model that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate its exploration and exploitation, further improving performance. The model is combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor, and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarrays. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy, and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.
A New Quantum Gray-Scale Image Encoding Scheme
NASA Astrophysics Data System (ADS)
Naseri, Mosayeb; Abdolmaleky, Mona; Parandin, Fariborz; Fatahi, Negin; Farouk, Ahmed; Nazari, Reza
2018-02-01
In this paper, a new quantum image encoding scheme is proposed. The proposed scheme consists mainly of four different encoding algorithms. The idea behind the scheme is a binary key generated randomly for each pixel of the original image. The encoding algorithm applied to each pixel is then selected according to the corresponding qubit pair of the generated randomized binary key. The security analysis of the proposed scheme shows that security is enhanced both by the randomization of the generated binary image key and by the alteration of the gray-scale values of the image pixels using the qubits of the randomized binary key. Simulation of the proposed scheme confirms that the final encoded image cannot be recognized visually. Moreover, the histogram of the encoded image is flatter than that of the original, and the Shannon entropies of the final encoded images are significantly higher than that of the original, indicating that an attacker cannot gain any information about the encoded images. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, IRAN
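A classical analogue of the key-driven encoding, and of the entropy check described, can be sketched as a per-pixel XOR with a random key stream. This illustrates why the encoded histogram flattens; it is not the paper's four quantum encoding algorithms:

```python
import random
from math import log2

def entropy(pixels):
    """Shannon entropy of the gray-level histogram, in bits."""
    n = len(pixels)
    freqs = {}
    for p in pixels:
        freqs[p] = freqs.get(p, 0) + 1
    return sum(-c / n * log2(c / n) for c in freqs.values())

def xor_encode(pixels, key):
    """XOR each pixel with its own random key value (self-inverse)."""
    return [p ^ k for p, k in zip(pixels, key)]

random.seed(0)
image = [100] * 1024                           # flat image: zero entropy
key = [random.randrange(256) for _ in image]   # per-pixel random 8-bit key
cipher = xor_encode(image, key)

print(entropy(image))                    # 0.0
print(entropy(cipher) > 6.0)             # True: histogram flattens toward 8 bits
print(xor_encode(cipher, key) == image)  # True: decoding recovers the image
```

Because XOR with the key is its own inverse, the same key both encodes and decodes, mirroring the reversibility required of the quantum scheme.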
Equilibrium, stability, and orbital evolution of close binary systems
NASA Technical Reports Server (NTRS)
Lai, Dong; Rasio, Frederic A.; Shapiro, Stuart L.
1994-01-01
We present a new analytic study of the equilibrium and stability properties of close binary systems containing polytropic components. Our method is based on the use of ellipsoidal trial functions in an energy variational principle. We consider both synchronized and nonsynchronized systems, constructing the compressible generalizations of the classical Darwin and Darwin-Riemann configurations. Our method can be applied to a wide variety of binary models where the stellar masses, radii, spins, entropies, and polytropic indices are all allowed to vary over wide ranges and independently for each component. We find that both secular and dynamical instabilities can develop before a Roche limit or contact is reached along a sequence of models with decreasing binary separation. High incompressibility always makes a given binary system more susceptible to these instabilities, but the dependence on the mass ratio is more complicated. As simple applications, we construct models of double degenerate systems and of low-mass main-sequence star binaries. We also discuss the orbital evolution of close binary systems under the combined influence of fluid viscosity and secular angular momentum losses from processes like gravitational radiation. We show that the existence of global fluid instabilities can have a profound effect on the terminal evolution of coalescing binaries. The validity of our analytic solutions is examined by means of detailed comparisons with the results of recent numerical fluid calculations in three dimensions.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS), and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of a time series over a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically accomplish fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.
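Fuzzy entropy replaces the hard similarity threshold of sample entropy with a smooth membership function. A minimal single-scale sketch is below; parameter conventions vary across the literature, so the baseline removal and the membership exp(-d^n / r) used here are assumptions about the variant intended, not the paper's IMFE implementation:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    """Plug-in fuzzy entropy of a 1-D series: negative log ratio of mean
    template similarity at embedding dimensions m and m+1. Similarity is
    the fuzzy membership exp(-d**n / r) of the Chebyshev distance between
    mean-removed templates."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common tolerance choice

    def phi(m):
        N = len(x) - m
        templates = np.array([x[i:i + m] for i in range(N)])
        templates -= templates.mean(axis=1, keepdims=True)   # remove baseline
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)
        return (sim.sum() - N) / (N * (N - 1))   # exclude self-matches

    return np.log(phi(m)) - np.log(phi(m + 1))
```

On this definition a regular signal (e.g. a slow sinusoid) yields a lower value than white noise, which is the property the diagnosis features exploit.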
Solid/liquid interfacial free energies in binary systems
NASA Technical Reports Server (NTRS)
Nason, D.; Tiller, W. A.
1973-01-01
Description of a semiquantitative technique for predicting the segregation characteristics of smooth interfaces between binary solid and liquid solutions in terms of readily available thermodynamic parameters of the bulk solutions. A lattice-liquid interfacial model and a pair-bonded regular solution model are employed in the treatment with an accommodation for liquid interfacial entropy. The method is used to calculate the interfacial segregation and the free energy of segregation for solid-liquid interfaces between binary solutions for the (111) boundary of fcc crystals. The zone of compositional transition across the interface is shown to be on the order of a few atomic layers in width, being moderately narrower for ideal solutions. The free energy of the segregated interface depends primarily upon the solid composition and the heats of fusion of the component atoms, the composition difference of the solutions, and the difference of the heats of mixing of the solutions.
NASA Astrophysics Data System (ADS)
Suvorova, S.; Clearwater, P.; Melatos, A.; Sun, L.; Moran, W.; Evans, R. J.
2017-11-01
A hidden Markov model (HMM) scheme for tracking continuous-wave gravitational radiation from neutron stars in low-mass x-ray binaries (LMXBs) with wandering spin is extended by introducing a frequency-domain matched filter, called the J-statistic, which sums the signal power in orbital sidebands coherently. The J-statistic is similar but not identical to the binary-modulated F-statistic computed by demodulation or resampling. By injecting synthetic LMXB signals into Gaussian noise characteristic of the Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO), it is shown that the J-statistic HMM tracker detects signals with characteristic wave strain h0 ≥ 2 × 10^-26 in 370 d of data from two interferometers, divided into 37 coherent blocks of equal length. When applied to data from Stage I of the Scorpius X-1 Mock Data Challenge organized by the LIGO Scientific Collaboration, the tracker detects all 50 closed injections (h0 ≥ 6.84 × 10^-26), recovering the frequency with a root-mean-square accuracy of ≤ 1.95 × 10^-5 Hz. Of the 50 injections, 43 (with h0 ≥ 1.09 × 10^-25) are detected in a single, coherent 10 d block of data. The tracker employs an efficient, recursive HMM solver based on the Viterbi algorithm, which requires ~10^5 CPU-hours for a typical broadband (0.5 kHz) LMXB search.
Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe
2003-11-06
We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
Sumiya, Yosuke; Nagahata, Yutaka; Komatsuzaki, Tamiki; Taketsugu, Tetsuya; Maeda, Satoshi
2015-12-03
The significance of kinetic analysis as a tool for understanding the reactivity and selectivity of organic reactions has recently been recognized. However, conventional simulation approaches that solve rate equations numerically are not amenable to multistep reaction profiles consisting of fast and slow elementary steps. Herein, we present an efficient and robust approach for evaluating the overall rate constants of multistep reactions via the recursive contraction of the rate equations to give the overall rate constants for the products and byproducts. This new method was applied to the Claisen rearrangement of allyl vinyl ether, as well as a substituted allyl vinyl ether. Notably, the profiles of these reactions contained 23 and 84 local minima, and 66 and 278 transition states, respectively. The overall rate constant for the Claisen rearrangement of allyl vinyl ether was consistent with the experimental value. The selectivity of the Claisen rearrangement reaction has also been assessed using a substituted allyl vinyl ether. The results of this study showed that the conformational entropy in these flexible chain molecules had a substantial impact on the overall rate constants. This new method could therefore be used to estimate the overall rate constants of various other organic reactions involving flexible molecules.
Honing Theory: A Complex Systems Framework for Creativity.
Gabora, Liane
2017-01-01
This paper proposes a theory of creativity, referred to as honing theory, which posits that creativity fuels the process by which culture evolves through communal exchange amongst minds that are self-organizing, self-maintaining, and self-reproducing. According to honing theory, minds, like other self-organizing systems, modify their contents and adapt to their environments to minimize entropy. Creativity begins with detection of high psychological entropy material, which provokes uncertainty and is arousal-inducing. The creative process involves recursively considering this material from new contexts until it is sufficiently restructured that arousal dissipates. Restructuring involves neural synchrony and dynamic binding, and may be facilitated by temporarily shifting to a more associative mode of thought. A creative work may similarly induce restructuring in others, and thereby contribute to the cultural evolution of more nuanced worldviews. Since lines of cultural descent connecting creative outputs may exhibit little continuity, it is proposed that cultural evolution occurs at the level of self-organizing minds; outputs reflect their evolutionary state. Honing theory addresses challenges not addressed by other theories of creativity, such as the factors that guide restructuring, and in what sense creative works evolve. Evidence comes from empirical studies, an agent-based computational model of cultural evolution, and a model of concept combination.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Real-time minimal bit error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
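The baseline both papers compare against, Viterbi decoding, is compact enough to sketch. Below is a toy hard-decision encoder/decoder pair for the rate-1/2, constraint-length-3 binary convolutional code with octal generators (7, 5); the generator choice is illustrative, and the paper's minimal-bit-error (MAP-style) algorithm itself is not reproduced here:

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/len(gens) convolutional encoder; state = K-1 previous inputs."""
    state, out = 0, []
    for b in bits:
        window = (b << (K - 1)) | state          # newest bit in high position
        out += [bin(window & g).count('1') % 2 for g in gens]
        state = window >> 1                       # shift out the oldest bit
    return out

def viterbi_decode(received, n_bits, K=3, gens=(0b111, 0b101)):
    """Hard-decision Viterbi: minimum-Hamming-distance path through the trellis."""
    n_states, n = 1 << (K - 1), len(gens)
    INF = float('inf')
    metric = [0] + [INF] * (n_states - 1)         # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        r = received[n * t: n * t + n]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                      # extend by each input bit
                window = (b << (K - 1)) | s
                expected = [bin(window & g).count('1') % 2 for g in gens]
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                ns = window >> 1
                if m < new_metric[ns]:            # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

This toy decoder keeps full paths rather than a fixed decoding delay Delta; the fixed-delay ("real-time") variant discussed in the abstract would instead release the oldest bit of the current best survivor at each step.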
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
NASA Astrophysics Data System (ADS)
Jeon, S.; Kang, D.-H.; Lee, Y. H.; Lee, S.; Lee, G. W.
2016-11-01
We investigate the relationship between the excess volume and undercoolability of Zr-Ti and Zr-Hf alloy liquids by using electrostatic levitation. Unlike in the case of Zr-Hf alloy liquids in which sizes of the constituent atoms are matched, a remarkable increase of undercoolability and negative excess volumes are observed in Zr-Ti alloy liquids as a function of their compositional ratios. In this work, size mismatch entropies for the liquids were obtained by calculating their hard sphere diameters, number densities, and packing fractions. We also show that the size mismatch entropy, which arises from the differences in atomic sizes of the constituent elements, plays an important role in determining the stabilities of metallic liquids.
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
NASA Astrophysics Data System (ADS)
Huang, Haiping
2017-05-01
Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. A continuous phase transition is also confirmed, depending on the strength of the feature embedded in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing the Nishimori condition. Our study provides insights towards understanding the thermodynamic properties of restricted Boltzmann machine learning, and moreover an important theoretical basis for building simplified deep networks.
Jin, Ke; Zhang, Chuan; Zhang, Fan; ...
2018-03-07
To investigate the compositional effects on thermal-diffusion kinetics in concentrated solid-solution alloys, interdiffusion in seven diffusion couples with alloys ranging from binary to quinary is systematically studied. The alloys with higher compositional complexity exhibit in general lower diffusion coefficients against homologous temperature; however, an exception is found: diffusion in NiCoFeCrPd is faster than in NiCoFeCr and NiCoCr. While the derived diffusion parameters suggest that diffusion in medium- and high-entropy alloys is overall more retarded than in pure metals and binary alloys, they strongly depend on the specific constituents. The comparative features are captured by computational thermodynamics approaches using a self-consistent database.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
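The compression step described here is concrete enough to illustrate. With the (7, 4) Hamming parity-check matrix, any 7-bit source block of weight at most one is recovered exactly from its 3-bit syndrome, giving distortionless 7:3 compression of a sufficiently sparse source. A minimal NumPy sketch of the idea (a toy instance, not the paper's general construction):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# representation of i+1, so every weight-<=1 block has a distinct syndrome.
H = np.array([[(i + 1) >> k & 1 for i in range(7)] for k in range(3)])

def compress(block):
    """7 source bits -> 3 syndrome bits (the compressed data)."""
    return H @ block % 2

def decompress(syndrome):
    """Minimum-weight source pattern consistent with the syndrome."""
    block = np.zeros(7, dtype=int)
    pos = int(syndrome @ [1, 2, 4])   # syndrome bits are LSB-first
    if pos:
        block[pos - 1] = 1
    return block
```

Blocks of weight two or more are mapped to the wrong (weight-one) pattern, which is why the scheme is distortionless only when the source is sparse enough; longer codes push the achievable rate toward the source entropy, as the abstract states.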
Method for implementation of recursive hierarchical segmentation on parallel computers
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2005-01-01
A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
Vicari, Giuseppe; Adenzato, Mauro
2014-05-01
In their 2002 seminal paper Hauser, Chomsky and Fitch hypothesize that recursion is the only human-specific and language-specific mechanism of the faculty of language. While debate focused primarily on the meaning of recursion in the hypothesis and on the human-specific and syntax-specific character of recursion, the present work focuses on the claim that recursion is language-specific. We argue that there are recursive structures in the domain of motor intentionality by way of extending John R. Searle's analysis of intentional action. We then discuss evidence from cognitive science and neuroscience supporting the claim that motor-intentional recursion is language-independent and suggest some explanatory hypotheses: (1) linguistic recursion is embodied in sensory-motor processing; (2) linguistic and motor-intentional recursions are distinct and mutually independent mechanisms. Finally, we propose some reflections about the epistemic status of HCF as presenting an empirically falsifiable hypothesis, and on the possibility of testing recursion in different cognitive domains. Copyright © 2014 Elsevier Inc. All rights reserved.
On the Number of Non-equivalent Ancestral Configurations for Matching Gene Trees and Species Trees.
Disanto, Filippo; Rosenberg, Noah A
2017-09-14
An ancestral configuration is one of the combinatorially distinct sets of gene lineages that, for a given gene tree, can reach a given node of a specified species tree. Ancestral configurations have appeared in recursive algebraic computations of the conditional probability that a gene tree topology is produced under the multispecies coalescent model for a given species tree. For matching gene trees and species trees, we study the number of ancestral configurations, considered up to an equivalence relation introduced by Wu (Evolution 66:763-775, 2012) to reduce the complexity of the recursive probability computation. We examine the largest number of non-equivalent ancestral configurations possible for a given tree size n. Whereas the smallest number of non-equivalent ancestral configurations increases polynomially with n, we show that the largest number increases with [Formula: see text], where k is a constant that satisfies [Formula: see text]. Under a uniform distribution on the set of binary labeled trees with a given size n, the mean number of non-equivalent ancestral configurations grows exponentially with n. The results refine an earlier analysis of the number of ancestral configurations considered without applying the equivalence relation, showing that use of the equivalence relation does not alter the exponential nature of the increase with tree size.
Pastore, Vito Paolo; Godjoski, Aleksandar; Martinoia, Sergio; Massobrio, Paolo
2018-01-01
We implemented an automated and efficient open-source software package for the analysis of multi-site neuronal spike signals. The software package, named SPICODYN, has been developed as a standalone Windows GUI application, using the C# programming language with Microsoft Visual Studio based on the .NET Framework 4.5 development environment. Accepted input data formats are HDF5, level 5 MAT, and text files containing recorded or generated time series of spike signal data. SPICODYN processes such electrophysiological signals focusing on spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented, dealing with multiple time delays (temporal extension) and with multiple binary patterns (high-order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing automated processing in those specific cases. The optimized implementation of the Delayed Transfer Entropy and High-Order Transfer Entropy algorithms allows performing accurate and rapid analysis on multiple spike trains from thousands of electrodes.
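The delayed transfer entropy computation can be sketched for a single delay. The following plug-in estimator for binary spike trains is an illustrative reconstruction, not SPICODYN's optimized implementation, and uses order-1 history rather than the high-order pattern extension:

```python
from collections import Counter
import math

def transfer_entropy(x, y, delay=1):
    """Binary transfer entropy TE(X->Y) in bits: how much x[t-delay]
    helps predict y[t] beyond y[t-1] (plug-in estimate)."""
    triples = [(y[t], y[t - 1], x[t - delay])
               for t in range(max(delay, 1), len(y))]
    n = len(triples)
    c_abc = Counter(triples)                       # (y_t, y_prev, x_delayed)
    c_ab = Counter((a, b) for a, b, c in triples)  # (y_t, y_prev)
    c_bc = Counter((b, c) for a, b, c in triples)  # (y_prev, x_delayed)
    c_b = Counter(b for a, b, c in triples)        # (y_prev,)
    te = 0.0
    for (a, b, c), k in c_abc.items():
        # p(a,b,c) * log[ p(a|b,c) / p(a|b) ], expressed with raw counts
        te += (k / n) * math.log2(k * c_b[b] / (c_ab[(a, b)] * c_bc[(b, c)]))
    return te
```

If y simply copies x with the probed delay, the estimate approaches 1 bit for a fair binary source; for independent trains it stays near zero (up to plug-in bias).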
On the Role of Entropy in the Protein Folding Process
NASA Astrophysics Data System (ADS)
Hoppe, Travis
2011-12-01
A protein's ultimate function and activity are determined by the unique three-dimensional structure taken by the folding process. Protein malfunction due to misfolding is the culprit of many clinical disorders, such as abnormal protein aggregations. This leads to neurodegenerative disorders like Huntington's and Alzheimer's disease. We focus on a subset of the folding problem, exploring the role and effects of entropy on the process of protein folding. Four major concepts and models are developed, each pertaining to a specific aspect of the folding process: entropic forces, conformational states under crowding, aggregation, and macrostate kinetics from microstate trajectories. The exclusive focus on entropy is well-suited for crowding studies, as many interactions are nonspecific. We show how a stabilizing entropic force can arise purely from the motion of crowders in solution. In addition we are able to make a quantitative prediction of the crowding effect with an implicit crowding approximation using an aspherical scaled-particle theory. In order to investigate the effects of aggregation, we derive a new operator expansion method to solve the Ising/Potts model with external fields over an arbitrary graph. Here the external fields are representative of the entropic forces. We show that this method reduces the problem of calculating the partition function to the solution of recursion relations. Many of the methods employed are coarse-grained approximations. As such, it is useful to have a viable method for extracting macrostate information from time series data. We develop a method to cluster the microstates into physically meaningful macrostates by grouping similar relaxation times from a transition matrix. Overall, the studied topics allow us to understand more deeply the complicated processes involving proteins.
NASA Astrophysics Data System (ADS)
Thompson, Todd A.; ud-Doula, Asif
2018-06-01
Although initially thought to be promising for production of the r-process nuclei, standard models of neutrino-heated winds from proto-neutron stars (PNSs) do not reach the requisite neutron-to-seed ratio for production of the lanthanides and actinides. However, the abundance distribution created by the r-, rp-, or νp-processes in PNS winds depends sensitively on the entropy and dynamical expansion time-scale of the flow, which may be strongly affected by high magnetic fields. Here, we present results from magnetohydrodynamic simulations of non-rotating neutrino-heated PNS winds with strong dipole magnetic fields from 10^14 to 10^16 G, and assess their role in altering the conditions for nucleosynthesis. The strong field forms a closed zone and helmet streamer configuration at the equator, with episodic dynamical mass ejections in toroidal plasmoids. We find dramatically enhanced entropy in these regions and conditions favourable for third-peak r-process nucleosynthesis if the wind is neutron-rich. If instead the wind is proton-rich, the conditions will affect the abundances from the νp-process. We quantify the distribution of ejected matter in entropy and dynamical expansion time-scale, and the critical magnetic field strength required to affect the entropy. For B ≳ 10^15 G, we find that ≳ 10^-6 M⊙ and up to ~10^-5 M⊙ of high-entropy material is ejected per highly magnetized neutron star birth in the wind phase, providing a mechanism for prompt heavy element enrichment of the universe. Former binary companions identified within (magnetar-hosting) supernova remnants, the remnants themselves, and runaway stars may exhibit overabundances. We provide a comparison with a semi-analytic model of plasmoid eruption and discuss implications and extensions.
NASA Technical Reports Server (NTRS)
Bentz, Daniel N.; Betush, William; Jackson, Kenneth A.
2003-01-01
In this paper we report on two related topics: kinetic Monte Carlo simulations of the steady state growth of rod eutectics from the melt, and a study of the surface roughness of binary alloys. We have implemented a three-dimensional kinetic Monte Carlo (kMC) simulation with diffusion by pair exchange only in the liquid phase. Entropies of fusion are first chosen to fit the surface roughness of the pure materials, and the bond energies are derived from the equilibrium phase diagram by treating the solid and liquid as regular and ideal solutions, respectively. A simple cubic lattice oriented in the {100} direction is used. Growth of the rods is initiated from columns of pure B material embedded in an A matrix, arranged in a close-packed array with semi-periodic boundary conditions. The simulation cells typically have dimensions of 50 by 87 by 200 unit cells. Steady state growth is compliant with the Jackson-Hunt model. In the kMC simulations, using the spin-one Ising model, growth of each phase is faceted or nonfaceted depending on its entropy of fusion. There have been many studies of the surface roughening transition in single component systems, but none for binary alloy systems. The location of the surface roughening transition for the phases of a eutectic alloy determines whether the eutectic morphology will be regular or irregular. We have conducted a study of surface roughness on the spin-one Ising model with diffusion using kMC. The surface roughness was found to scale with the melting temperature of the alloy as given by the liquidus line on the equilibrium phase diagram. The density of missing lateral bonds at the surface was used as a measure of surface roughness.
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions, and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57% compared to the H.264/AVC beta and gamma anchors, respectively.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
Spin-Orbit Torque from a Magnetic Heterostructure of High-Entropy Alloy
NASA Astrophysics Data System (ADS)
Chen, Tian-Yue; Chuang, Tsao-Chi; Huang, Ssu-Yen; Yen, Hung-Wei; Pai, Chi-Feng
2017-10-01
High-entropy alloy (HEA) is a family of metallic materials with nearly equal partitions of five or more metals, which might possess mechanical and transport properties that are different from conventional binary or ternary alloys. In this work, we demonstrate current-induced spin-orbit torque (SOT) magnetization switching in a Ta-Nb-Hf-Zr-Ti HEA-based magnetic heterostructure with perpendicular magnetic anisotropy. The maximum dampinglike SOT efficiency from this particular HEA-based magnetic heterostructure is further determined to be |ζ_DL^HEA| ≈ 0.033 by hysteresis-loop-shift measurements, while that for the Ta control sample is |ζ_DL^Ta| ≈ 0.04. Our results indicate that HEA-based magnetic heterostructures can serve as an alternative group of potential candidates for SOT device applications due to the possibility of tuning buffer-layer properties with more than two constituent elements.
A classical model for closed-loop diagrams of binary liquid mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnitzler, J.v.; Prausnitz, J.M.
1994-03-01
A classical lattice model for closed-loop temperature-composition phase diagrams has been developed. It considers the effect of specific interactions, such as hydrogen bonding, between dissimilar components. This van Laar-type model includes a Flory-Huggins term for the excess entropy of mixing. It is applied to several liquid-liquid equilibria of nonelectrolytes, where the molecules of the two components differ in size. The model is able to represent the observed data semi-quantitatively, but in most cases it is not flexible enough to predict all parts of the closed loop quantitatively. The ability of the model to represent different binary systems is discussed. Finally, attention is given to a correction term concerning the effect of concentration fluctuations near the upper critical solution temperature.
NASA Astrophysics Data System (ADS)
Jurčišinová, E.; Jurčišin, M.
2018-04-01
Anomalies of the specific heat capacity are investigated in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model in an external magnetic field on the geometrically frustrated tetrahedron recursive lattice. It is shown that the Schottky-type anomaly in the behavior of the specific heat capacity is related to the existence of unique, highly macroscopically degenerate single-point ground states which form on the borders between neighboring plateau-like ground states. It is also shown that the very existence of these single-point ground states with large residual entropies predicts another anomaly in the low-temperature behavior of the specific heat capacity, namely a field-induced double-peak structure, which exists, and should be observed experimentally, alongside the Schottky-type anomaly in various frustrated magnetic systems.
Entanglement branching operator
NASA Astrophysics Data System (ADS)
Harada, Kenji
2018-01-01
We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, a promising theoretical tool for many-body systems. The entanglement branching operator can be optimized by solving a minimization problem based on squeezing operators. Entanglement branching is a useful new operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method to capture a proper renormalization flow in tensor network space; this yields a new type of tensor network state. A second example is the many-body decomposition of a tensor by means of an entanglement branching operator, which can be used for perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.
Historical remarks on exponential product and quantum analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Masuo
2015-03-10
The exponential product formula [1, 2] was substantially introduced into physics by the present author [2]. Its systematic applications to quantum Monte Carlo methods [3] were first performed [4, 5] in 1977. Many interesting applications [6] of the quantum-classical correspondence (namely, the S-T transformation) have been reported. Systematic higher-order decomposition formulae were also discovered by the present author [7-11] using the recursion scheme [7, 9]. Physically speaking, these exponential product formulae play the conceptual role of separation of procedures [3, 14]. Mathematical aspects of these formulae have been integrated into quantum analysis [15], in which non-commutative differential calculus is formulated and a general quantum Taylor expansion formula is given. This yields many useful operator expansion formulae, such as the Feynman expansion formula and the resolvent expansion. Irreversibility and entropy production are also studied using quantum analysis [15].
Raghu, S; Sriraam, N; Kumar, G Pradeep
2017-02-01
The electroencephalogram (EEG) is considered a fundamental instrument for assessing neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is attractive because it is non-invasive and offers superior temporal resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. This study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions were considered: normal, pre-ictal, and epileptic EEG recordings. An adaptive Wiener filter was first applied to remove 50 Hz power-line noise from the raw EEG recordings. The raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. A wavelet packet decomposition using the Haar wavelet with five levels was then applied, and two entropies, log and norm, were estimated and fed to the REN classifier to perform binary classification. The non-parametric Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. Simulation results showed that wavelet packet log entropy with the REN classifier yielded classification accuracies of 99.70% for normal vs. pre-ictal, 99.70% for normal vs. epileptic, and 99.85% for pre-ictal vs. epileptic.
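The two wavelet-domain entropies named above can be illustrated with a minimal sketch. This assumes the common MATLAB-style `wentropy` definitions of log-energy and norm entropy, and uses a hand-rolled single-level Haar step in place of a full five-level wavelet packet decomposition (which a library such as PyWavelets would provide); the authors' exact implementation is not given in the abstract.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def log_energy_entropy(c, eps=1e-12):
    """Log-energy entropy of a coefficient vector: sum_i log(c_i^2)."""
    c = np.asarray(c, dtype=float)
    return float(np.sum(np.log(c ** 2 + eps)))  # eps guards log(0)

def norm_entropy(c, p=1.1):
    """Norm entropy: sum_i |c_i|^p with 1 <= p < 2."""
    c = np.asarray(c, dtype=float)
    return float(np.sum(np.abs(c) ** p))
```

In a full pipeline these entropies would be computed per wavelet-packet node and concatenated into the feature vector passed to the classifier.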
Precipitation behavior of AlxCoCrFeNi high entropy alloys under ion irradiation
NASA Astrophysics Data System (ADS)
Yang, Tengfei; Xia, Songqin; Liu, Shi; Wang, Chenxu; Liu, Shaoshuai; Fang, Yuan; Zhang, Yong; Xue, Jianming; Yan, Sha; Wang, Yugang
2016-08-01
Materials performance is central to the satisfactory operation of current and future nuclear energy systems due to the severe irradiation environment in reactors. Searching for structural materials with excellent irradiation tolerance is crucial for developing the next generation of nuclear reactors. Here, we report the irradiation responses of a novel multi-component alloy system, the high entropy alloy (HEA) AlxCoCrFeNi (x = 0.1, 0.75 and 1.5), focusing on precipitation behavior. The single-phase system, Al0.1CoCrFeNi, exhibits great phase stability against ion irradiation: no precipitates are observed even at the highest fluence. In contrast, numerous coherent precipitates are present in both multi-phase HEAs. Based on irradiation-induced/enhanced precipitation theory, the excellent structural stability against precipitation of Al0.1CoCrFeNi is attributed to its high configurational entropy and low atomic diffusivity, which reduce the thermodynamic driving force and kinetically restrain precipitate formation, respectively. For the multi-phase HEAs, phase separation and the formation of ordered phases reduce the configurational entropy of the system, resulting in precipitation behavior similar to that of corresponding conventional binary or ternary alloys. This study demonstrates the structural stability of single-phase HEAs under irradiation and provides important implications for the search for HEAs with higher irradiation tolerance.
Teaching and learning recursive programming: a review of the research literature
NASA Astrophysics Data System (ADS)
McCauley, Renée; Grissom, Scott; Fitzgerald, Sue; Murphy, Laurie
2015-01-01
Hundreds of articles have been published on the topics of teaching and learning recursion, yet fewer than 50 of them have published research results. This article surveys the computing education research literature and presents findings on challenges students encounter in learning recursion, mental models students develop as they learn recursion, and best practices in introducing recursion. Effective strategies for introducing the topic include using different contexts such as recurrence relations, programming examples, fractal images, and a description of how recursive methods are processed using a call stack. Several studies compared the efficacy of introducing iteration before recursion and vice versa. The paper concludes with suggestions for future research into how students learn and understand recursion, including a look at the possible impact of instructor attitude and newer pedagogies.
Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing
ERIC Educational Resources Information Center
Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel
2016-01-01
Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
Three perspectives on complexity: entropy, compression, subsymmetry
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi
2017-12-01
There is no single universally accepted definition of 'complexity'. There are several perspectives on complexity and on what constitutes complex behaviour or complex systems, as opposed to regular, predictable behaviour and simple systems. In this paper, we explore the following perspectives on complexity: effort-to-describe (Shannon entropy H, Lempel-Ziv complexity LZ), effort-to-compress (ETC complexity) and degree-of-order (subsymmetry, or SubSym). While Shannon entropy and LZ are very popular and widely used, ETC is a relatively new complexity measure. We also propose a novel normalized complexity measure, SubSym, based on the existing idea of counting the number of subsymmetries or palindromes within a sequence. We compare the performance of these complexity measures on the following tasks: (A) characterizing the complexity of short binary sequences of lengths 4 to 16, (B) distinguishing periodic and chaotic time series from the 1D logistic map and the 2D Hénon map, (C) analyzing the complexity of stochastic time series generated from 2-state Markov chains, and (D) distinguishing between tonic and irregular spiking patterns generated from the 'adaptive exponential integrate-and-fire' neuron model. Our study reveals that each perspective has its own advantages and uniqueness while also overlapping with the others.
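The effort-to-describe and degree-of-order perspectives can be sketched for short binary strings. The Shannon entropy below is the standard empirical-distribution formula; the palindrome count is a raw, unnormalized stand-in for SubSym, since the paper's exact normalization is not given in this abstract.

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Shannon entropy in bits per symbol of the empirical distribution of s."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

def subsymmetry_count(s):
    """Raw count of palindromic substrings of length >= 2 (unnormalized 'subsymmetries')."""
    n = len(s)
    return sum(1
               for i in range(n)
               for j in range(i + 2, n + 1)
               if s[i:j] == s[i:j][::-1])
```

For example, "0101" has maximal per-symbol entropy (1 bit) yet is perfectly ordered, which is exactly the kind of disagreement between perspectives the paper examines.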
Cuesta, D; Varela, M; Miró, P; Galdós, P; Abásolo, D; Hornero, R; Aboy, M
2007-07-01
Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature recordings acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient's condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.
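The regularity metric in question, approximate entropy, can be sketched as follows. This follows Pincus's standard ApEn(m, r) formulation with the common tolerance choice r = 0.2 × standard deviation; the authors' exact parameter settings are not stated in the abstract.

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus's formulation)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # common tolerance choice
    n = len(x)

    def phi(mm):
        # Overlapping templates of length mm.
        t = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        # Fraction of templates within tolerance of each template
        # (self-matches included, so the fraction is always > 0).
        c = (dist <= r).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A perfectly regular series scores near zero, while an irregular one scores higher, which is the contrast the classifier above exploits.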
A new strategy to design eutectic high-entropy alloys using simple mixture method
Jiang, Hui; Han, Kaiming; Gao, Xiaoxia; ...
2018-01-13
Eutectic high entropy alloys (EHEAs) hold promising industrial application potential, but designing EHEA compositions remains challenging. In the present work, a simple and effective strategy combining mixing enthalpy with constituent binary eutectic compositions is proposed for designing EHEA compositions. The strategy was applied to a series of (CoCrFeNi)Mx (M = Nb, Ta, Zr, Hf) HEAs, leading to the discovery of new EHEAs, namely CoCrFeNiNb0.45, CoCrFeNiTa0.4, CoCrFeNiZr0.55 and CoCrFeNiHf0.4. The microstructure of these new EHEAs comprises FCC and Laves phases in the as-cast state. The experimental results show that this new alloy-design strategy can be used to locate new EHEAs effectively.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
The More the Merrier? Entropy and Statistics of Asexual Reproduction in Freshwater Planarians
NASA Astrophysics Data System (ADS)
Quinodoz, Sofia; Thomas, Michael A.; Dunkel, Jörn; Schötz, Eva-Maria
2011-04-01
The trade-off between traits in life-history strategies has been widely studied for sexual and parthenogenetic organisms, but relatively little is known about the reproduction strategies of asexual animals. Here, we investigate clonal reproduction in the freshwater planarian Schmidtea mediterranea, an important model organism for regeneration and stem cell research. We find that these flatworms adopt a randomized reproduction strategy that comprises both asymmetric binary fission and fragmentation (generation of multiple offspring during a reproduction cycle). Fragmentation in planarians has primarily been regarded as an abnormal behavior in the past; using a large-scale experimental approach, we now show that about one third of the reproduction events in S. mediterranea are fragmentations, implying that fragmentation is part of their normal reproductive behavior. Our analysis further suggests that certain characteristic aspects of the reproduction statistics can be explained in terms of a maximum relative entropy principle.
Wu, Tzi-Yi; Chen, Bor-Kuan; Hao, Lin; Peng, Yu-Chun; Sun, I-Wen
2011-01-01
A systematic study of the effect of composition on the thermo-physical properties of binary mixtures of 1-methyl-3-pentylimidazolium hexafluorophosphate [MPI][PF6] with poly(ethylene glycol) (PEG) [Mw = 400] is presented. The excess molar volume, refractive index deviation, viscosity deviation, and surface tension deviation were calculated from the experimental density ρ, refractive index n, viscosity η, and surface tension γ over the whole concentration range. The excess molar volumes are negative and become increasingly negative with increasing temperature, whereas the viscosity and surface tension deviations are negative and become less negative with increasing temperature. Surface thermodynamic functions, such as surface entropy and enthalpy, as well as the standard molar entropy, Parachor, and molar enthalpy of vaporization of the pure ionic liquid, were derived from the temperature dependence of the surface tension. PMID:21731460
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablimit, Iminhaji; Maeda, Keiichi; Li, Xiang-Dong
Binary population synthesis (BPS) studies provide a comprehensive way to understand the evolution of binaries and their end products. Close white dwarf (WD) binaries have crucial characteristics for examining the influence of unresolved physical parameters on binary evolution. In this paper, we perform Monte Carlo BPS simulations, investigating the population of WD/main-sequence (WD/MS) binaries and double WD binaries using a publicly available binary star evolution code under 37 different assumptions for key physical processes and binary initial conditions. We considered different combinations of the binding energy parameter (λ_g: considering gravitational energy only; λ_b: considering both gravitational energy and internal energy; and λ_e: considering gravitational energy, internal energy, and entropy of the envelope, with values derived from the MESA code), CE efficiency, critical mass ratio, initial primary mass function, and metallicity. We find that more post-CE WD/MS binaries in tight orbits are formed when the binding energy parameter is set by λ_e than when other prescriptions are adopted. We also determine the effects of the other input parameters on the orbital period and mass distributions of post-CE WD/MS binaries. Since they contain at least one CO WD, double WD systems that evolved from WD/MS binaries may explode as type Ia supernovae (SNe Ia) via merging. In this work, we also investigate the frequency of double-WD mergers and compare it to the SN Ia rate. The calculated Galactic SN Ia rate with λ = λ_e is comparable to the observed rate, ∼8.2 × 10^−5 yr^−1 to ∼4 × 10^−3 yr^−1 depending on the other BPS parameters, if a DD system does not require a mass ratio higher than ∼0.8 to become an SN Ia. On the other hand, a violent merger scenario, which requires the combined mass of the two CO WDs to be ≥ 1.6 M_⊙ and a mass ratio >0.8, results in a much lower SN Ia rate than is observed.
Hyperspace geography: visualizing fitness landscapes beyond 4D.
Wiles, Janet; Tonkes, Bradley
2006-01-01
Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
NASA Astrophysics Data System (ADS)
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and non-recurrent experience is used to avoid these undesirable effects. 1. THE I-4I PROBLEM. The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
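For the covariance-I versus covariance-4I setup described above, the Bayes-optimal rule reduces to a threshold on the squared distance from the origin. The sketch below (not from the paper) derives that threshold from the two Gaussian likelihoods and estimates the resulting Bayes error by Monte Carlo.

```python
import numpy as np

def bayes_rule(x):
    """Bayes-optimal decision for N(0, I) vs N(0, 4I) with equal priors.
    Comparing log-likelihoods reduces to a radius test:
    choose class 0 iff ||x||^2 < (4 d / 3) * ln 4."""
    d = x.shape[-1]
    thresh = (4.0 * d / 3.0) * np.log(4.0)
    return np.where((x ** 2).sum(axis=-1) < thresh, 0, 1)

def estimate_bayes_error(d=2, n=200_000, seed=0):
    """Monte Carlo estimate of the minimum (Bayes) error in dimension d."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(0.0, 1.0, size=(n, d))  # class 0: covariance I
    x1 = rng.normal(0.0, 2.0, size=(n, d))  # class 1: covariance 4I
    return 0.5 * (bayes_rule(x0) == 1).mean() + 0.5 * (bayes_rule(x1) == 0).mean()
```

In d = 2 the exact Bayes error works out to about 0.264, so a trained classifier on this problem cannot do better than roughly 26% error no matter how much data it sees.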
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.
2017-03-01
Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must lie on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and the outer boundaries of the left and right lungs along the pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental to many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
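A minimal sketch of the recursive PCA-subdivision idea, on a toy 2D point set: each region's mean is recorded as a "virtual landmark", then the region is split by its first principal axis and the recursion continues. The actual method's subdivision and landmark rules are more elaborate than this illustration.

```python
import numpy as np

def virtual_landmarks(points, depth=2):
    """Recursively split a point cloud along its first principal axis,
    recording each subdivision's mean as a 'virtual landmark' (illustrative)."""
    landmarks = []

    def recurse(pts, level):
        center = pts.mean(axis=0)
        landmarks.append(center)
        if level == 0 or len(pts) < 2:
            return
        # First principal axis via SVD of the centered points (PCA).
        _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
        proj = (pts - center).dot(vt[0])
        recurse(pts[proj <= 0], level - 1)
        recurse(pts[proj > 0], level - 1)

    recurse(np.asarray(points, dtype=float), depth)
    return np.array(landmarks)
```

Because every landmark is a mean of a PCA-defined subregion, the set inherits invariance to translation and rotation of the input, mirroring the invariance property claimed above.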
Liquid-glass transition in equilibrium
NASA Astrophysics Data System (ADS)
Parisi, G.; Seoane, B.
2014-02-01
We show in numerical simulations that a system of two coupled replicas of a binary mixture of hard spheres undergoes a phase transition in equilibrium at a density slightly smaller than the glass transition density for an unreplicated system. This result is in agreement with the theories that predict that such a transition is a precursor of the standard ideal glass transition. The critical properties are compatible with those of an Ising system. The relations of this approach to the conventional approach based on configurational entropy are briefly discussed.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
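The core idea, compressing a source block to its syndrome under a linear code, can be sketched with the (7,4) Hamming code. This is an illustrative choice, not the paper's construction: 7 source bits compress to a 3-bit syndrome, and a sparse (low-entropy) block is recovered as the minimum-weight pattern with that syndrome.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; its columns are
# the seven distinct nonzero 3-bit vectors.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(block):
    """Map a 7-bit source block (treated as an error pattern) to its 3-bit syndrome."""
    return H.dot(block) % 2

def decompress(syndrome):
    """Recover the minimum-weight 7-bit pattern with the given syndrome.
    Exact whenever the source block has Hamming weight <= 1 (brute-force search)."""
    best = None
    for x in range(2 ** 7):
        block = np.array([(x >> i) & 1 for i in range(7)])
        if np.array_equal(H.dot(block) % 2, syndrome):
            if best is None or block.sum() < best.sum():
                best = block
    return best
```

Lossless recovery holds only for sufficiently sparse blocks, which is why rate and distortion in the paper are tied to the source entropy rather than being universally zero.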
Inverse design of multicomponent assemblies
NASA Astrophysics Data System (ADS)
Piñeros, William D.; Lindquist, Beth A.; Jadrich, Ryan B.; Truskett, Thomas M.
2018-03-01
Inverse design can be a useful strategy for discovering interactions that drive particles to spontaneously self-assemble into a desired structure. Here, we extend an inverse design methodology—relative entropy optimization—to determine isotropic interactions that promote assembly of targeted multicomponent phases, and we apply this extension to design interactions for a variety of binary crystals ranging from compact triangular and square architectures to highly open structures with dodecagonal and octadecagonal motifs. We compare the resulting optimized (self- and cross) interactions for the binary assemblies to those obtained from optimization of analogous single-component systems. This comparison reveals that self-interactions act as a "primer" to position particles at approximately correct coordination shell distances, while cross interactions act as the "binder" that refines and locks the system into the desired configuration. For simpler binary targets, it is possible to successfully design self-assembling systems while restricting one of these interaction types to be a hard-core-like potential. However, optimization of both self- and cross interaction types appears necessary to design for assembly of more complex or open structures.
Solidification and microstructures of binary ice-I/hydrate eutectic aggregates
McCarthy, C.; Cooper, R.F.; Kirby, S.H.; Rieck, K.D.; Stern, L.A.
2007-01-01
The microstructures of two-phase binary aggregates of ice-I + salt hydrate, prepared by eutectic solidification, have been characterized by cryogenic scanning electron microscopy (CSEM). The specific binary systems studied were H2O-Na2SO4, H2O-MgSO4, H2O-NaCl, and H2O-H2SO4; these were selected based on their potential application to the study of tectonics on the Jovian moon Europa. Homogeneous liquid solutions of eutectic composition were undercooled modestly (ΔT ≈ 1-5 °C); similarly cooled crystalline seeds of the same composition were added to circumvent the thermodynamic barrier to nucleation and to control eutectic growth under (approximately) isothermal conditions. CSEM revealed classic eutectic solidification microstructures with the hydrate phase forming continuous lamellae or discontinuous lamellae, or forming the matrix around rods of ice-I, depending on the volume fractions of the phases and their entropy of dissolving and forming a homogeneous aqueous solution. We quantify aspects of the solidification behavior and microstructures for each system and, with these data, articulate anticipated effects of the microstructure on the mechanical responses of the materials.
Phillips, Steven; Wilson, William H.
2012-01-01
Human cognitive capacity includes recursively definable concepts, which are prevalent in domains involving lists, numbers, and languages. Cognitive science currently lacks a satisfactory explanation for the systematic nature of such capacities (i.e., why the capacity for some recursive cognitive abilities, e.g., finding the smallest number in a list, implies the capacity for certain others, e.g., finding the largest number, given knowledge of number order). The category-theoretic constructs of initial F-algebra, catamorphism, and their duals, final coalgebra and anamorphism, provide a formal, systematic treatment of recursion in computer science. Here, we use this formalism to explain the systematicity of recursive cognitive capacities without ad hoc assumptions (i.e., to the same explanatory standard used in our account of systematicity for non-recursive capacities). The presence of an initial algebra/final coalgebra explains systematicity because all recursive cognitive capacities, in the domain of interest, factor through (are composed of) the same component process. Moreover, this factorization is unique, hence no further (ad hoc) assumptions are required to establish the intrinsic connection between members of a group of systematically related capacities. This formulation also provides a new perspective on the relationship between recursive cognitive capacities. In particular, the link between number and language does not depend on recursion as such, but on the underlying functor on which the group of recursive capacities is based. Thus, many species (and infants) can employ recursive processes without having a full-blown capacity for number and language. PMID:22514704
What's special about human language? The contents of the "narrow language faculty" revisited.
Traxler, Matthew J; Boudewyn, Megan; Loudermilk, Jessica
2012-10-01
In this review we re-evaluate the recursion-only hypothesis, advocated by Fitch, Hauser and Chomsky (Hauser, Chomsky & Fitch, 2002; Fitch, Hauser & Chomsky, 2005). According to the recursion-only hypothesis, the property that distinguishes human language from animal communication systems is recursion, which refers to the potentially infinite embedding of one linguistic representation within another of the same type. This hypothesis predicts (1) that non-human primates and other animals lack the ability to learn recursive grammar, and (2) that recursive grammar is the sole cognitive mechanism that is unique to human language. We first review animal studies of recursive grammar, before turning to the claim that recursion is a property of all human languages. Finally, we discuss other views on what abilities may be unique to human language.
NASA Astrophysics Data System (ADS)
Rounaghi, G. H.; Dolatshahi, S.; Tarahomi, S.
2014-12-01
The stoichiometry, stability, and thermodynamic parameters of complex formation between the cerium(III) cation and cryptand 222 (4,7,13,16,21,24-hexaoxa-1,10-diazabicyclo[8.8.8]hexacosane) were studied by conductometric titration in binary solvent mixtures of dimethylformamide (DMF), 1,2-dichloroethane (DCE), ethyl acetate (EtOAc) and methyl acetate (MeOAc) with methanol (MeOH) at 288, 298, 308, and 318 K. A model based on 1:1 stoichiometry was used to analyze the conductivity data, which were fitted by non-linear least-squares analysis to provide the stability constant, K_f, of the cation-ligand inclusion complex. The results revealed that the stability order of the [Ce(cryptand 222)]3+ complex changes with the nature and composition of the solvent system. A non-linear relationship was observed between the stability constant (log K_f) of the [Ce(cryptand 222)]3+ complex and the composition of the binary mixed solvent. Standard thermodynamic values obtained from the temperature dependence of the stability constant show that the studied complexation process is mainly entropy governed and is influenced by the nature and composition of the binary mixed solvent.
AlzhCPI: A knowledge base for predicting chemical-protein interactions towards Alzheimer's disease.
Fang, Jiansong; Wang, Ling; Li, Yecheng; Lian, Wenwen; Pang, Xiaocong; Wang, Hong; Yuan, Dongsheng; Wang, Qi; Liu, Ai-Lin; Du, Guan-Hua
2017-01-01
Alzheimer's disease (AD) is a complicated progressive neurodegeneration disorder. To confront AD, scientists are searching for multi-target-directed ligands (MTDLs) to delay disease progression. The in silico prediction of chemical-protein interactions (CPI) can accelerate target identification and drug discovery. Previously, we developed 100 binary classifiers to predict the CPI for 25 key targets against AD using the multi-target quantitative structure-activity relationship (mt-QSAR) method. In this investigation, we aimed to apply the mt-QSAR method to enlarge the model library to predict CPI towards AD. Another 104 binary classifiers were further constructed to predict the CPI for 26 preclinical AD targets based on the naive Bayesian (NB) and recursive partitioning (RP) algorithms. The internal 5-fold cross-validation and external test set validation were applied to evaluate the performance of the training sets and test set, respectively. The area under the receiver operating characteristic curve (ROC) for the test sets ranged from 0.629 to 1.0, with an average of 0.903. In addition, we developed a web server named AlzhCPI to integrate the comprehensive information of approximately 204 binary classifiers, which has potential applications in network pharmacology and drug repositioning. AlzhCPI is available online at http://rcidm.org/AlzhCPI/index.html. To illustrate the applicability of AlzhCPI, the developed system was employed for the systems pharmacology-based investigation of shichangpu against AD to enhance the understanding of the mechanisms of action of shichangpu from a holistic perspective.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
What's special about human language? The contents of the "narrow language faculty" revisited
Traxler, Matthew J.; Boudewyn, Megan; Loudermilk, Jessica
2012-01-01
In this review we re-evaluate the recursion-only hypothesis, advocated by Fitch, Hauser and Chomsky (Hauser, Chomsky & Fitch, 2002; Fitch, Hauser & Chomsky, 2005). According to the recursion-only hypothesis, the property that distinguishes human language from animal communication systems is recursion, which refers to the potentially infinite embedding of one linguistic representation within another of the same type. This hypothesis predicts (1) that non-human primates and other animals lack the ability to learn recursive grammar, and (2) that recursive grammar is the sole cognitive mechanism that is unique to human language. We first review animal studies of recursive grammar, before turning to the claim that recursion is a property of all human languages. Finally, we discuss other views on what abilities may be unique to human language. PMID:23105948
A Survey on Teaching and Learning Recursive Programming
ERIC Educational Resources Information Center
Rinderknecht, Christian
2014-01-01
We survey the literature about the teaching and learning of recursive programming. After a short history of the advent of recursion in programming languages and its adoption by programmers, we present curricular approaches to recursion, including a review of textbooks and some programming methodology, as well as the functional and imperative…
Okamoto, Norihiko L; Fujimoto, Shu; Kambara, Yuki; Kawamura, Marino; Chen, Zhenghao M T; Matsunoshita, Hirotaka; Tanaka, Katsushi; Inui, Haruyuki; George, Easo P
2016-10-24
High-entropy alloys (HEAs) comprise a novel class of scientifically and technologically interesting materials. Among these, equiatomic CrMnFeCoNi with the face-centered cubic (FCC) structure is noteworthy because its ductility and strength increase with decreasing temperature while maintaining outstanding fracture toughness at cryogenic temperatures. Here we report for the first time by single-crystal micropillar compression that its bulk room-temperature critical resolved shear stress (CRSS) is ~33-43 MPa, ~10 times higher than that of pure nickel. CRSS depends on pillar size with an inverse power-law scaling exponent of -0.63 independent of orientation. Planar ½<110>{111} dislocations dissociate into Shockley partials whose separations range from ~3.5-4.5 nm near the screw orientation to ~5-8 nm near the edge, yielding a stacking fault energy of 30 ± 5 mJ/m2. Dislocations are smoothly curved without any preferred line orientation, indicating no significant anisotropy in the mobilities of edge and screw segments. The shear-modulus-normalized CRSS of the HEA is not exceptionally high compared to those of certain concentrated binary FCC solid solutions. Its rough magnitude calculated using the Fleischer/Labusch models corresponds to that of a hypothetical binary with the elastic constants of our HEA, solute concentrations of 20-50 at.%, and atomic size misfit of ~4%.
Bánréti, Zoltán
2010-11-01
This study investigates how aphasic impairment impinges on syntactic and/or semantic recursivity of human language. A series of tests has been conducted with the participation of five Hungarian speaking aphasic subjects and 10 control subjects. Photographs representing simple situations were presented to subjects and questions were asked about them. The responses are supposed to involve formal structural recursion, but they contain semantic-pragmatic operations instead, with 'theory of mind' type embeddings. Aphasic individuals tend to exploit the parallel between 'theory of mind' embeddings and syntactic-structural embeddings in order to avoid formal structural recursion. Formal structural recursion may be more impaired in Broca's aphasia and semantic recursivity may remain selectively unimpaired in this type of aphasia.
Teaching and Learning Recursive Programming: A Review of the Research Literature
ERIC Educational Resources Information Center
McCauley, Renée; Grissom, Scott; Fitzgerald, Sue; Murphy, Laurie
2015-01-01
Hundreds of articles have been published on the topics of teaching and learning recursion, yet fewer than 50 of them have published research results. This article surveys the computing education research literature and presents findings on challenges students encounter in learning recursion, mental models students develop as they learn recursion,…
How Learning Logic Programming Affects Recursion Comprehension
ERIC Educational Resources Information Center
Haberman, Bruria
2004-01-01
Recursion is a central concept in computer science, yet it is difficult for beginners to comprehend. Israeli high-school students learn recursion in the framework of a special modular program in computer science (Gal-Ezer & Harel, 1999). Some of them are introduced to the concept of recursion in two different paradigms: the procedural…
Recursive Objects--An Object Oriented Presentation of Recursion
ERIC Educational Resources Information Center
Sher, David B.
2004-01-01
Generally, when recursion is introduced to students the concept is illustrated with a toy (Towers of Hanoi) and some abstract mathematical functions (factorial, power, Fibonacci). These illustrate recursion in the same sense that counting to 10 can be used to illustrate a for loop. These are all good illustrations, but do not represent serious…
How children perceive fractals: Hierarchical self-similarity and cognitive development
Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh
2014-01-01
The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders’ impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884
Zen, E.-A.
1973-01-01
Reversed univariant hydrothermal phase-equilibrium reactions, in which a redox reaction occurs and is controlled by oxygen buffers, can be used to extract thermochemical data on minerals. The dominant gaseous species present, even for relatively oxidizing buffers such as the QFM buffer, are H2O and H2; the main problem is to calculate the chemical potentials of these components in a binary mixture. The mixing of these two species in the gas phase was assumed by Eugster and Wones (1962) to be ideal; this assumption allows calculation of the chemical potentials of the two components in a binary gas mixture, using data in the literature. A simple-mixture model of nonideal mixing, such as that proposed by Shaw (1967), can also be combined with the equations of state for oxygen buffers to permit derivation of the chemical potentials of the two components. The two mixing models yield closely comparable results for the more oxidizing buffers such as the QFM buffer. For reducing buffers such as IQF, the nonideal-mixing correction can be significant and the Shaw model is better. The procedure of calculation of mineralogical thermochemical data, in reactions where hydrogen and H2O simultaneously appear, is applied to the experimental data on annite, given by Wones et al. (1971), and on almandine, given by Hsu (1968). For annite the results are: Standard entropy of formation from the elements, Sf0 (298, 1)=-283.35??2.2 gb/gf, S0 (298, 1) =+92.5 gb/gf. Gf0 (298, 1)=-1148.2??6 kcal, and Hf0 (298, 1)=-1232.7??7 kcal. For almandine, the calculation takes into account the mutual solution of FeAl2O4 (Hc) in magnetite and of Fe3O4 (Mt) in hercynite and the temperature dependence of this solid solution, as given by Turnock and Eugster (1962); the calculations assume a regular-solution model for this binary spinel system. The standard entropy of formation of almandine, Sf,A0 (298, 1) is -272.33??3 gb/gf. 
The third law entropy, S0 (298, 1) is +68.3??3 gb/gf, a value much less than the oxide-sum estimate but the deviation is nearly the same as that of grossularite, referring to a comparable set of oxide standard states. The Gibbs free energy Gf,A0 (298, 1) is -1192.36??4 kcal, and the enthalpy Hf,A0 (298, 1) is -1273.56??5 kcal. ?? 1973 Springer-Verlag.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
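C4's combinatorial coder is only described at a high level here, but the underlying idea can be illustrated with classic enumerative coding of a binary string: transmit the count of ones k, then the lexicographic rank of the string among all C(n, k) strings with exactly k ones, which costs about log2 C(n, k) bits, close to the empirical entropy. This is a generic sketch of enumerative coding, not the C4 implementation itself:

```python
from math import comb

def encode(bits):
    """Enumerative code: return (n, k, rank), where rank is the lexicographic
    index of `bits` among all n-bit strings containing exactly k ones."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            # All strings with a 0 here (and ones_left ones in the
            # remaining n-i-1 slots) precede this one lexicographically.
            rank += comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, rank

def decode(n, k, rank):
    """Invert encode(): rebuild the bit string from (n, k, rank)."""
    bits, ones_left = [], k
    for i in range(n):
        c = comb(n - i - 1, ones_left)  # completions if we place a 0 here
        if rank >= c and ones_left > 0:
            bits.append(1)
            rank -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits
```

The rank fits in ceil(log2 C(n, k)) bits, so a low-entropy block (few ones) is described very compactly once k is known.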
Nigl, Thomas P.; Smith, Nathan D.; Lichtenstein, Timothy; Gesualdi, Jarrod; Kumar, Kuldeep; Kim, Hojong
2017-01-01
A novel electrochemical cell based on a CaF2 solid-state electrolyte has been developed to measure the electromotive force (emf) of binary alkaline earth-liquid metal alloys as functions of both composition and temperature in order to acquire thermodynamic data. The cell consists of a chemically stable solid-state CaF2-AF2 electrolyte (where A is the alkaline-earth element such as Ca, Sr, or Ba), with binary A-B alloy (where B is the liquid metal such as Bi or Sb) working electrodes, and a pure A metal reference electrode. Emf data are collected over a temperature range of 723 K to 1,123 K in 25 K increments for multiple alloy compositions per experiment and the results are analyzed to yield activity values, phase transition temperatures, and partial molar entropies/enthalpies for each composition. PMID:29155770
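The emf-versus-temperature series described above yields partial molar entropies through the standard electrochemical relation ΔS = zF(dE/dT) at fixed composition: fit emf against temperature and scale the slope by zF. A generic sketch of that extraction step; the valence z = 2 and the synthetic linear emf series are illustrative assumptions, not values from the paper:

```python
F = 96485.0  # Faraday constant, C/mol

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def partial_molar_entropy(temps_K, emfs_V, z):
    """Partial molar entropy z*F*(dE/dT) at fixed composition, in J/(mol K)."""
    dEdT, _ = linear_fit(temps_K, emfs_V)
    return z * F * dEdT

# Synthetic emf data over the paper's 723-1123 K range in 25 K steps,
# with an assumed slope of 2e-4 V/K.
temps = [723.0 + 25.0 * i for i in range(17)]
emfs = [0.5 + 2e-4 * (T - 723.0) for T in temps]
dS = partial_molar_entropy(temps, emfs, z=2)
```

The same slope-and-intercept fit also gives the partial molar enthalpy via ΔH = zF(T·dE/dT − E) at a chosen temperature.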
Solubility of Naproxen in Polyethylene Glycol 200 + Water Mixtures at Various Temperatures
Panahi-Azar, Vahid; Soltanpour, Shahla; Martinez, Fleming; Jouyban, Abolghasem
2015-01-01
The solubility of naproxen in binary mixtures of polyethylene glycol 200 (PEG 200) + water was reported over the temperature range from 298.0 K to 318.0 K. The combinations of the Jouyban-Acree model + van’t Hoff equation and the Jouyban-Acree model + partial solubility parameters were used to predict the solubility of naproxen in PEG 200 + water mixtures at different temperatures. Combining the Jouyban-Acree model with the van’t Hoff equation makes it possible to predict solubility in PEG 200 + water from only four solubility data points in the mono-solvents. The resulting solubility calculation errors vary from ~17 % up to ~35 %, depending on the number of required input data. Non-linear enthalpy-entropy compensation was found for naproxen in the investigated solvent system, and the Jouyban-Acree model provides reasonably accurate mathematical descriptions of the thermodynamic data of naproxen in the investigated binary solvent systems. PMID:26664370
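The Jouyban-Acree model expresses the mixture solubility as a volume-fraction-weighted mix of the two mono-solvent solubilities plus an interaction polynomial, ln x_m = f1·ln x1 + f2·ln x2 + (f1·f2/T)·Σ Ji·(f1 − f2)^i, and the van’t Hoff equation ln x = A + B/T supplies the mono-solvent terms from two data points each. A hedged sketch of the combined prediction; the A, B, and Ji constants below are made up for illustration, not the fitted naproxen values:

```python
def vant_hoff(A, B, T):
    """van't Hoff mono-solvent solubility: ln x = A + B/T."""
    return A + B / T

def jouyban_acree(f1, T, AB1, AB2, J):
    """Jouyban-Acree prediction of ln x in a binary solvent:
    f1*ln x1 + f2*ln x2 + (f1*f2/T) * sum_i Ji*(f1-f2)**i."""
    f2 = 1.0 - f1
    lnx1 = vant_hoff(*AB1, T)
    lnx2 = vant_hoff(*AB2, T)
    interaction = (f1 * f2 / T) * sum(Ji * (f1 - f2) ** i
                                      for i, Ji in enumerate(J))
    return f1 * lnx1 + f2 * lnx2 + interaction

# Illustrative constants only (not fitted to naproxen data).
AB_peg, AB_water = (2.0, -1500.0), (-1.0, -2500.0)
J = (500.0, -200.0, 100.0)
lnx_mid = jouyban_acree(0.5, 298.0, AB_peg, AB_water, J)
```

At f1 = 1 or f1 = 0 the interaction term vanishes and the prediction collapses to the corresponding van’t Hoff mono-solvent line, which is why only four mono-solvent data points are needed.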
Hydration of nonelectrolytes in binary aqueous solutions
NASA Astrophysics Data System (ADS)
Rudakov, A. M.; Sergievskii, V. V.
2010-10-01
Literature data on the thermodynamic properties of binary aqueous solutions of nonelectrolytes that show negative deviations from Raoult's law due largely to the contribution of the hydration of the solute are briefly surveyed. Attention is focused on simulating the thermodynamic properties of solutions using equations of the cluster model. It is shown that the model is based on the assumption that there exists a distribution of stoichiometric hydrates over hydration numbers. In terms of the theory of ideal associated solutions, the equations for activity coefficients, osmotic coefficients, vapor pressure, and excess thermodynamic functions (volume, Gibbs energy, enthalpy, entropy) are obtained in analytical form. Basic parameters in the equations are the hydration numbers of the nonelectrolyte (the mathematical expectation of the distribution of hydrates) and the dispersions of the distribution. It is concluded that the model equations adequately describe the thermodynamic properties of a wide range of nonelectrolytes partly or completely soluble in water.
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions, than with those detected by the maximum entropy TSM. 
Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. PMID:27455279
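Otsu's method, the baseline against which the improved TSM is compared, picks the threshold that maximizes the between-class variance of the gray-level histogram. A minimal sketch of the classical (unimproved) algorithm on a toy bimodal image; pixels at or below the returned level are treated as background:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]              # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0            # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "sonar image": dark cluster near 50, bright cluster near 200.
img = [50, 52, 48, 55, 51] * 20 + [200, 198, 205, 202, 199] * 20
t = otsu_threshold(img)
```

The improved TSM in the paper modifies this search for sonar imagery; the sketch shows only the classical criterion both methods start from.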
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
2017-04-01
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Li, Baopu; Meng, Max Q-H
2012-05-01
Tumors of the digestive tract are common diseases, and wireless capsule endoscopy (WCE) is a relatively new technology for examining the digestive tract, especially the small intestine. This paper addresses the problem of automatic tumor recognition in WCE images. A candidate color texture feature that integrates the uniform local binary pattern and the wavelet transform is proposed to characterize WCE images. The proposed features are invariant to illumination change and describe the multiresolution characteristics of WCE images. Two feature selection approaches based on the support vector machine, sequential forward floating selection and recursive feature elimination, are further employed to refine the proposed features to improve the detection accuracy. Extensive experiments validate that the proposed computer-aided diagnosis system achieves a promising tumor recognition accuracy of 92.4% in WCE images on our collected data.
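The uniform local binary pattern component of such a texture feature can be sketched as follows: threshold each pixel's 8 neighbors against the center to form an 8-bit code, call a code "uniform" if its circular bit string has at most two 0/1 transitions, and histogram the 58 uniform codes plus one catch-all bin. A generic grayscale LBP sketch, not the paper's exact color-plus-wavelet WCE pipeline:

```python
def lbp_code(img, r, c):
    """8-neighbor LBP code for pixel (r, c), neighbors read clockwise."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def is_uniform(code):
    """Uniform pattern: at most 2 transitions in the circular bit string."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

def uniform_lbp_histogram(img):
    """59-bin histogram: 58 uniform patterns + 1 bin for all non-uniform."""
    uniform_codes = sorted(c for c in range(256) if is_uniform(c))
    index = {c: i for i, c in enumerate(uniform_codes)}
    hist = [0] * (len(uniform_codes) + 1)
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[index.get(lbp_code(img, r, c), len(uniform_codes))] += 1
    return hist
```

The 59-bin histogram (per color channel and per wavelet subband, in the paper's setup) is what feeds the SVM-based feature selection.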
Adaptive feature selection using v-shaped binary particle swarm optimization.
Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong
2017-01-01
Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
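In V-shaped binary PSO, the continuous velocity is squashed by a V-shaped transfer function such as V(v) = |tanh(v)|, interpreted as the probability of flipping the corresponding bit (the classic S-shaped sigmoid instead gives the probability of setting it to 1). A minimal single-particle update sketch; the fitness function is omitted, since the paper's correlation-information-entropy fitness is not specified here:

```python
import math
import random

def v_transfer(v):
    """V-shaped transfer function: flip probability |tanh(v)|."""
    return abs(math.tanh(v))

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One BPSO step: standard continuous velocity update, then
    V-shaped bit flips (high |velocity| means 'change this bit')."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = w * vi + c1 * rng.random() * (pi - xi) + c2 * rng.random() * (gi - xi)
        if rng.random() < v_transfer(vi):
            xi = 1 - xi
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v

rng = random.Random(0)
x, v = update_particle([0, 1, 0, 1], [0.0, 0.0, 0.0, 0.0],
                       [1, 1, 0, 0], [1, 0, 1, 0], rng=rng)
```

Because V(0) = 0, a particle that agrees with both its personal and global bests tends to keep its bits, which is the stability property the V-shape is chosen for.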
Beyond Atomic Sizes and Hume-Rothery Rules: Understanding and Predicting High-Entropy Alloys
Troparevsky, M. Claudia; Morris, James R.; Daene, Markus; ...
2015-09-03
High-entropy alloys constitute a new class of materials that provide an excellent combination of strength, ductility, thermal stability, and oxidation resistance. Although they have attracted extensive attention due to their potential applications, little is known about why these compounds are stable or how to predict which combination of elements will form a single phase. Here, we present a review of the latest research done on these alloys focusing on the theoretical models devised during the last decade. We discuss semiempirical methods based on the Hume-Rothery rules and stability criteria based on enthalpies of mixing and size mismatch. To provide insights into the electronic and magnetic properties of high-entropy alloys, we show the results of first-principles calculations of the electronic structure of the disordered solid-solution phase based on both Korringa-Kohn-Rostoker coherent potential approximation and large supercell models of example face-centered cubic and body-centered cubic systems. Furthermore, we discuss in detail a model based on enthalpy considerations that can predict which elemental combinations are most likely to form a single-phase high-entropy alloy. The enthalpies are evaluated via first-principles high-throughput density functional theory calculations of the energies of formation of binary compounds, and therefore it requires no experimental or empirically derived input. Finally, the model correctly accounts for the specific combinations of metallic elements that are known to form single-phase alloys while rejecting similar combinations that have been tried and shown not to be single phase.
Discrete-Time Quantum Walk with Phase Disorder: Localization and Entanglement Entropy.
Zeng, Meng; Yong, Ee Hou
2017-09-20
A quantum walk (QW) has very different transport properties from its classical counterpart due to interference effects. Here we study the discrete-time quantum walk (DTQW) with on-site static/dynamic phase disorder following either a binary or a uniform distribution in both one and two dimensions. For one dimension, we consider the Hadamard coin; for two dimensions, we consider either a 2-level Hadamard coin (Hadamard walk) or a 4-level Grover coin (Grover walk) for the rotation in coin space. We study the transport properties, e.g., the inverse participation ratio (IPR) and the standard deviation of the density function (σ), as well as the coin-position entanglement entropy (EE), under the two types of phase disorder and the two types of coins. Our numerical simulations show that the dimensionality, the type of coin, and whether the disorder is static or dynamic play a pivotal role and lead to interesting behaviors of the DTQW. The distribution of the phase disorder has very minor effects on the quantum walk.
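One step of a 1D DTQW applies the Hadamard coin to the two-level coin state at each site, multiplies by a site-dependent random phase (static disorder), and then shifts the left/right coin components to neighboring sites. A minimal simulation sketch; the lattice size, step count, disorder strength, and initial coin state below are illustrative choices, not the paper's parameters:

```python
import cmath
import random

H = 1 / 2 ** 0.5  # Hadamard coin amplitude

def dtqw_step(amps, phases):
    """One Hadamard-walk step on a ring with static on-site phase disorder.
    amps[x] = (aL, aR): left- and right-moving amplitudes at site x."""
    n = len(amps)
    new = [(0j, 0j)] * n
    for x, (aL, aR) in enumerate(amps):
        # Hadamard coin: |L> -> (|L>+|R>)/sqrt2, |R> -> (|L>-|R>)/sqrt2
        bL = H * (aL + aR)
        bR = H * (aL - aR)
        phase = cmath.exp(1j * phases[x])
        # Shift: left-movers to x-1, right-movers to x+1 (periodic boundary).
        l = new[(x - 1) % n]
        new[(x - 1) % n] = (l[0] + phase * bL, l[1])
        r = new[(x + 1) % n]
        new[(x + 1) % n] = (r[0], r[1] + phase * bR)
    return new

def run(n=64, steps=20, disorder=1.0, seed=0):
    """Return the position probability distribution after `steps` steps."""
    rng = random.Random(seed)
    phases = [disorder * rng.uniform(-3.14159, 3.14159) for _ in range(n)]
    amps = [(0j, 0j)] * n
    amps[n // 2] = (1 / 2 ** 0.5 + 0j, 1j / 2 ** 0.5)  # symmetric initial coin
    for _ in range(steps):
        amps = dtqw_step(amps, phases)
    return [abs(aL) ** 2 + abs(aR) ** 2 for aL, aR in amps]
```

Quantities like the IPR, Σ p(x)², can then be computed from the returned distribution; with static disorder it stays large, signaling localization.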
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
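The prediction that agents trade expected utility against outcome entropy can be written as an objective U(a) = E[utility | a] + γ·H(outcomes | a). A toy sketch contrasting two options with identical expected utility; the outcome distributions and the weight γ are made-up illustrations, not the paper's model:

```python
import math

def shannon_entropy(p):
    """Entropy (in nats) of a discrete outcome distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def objective(utilities, probs, gamma):
    """Expected utility plus a gamma-weighted entropy bonus over outcomes."""
    eu = sum(u * p for u, p in zip(utilities, probs))
    return eu + gamma * shannon_entropy(probs)

# Two options with identical expected utility (1.0); B keeps both outcomes open.
option_a = ([1.0, 0.0], [1.0, 0.0])   # deterministic outcome
option_b = ([2.0, 0.0], [0.5, 0.5])   # maximum-entropy outcomes
```

With γ = 0 the two options tie, as expected-utility theory predicts; any γ > 0 makes the entropy-preserving option B preferred, which is the qualitative signature the study tests for.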
Mashing up metals with carbothermal shock
NASA Astrophysics Data System (ADS)
Skrabalak, Sara E.
2018-03-01
Different materials and the capabilities they enabled have marked the ages of civilization. For example, the malleable copper alloys of the Bronze Age provided harder and more durable tools. Most exploration of new alloys has focused on random alloys, in which the alloying metal sites have no metal preference. In binary and ternary metal systems, dissimilar elements do not mix readily at high concentrations, which has limited alloying studies to intermetallics (ordered multimetallic phases) and random alloys, in which minor components are added to a principal element. In 2004, crystalline metal alloys consisting of five or more principal elements in equal or nearly equal amounts (1, 2) were reported that were stabilized by their high configurational entropy. Unlike most random alloys, the “high-entropy” alloys (3, 4) reside in the centers of their multidimensional phase diagrams (see the figure, right). On page 1489 of this issue, Yao et al. (5) present an innovative and general route to high-entropy alloys that can mix up to eight elements into single-phase, size-controlled nanoparticles (NPs).
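The stabilization argument can be made quantitative with the ideal configurational entropy of mixing (textbook thermodynamics, not a formula quoted in the article): for an n-component random solid solution with mole fractions x_i,

```latex
\Delta S_{\mathrm{conf}} = -R \sum_{i=1}^{n} x_i \ln x_i ,
\qquad
\Delta S_{\mathrm{conf}}\big|_{x_i = 1/n} = R \ln n .
```

An equimolar five-element alloy thus gains R ln 5 ≈ 1.61 R per mole, more than twice the R ln 2 available to an equimolar binary, which is why such compositions sit at the entropy-stabilized centers of their phase diagrams.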
Experimental study of magnetocaloric effect in the two-level quantum system KTm(MoO4)2
NASA Astrophysics Data System (ADS)
Tarasenko, R.; Tkáč, V.; Orendáčová, A.; Orendáč, M.; Valenta, J.; Sechovský, V.; Feher, A.
2018-05-01
KTm(MoO4)2 belongs to the family of binary alkaline rare-earth molybdates. This compound can be considered an almost ideal quantum two-level system at low temperatures. Magnetocaloric properties of KTm(MoO4)2 single crystals were investigated using specific heat and magnetization measurements in a magnetic field applied along the easy axis. A large conventional magnetocaloric effect (-ΔSM ≈ 10.3 J/(kg K)) was observed in a magnetic field of 5 T over a relatively wide temperature interval. An isothermal magnetic entropy change of about 8 J/(kg K) was already achieved at a magnetic field of 2 T. The temperature dependence of the isothermal entropy change under different magnetic fields is in good agreement with theoretical predictions for a quantum two-level system with Δ ≈ 2.82 cm⁻¹. The investigation of the magnetocaloric properties of KTm(MoO4)2 suggests that the studied system is a good candidate material for magnetic cooling at low temperatures.
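The theoretical prediction referred to is presumably the standard two-level (Schottky) result; per mole of Tm ions, with level splitting Δ, the entropy reads (a textbook expression, not quoted in the abstract):

```latex
Z = 1 + e^{-x}, \qquad
S(T) = R\left[\ln\bigl(1 + e^{-x}\bigr) + \frac{x\, e^{-x}}{1 + e^{-x}}\right],
\qquad x = \frac{\Delta}{k_B T}.
```

S saturates at R ln 2 ≈ 5.76 J mol⁻¹ K⁻¹ at high temperature; the per-kilogram values quoted in the abstract follow on dividing by the molar mass of the compound.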
Thermodynamics of quasideterministic digital computers
NASA Astrophysics Data System (ADS)
Chu, Dominique
2018-02-01
A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1 and, additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls as a power of the system size, whereas the cost only increases linearly with the system size.
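The universality claim (combinations of AND and NOT implement any logical function) is easy to verify with ideal Boolean gates; this sketch is illustrative only and does not model the paper's stochastic tape machines.

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

# De Morgan gives OR from AND and NOT; XOR then follows,
# so any truth table can be assembled from the two primitives.
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```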
Recursive Subsystems in Aphasia and Alzheimer's Disease: Case Studies in Syntax and Theory of Mind.
Bánréti, Zoltán; Hoffmann, Ildikó; Vincze, Veronika
2016-01-01
The relationship between recursive sentence embedding and theory-of-mind (ToM) inference is investigated in three persons with Broca's aphasia, two persons with Wernicke's aphasia, and six persons with mild and moderate Alzheimer's disease (AD). We asked questions of four types about photographs of various real-life situations. Type 4 questions asked participants about intentions, thoughts, or utterances of the characters in the pictures ("What may X be thinking/asking Y to do?"). The expected answers typically involved subordinate clauses introduced by conjunctions or direct quotations of the characters' utterances. Broca's aphasics did not produce answers with recursive sentence embedding. Rather, they projected themselves into the characters' mental states and gave direct answers in the first person singular, with relevant ToM content. We call such replies "situative statements." Where the question concerned the mental state of the character but did not require an answer with sentence embedding ("What does X hate?"), aphasics gave descriptive answers rather than situative statements. Most replies given by persons with AD to Type 4 questions were grammatical instances of recursive sentence embedding. They also gave a few situative statements but the ToM content of these was irrelevant. In more than one third of their well-formed sentence embeddings, too, they conveyed irrelevant ToM contents. Persons with moderate AD were unable to pass secondary false belief tests. The results reveal double dissociation: Broca's aphasics are unable to access recursive sentence embedding but they can make appropriate ToM inferences; moderate AD persons make the wrong ToM inferences but they are able to access recursive sentence embedding. The double dissociation may be relevant for the nature of the relationship between the two recursive capacities. 
Broca's aphasics compensated for the lack of recursive sentence embedding by recursive ToM reasoning represented in very simple syntactic forms: they used one recursive subsystem to stand in for another recursive subsystem. PMID:27064887
Taylor, Cooper A; Miller, Bill R; Shah, Soleil S; Parish, Carol A
2017-02-01
Mutations in the amyloid precursor protein (APP) are responsible for the formation of amyloid-β peptides. These peptides play a role in Alzheimer's and other dementia-related diseases. The cargo binding domain of the kinesin-1 light chain motor protein (KLC1) may be responsible for transporting APP either directly or via interaction with C-jun N-terminal kinase-interacting protein 1 (JIP1). However, to date there has been no direct experimental or computational assessment of such binding at the atomistic level. We used molecular dynamics and free energy estimations to gauge the affinity for the binary complexes of KLC1, APP, and JIP1. We find that all binary complexes (KLC1:APP, KLC1:JIP1, and APP:JIP1) contain conformations with favorable binding free energies. For KLC1:APP the inclusion of approximate entropies reduces the favorability. This is likely due to the flexibility of the 42-residue APP protein. In all cases we analyze atomistic/residue driving forces for favorable interactions. Proteins 2017; 85:221-234. © 2016 Wiley Periodicals, Inc.
Pitcher, Brandon; Alaqla, Ali; Noujeim, Marcel; Wealleans, James A; Kotsakis, Georgios; Chrepa, Vanessa
2017-03-01
Cone-beam computed tomographic (CBCT) analysis allows for 3-dimensional assessment of periradicular lesions and may facilitate preoperative periapical cyst screening. The purpose of this study was to develop and assess the predictive validity of a cyst screening method based on CBCT volumetric analysis alone or combined with designated radiologic criteria. Three independent examiners evaluated 118 presurgical CBCT scans from cases that underwent apicoectomies and had an accompanying gold standard histopathological diagnosis of either a cyst or granuloma. Lesion volume, density, and specific radiologic characteristics were assessed using specialized software. Logistic regression models with histopathological diagnosis as the dependent variable were constructed for cyst prediction, and receiver operating characteristic curves were used to assess the predictive validity of the models. A conditional inference binary decision tree based on a recursive partitioning algorithm was constructed to facilitate preoperative screening. Interobserver agreement was excellent for volume and density, but it varied from poor to good for the radiologic criteria. Volume and root displacement were strong predictors for cyst screening in all analyses. The binary decision tree classifier determined that if the volume of the lesion was >247 mm³, there was 80% probability of a cyst. If volume was <247 mm³ and root displacement was present, cyst probability was 60% (78% accuracy). The good accuracy and high specificity of the decision tree classifier render it a useful preoperative cyst screening tool that can aid in clinical decision making, but not a substitute for definitive histopathological diagnosis after biopsy. Confirmatory studies are required to validate the present findings. Published by Elsevier Inc.
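The reported tree is simple enough to transcribe directly from the abstract as a screening helper; the function name is hypothetical, and since the abstract reports no probability for small lesions without root displacement, that branch is left undecided rather than guessed.

```python
def cyst_probability(volume_mm3, root_displacement):
    """Transcription of the reported binary decision tree.

    Returns the probability that the periradicular lesion is a cyst,
    or None for the branch the abstract does not report.
    """
    if volume_mm3 > 247:          # large lesion: 80% cyst probability
        return 0.80
    if root_displacement:         # small lesion with displacement: 60%
        return 0.60
    return None                   # branch not reported in the abstract
```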
A GDP-driven model for the binary and weighted structure of the International Trade Network
NASA Astrophysics Data System (ADS)
Almog, Assaf; Squartini, Tiziano; Garlaschelli, Diego
2015-01-01
Recent events such as the global financial crisis have renewed interest in the topic of economic networks. One of the main channels of shock propagation among countries is the International Trade Network (ITN). Two important models for the ITN structure, the classical gravity model of trade (more popular among economists) and the fitness model (more popular among network scientists), are both limited to the characterization of only one representation of the ITN. The gravity model satisfactorily predicts the volume of trade between connected countries, but cannot reproduce the missing links (i.e. the topology). On the other hand, the fitness model can successfully replicate the topology of the ITN, but cannot predict the volumes. This paper takes an important step toward the unification of those two frameworks by proposing a new gross domestic product (GDP) driven model which can simultaneously reproduce the binary and the weighted properties of the ITN. Specifically, we adopt a maximum-entropy approach where both the degree and the strength of each node are preserved. We then identify strong nonlinear relationships between the GDP and the parameters of the model. This ultimately results in a weighted generalization of the fitness model of trade, where the GDP plays the role of a ‘macroeconomic fitness’ shaping the binary and the weighted structure of the ITN simultaneously. Our model mathematically explains an important asymmetry in the roles of binary and weighted network properties, namely the fact that binary properties can be inferred without knowledge of the weighted ones, while the opposite is not true.
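Schematically, and in notation of my own choosing rather than the paper's, the maximum-entropy ensemble that preserves each country's degree k_i and strength s_i is exponential in those constraints:

```latex
P(W) = \frac{e^{-H(W)}}{Z}, \qquad
H(W) = \sum_i \bigl[ \alpha_i\, k_i(W) + \beta_i\, s_i(W) \bigr],
```

where the Lagrange multipliers α_i and β_i are fitted per country; the GDP-driven step then expresses these multipliers as functions of GDP, which is what promotes GDP to a ‘macroeconomic fitness’.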
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms, or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations of the given shapes at low coding costs.
Shaikh, Vasim R; Terdale, Santosh S; Ahamad, Abdul; Gupta, Gaurav R; Dagade, Dilip H; Hundiwale, Dilip G; Patil, Kesharsingh J
2013-12-19
The osmotic coefficient measurements for binary aqueous solutions of 2,2,2-cryptand (4,7,13,16,21,24-hexaoxa-1,10-diazabicyclo[8.8.8]hexacosane) in the concentration range of ~0.009 to ~0.24 mol·kg(-1), and in ternary aqueous solutions containing a fixed concentration of 2,2,2-cryptand of ~0.1 mol·kg(-1) with varying concentrations of KBr (~0.06 to ~0.16 mol·kg(-1)), are reported at 298.15 K. The diamine is hydrolyzed in aqueous solution, and a proper treatment is required to obtain meaningful thermodynamic properties. The measured osmotic coefficient values are corrected for hydrolysis and are used to determine the solvent activity and mean ionic activity coefficients of the solute as a function of concentration. Strong ion-pair formation is observed, and the ion-pair dissociation constant for the species [CrptH](+)[OH(-)] is reported. The excess and mixing thermodynamic properties (Gibbs free energy, enthalpy, and entropy changes) have been obtained using the activity data from this study and the heat data reported in the literature. Further, the data are utilized to compute the partial molal entropies of solvent and solute at finite as well as infinite dilution of 2,2,2-cryptand in water. A concentration-dependent nonlinear enthalpy-entropy compensation effect has been observed for the studied system, and the compensation temperature along with the entropic parameter are reported. Using solute activity coefficient data in ternary solutions, the Gibbs free energies for transfer of the cryptand from water to aqueous KBr, as well as of KBr from water to aqueous cryptand, were obtained and utilized to obtain the salting constant (ks) and thermodynamic equilibrium constant (log K) values for the complex (2,2,2-cryptand:K(+)) at 298.15 K. The value of log K = 5.8 ± 0.1 obtained in this work is found to be in good agreement with that reported by Lehn and Sauvage.
The standard molar entropy for complexation is also estimated for the 2,2,2-cryptand-KBr complex in aqueous medium.
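For context, the standard working relation between the measured osmotic coefficient and the solvent activity (conventional notation, not defined in the abstract) is

```latex
\phi = -\frac{\ln a_1}{\nu\, m\, M_1},
```

with a_1 the water activity, ν the number of solute particles per formula unit, m the molality, and M_1 the molar mass of water in kg·mol⁻¹; Gibbs-Duhem integration of φ(m) then yields the mean ionic activity coefficients.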
Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L
2010-11-01
We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not present differences between recursive models. Nevertheless, these heritabilities were greater than those obtained for SMM (0.05) with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.
A Recursive Method for Calculating Certain Partition Functions.
ERIC Educational Resources Information Center
Woodrum, Luther; And Others
1978-01-01
Describes a simple recursive method for calculating the partition function and average energy of a system consisting of N electrons and L energy levels. Also, presents an efficient APL computer program to utilize the recursion relation. (Author/GA)
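The ERIC record does not spell out the recursion, but a standard one matching the description (N electrons over L levels, at most one electron per level) is Z(n, L) = Z(n, L-1) + e^(-βε_L) Z(n-1, L-1). The sketch below, with hypothetical names and a finite-difference average energy, illustrates it in Python rather than the article's APL.

```python
import math
from itertools import combinations  # only needed for brute-force checks

def fermion_Z(energies, N, beta):
    """Canonical partition function of N electrons over the given
    single-particle levels (single occupancy), via the recursion
    Z(n, L) = Z(n, L-1) + exp(-beta * e_L) * Z(n-1, L-1)."""
    Z = [0.0] * (N + 1)
    Z[0] = 1.0                      # zero electrons: empty configuration
    for e in energies:
        x = math.exp(-beta * e)
        for n in range(N, 0, -1):   # descend so each level is used once
            Z[n] += x * Z[n - 1]
    return Z[N]

def average_energy(energies, N, beta, h=1e-6):
    """U = -d ln Z / d beta, by central finite difference."""
    return -(math.log(fermion_Z(energies, N, beta + h))
             - math.log(fermion_Z(energies, N, beta - h))) / (2 * h)
```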
Blankers, Matthijs; Frijns, Tom; Belackova, Vendula; Rossi, Carla; Svensson, Bengt; Trautmann, Franz; van Laar, Margriet
2014-01-01
Cannabis is Europe's most commonly used illicit drug. Some users do not develop dependence or other problems, whereas others do. Many factors are associated with the occurrence of cannabis-related disorders. This makes it difficult to identify key risk factors and markers to profile at-risk cannabis users using traditional hypothesis-driven approaches. Therefore, the use of a data-mining technique called binary recursive partitioning is demonstrated in this study by creating a classification tree to profile at-risk users. 59 variables on cannabis use and drug market experiences were extracted from an internet-based survey dataset collected in four European countries (Czech Republic, Italy, Netherlands and Sweden), n = 2617. These 59 potential predictors of problematic cannabis use were used to partition individual respondents into subgroups with low and high risk of having a cannabis use disorder, based on their responses on the Cannabis Abuse Screening Test. Both a generic model for the four countries combined and four country-specific models were constructed. Of the 59 variables included in the first analysis step, only three variables were required to construct a generic partitioning model to classify high risk cannabis users with 65-73% accuracy. Based on the generic model for the four countries combined, the highest risk for cannabis use disorder is seen in participants reporting a cannabis use on more than 200 days in the last 12 months. In comparison to the generic model, the country-specific models led to modest, non-significant improvements in classification accuracy, with an exception for Italy (p = 0.01). Using recursive partitioning, it is feasible to construct classification trees based on only a few variables with acceptable performance to classify cannabis users into groups with low or high risk of meeting criteria for cannabis use disorder. The number of cannabis use days in the last 12 months is the most relevant variable. 
The identified variables may be considered for use in future screeners for cannabis use disorders.
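Binary recursive partitioning is compact enough to sketch from first principles. The code below is a generic CART-style splitter on synthetic data, not the study's model; the only connection to the findings is that a single dominant split (such as one near 200 use-days) is exactly what such a tree recovers.

```python
import numpy as np

def gini(y):
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - float((p ** 2).sum())

def best_split(X, y):
    """Exhaustively search (feature, threshold) minimizing weighted Gini."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            score = (left.sum() * gini(y[left])
                     + (~left).sum() * gini(y[~left])) / len(y)
            if best is None or score < best[0]:
                best = (score, j, float(t))
    return best

def grow(X, y, depth=0, max_depth=3, min_leaf=5):
    """Recursively partition; leaves store the fraction of high-risk cases."""
    if depth == max_depth or len(y) < 2 * min_leaf or len(set(y)) == 1:
        return float(y.mean())
    s = best_split(X, y)
    if s is None:
        return float(y.mean())
    _, j, t = s
    m = X[:, j] <= t
    return (j, t,
            grow(X[m], y[m], depth + 1, max_depth, min_leaf),
            grow(X[~m], y[~m], depth + 1, max_depth, min_leaf))

def predict(tree, x):
    while isinstance(tree, tuple):
        j, t, lo, hi = tree
        tree = lo if x[j] <= t else hi
    return tree
```

On data where one variable cleanly separates the classes, the grown tree reduces to a single split, mirroring the study's finding that a few variables suffice for acceptable classification.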
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S⁻¹ = ZZ* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
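The iterative-refinement kernel can be sketched in a few lines; the Jacobi-scaled starting guess and the first-order refinement polynomial below are one simple choice consistent with the cited scheme, and the recursive decomposition of S into subsystems is omitted.

```python
import numpy as np

def inverse_factor(S, tol=1e-10, max_iter=100):
    """Refine Z until Z* S Z = I, i.e. S^(-1) = Z Z*.

    Sketch of the refinement step only; starting guess and polynomial
    order are illustrative choices, not the authors' exact parameters.
    """
    n = len(S)
    Z = np.diag(1.0 / np.sqrt(np.diag(S).real))   # Jacobi-scaled guess
    I = np.eye(n)
    for _ in range(max_iter):
        delta = I - Z.conj().T @ S @ Z            # departure from identity
        if np.linalg.norm(delta) < tol:
            break
        Z = Z @ (I + 0.5 * delta)                 # first-order refinement
    return Z
```

Each sweep roughly squares the residual delta = I - Z*SZ, so convergence is quadratic provided the initial residual has norm below one, which the diagonal scaling ensures for well-conditioned overlap matrices.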
Scoring and staging systems using cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
Recursion to food plants by free-ranging Bornean elephant
English, Megan; Gillespie, Graeme; Goossens, Benoit; Ismail, Sulaiman; Ancrenaz, Marc; Linklater, Wayne
2015-01-01
Plant recovery rates after herbivory are thought to be a key factor driving recursion by herbivores to sites and plants to optimise resource-use but have not been investigated as an explanation for recursion in large herbivores. We investigated the relationship between plant recovery and recursion by elephants (Elephas maximus borneensis) in the Lower Kinabatangan Wildlife Sanctuary, Sabah. We identified 182 recently eaten food plants, from 30 species, along 14 × 50 m transects and measured their recovery growth each month over nine months or until they were re-browsed by elephants. The monthly growth in leaf and branch or shoot length for each plant was used to calculate the time required (months) for each species to recover to its pre-eaten length. Elephants returned to all but two transects (which contained 10 eaten plants); a further 26 plants died, leaving 146 plants that could be re-eaten. Recursion occurred to 58% of all plants and 12 of the 30 species. Seventy-seven percent of the re-eaten plants were grasses. Recovery times varied from two to twenty months depending on the species. Recursion to grasses coincided with plant recovery, whereas recursion to most browsed plants occurred four to twelve months before they had recovered to their previous length. The small sample size of many browsed plants that received recursion and the uneven plant species distribution across transects limit our ability to generalise for most browsed species, but a prominent pattern in plant-scale recursion did emerge. Plant recovery time was a good predictor of time to recursion but varied as a function of growth form (grass, ginger, palm, liana and woody) and differences between sites. Time to plant recursion coincided with plant recovery time for the elephant’s preferred food, grasses, and perhaps also gingers, but not the other browsed species. 
Elephants are bulk feeders so it is likely that they time their returns to bulk feed on these grass species when quantities have recovered sufficiently to meet their intake requirements. The implications for habitat and elephant management are discussed. PMID:26290779
Distinctive signatures of recursion.
Martins, Maurício Dias
2012-07-19
Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed.
The Recursive Paradigm: Suppose We Already Knew.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1995-01-01
Explains the recursive model in discrete mathematics through five examples and problems. Discusses the relationship between the recursive model, mathematical induction, and inductive reasoning and the relevance of these concepts in the school curriculum. Provides ideas for approaching this material with students. (Author/DDD)
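The record does not list Maurer's five examples, but a classic instance of the "suppose we already knew" paradigm is counting Tower of Hanoi moves: assume the (n-1)-disk count is known, then write down the recurrence.

```python
def hanoi_moves(n):
    """Minimum moves for the n-disk Tower of Hanoi.

    'Suppose we already knew' the (n-1)-disk answer: move n-1 disks
    aside, move the largest disk, move the n-1 disks back on top.
    """
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1
```

Induction then confirms the closed form 2^n - 1, illustrating the link between the recursive model and mathematical induction that the article emphasizes.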
Local Renyi entropic profiles of DNA sequences.
Vinga, Susana; Almeida, Jonas S
2007-10-16
In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density estimation (pdf) using the Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. PMID:17939871
An effective method on pornographic images realtime recognition
NASA Astrophysics Data System (ADS)
Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui
2013-03-01
In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library, and the features are used to train a decision tree classifier for distinguishing unknown images. In experiments on more than twenty thousand images, the precision rate reaches 76.21% when testing on 13,025 pornographic images, and the elapsed time is less than 0.2 s, which shows the method's practical applicability. Within this pipeline, a new skin detection model, called the irregular polygon region skin detection model and based on the YCbCr color space, is proposed; it lowers the false detection rate of skin detection. A new method, sequential region labeling of binary connected areas, computes features on connected areas and is faster and needs less memory than recursive alternatives.
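The non-recursive labeling step can be illustrated with a classic raster-scan, union-find scheme for binary images (a generic sketch of sequential connected-component labeling, not the paper's exact algorithm; 4-connectivity is assumed):

```python
def label_regions(img):
    """Two-pass, non-recursive connected-component labeling (4-connectivity).

    img is a list of rows of 0/1 values; returns a same-shaped label grid.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}  # union-find forest over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    nxt = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if not up and not left:          # new provisional label
                parent[nxt] = nxt
                labels[y][x] = nxt
                nxt += 1
            elif up and left and up != left:  # record equivalence
                labels[y][x] = min(up, left)
                ra, rb = find(up), find(left)
                parent[max(ra, rb)] = min(ra, rb)
            else:
                labels[y][x] = up or left
    # second pass: resolve provisional labels to their root label
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

Unlike a recursive flood fill, the scan visits each pixel a constant number of times and needs no call stack, which matches the memory advantage the abstract claims.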
Theoretical study of the density of states and magnetic properties of LaCoO3
NASA Astrophysics Data System (ADS)
Zhuang, Min; Zhang, Weiyi; Hu, Cheng; Ming, Naiben
1998-05-01
The density of states and magnetic properties of the low-spin, high-spin, and mixing states of LaCoO3 have been studied within the unrestricted Hartree-Fock approximation. The real-space recursion method is adopted for computing the electronic structure of the disordered system. The paramagnetic high-spin state is dealt with using the usual binary-alloy coherent potential approximation (CPA); an extended ternary-alloy CPA is developed to describe the mixing state. In agreement with experiments, our results show that the main features of the quasiparticle spectra in the mixing state are not a sensitive function of the high-spin component, but the spectrum does broaden due to spin scattering. An increase in the high-spin component also results in a pileup of the density of states at the Fermi energy, which indicates an insulator-to-metal phase transition. Some limitations of the present approach are also discussed.
NASA Technical Reports Server (NTRS)
Lee, C.
1975-01-01
Adopting the so-called genealogical construction, the eigenstates of collective operators corresponding to a specified mode for an N-atom system can be expressed in terms of those for an (N-1)-atom system. A matrix element of a collective operator of an arbitrary mode is presented which can be written as the product of an m-dependent factor and an m-independent reduced matrix element (RME). A set of recursion formulas for the RME was obtained. A graphical representation of the RME on the branching diagram for binary irreducible representations of permutation groups was then introduced, giving a simple and systematic way of calculating the RME. Results show explicitly the geometry dependence of superradiance and the relative importance of r-conserving and r-nonconserving processes, and clear up the chief difficulty encountered in the problem of N two-level atoms, spread over large regions, interacting with a multimode radiation field.
Khadilkar, Mihir R; Escobedo, Fernando A
2014-10-17
Sought-after ordered structures of mixtures of hard anisotropic nanoparticles can often be thermodynamically unfavorable due to the components' geometric incompatibility to densely pack into regular lattices. A simple compatibilization rule is identified wherein the particle sizes are chosen such that the order-disorder transition pressures of the pure components match (and the entropies of the ordered phases are similar). Using this rule with representative polyhedra from the truncated-cube family that form pure-component plastic crystals, Monte Carlo simulations show the formation of plastic-solid solutions for all compositions and for a wide range of volume fractions.
NASA Astrophysics Data System (ADS)
Suntsov, Yu. K.; Goryunov, V. A.; Chuikov, A. M.; Meshcheryakov, A. V.
2016-08-01
The boiling points of solutions of five binary systems are measured via ebulliometry in the pressure range of 2.05-103.3 kPa. Equilibrium vapor phase compositions, the values of the excess Gibbs energies, enthalpies, and entropies of solution of these systems are calculated. Patterns in the changes of phase equilibria and thermodynamic properties of solutions are established, depending on the compositions and temperatures of the systems. Liquid-vapor equilibria in the systems are described using the equations of Wilson and the NRTL (Non-Random Two-Liquid Model).
Talapin, Dmitri V
2008-06-01
Two papers in this issue report important developments in the field of inorganic nanomaterials. Chen and O'Brien discuss self-assembly of semiconductor nanocrystals into binary nanoparticle superlattices (BNSLs). They show that simple geometrical principles based on maximizing the packing density can determine BNSL symmetry in the absence of cohesive electrostatic interactions. This finding highlights the role of entropy as the driving force for ordering nanoparticles. The other paper, by Weller and co-workers, addresses an important problem related to device integration of nanoparticle assemblies. They employ the Langmuir-Blodgett technique to prepare long-range ordered monolayers of close-packed nanocrystals and transfer them to different substrates.
Towards rigorous analysis of the Levitov-Mirlin-Evers recursion
NASA Astrophysics Data System (ADS)
Fyodorov, Y. V.; Kupiainen, A.; Webb, C.
2016-12-01
This paper aims to develop a rigorous asymptotic analysis of an approximate renormalization group recursion for inverse participation ratios P_q of critical power-law random band matrices. The recursion goes back to the work by Mirlin and Evers (2000 Phys. Rev. B 62 7920) and earlier works by Levitov (1990 Phys. Rev. Lett. 64 547, 1999 Ann. Phys. 8 697-706) and aims to describe the ensuing multifractality of the eigenvectors of such matrices. We point out both similarities and dissimilarities between the LME recursion and those appearing in the theory of multiplicative cascades and branching random walks, and show that the methods developed in those fields can be adapted to the present case. In particular, the LME recursion is shown to exhibit a phase transition, which we expect is a freezing transition, where the role of temperature is played by the exponent q. However, the LME recursion has features that make its rigorous analysis considerably harder, and we point out several open problems for further study.
The language faculty that wasn't: a usage-based account of natural language recursion
Christiansen, Morten H.; Chater, Nick
2015-01-01
In the generative tradition, the language faculty has been shrinking—perhaps to include only the mechanism of recursion. This paper argues that even this view of the language faculty is too expansive. We first argue that a language faculty is difficult to reconcile with evolutionary considerations. We then focus on recursion as a detailed case study, arguing that our ability to process recursive structure does not rely on recursion as a property of the grammar, but instead emerges gradually by piggybacking on domain-general sequence learning abilities. Evidence from genetics, comparative work on non-human primates, and cognitive neuroscience suggests that humans have evolved complex sequence learning skills, which were subsequently pressed into service to accommodate language. Constraints on sequence learning therefore have played an important role in shaping the cultural evolution of linguistic structure, including our limited abilities for processing recursive structure. Finally, we re-evaluate some of the key considerations that have often been taken to require the postulation of a language faculty. PMID:26379567
Toward a Classical Thermodynamic Model for Retro-cognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, Edwin C.
2011-11-29
Retro-cognition--a human response before a randomly determined future stimulus--has always been part of our experience. Experiments over the last 80 years show a small but statistically significant effect. If this turns out to be true, then it suggests a form of macroscopic retro-causation. The 2nd Law of Thermodynamics provides an explanation for the apparent single direction of time at the macroscopic level although time is reversible at the microscopic level. In a preliminary study, I examined seven anomalous cognition (a.k.a. ESP) studies in which the entropic gradients and the entropy of their associated target systems were calculated, and the quality of the response was estimated by a rating system called the figure of merit. The combined Spearman's correlation coefficient for these variables for the seven studies was 0.211 (p = 6.4×10⁻⁴) with a 95% confidence interval for the correlation of [0.084, 0.332]; whereas the same data for a correlation with the entropy itself gave 0.028 (p = 0.36; 95% confidence interval of [-0.120, 0.175]). This suggests that anomalous cognition is mediated via some kind of a sensory system, in that all the normal sensory systems are more sensitive to changes than they are to inputs that are not changing. A standard relationship for the change of entropy of a binary sequence appears to provide an upper limit to anomalous cognition functioning for free response and for forced-choice Zener card guessing. This entropic relation and an apparent limit set by the entropy may provide a clue for understanding macroscopic retro-causation.
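The entropy of a binary sequence referred to here is conventionally built from the binary (Shannon) entropy function; a minimal sketch of that standard quantity (illustrative background only, not the paper's figure-of-merit analysis):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits.

    H is 0 at p = 0 or 1 (no uncertainty) and maximal (1 bit) at p = 0.5.
    """
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```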
Qian, Jin; Shen, Mengmeng; Wang, Peifang; Wang, Chao; Li, Kun; Liu, Jingjing; Lu, Bianhe; Tian, Xin
2017-09-01
Powdered activated carbon (PAC), as an adsorbent, was applied to remove perfluorooctane sulfonate (PFOS) from aqueous solution. Laboratory batch experiments were performed to investigate the influences of phosphate (P) competition, temperature, and pH on PFOS adsorption onto PAC. The results showed that higher temperature favored PFOS adsorption in single and binary systems. The kinetic data fitted very well to the pseudo-second-order kinetic model. Thermodynamically, the endothermic enthalpies of PFOS adsorption in the single and binary systems were 125.07 and 21.25 kJ mol⁻¹, respectively, and the entropies were 0.479 and 0.092 kJ mol⁻¹ K⁻¹, respectively. The Gibbs free energy changes were negative, indicating that the adsorption processes were spontaneous. The adsorption isotherms of PFOS agreed well with the Langmuir model. In the single system, PFOS adsorption decreased with increased pH value. The difference in the amount of PFOS adsorbed between the single and binary systems increased at higher pH. Fourier transform infrared (FTIR) spectroscopy demonstrated that P competition increased the hydrophilicity of the PAC and the electrostatic repulsion between PFOS and PAC, so the amount of PFOS adsorbed decreased. It also demonstrated that, at higher temperature, increased PFOS adsorption was mainly due to the higher diffusion rate of PFOS molecules and the greater number of active sites opened on the PAC surface. Copyright © 2017 Elsevier Ltd. All rights reserved.
Valle, Annalisa; Massaro, Davide; Castelli, Ilaria; Marchetti, Antonella
2015-01-01
This study explores the development of theory of mind, operationalized as recursive thinking ability, from adolescence to early adulthood (N = 110; young adolescents = 47; adolescents = 43; young adults = 20). The construct of theory of mind has been operationalized in two different ways: as the ability to recognize the correct mental state of a character, and as the ability to attribute the correct mental state in order to predict the character’s behaviour. The Imposing Memory Task, with five recursive thinking levels, and a third-order false-belief task with three recursive thinking levels (devised for this study) have been used. The relationship among working memory, executive functions, and linguistic skills are also analysed. Results show that subjects exhibit less understanding of elevated recursive thinking levels (third, fourth, and fifth) compared to the first and second levels. Working memory is correlated with total recursive thinking, whereas performance on the linguistic comprehension task is related to third level recursive thinking in both theory of mind tasks. An effect of age on third-order false-belief task performance was also found. A key finding of the present study is that the third-order false-belief task shows significant age differences in the application of recursive thinking that involves the prediction of others’ behaviour. In contrast, such an age effect is not observed in the Imposing Memory Task. These results may support the extension of the investigation of the third order false belief after childhood. PMID:27247645
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, Kenta; Department of Chemistry, Biology, and Biotechnology, University of Perugia, 06123 Perugia; Gotoda, Hiroshi
2016-05-15
The convective motions within a solution of a photochromic spiro-oxazine being irradiated by UV only on the bottom part of its volume give rise to aperiodic spectrophotometric dynamics. In this paper, we study three nonlinear properties of the aperiodic time series: permutation entropy; short-term predictability and long-term unpredictability; and the degree distribution of the visibility graph networks. After ascertaining the extracted chaotic features, we show how the aperiodic time series can be exploited to implement all the fundamental two-input binary logic functions (AND, OR, NAND, NOR, XOR, and XNOR) and some basic arithmetic operations (half-adder, full-adder, half-subtractor). This is possible due to the wide range of states a nonlinear system accesses in the course of its evolution. Therefore, the solution of the convective photochemical oscillator results in hardware for chaos computing, an alternative to conventional complementary metal-oxide semiconductor-based integrated circuits.
NASA Astrophysics Data System (ADS)
Habibi, N.; Rounaghi, G. H.; Mohajeri, M.
2012-12-01
The complexation reaction of the macrocyclic ligand 4'-nitrobenzo-15C5 with the Y3+ cation was studied in acetonitrile-methanol (AN-MeOH), acetonitrile-ethanol (AN-EtOH), acetonitrile-dimethylformamide (AN-DMF), and ethylacetate-methanol (EtOAc-MeOH) binary mixtures at different temperatures using the conductometric method. The conductivity data show that in all solvent systems, the stoichiometry of the complex formed between 4'-nitrobenzo-15C5 and the Y3+ cation is 1:1 (ML). The stability order of the (4'-nitrobenzo-15C5)·Y3+ complex in pure non-aqueous solvents at 25°C was found to be: EtOAc > EtOH > AN ≈ DMF > MeOH, and in the case of most compositions of the binary mixed solvents at 25°C it was: AN-MeOH ≈ AN-EtOH > AN-DMF > EtOAc-MeOH. The results indicate, however, that the sequence of the stability of the complex in the binary mixed solutions changes with temperature. A non-linear behavior was observed for the changes of log Kf of the (4'-nitrobenzo-15C5)·Y3+ complex versus the composition of the binary mixed solvents, which was explained in terms of solvent-solvent interactions and also the hetero-selective solvation of the species involved in the complexation reaction. The values of the thermodynamic parameters (ΔH°c and ΔS°c) for formation of the complex were obtained from the temperature dependence of the stability constant using van't Hoff plots. The results show that in most cases the complex is both enthalpy and entropy stabilized, and that the values and the sign of the thermodynamic parameters are influenced by the nature and composition of the mixed solvents.
Recursive sequences in first-year calculus
NASA Astrophysics Data System (ADS)
Krainer, Thomas
2016-02-01
This article provides ready-to-use supplementary material on recursive sequences for a second-semester calculus class. It equips first-year calculus students with a basic methodical procedure based on which they can conduct a rigorous convergence or divergence analysis of many simple recursive sequences on their own without the need to invoke inductive arguments as is typically required in calculus textbooks. The sequences that are accessible to this kind of analysis are predominantly (eventually) monotonic, but also certain recursive sequences that alternate around their limit point as they converge can be considered.
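A sequence of the kind such an analysis targets is x_{n+1} = √(2 + x_n) with x_1 = 0, which is monotone increasing and bounded above by its fixed point L = 2 (the positive solution of L = √(2 + L)). The behavior is easy to check numerically (an illustrative example, not material from the article):

```python
import math

def iterate(f, x0, n):
    """Return [x0, f(x0), f(f(x0)), ...] with n applications of f."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# Terms of x_{n+1} = sqrt(2 + x_n) starting from x_1 = 0
xs = iterate(lambda x: math.sqrt(2 + x), 0.0, 60)
```

The computed terms are non-decreasing and approach 2, consistent with the monotone convergence theorem that the rigorous analysis invokes.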
On Fusing Recursive Traversals of K-d Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, the Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
NASA Astrophysics Data System (ADS)
Bringuier, E.
2009-11-01
The paper analyses particle diffusion from a thermodynamic standpoint. The main goal of the paper is to highlight the conceptual connection between particle diffusion, which belongs to non-equilibrium statistical physics, and mechanics, which deals with particle motion, at the level of third-year university courses. We start out from the fact that, near equilibrium, particle transport should occur down the gradient of the chemical potential. This yields Fick's law with two additional advantages. First, splitting the chemical potential into 'mechanical' and 'chemical' contributions shows how transport and mechanics are linked through the diffusivity-mobility relationship. Second, splitting the chemical potential into entropic and energetic contributions discloses the respective roles of entropy maximization and energy minimization in driving diffusion. The paper addresses first unary diffusion, where there is only one mobile species in an immobile medium, and next turns to binary diffusion, where two species are mobile with respect to each other in a fluid medium. The interrelationship between unary and binary diffusivities is brought out and it is shown how binary diffusion reduces to unary diffusion in the limit of high dilution of one species amidst the other one. Self- and mutual diffusion are considered and contrasted within the thermodynamic framework; self-diffusion is a time-dependent manifestation of the Gibbs paradox of mixing.
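The paper's central link between Fick's law and the chemical potential can be summarized by the standard relations (notation: mobility B, concentration c, flux J; the split of μ shown is the ideal, entropy-dominated case):

```latex
J = -cB\,\nabla\mu , \qquad \mu = \mu_0(T) + k_B T \ln c
\;\;\Longrightarrow\;\;
J = -cB\,k_B T\,\frac{\nabla c}{c} = -D\,\nabla c , \qquad D = B k_B T .
```

The final identity D = B k_B T is the diffusivity-mobility (Einstein) relation through which, as the abstract puts it, transport and mechanics are linked.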
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Huihui; Qu, ChenChen; Liu, Jing
Bacteria and phyllosilicates commonly coexist in the natural environment, producing various bacteria–clay complexes that are capable of immobilizing heavy metals, such as cadmium, via adsorption. However, the molecular binding mechanisms of heavy metals on these complex aggregates still remain poorly understood. This study investigated Cd adsorption on Gram-positive B. subtilis, Gram-negative P. putida, and their binary mixtures with montmorillonite (Mont) using Cd K-edge x-ray absorption spectroscopy (XAS) and isothermal titration calorimetry (ITC). We observed a lower adsorptive capacity for P. putida than for B. subtilis, whereas the P. putida–Mont and B. subtilis–Mont mixtures showed nearly identical Cd adsorption behaviors. EXAFS fits and ITC measurements demonstrated more phosphoryl binding of Cd in P. putida. The decreased coordination of C atoms around Cd and the reduced adsorption enthalpies and entropies for the binary mixtures, compared to those for the individual bacteria, suggested that the bidentate Cd-carboxyl complexes in the pure bacteria systems were probably transformed into monodentate complexes that acted as an ionic bridging structure between bacteria and montmorillonite. This study clarified the binding mechanism of Cd at bacteria–phyllosilicate interfaces from a molecular and thermodynamic view, which has environmental significance for predicting the chemical behavior of trace elements in complex mineral–organic systems.
Solubility behavior of lamivudine crystal forms in recrystallization solvents.
Jozwiakowski, M J; Nguyen, N A; Sisco, J M; Spancake, C W
1996-02-01
Lamivudine can be obtained as acicular crystals (form I, 0.2 hydrate) from water or methanol and as bipyramidal crystals (form II, nonsolvated) from many nonaqueous solvents. Form II is thermodynamically favored in the solid state (higher melting point and greater density than form I) at ambient relative humidities. Solubility measurements on both forms versus solvent and temperature were used to determine whether entropy or enthalpy was the driving force for solubility. Solution calorimetry data indicated that form I is favored (less soluble) in all solvents studied on the basis of enthalpy alone. In higher alcohols and other organic solvents, form I has a larger entropy of solution than form II, which compensates for the enthalpic factors and results in physical stability for form II in these systems. The metastable crystal form solubility at 25 °C was estimated to be 1.2-2.3 times as high as the equilibrium solubility of the stable form, depending on the temperature, solvent, and crystal form. Binary solvent studies showed that >18-20% water must be present in ethanol to convert the excess solid to form I at equilibrium.
Recursive Feature Extraction in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-14
ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
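The recursive construction can be sketched in a few lines: start from a local count (here just degree), then repeatedly append aggregates of neighbors' existing features (an illustrative reimplementation of the idea, not the released ReFeX code; the sum/mean aggregates and the round count are assumptions):

```python
def refex_features(adj, rounds=2):
    """Recursive feature extraction sketch.

    adj maps each node to its set of neighbors. The base feature is degree;
    each round appends, per existing feature, the sum and mean of that
    feature over the node's neighbors.
    """
    feats = {v: [len(adj[v])] for v in adj}
    for _ in range(rounds):
        new = {}
        for v in adj:
            agg = []
            for i in range(len(feats[v])):
                vals = [feats[u][i] for u in adj[v]]
                agg.append(sum(vals))                             # sum aggregate
                agg.append(sum(vals) / len(vals) if vals else 0)  # mean aggregate
            new[v] = feats[v] + agg
        feats = new
    return feats
```

Each round triples the feature count (original plus sum and mean per feature), so deeper recursion summarizes progressively larger neighborhoods.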
Recursion, Language, and Starlings
ERIC Educational Resources Information Center
Corballis, Michael C.
2007-01-01
It has been claimed that recursion is one of the properties that distinguishes human language from any other form of animal communication. Contrary to this claim, a recent study purports to demonstrate center-embedded recursion in starlings. I show that the performance of the birds in this study can be explained by a counting strategy, without any…
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
Inner and Outer Recursive Neural Networks for Chemoinformatics Applications.
Urban, Gregor; Subrahmanya, Niranjan; Baldi, Pierre
2018-02-26
Deep learning methods applied to problems in chemoinformatics often require the use of recursive neural networks to handle data with graphical structure and variable size. We present a useful classification of recursive neural network approaches into two classes, the inner and the outer approach. The inner approach uses recursion inside the underlying graph, to essentially "crawl" the edges of the graph, while the outer approach uses recursion outside the underlying graph, to aggregate information over progressively longer distances in an orthogonal direction. We illustrate the inner and outer approaches on several examples. More importantly, we provide open-source implementations (available at www.github.com/Chemoinformatics/InnerOuterRNN and cdb.ics.uci.edu) for both approaches in Tensorflow, which can be used in combination with training data to produce efficient models for predicting the physical, chemical, and biological properties of small molecules.
NASA Astrophysics Data System (ADS)
Wen, Hongwei; Liu, Yue; Wang, Jieqiong; Zhang, Jishui; Peng, Yun; He, Huiguang
2016-03-01
Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder characterized by the presence of multiple motor and vocal tics. Tic generation has been linked to disturbed networks of brain areas involved in the planning, control, and execution of action. The aim of our work is to select the topological characteristics of the structural network that are most efficient for estimating classification models to identify early TS children. Here we employed diffusion tensor imaging (DTI) and deterministic tractography to construct the structural networks of 44 TS children and 48 age- and gender-matched healthy children. We calculated four different connection matrices (fiber number, mean FA, averaged fiber length weighted, and binary matrices) and then applied graph theoretical methods to extract the regional nodal characteristics of the structural network. For each weighted or binary network, nodal degree, nodal efficiency, and nodal betweenness were selected as features. The Support Vector Machine Recursive Feature Elimination (SVM-RFE) algorithm was used to estimate the best feature subset for classification. An accuracy of 88.26%, evaluated by nested cross validation, was achieved by combining the best feature subsets of each network characteristic. The identified discriminative brain nodes were mostly located in the basal ganglia and frontal cortico-cortical networks of TS children and were associated with tic severity. Our study holds promise for the early identification and prognosis prediction of TS children.
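SVM-RFE follows a generic backward-elimination loop: score the current feature set, drop the lowest-ranked feature, and repeat until the desired subset size remains. A minimal sketch of that loop (illustrative; `weight_fn` stands in for refitting an SVM and reading off weight magnitudes, and is an assumption, not the study's code):

```python
def rfe(features, weight_fn, n_keep):
    """Recursive feature elimination: repeatedly drop the lowest-weight feature.

    weight_fn(feats) returns a dict mapping each feature in feats to its
    importance (e.g. |w_i| from a linear SVM refit on those features).
    """
    feats = list(features)
    while len(feats) > n_keep:
        weights = weight_fn(feats)  # rescore after every elimination
        feats.remove(min(feats, key=lambda f: weights[f]))
    return feats
```

Rescoring inside the loop is what makes the procedure recursive: feature importances are re-estimated after each elimination rather than computed once up front.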
Recursive Deadbeat Controller Design
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1997-01-01
This paper presents a recursive algorithm for deadbeat predictive controller design. The method combines the concepts of system identification and deadbeat controller design. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The formulation thus derived simultaneously satisfies system identification and deadbeat controller design requirements. As soon as the coefficient matrices are identified satisfying the output prediction equation, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively, just as any typical recursive system identification technique.
A basic recursion concept inventory
NASA Astrophysics Data System (ADS)
Hamouda, Sally; Edwards, Stephen H.; Elmongui, Hicham G.; Ernst, Jeremy V.; Shaffer, Clifford A.
2017-04-01
Recursion is both an important and a difficult topic for introductory Computer Science students. Students often develop misconceptions about the topic that need to be diagnosed and corrected. In this paper, we report on our initial attempts to develop a concept inventory that measures student misconceptions on basic recursion topics. We present a collection of misconceptions and difficulties encountered by students when learning introductory recursion as presented in a typical CS2 course. Based on this collection, a draft concept inventory in the form of a series of questions was developed and evaluated, with the question rubric tagged to the list of misconceptions and difficulties.
The Paradigm Recursion: Is It More Accessible When Introduced in Middle School?
ERIC Educational Resources Information Center
Gunion, Katherine; Milford, Todd; Stege, Ulrike
2009-01-01
Recursion is a programming paradigm as well as a problem solving strategy thought to be very challenging to grasp for university students. This article outlines a pilot study, which expands the age range of students exposed to the concept of recursion in computer science through instruction in a series of interesting and engaging activities. In…
ERIC Educational Resources Information Center
Lacave, Carmen; Molina, Ana I.; Redondo, Miguel A.
2018-01-01
Contribution: Findings are provided from an initial survey to evaluate the magnitude of the recursion problem from the student point of view. Background: A major difficulty that programming students must overcome--the learning of recursion--has been addressed by many authors, using various approaches, but none have considered how students perceive…
Using Spreadsheets to Help Students Think Recursively
ERIC Educational Resources Information Center
Webber, Robert P.
2012-01-01
Spreadsheets lend themselves naturally to recursive computations, since a formula can be defined as a function of one or more preceding cells. A hypothesized closed form for the "n"th term of a recursive sequence can be tested easily by using a spreadsheet to compute a large number of the terms. Similarly, a conjecture about the limit of a series…
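The spreadsheet idea can be sketched in Python (with an illustrative recurrence, not one taken from the article): generate many terms of a recurrence the way a column of cells would, each depending on the cell above it, then check a conjectured closed form against them.

```python
# Recurrence a(1) = 1, a(n) = 2*a(n-1) + 1; conjectured closed form a(n) = 2**n - 1.
def recursive_terms(n_terms: int) -> list[int]:
    terms = [1]
    for _ in range(n_terms - 1):
        terms.append(2 * terms[-1] + 1)   # each "cell" depends on the one above it
    return terms

terms = recursive_terms(20)
# Test the conjectured closed form against a large number of terms.
assert all(t == 2 ** (i + 1) - 1 for i, t in enumerate(terms))
```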
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
Descriptors: *ARTIFICIAL INTELLIGENCE, *RECURSIVE FUNCTIONS, *MATHEMATICAL LOGIC, METAMATHEMATICS, AUTOMATA, NUMBER THEORY, INFORMATION THEORY, COMBINATORIAL ANALYSIS
Watumull, Jeffrey; Hauser, Marc D; Roberts, Ian G; Hornstein, Norbert
2014-01-08
It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language-the faculty of language in the narrow sense (FLN)-is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of such output, whether infinite or finite, embedded or unembedded-existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled-and potentially unbounded-expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive.
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even today, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
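The one-dimensional recursive-estimation idea can be sketched minimally (an illustration only, not the IRAF software described above): each new sample updates the running estimate through a gain, so past data never need to be revisited. With gain 1/k this reduces to the running mean; an adaptive processor would instead tune the gain to local statistics.

```python
def recursive_mean(samples):
    """Recursive estimate x_k = x_{k-1} + K_k * (z_k - x_{k-1}) with gain K_k = 1/k."""
    estimate = 0.0
    for k, z in enumerate(samples, start=1):
        gain = 1.0 / k                       # gain shrinks as confidence grows
        estimate += gain * (z - estimate)    # update with the innovation z - estimate
    return estimate
```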
Serial turbo trellis coded modulation using a serially concatenated coder
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)
2010-01-01
Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
NASA Astrophysics Data System (ADS)
Papadimitriou, Constantinos; Donner, Reik V.; Stolbova, Veronika; Balasis, Georgios; Kurths, Jürgen
2015-04-01
The Indian Summer Monsoon is one of the most anticipated and important weather events, with vast environmental, economic, and social effects. The predictability of Indian Summer Monsoon strength is a crucial question for the life and prosperity of the Indian population. In this study, we attempt to uncover the relationship between the spatial complexity of Indian Summer Monsoon rainfall patterns and monsoon strength, in an effort to qualitatively determine how the spatial organization of rainfall patterns differs between strong and weak instances of the Indian Summer Monsoon. Here, we use observational satellite data from 1998 to 2012 from the Tropical Rainfall Measuring Mission (TRMM 3B42V7) and reanalysis gridded daily rainfall data for a period of 57 years (1951-2007) (Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources, APHRODITE). In order to capture different aspects of the system's dynamics, we first convert the rainfall time series to binary symbolic sequences, exploring various thresholding criteria. Second, we apply the Shannon entropy formulation (in a block-entropy sense) using different normalizations of the resulting entropy values. Finally, we examine the effect of various large-scale climate modes, such as the El Niño-Southern Oscillation, the North Atlantic Oscillation, and the Indian Ocean Dipole, on the emerging complexity patterns, and discuss the possibility of utilizing such pattern maps in forecasting the spatial variability and strength of the Indian Summer Monsoon.
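The first two steps, thresholding a series into a binary symbolic sequence and computing a Shannon block entropy, can be sketched as follows (a generic illustration; the threshold, block length, and normalization choices in the study may differ):

```python
import math
from collections import Counter

def block_entropy(series, threshold, block_len=3):
    """Shannon block entropy (in bits) of a thresholded binary sequence."""
    # Step 1: symbolize the series as a binary string via the threshold.
    bits = ''.join('1' if x > threshold else '0' for x in series)
    # Step 2: count overlapping blocks of length block_len.
    blocks = [bits[i:i + block_len] for i in range(len(bits) - block_len + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    # Shannon entropy of the empirical block distribution.
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

A constant series yields zero entropy; a perfectly alternating series of length 16 yields exactly 1 bit, since only two distinct length-3 blocks occur with equal frequency.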
Negative to positive magnetoresistance and magnetocaloric effect in Pr0.6Er0.4Al2
Pathak, Arjun K.; Gschneidner, Jr., K. A.; Pecharsky, V. K.
2014-10-13
We report on the magnetic, magnetocaloric, and magnetotransport properties of Pr0.6Er0.4Al2. The title compound exhibits a large positive magnetoresistance (MR) for H ≥ 40 kOe and a small but non-negligible negative MR for H ≤ 30 kOe. The maximum positive MR reaches 13% at H = 80 kOe. The magnetic entropy and adiabatic temperature changes as functions of temperature each show two anomalies: a broad dome-like maximum below 20 K and a relatively sharp peak at higher temperature. The observed behaviors are unique among binary and mixed lanthanide compounds.
Holographic shell model: Stack data structure inside black holes?
NASA Astrophysics Data System (ADS)
Davidson, Aharon
2014-03-01
Rather than tiling the black hole horizon by Planck area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, a bit per light sheet unit interval of order Planck area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in the count procedure, the so-called stack data structure.
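The Catalan series mentioned above satisfies the recursion C(0) = 1, C(n) = sum over i of C(i)*C(n-1-i), which is the standard count of balanced push/pop (stack) histories. A short sketch of that recursion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def catalan(n: int) -> int:
    """Catalan numbers via the recursion C(0)=1, C(n) = sum_i C(i)*C(n-1-i).
    C(n) counts the balanced push/pop sequences of a stack of depth-n history."""
    if n == 0:
        return 1
    return sum(catalan(i) * catalan(n - 1 - i) for i in range(n))

print([catalan(n) for n in range(7)])  # [1, 1, 2, 5, 14, 42, 132]
```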
Paeng, Jin Chul; Keam, Bhumsuk; Kim, Tae Min; Kim, Dong-Wan; Heo, Dae Seog
2018-01-01
Intratumoral heterogeneity has been suggested to be an important resistance mechanism leading to treatment failure. We hypothesized that radiologic images could be an alternative method for identifying tumor heterogeneity. We tested heterogeneity textural parameters on pretreatment FDG-PET/CT in order to assess their value for predicting the response to targeted therapy. Subjects with recurrent or metastatic non-small cell lung cancer (NSCLC) harboring an activating EGFR mutation and treated with either gefitinib or erlotinib were reviewed. An exploratory data set (n = 161) and a validation data set (n = 21) were evaluated, and eight parameters were selected for survival analysis. The optimal cutoff value was determined by the recursive partitioning method, and the predictive value was calculated using Harrell's C-index. Univariate analysis revealed that all eight parameters showed an increased hazard ratio (HR) for progression-free survival (PFS). The highest HR was 6.41 (P<0.01), for co-occurrence (Co) entropy. Increased risk remained present after adjusting for initial stage, performance status (PS), and metabolic volume (MV) (aHR: 4.86, P<0.01). Textural parameters were found to have incremental value for predicting early EGFR tyrosine kinase inhibitor (TKI) failure compared to the base model of stage and PS (C-index 0.596 vs. 0.662, P = 0.02, by Co entropy). Heterogeneity textural parameters acquired from pretreatment FDG-PET/CT are highly predictive of PFS on EGFR TKIs in EGFR-mutated NSCLC patients. These parameters are easily applicable to the identification of a subpopulation at increased risk of early EGFR TKI failure. Correlation with genomic alterations should be determined in future studies. PMID:29385152
Language, Mind, Practice: Families of Recursive Thinking in Human Reasoning
ERIC Educational Resources Information Center
Josephson, Marika
2011-01-01
In 2002, Chomsky, Hauser, and Fitch asserted that recursion may be the one aspect of the human language faculty that makes human language unique in the narrow sense--unique to language and unique to human beings. They also argue somewhat more quietly (as do Pinker and Jackendoff 2005) that recursion may be possible outside of language: navigation,…
ERIC Educational Resources Information Center
Cai, Li
2013-01-01
Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
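The unidimensional recursion referred to above builds the summed-score distribution one item at a time. A minimal sketch for dichotomous items at a fixed ability point (illustrative only; the article's contribution concerns the multidimensional extension):

```python
def summed_score_probs(p_correct):
    """Lord-Wingersky recursion: distribution of the summed score over
    dichotomous items, given each item's probability of a correct response."""
    probs = [1.0]                      # with zero items, score 0 has probability 1
    for p in p_correct:
        new = [0.0] * (len(probs) + 1)
        for score, q in enumerate(probs):
            new[score] += q * (1 - p)      # incorrect response: score unchanged
            new[score + 1] += q * p        # correct response: score + 1
        probs = new                         # fold the item in and recurse
    return probs
```

For two items each with p = 0.5 this yields [0.25, 0.5, 0.25], and the probabilities always sum to 1.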
Excess heat capacity and entropy of mixing along the chlorapatite-fluorapatite binary join
NASA Astrophysics Data System (ADS)
Dachs, Edgar; Harlov, Daniel; Benisek, Artur
2010-10-01
The heat capacity at constant pressure, Cp, of chlorapatite [Ca5(PO4)3Cl, ClAp] and fluorapatite [Ca5(PO4)3F, FAp], as well as of 12 compositions along the chlorapatite-fluorapatite join, has been measured using relaxation calorimetry [the heat-capacity option of a physical properties measurement system (PPMS)] and differential scanning calorimetry (DSC) in the temperature range 5-764 K. The chlor-fluorapatites were synthesized at 1,375-1,220°C from Ca3(PO4)2 using the CaF2-CaCl2 flux method. Most of the chlor-fluorapatite compositions could be measured directly as single crystals using the PPMS, attached to the sample platform of the calorimeter by a crystal face. However, the crystals were too small for the crystal face to be polished. In such cases, where the sample coupling was not optimal, an empirical procedure was developed to smoothly connect the PPMS to the DSC heat capacities around ambient T. The heat capacity of the end-members above 298 K can be represented by the polynomials Cp(ClAp) = 613.21 - 2,313.90·T^(-0.5) - 1.87964 × 10^7·T^(-2) + 2.79925 × 10^9·T^(-3) and Cp(FAp) = 681.24 - 4,621.73·T^(-0.5) - 6.38134 × 10^6·T^(-2) + 7.38088 × 10^8·T^(-3) (units: J mol^-1 K^-1). Their standard third-law entropies, derived from the low-temperature heat capacity measurements, are S° = 400.6 ± 1.6 J mol^-1 K^-1 for chlorapatite and S° = 383.2 ± 1.5 J mol^-1 K^-1 for fluorapatite. Positive excess heat capacities of mixing, ΔCp^ex, occur in the chlorapatite-fluorapatite solid solution around 80 K (and to a lesser degree at 200 K) and are asymmetrically distributed over the join, reaching a maximum of 1.3 ± 0.3 J mol^-1 K^-1 for F-rich compositions. They are significant at these conditions, exceeding the 2σ uncertainty of the data. The excess entropy of mixing, ΔS^ex, at 298 K reaches positive values of 3-4 J mol^-1 K^-1 in the F-rich portion of the binary but is not significantly different from zero across the join within its 2σ uncertainty.
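The fitted end-member polynomial can be evaluated directly; a sketch using the chlorapatite coefficients quoted in the abstract (valid above 298 K):

```python
def cp_clap(T: float) -> float:
    """Heat capacity of chlorapatite, J mol^-1 K^-1, from the fitted
    polynomial Cp = a + b*T^-0.5 + c*T^-2 + d*T^-3 (T in K, T > 298 K)."""
    return (613.21
            - 2313.90 * T**-0.5
            - 1.87964e7 * T**-2
            + 2.79925e9 * T**-3)
```

At ambient temperature this evaluates to a few hundred J mol^-1 K^-1 and increases smoothly with T, as expected of a lattice heat capacity approaching its high-temperature limit.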
NASA Astrophysics Data System (ADS)
Poursina, Mohammad; Anderson, Kurt S.
2014-08-01
This paper presents a novel algorithm to approximate the long-range electrostatic potential field in Cartesian coordinates, applicable to 3D coarse-grained simulations of biopolymers. In such models, coarse-grained clusters are formed by treating groups of atoms as rigid and/or flexible bodies connected via kinematic joints; multibody dynamics techniques are therefore used to form and solve the equations of motion of such coarse-grained systems. In this article, approximations are presented for the potential fields due to the interaction between a highly negatively/positively charged pseudo-atom and charged particles, as well as the interaction between clusters of charged particles. These approximations are expressed in terms of physical and geometrical properties of the bodies, such as the total charge, the location of the center of charge, and the pseudo-inertia tensor about the center of charge of the clusters. Further, a novel substructuring scheme is introduced to implement the presented far-field potential evaluations in a binary tree framework, as opposed to the existing quadtree and octree strategies for implementing the fast multipole method. Using the presented Lagrangian grids, the electrostatic potential is calculated recursively in two passes: assembly and disassembly. In the assembly pass, adjacent charged bodies are combined to form new clusters. Then, the potential field of each cluster due to its interaction with faraway resulting clusters is recursively calculated in the disassembly pass. The method is highly compatible with multibody dynamics schemes for modeling coarse-grained biopolymers. Since the proposed method takes advantage of the constant physical and geometrical properties of rigid clusters, an improvement in the overall computational cost is observed compared to the traditional application of the fast multipole method.
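A single step of the assembly pass can be sketched as follows (an illustration of the bookkeeping only, not the authors' code): merging two clusters combines their total charges and charge-weighted centers; the pseudo-inertia tensor about the new center would be propagated analogously.

```python
def assemble(cluster_a, cluster_b):
    """Assembly-pass step for a binary tree of charged clusters.
    A cluster is (q_total, (x, y, z) center of charge)."""
    qa, ca = cluster_a
    qb, cb = cluster_b
    q = qa + qb
    # Charge-weighted center; assumes q != 0 (e.g., like-signed clusters).
    center = tuple((qa * a + qb * b) / q for a, b in zip(ca, cb))
    return q, center
```

Sweeping this step from the leaves to the root yields the cluster properties needed for the far-field evaluations of the disassembly pass.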
NASA Astrophysics Data System (ADS)
Moine, Edouard; Privat, Romain; Sirjean, Baptiste; Jaubert, Jean-Noël
2017-09-01
The Gibbs energy of solvation measures the affinity of a solute for its solvent and is thus a key property for the selection of an appropriate solvent for a chemical synthesis or a separation process. More fundamentally, Gibbs energies of solvation are data of choice for developing and benchmarking molecular models that predict solvation effects. The Comprehensive Solvation (CompSol) database was developed with the aim of providing very large sets of new experimental solvation chemical-potential, solvation entropy, and solvation enthalpy data for pure and mixed components, covering extended temperature ranges. For mixed compounds, the solvation quantities were generated under infinite-dilution conditions by combining experimental values of pure-component and binary-mixture thermodynamic properties. Three types of binary-mixture properties were considered: partition coefficients, activity coefficients at infinite dilution, and Henry's-law constants. A rigorous methodology was implemented with the aim of selecting data at appropriate conditions of temperature, pressure, and concentration for the estimation of solvation data. Finally, our comprehensive CompSol database contains 21 671 data associated with 1969 pure species and 70 062 data associated with 14 102 binary mixtures (including 760 solvation data related to the ionic-liquid class of solvents). On the basis of the very large amount of experimental data contained in the CompSol database, we finally discuss how solvation energies are influenced by hydrogen-bonding association effects.
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
Syntactic Recursion Facilitates and Working Memory Predicts Recursive Theory of Mind
Arslan, Burcu; Hohenberger, Annette; Verbrugge, Rineke
2017-01-01
In this study, we focus on the possible roles of second-order syntactic recursion and working memory in terms of simple and complex span tasks in the development of second-order false belief reasoning. We tested 89 Turkish children in two age groups, one younger (4;6–6;5 years) and one older (6;7–8;10 years). Although second-order syntactic recursion is significantly correlated with the second-order false belief task, results of ordinal logistic regressions revealed that the main predictor of second-order false belief reasoning is complex working memory span. Unlike simple working memory and second-order syntactic recursion tasks, the complex working memory task required processing information serially with additional reasoning demands that require complex working memory strategies. Based on our results, we propose that children’s second-order theory of mind develops when they have efficient reasoning rules to process embedded beliefs serially, thus overcoming a possible serial processing bottleneck. PMID:28072823
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
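The peeling mechanics can be sketched generically (illustrative only; the authors' criteria are survival-specific, such as hazard ratio or log-rank statistics, whereas this sketch peels on an arbitrary precomputed risk score):

```python
def peel_lowest_risk(scores, alpha=0.2, min_size=4):
    """PRIM-style recursive peeling sketch: repeatedly drop the alpha-fraction
    of observations with the lowest risk score, recording each cut threshold."""
    box = sorted(scores)        # current "box" of observations, ordered by score
    cuts = []
    while len(box) > min_size:
        k = max(1, int(len(box) * alpha))
        cuts.append(box[k - 1])  # threshold at or below which observations are peeled
        box = box[k:]            # shrink the box and repeat
    return cuts, box

cuts, box = peel_lowest_risk(list(range(20)))
```

Each recorded cut defines one face of the final decision box; cross-validating the whole peeling sequence, as the paper describes, guards against overfitting the cuts to one sample.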
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme for hiding data directly in a partially encrypted version of H.264/AVC video is proposed; it includes three parts, i.e., selective encryption, data embedding, and data extraction. Selective encryption is performed on context-adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of the CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. A data-hider then embeds the additional data into the partially encrypted H.264/AVC video using a CABAC bin-string substitution technique, without accessing the plaintext of the video content. Since bin-string substitution is carried out on residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. The video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
Recursive formulas for determining perturbing accelerations in intermediate satellite motion
NASA Astrophysics Data System (ADS)
Stoianov, L.
Recursive formulas for Legendre polynomials and associated Legendre functions are used to obtain recursive relationships for determining acceleration components which perturb intermediate satellite motion. The formulas are applicable in all cases when the perturbation force function is presented as a series in spherical functions (gravitational, tidal, thermal, geomagnetic, and other perturbations of intermediate motion). These formulas can be used to determine the order of perturbing accelerations.
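Bonnet's recursion, (k+1)·P(k+1, x) = (2k+1)·x·P(k, x) − k·P(k−1, x), is the standard example of the recursive Legendre relationships referred to above; the associated Legendre functions admit analogous recursions. A minimal sketch:

```python
def legendre(n: int, x: float) -> float:
    """Legendre polynomial P_n(x) via Bonnet's recursion:
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, seeded with P_0 = 1, P_1 = x."""
    p_prev, p = 1.0, x          # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

Evaluating the recursion upward like this avoids the cost and instability of expanding each polynomial explicitly, which is why such relations suit series expansions of perturbing force functions.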
Application of recursive approaches to differential orbit correction of near Earth asteroids
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria
2016-10-01
A comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filtering, and recursive least squares filtering. The first two techniques are well known and widely used (Montenbruck & Gill, 2000). Most attention is paid to the algorithm and program implementation of the recursive least squares filter. The filter's algorithm was derived from the recursive least squares techniques widely used in data processing applications (Simon, 2006). Using a recursive least squares filter makes it possible to process a new set of observational data without reprocessing data that has already been processed. A specific feature of this approach is that the number of observations in a data set may be variable, which makes the recursive least squares filter more flexible than batch least squares (which processes the complete set of observations in each iteration) and Kalman filtering (which assumes the state vector is updated with measurements at each epoch). The advantages of the proposed approach are demonstrated by processing real astrometric observations of near-Earth asteroids. The case of 2008 TC3 was studied; 2008 TC3 was discovered just before its impact with Earth, and the many closely spaced observations of 2008 TC3 in the interval between discovery and impact create favorable conditions for recursive approaches. Each approach has very similar precision in the case of 2008 TC3; at the same time, the recursive least squares approach has much higher performance. Thus, this approach is more favorable for orbit fitting of a celestial body detected shortly before a collision or close approach to the Earth. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197.
References:
O. Montenbruck and E. Gill, "Satellite Orbits: Models, Methods and Applications," Springer-Verlag, 2000, pp. 1-369.
D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," 1st edition. Hoboken, N.J.: Wiley-Interscience, 2006.
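A single recursive-least-squares update can be sketched as follows (a generic textbook form of the kind treated in Simon (2006), not the authors' implementation): a new measurement is folded into the current state estimate and covariance without reprocessing earlier observations.

```python
import numpy as np

def rls_update(x, P, H, z, R=1.0):
    """One recursive-least-squares step for a linear measurement z = H x + noise.
    x: state estimate, P: its covariance, H: measurement row(s), R: noise variance."""
    H = np.atleast_2d(H)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # gain
    x = x + K @ (z - H @ x)              # correct the estimate with the innovation
    P = (np.eye(len(x)) - K @ H) @ P     # shrink the covariance
    return x, P
```

Calling `rls_update` once per new observation batch gives exactly the flexibility described above: batches of any size can arrive incrementally, and the covariance shrinks as data accumulate.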
NASA Technical Reports Server (NTRS)
Lee, C. T.
1975-01-01
Adopting the so-called genealogical construction, one can express the eigenstates of collective operators corresponding to a specified mode for an N-atom system in terms of those for an (N-1)-atom system. Using these Dicke states as bases and applying the Wigner-Eckart theorem, a matrix element of a collective operator of an arbitrary mode can be written as the product of an m-dependent factor and an m-independent reduced matrix element (RME). A set of recursion formulas for the RME is obtained. A graphical representation of the RME on the branching diagram for binary irreducible representations of permutation groups is then introduced. This gives a simple and systematic way of calculating the RME. The method is especially useful when the cooperation number r is close to N/2, where almost exact asymptotic expressions can be obtained easily. The result shows explicitly the geometry dependence of superradiance and the relative importance of r-conserving and r-nonconserving processes.
Quantitative Tracking of Combinatorially Engineered Populations with Multiplexed Binary Assemblies.
Zeitoun, Ramsey I; Pines, Gur; Grau, William C; Gill, Ryan T
2017-04-21
Advances in synthetic biology and genomics have enabled full-scale genome engineering efforts on laboratory time scales. However, the absence of sufficient approaches for mapping engineered genomes at system-wide scales onto performance has limited the adoption of more sophisticated algorithms for engineering complex biological systems. Here we report on the development and application of a robust approach to quantitatively map combinatorially engineered populations at scales up to several dozen target sites. This approach works by assembling genome engineered sites with cell-specific barcodes into a format compatible with high-throughput sequencing technologies. This approach, called barcoded-TRACE (bTRACE) was applied to assess E. coli populations engineered by recursive multiplex recombineering across both 6-target sites and 31-target sites. The 31-target library was then tracked throughout growth selections in the presence and absence of isopentenol (a potential next-generation biofuel). We also use the resolution of bTRACE to compare the influence of technical and biological noise on genome engineering efforts.
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
Acoustical transmission-line model of the middle-ear cavities and mastoid air cells.
Keefe, Douglas H
2015-04-01
An acoustical transmission line model of the middle-ear cavities and mastoid air cell system (MACS) was constructed for the adult human middle ear with normal function. The air-filled cavities comprised the tympanic cavity, aditus, antrum, and MACS. A binary symmetrical airway branching model of the MACS was constructed using an optimization procedure to match the average total volume and surface area of human temporal bones. The acoustical input impedance of the MACS was calculated using a recursive procedure, and used to predict the input impedance of the middle-ear cavities at the location of the tympanic membrane. The model also calculated the ratio of the acoustical pressure in the antrum to the pressure in the middle-ear cavities at the location of the tympanic membrane. The predicted responses were sensitive to the magnitude of the viscothermal losses within the MACS. These predicted input impedance and pressure ratio functions explained the presence of multiple resonances reported in published data, which were not explained by existing MACS models.
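The recursive input-impedance idea described above can be sketched in a few lines. This is a hedged toy illustration, not the paper's model: the rigid-termination approximation, the lossless-tube formula, and all numerical values (`z0`, `k`, the lengths) are assumptions chosen only to make the recursion concrete.

```python
import cmath

def tube_input_impedance(z_load, z0, k, length):
    """Input impedance of a lossless tube of characteristic impedance z0 and
    length `length`, terminated in z_load, via the standard transmission-line
    formula: Zin = Z0 (ZL + j Z0 tan kl) / (Z0 + j ZL tan kl)."""
    t = cmath.tan(k * length)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

def branch_impedance(node, z0, k):
    """Recursively combine a binary branching tree of air cells.
    `node` is ('leaf', length) for a terminal cell (rigidly closed,
    approximated here by a very large load) or ('branch', length, left, right)."""
    if node[0] == 'leaf':
        return tube_input_impedance(1e12, z0, k, node[1])
    _, length, left, right = node
    zl = branch_impedance(left, z0, k)
    zr = branch_impedance(right, z0, k)
    z_parallel = 1.0 / (1.0 / zl + 1.0 / zr)  # two child branches in parallel
    return tube_input_impedance(z_parallel, z0, k, length)

# toy two-level symmetric tree; lengths (m), z0 and k are illustrative numbers
tree = ('branch', 0.01, ('leaf', 0.005), ('leaf', 0.005))
z_in = branch_impedance(tree, z0=415.0, k=2 * cmath.pi * 1000 / 343)
```

The recursion starts at the terminal cells and works back toward the root, combining sibling impedances in parallel at each bifurcation, which mirrors how the MACS input impedance would be propagated toward the tympanic cavity.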
NASA Astrophysics Data System (ADS)
van Westen, Thijs; Vlugt, Thijs J. H.; Gross, Joachim
2014-01-01
An analytical equation of state (EoS) is derived to describe the isotropic (I) and nematic (N) phase of linear- and partially flexible tangent hard-sphere chain fluids and their mixtures. The EoS is based on an extension of Onsager's second virial theory that was developed in our previous work [T. van Westen, B. Oyarzún, T. J. H. Vlugt, and J. Gross, J. Chem. Phys. 139, 034505 (2013)]. Higher virial coefficients are calculated using a Vega-Lago rescaling procedure, which is hereby generalized to mixtures. The EoS is used to study (1) the effect of length bidispersity on the I-N and N-N phase behavior of binary linear tangent hard-sphere chain fluid mixtures, (2) the effect of partial molecular flexibility on the binary phase diagram, and (3) the solubility of hard-sphere solutes in I- and N tangent hard-sphere chain fluids. By changing the length bidispersity, two types of phase diagrams were found. The first type is characterized by an I-N region at low pressure and a N-N demixed region at higher pressure that starts from an I-N-N triphase equilibrium. The second type does not show the I-N-N equilibrium. Instead, the N-N region starts from a lower critical point at a pressure above the I-N region. The results for the I-N region are in excellent agreement with the results from molecular simulations. It is shown that the N-N demixing is driven both by orientational and configurational/excluded volume entropy. By making the chains partially flexible, it is shown that the driving force resulting from the configurational entropy is reduced (due to a less anisotropic pair-excluded volume), resulting in a shift of the N-N demixed region to higher pressure. Compared to linear chains, no topological differences in the phase diagram were found. We show that the solubility of hard-sphere solutes decreases across the I-N phase transition. 
Furthermore, it is shown that by using a liquid crystal mixture as the solvent, the solubility difference can be maximized by tuning the composition. Theoretical results for the Henry's law constant of the hard-sphere solute are in good agreement with the results from molecular simulation.
Recursive Fact-finding: A Streaming Approach to Truth Estimation in Crowdsourcing Applications
2013-07-01
are reported over the course of the campaign, lending themselves better to the abstraction of a data stream arriving from the community of sources. ... [figure: Recursive EM Algorithm Convergence] ... Social sensing, which is also referred to as human-centric sensing [4...], ... systems where different sources offer reviews on products (or brands, companies) they have experienced [16]. Customers are affected by those reviews
Recursive computation of mutual potential between two polyhedra
NASA Astrophysics Data System (ADS)
Hirabayashi, Masatoshi; Scheeres, Daniel J.
2013-11-01
Recursive computation of mutual potential, force, and torque between two polyhedra is studied. Based on formulations by Werner and Scheeres (Celest Mech Dyn Astron 91:337-349, 2005) and Fahnestock and Scheeres (Celest Mech Dyn Astron 96:317-339, 2006) who applied the Legendre polynomial expansion to gravity interactions and expressed each order term by a shape-dependent part and a shape-independent part, this paper generalizes the computation of each order term, giving recursive relations of the shape-dependent part. To consider the potential, force, and torque, we introduce three tensors. This method is applicable to any multi-body systems. Finally, we implement this recursive computation to simulate the dynamics of a two rigid-body system that consists of two equal-sized parallelepipeds.
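The Legendre polynomial expansion underlying the mutual-potential formulation is itself naturally recursive. As a minimal illustration (Bonnet's standard three-term recurrence, not the paper's shape-dependent tensor recursions), the polynomials up to order n can be generated as:

```python
def legendre(n_max, x):
    """Legendre polynomials P_0(x)..P_{n_max}(x) via Bonnet's recursion:
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    p = [1.0, x]  # P_0 and P_1
    for n in range(1, n_max):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[:n_max + 1]
```

For example, `legendre(3, 0.5)` returns P_0..P_3 at x = 0.5, with P_2(0.5) = -0.125 and P_3(0.5) = -0.4375.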
The Structure of Scientific Evolution
2013-01-01
Science is the construction and testing of systems that bind symbols to sensations according to rules. Material implication is the primary rule, providing the structure of definition, elaboration, delimitation, prediction, explanation, and control. The goal of science is not to secure truth, which is a binary function of accuracy, but rather to increase the information about data communicated by theory. This process is symmetric and thus entails an increase in the information about theory communicated by data. Important components in this communication are the elevation of data to the status of facts, the descent of models under the guidance of theory, and their close alignment through the evolving retroductive process. The information mutual to theory and data may be measured as the reduction in the entropy, or complexity, of the field of data given the model. It may also be measured as the reduction in the entropy of the field of models given the data. This symmetry explains the important status of parsimony (how thoroughly the data exploit what the model can say) alongside accuracy (how thoroughly the model represents what can be said about the data). Mutual information is increased by increasing model accuracy and parsimony, and by enlarging and refining the data field under purview. PMID:28018043
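The mutual information between model and data described above can be made concrete numerically. A minimal sketch, assuming a discrete joint count table over model states and data states (the function names and the toy table are illustrative, not from the paper):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(model; data) = H(data) - H(data | model), computed from a joint
    count table joint[i][j] = count of (model state i, data state j)."""
    total = sum(sum(row) for row in joint)
    p = [[c / total for c in row] for row in joint]
    # marginal distribution over data states
    pd = [sum(p[i][j] for i in range(len(p))) for j in range(len(p[0]))]
    h_data = entropy(pd)
    h_cond = 0.0
    for row in p:
        pm = sum(row)  # marginal probability of this model state
        if pm > 0:
            h_cond += pm * entropy([c / pm for c in row])
    return h_data - h_cond

# perfectly informative model: knowing the model state fixes the datum
mi_perfect = mutual_information([[2, 0], [0, 2]])      # 1 bit
# uninformative model: model and data states are independent
mi_independent = mutual_information([[1, 1], [1, 1]])  # 0 bits
```

The two toy tables bracket the quantity: mutual information is maximal when the model fully determines the data and zero when they are independent, matching the text's reading of mutual information as the reduction in the entropy of the data field given the model.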
Zhou, Shenghan; Qian, Silin; Chang, Wenbing; Xiao, Yiyong; Cheng, Yang
2018-06-14
Timely and accurate state detection and fault diagnosis of rolling element bearings are very critical to ensuring the reliability of rotating machinery. This paper proposes a novel method of rolling bearing fault diagnosis based on a combination of ensemble empirical mode decomposition (EEMD), weighted permutation entropy (WPE) and an improved support vector machine (SVM) ensemble classifier. A hybrid voting (HV) strategy that combines SVM-based classifiers and cloud similarity measurement (CSM) was employed to improve the classification accuracy. First, the WPE value of the bearing vibration signal was calculated to detect the fault. Secondly, if a bearing fault occurred, the vibration signal was decomposed into a set of intrinsic mode functions (IMFs) by EEMD. The WPE values of the first several IMFs were calculated to form the fault feature vectors. Then, the SVM ensemble classifier was composed of binary SVM and the HV strategy to identify the bearing multi-fault types. Finally, the proposed model was fully evaluated by experiments and comparative studies. The results demonstrate that the proposed method can effectively detect bearing faults and maintain a high accuracy rate of fault recognition when a small number of training samples are available.
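A weighted permutation entropy can be sketched roughly as follows. This is a generic, hedged illustration assuming variance-based weights for the ordinal patterns; it is not necessarily the exact WPE variant used by the authors, and it assumes a non-constant input signal.

```python
from math import log

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy sketch: embed the signal, map each
    embedding vector to its ordinal pattern, weight each pattern occurrence
    by the vector's variance, and return the entropy (in bits) of the
    normalized weights. Assumes x is not constant."""
    weights = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        v = [x[i + j * delay] for j in range(order)]
        pattern = tuple(sorted(range(order), key=lambda k: v[k]))
        mean = sum(v) / order
        w = sum((a - mean) ** 2 for a in v) / order  # variance weight
        weights[pattern] = weights.get(pattern, 0.0) + w
    total = sum(weights.values())
    probs = [w / total for w in weights.values() if w > 0]
    return -sum(p * log(p) for p in probs) / log(2)
```

For order 3 the result lies between 0 (a single dominant pattern) and log2(3!) ≈ 2.585 bits (all six patterns equally weighted); in the paper this kind of value is computed per IMF to form the fault feature vector.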
Liu, Huiling; Xia, Bingbing; Yi, Dehui
2016-01-01
We propose a new feature extraction method of liver pathological image based on multispatial mapping and statistical properties. For liver pathological images of Hematein Eosin staining, the image of R and B channels can reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space can reflect the texture features of the image better. To obtain the more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average correction HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update the current pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which can be more sensitive to the gray value changes of pathological images. Lastly the HLAC template is used to calculate the features of the updated image. The experiment results show that the improved features of the multispatial mapping have the better classification performance for the liver cancer. PMID:27022407
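The basic Local Binary Pattern operator used for the LBP space can be sketched for a single 3x3 patch. The clockwise neighbour ordering and the >= comparison below are common conventions chosen for illustration, not details taken from the paper:

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against the
    centre pixel and read the results as an 8-bit code (clockwise from the
    top-left neighbour, least significant bit first)."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, p in enumerate(neighbours):
        if p >= c:
            code |= 1 << bit
    return code
```

Applying this at every pixel maps the image into the LBP space mentioned above, where local texture rather than raw intensity drives the subsequent HLAC statistics.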
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was with the increased SE.
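The c-statistic compared across EPS models above is just the concordance probability, which can be computed directly. A minimal sketch (the function name and toy data are illustrative):

```python
def c_statistic(scores, outcomes):
    """Concordance (c) statistic: the probability that a randomly chosen
    exposed subject (outcome 1) receives a higher score than a randomly
    chosen unexposed subject (outcome 0); ties count one half."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                concordant += 1.0
            elif p == q:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))
```

A value of 1.0 means perfect discrimination of exposure status and 0.5 means no discrimination; the study's point is that higher C does not imply less biased effect estimates.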
Thermodynamic compensation upon binding to exosite 1 and the active site of thrombin.
Treuheit, Nicholas A; Beach, Muneera A; Komives, Elizabeth A
2011-05-31
Several lines of experimental evidence including amide exchange and NMR suggest that ligands binding to thrombin cause reduced backbone dynamics. Binding of the covalent inhibitor dPhe-Pro-Arg chloromethyl ketone to the active site serine, as well as noncovalent binding of a fragment of the regulatory protein, thrombomodulin, to exosite 1 on the back side of the thrombin molecule both cause reduced dynamics. However, the reduced dynamics do not appear to be accompanied by significant conformational changes. In addition, binding of ligands to the active site does not change the affinity of thrombomodulin fragments binding to exosite 1; however, the thermodynamic coupling between exosite 1 and the active site has not been fully explored. We present isothermal titration calorimetry experiments that probe changes in enthalpy and entropy upon formation of binary ligand complexes. The approach relies on stringent thrombin preparation methods and on the use of dansyl-L-arginine-(3-methyl-1,5-pentanediyl)amide (DAPA) and a DNA aptamer as ligands with ideal thermodynamic signatures for binding to the active site and to exosite 1. Using this approach, the binding thermodynamic signatures of each ligand alone as well as the binding signatures of each ligand when the other binding site was occupied were measured. Different exosite 1 ligands with widely varied thermodynamic signatures cause a similar reduction in ΔH and a concomitantly lower entropy cost upon DAPA binding at the active site. The results suggest a general phenomenon of enthalpy-entropy compensation consistent with reduction of dynamics/increased folding of thrombin upon ligand binding to either the active site or exosite 1.
NASA Astrophysics Data System (ADS)
Setlur Nagesh, S. V.; Khobragade, P.; Ionita, C.; Bednarek, D. R.; Rudin, S.
2015-03-01
Because x-ray based image-guided vascular interventions are minimally invasive, they are currently the most preferred method of treating disorders such as stroke, arterial stenosis, and aneurysms; however, the x-ray exposure to the patient during long image-guided interventional procedures could cause harmful effects such as cancer in the long run and even tissue damage in the short term. ROI fluoroscopy reduces patient dose by differentially attenuating the incident x-rays outside the region-of-interest. To reduce the noise in the dose-reduced regions, recursive temporal filtering was previously demonstrated successfully for neurovascular interventions. However, in cardiac interventions, anatomical motion is significant and excessive recursive filtering could cause blur. In this work the effects of three noise-reduction schemes, including recursive temporal filtering, spatial mean filtering, and a combination of spatial and recursive temporal filtering, were investigated in a simulated ROI dose-reduced cardiac intervention. First, a model to simulate the aortic arch and its movement was built. A coronary stent was used to simulate a bioprosthetic valve used in TAVR procedures and was deployed under dose-reduced ROI fluoroscopy during the simulated heart motion. The images were then retrospectively processed for noise reduction in the periphery, using recursive temporal filtering, spatial filtering, and a combination of both. Quantitative metrics for all three noise-reduction schemes were calculated and are presented as results. From these it can be concluded that, with significant anatomical motion, a combination of spatial and recursive temporal filtering is best suited for reducing the excess quantum noise in the periphery. This new noise-reduction technique in combination with ROI fluoroscopy has the potential for substantial patient-dose savings in cardiac interventions.
Methods for assessing movement path recursion with application to African buffalo in South Africa
Bar-David, S.; Bar-David, I.; Cross, P.C.; Ryan, S.J.; Knechtel, C.U.; Getz, W.M.
2009-01-01
Recent developments of automated methods for monitoring animal movement, e.g., global positioning systems (GPS) technology, yield high-resolution spatiotemporal data. To gain insights into the processes creating movement patterns, we present two new techniques for extracting information from these data on repeated visits to a particular site or patch ("recursions"). Identification of such patches and quantification of recursion pathways, when combined with patch-related ecological data, should contribute to our understanding of the habitat requirements of large herbivores, of factors governing their space-use patterns, and of their interactions with the ecosystem. We begin by presenting output from a simple spatial model that simulates movements of large-herbivore groups based on minimal parameters: resource availability and rates of resource recovery after a local depletion. We then present the details of our new techniques of analyses (recursion analysis and circle analysis) and apply them to data generated by our model, as well as two sets of empirical data on movements of African buffalo (Syncerus caffer): the first collected in Klaserie Private Nature Reserve and the second in Kruger National Park, South Africa. Our recursion analyses of model outputs provide us with a basis for inferring aspects of the processes governing the production of buffalo recursion patterns, particularly the potential influence of resource recovery rate. Although the focus of our simulations was a comparison of movement patterns produced by different resource recovery rates, we conclude our paper with a comprehensive discussion of how recursion analyses can be used when appropriate ecological data are available to elucidate various factors influencing movement. Inter alia, these include the various limiting and preferred resources, parasites, and topographical and landscape factors. © 2009 by the Ecological Society of America.
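A bare-bones version of recursion identification, counting separate re-entries of a track into a circular site, might look like the sketch below. The circular-site geometry and the entry-counting convention are simplifying assumptions; the paper's recursion and circle analyses are considerably more elaborate.

```python
def count_recursions(track, site, radius):
    """Count 'recursions' to a circular site: the number of separate entries
    into the circle after the initial visit. `track` is a sequence of (x, y)
    fixes, `site` an (x, y) centre, `radius` the patch radius."""
    inside_prev = False
    entries = 0
    for x, y in track:
        dx, dy = x - site[0], y - site[1]
        inside = dx * dx + dy * dy <= radius * radius
        if inside and not inside_prev:  # crossing into the circle
            entries += 1
        inside_prev = inside
    return max(entries - 1, 0)  # revisits beyond the first visit
```

Applied to GPS fixes, counts like this (together with the time spent away between visits) are the kind of raw quantity that can then be related to resource recovery rates and other patch-level ecological data.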
Cai, Li
2015-06-01
Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
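The classic Lord-Wingersky recursion for dichotomous items can be sketched in a few lines. This illustrates only the unidimensional base case at a fixed latent trait value, not the paper's dimension-reduced multidimensional extension:

```python
def summed_score_distribution(item_probs):
    """Lord-Wingersky recursion: probability of each summed score for a set
    of independent binary items, given each item's probability of a correct
    response (all probabilities conditional on one latent trait value)."""
    dist = [1.0]  # P(score 0) before any items are added
    for p in item_probs:
        new = [0.0] * (len(dist) + 1)
        for s, prob in enumerate(dist):
            new[s] += prob * (1 - p)      # item answered incorrectly
            new[s + 1] += prob * p        # item answered correctly
        dist = new
    return dist

print(summed_score_distribution([0.5, 0.5]))  # [0.25, 0.5, 0.25]
```

Each pass over an item convolves the current score distribution with that item's Bernoulli response distribution; marginalizing such conditional distributions over the quadrature grid yields the summed-score likelihoods that the paper's dimension-reduction method accelerates.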
Simple recursion relations for general field theories
Cheung, Clifford; Shen, Chia -Hsien; Trnka, Jaroslav
2015-06-17
On-shell methods offer an alternative definition of quantum field theory at tree-level, replacing Feynman diagrams with recursion relations and interaction vertices with a handful of seed scattering amplitudes. In this paper we determine the simplest recursion relations needed to construct a general four-dimensional quantum field theory of massless particles. For this purpose we define a covering space of recursion relations which naturally generalizes all existing constructions, including those of BCFW and Risager. The validity of each recursion relation hinges on the large momentum behavior of an n-point scattering amplitude under an m-line momentum shift, which we determine solely from dimensional analysis, Lorentz invariance, and locality. We show that all amplitudes in a renormalizable theory are 5-line constructible. Amplitudes are 3-line constructible if an external particle carries spin or if the scalars in the theory carry equal charge under a global or gauge symmetry. Remarkably, this implies the 3-line constructibility of all gauge theories with fermions and complex scalars in arbitrary representations, all supersymmetric theories, and the standard model. Moreover, all amplitudes in non-renormalizable theories without derivative interactions are constructible; with derivative interactions, a subset of amplitudes is constructible. We illustrate our results with examples from both renormalizable and non-renormalizable theories. In conclusion, our study demonstrates both the power and limitations of recursion relations as a self-contained formulation of quantum field theory.
Recursive Implementations of the Consider Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Chris
2012-01-01
One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favored implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
Recursive computer architecture for VLSI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treleaven, P.C.; Hopkins, R.P.
1982-01-01
A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (Lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation, and an experimental machine implementation oriented to VLSI. The experimental implementation is restricted to simple, identical microcomputers, each containing a memory, a processor, and a communications capability. This future generation of Lego-like computer systems is termed fifth-generation computers by the Japanese. 30 references.
NASA Technical Reports Server (NTRS)
Metzger, Philip T.
2006-01-01
Ergodicity is proved for granular contact forces. To obtain this proof from first principles, this paper generalizes Boltzmann's stosszahlansatz (molecular chaos) so that it maintains the necessary correlations and symmetries of granular packing ensembles. Then it formally counts granular contact force states and thereby defines the proper analog of Boltzmann's H functional. This functional is used to prove that (essentially) all static granular packings must exist at maximum entropy with respect to their contact forces. Therefore, the propagation of granular contact forces through a packing is a truly ergodic process in the Boltzmannian sense, or better, it is self-ergodic. Self-ergodicity refers to the non-dynamic, internal relationships that exist between the layer-by-layer and column-by-column subspaces contained within the phase space locus of any particular granular packing microstate. The generalized H Theorem also produces a recursion equation that may be solved numerically to obtain the density of single particle states and hence the distribution of granular contact forces corresponding to the condition of self-ergodicity. The predictions of the theory are overwhelmingly validated by comparison to empirical data from discrete element modeling.
High-Precision Tests of Stochastic Thermodynamics in a Feedback Trap
NASA Astrophysics Data System (ADS)
Gavrilov, Momčilo; Jun, Yonggun; Bechhoefer, John
2015-03-01
Feedback traps can trap and manipulate small particles and molecules in solution. They have been applied to the measurement of physical and chemical properties of particles and to explore fundamental questions in the non-equilibrium statistical mechanics of small systems. Feedback traps allow one to choose an arbitrary virtual potential, do any time-dependent transformation of the potential, and measure various thermodynamic quantities such as stochastic work, heat, or entropy. In feedback-trap experiments, the dynamics of a trapped object is determined by the imposed potential but is also affected by drifts due to electrochemical reactions and by temperature variations in the electronic amplifier. Although such drifts are small for measurements on the order of seconds, they dominate on time scales of minutes or slower. In this talk, we present a recursive algorithm that allows real-time estimations of drifts and other particle properties. These estimates let us do a real-time calibration of the feedback trap. Having eliminated systematic errors, we were able to show that erasing a one-bit memory requires at least kT ln 2 of work, in accordance with Landauer's principle. This work was supported by NSERC (Canada).
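A recursive, exponentially forgetting estimator of a slowly drifting quantity, in the spirit of (but not identical to) the real-time drift estimation described in the talk, might look like the following sketch; the forgetting factor is an illustrative assumption.

```python
def recursive_mean(samples, forgetting=0.99):
    """Recursive estimate of a slowly drifting quantity:
    est_k = lam * est_{k-1} + (1 - lam) * x_k, updated one sample at a time,
    so old measurements are gradually 'forgotten'. Returns the full history
    of estimates, as a real-time filter would produce."""
    est = samples[0]
    history = [est]
    for x in samples[1:]:
        est = forgetting * est + (1 - forgetting) * x
        history.append(est)
    return history
```

Because each update uses only the previous estimate and the newest sample, the estimator runs in constant time per measurement, which is what makes real-time calibration of the trap feasible during an experiment.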
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Soft context clustering for F0 modeling in HMM-based speech synthesis
NASA Astrophysics Data System (ADS)
Khorram, Soheil; Sameti, Hossein; King, Simon
2015-12-01
This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. 
In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
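The soft-routing idea above can be sketched with sigmoid gates at the internal nodes. The tuple-based tree encoding and the scalar-input linear gate below are simplifying assumptions for illustration, not the authors' context-clustering implementation:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def soft_tree_predict(node, x):
    """Soft binary decision tree: each internal node routes the input to BOTH
    children with complementary membership degrees given by a sigmoid gate,
    and the prediction is the membership-weighted sum over all leaves.
    `node` is ('leaf', value) or ('split', w, b, left, right), where the
    gate toward the left child is sigmoid(w * x + b)."""
    if node[0] == 'leaf':
        return node[1]
    _, w, b, left, right = node
    g = sigmoid(w * x + b)
    return g * soft_tree_predict(left, x) + (1 - g) * soft_tree_predict(right, x)
```

With a very steep gate the soft tree degenerates to a conventional hard tree; shallower gates spread each context over several overlapping leaves, which is the mechanism the paper credits for the improved generalization.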
NASA Astrophysics Data System (ADS)
Bhatnagar, Promod K.; Gupta, Poonam; Singh, Laxman
2001-06-01
Chalcogenide-based alloys find applications in a number of devices such as optical memories, IR detectors, optical switches, photovoltaics, and compound semiconductor heterostructures. We have modified Gurman's statistical thermodynamic model (STM) of binary covalent alloys. In Gurman's model, entropy calculations are based on the number of structural units present. The need to modify this model arose because it gives equal probability to all the tetrahedra present in the alloy. We have modified the model by introducing the concept that the entropy is based on the bond arrangement rather than on the structural units present. In the present work, calculations based on this modification are presented for optical properties, which find application in optical switching/memories, solar cells, and other optical devices. It is shown that the calculated optical parameters (for a typical case of GaxSe1-x) based on the modified model are closer to the available experimental results. These parameters include the refractive index, extinction coefficient, dielectric functions, and optical band gap. GaxSe1-x has also been found to be suitable for reversible optical memories, where a phase change (a to c and vice versa) takes place under specified physical conditions. DTA/DSC studies also suggest the suitability of this material for optical switching/memory applications. We have also suggested the possible use of GaxSe1-x (x = 0.4) in place of the oxide layer in metal-oxide-semiconductor type solar cells. The new structure is Metal - Ga2Se3 - GaAs. The I-V characteristics and other parameters calculated for this structure are found to be much better than those for Si-based solar cells. Maximum output power is obtained at an intermediate layer thickness of approximately 40 angstroms for this typical solar cell.
A Synthetic Recursive “+1” Pathway for Carbon Chain Elongation
Marcheschi, Ryan J.; Li, Han; Zhang, Kechun; Noey, Elizabeth L.; Kim, Seonah; Chaubey, Asha; Houk, K. N.; Liao, James C.
2013-01-01
Nature uses four methods of carbon chain elongation for the production of 2-ketoacids, fatty acids, polyketides, and isoprenoids. Using a combination of quantum mechanical (QM) modeling, protein–substrate modeling, and protein and metabolic engineering, we have engineered the enzymes involved in leucine biosynthesis for use as a synthetic “+1” recursive metabolic pathway to extend the carbon chain of 2-ketoacids. This modified pathway preferentially selects longer-chain substrates for catalysis, as compared to the non-recursive natural pathway, and can recursively catalyze five elongation cycles to synthesize bulk chemicals, such as 1-heptanol, 1-octanol, and phenylpropanol directly from glucose. The “+1” chemistry is a valuable metabolic tool in addition to the “+5” chemistry and “+2” chemistry for the biosynthesis of isoprenoids, fatty acids, or polyketides. PMID:22242720
Waltman, Ludo; Yan, Erjia; van Eck, Nees Jan
2011-10-01
Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighing (as in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighing are combined.
A spatial operator algebra for manipulator modeling and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Kreutz, K.; Milman, M.
1988-01-01
A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed, along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing their complete equivalence to recursive Newton-Euler formulations of robot dynamics; the operator factorization and inversion of the manipulator mass matrix, which immediately yields O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed-chain system. Finally, additional applications and current research involving the spatial operator algebra are discussed in general terms.
Mining IP to Domain Name Interactions to Detect DNS Flood Attacks on Recursive DNS Servers.
Alonso, Roberto; Monroy, Raúl; Trejo, Luis A
2016-08-17
The Domain Name System (DNS) is a critical infrastructure of any network and, not surprisingly, a common target of cybercrime. There are numerous works that analyse higher-level DNS traffic to detect anomalies in the DNS or any other network service. By contrast, few efforts have been made to study and protect the recursive DNS level. In this paper, we introduce a novel abstraction of recursive DNS traffic to detect a flooding attack, a kind of Distributed Denial of Service (DDoS). The crux of our abstraction lies in a simple observation: recursive DNS queries, from IP addresses to domain names, form social groups; hence, a DDoS attack should result in drastic changes to the DNS social structure. We have built an anomaly-based detection mechanism which, given a time window of DNS usage, uses features that attempt to capture the DNS social structure, including a heuristic that estimates group composition. Our detection mechanism has been successfully validated (in a simulated and controlled setting), and with it the suitability of our abstraction for detecting flooding attacks. To the best of our knowledge, this is the first work to use this abstraction successfully to detect these kinds of attacks at the recursive level. Before concluding the paper, we motivate further research directions for this new abstraction: we have designed and tested two additional experiments, which show promising results for detecting other types of anomalies in recursive DNS servers.
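As an illustration of the window-based abstraction described above, a minimal Python sketch that reduces one time window of (IP, domain) query pairs to simple group-structure features; the specific features here (counts and mean degree) are illustrative placeholders, not the paper's actual feature set:

```python
from collections import defaultdict

def window_features(queries):
    """Summarize one time window of recursive-DNS queries as simple
    group-structure features. `queries` is a list of (ip, domain) pairs.
    A flooding attack should distort these statistics drastically."""
    domains_by_ip = defaultdict(set)
    for ip, domain in queries:
        domains_by_ip[ip].add(domain)
    n_ips = len(domains_by_ip)
    n_pairs = sum(len(ds) for ds in domains_by_ip.values())
    n_domains = len({d for ds in domains_by_ip.values() for d in ds})
    return {
        "unique_ips": n_ips,
        "unique_domains": n_domains,
        "mean_domains_per_ip": n_pairs / n_ips if n_ips else 0.0,
    }
```

An anomaly detector would compare such feature vectors across successive windows against a learned baseline.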
Bootstrapping on Undirected Binary Networks Via Statistical Mechanics
NASA Astrophysics Data System (ADS)
Fushing, Hsieh; Chen, Chen; Liu, Shan-Yu; Koehl, Patrice
2014-09-01
We propose a new method, inspired by statistical mechanics, for extracting geometric information from undirected binary networks and generating random networks that conform to this geometry. In this method, an undirected binary network is perceived as a thermodynamic system with a collection of permuted adjacency matrices as its states. The task of extracting information from the network is then reformulated as a discrete combinatorial optimization problem of searching for its ground state. To solve this problem, we apply multiple ensembles of temperature-regulated Markov chains to establish an ultrametric geometry on the network. This geometry is equipped with a tree hierarchy that captures the multiscale community structure of the network. We translate this geometry into a Parisi adjacency matrix, which has a relatively low energy level and is in the vicinity of the ground state. The Parisi adjacency matrix is then further optimized by making block permutations subject to the ultrametric geometry. The optimal matrix corresponds to the macrostate of the original network. An ensemble of random networks is then generated such that each of these networks conforms to this macrostate; the corresponding algorithm also provides an estimate of the size of this ensemble. By repeating this procedure at different scales of the ultrametric geometry of the network, it is possible to compute its evolution entropy, i.e., to estimate the evolution of its complexity as we move from a coarse to a fine description of its geometric structure. We demonstrate the performance of this method on simulated as well as real data networks.
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
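The first-order steepest-descent search described above can be illustrated on a toy planar 2-link arm; this sketch minimizes the squared tip-position error by gradient descent and is only an analog of the paper's recursive filter/smoother formulation (link lengths, initial angles, step size, and iteration count are arbitrary choices):

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0, steps=2000, rate=0.2):
    """First-order (steepest-descent) inverse kinematics for a planar
    2-link arm: find joint angles (t1, t2) whose tip reaches (x, y)."""
    t1, t2 = 0.3, 0.3  # initial joint angles
    for _ in range(steps):
        # forward kinematics: tip position for current joint angles
        px = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        py = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
        ex, ey = px - x, py - y
        # gradient of 0.5*(ex^2 + ey^2) w.r.t. joint angles (J^T * e)
        g1 = ex * (-l1 * math.sin(t1) - l2 * math.sin(t1 + t2)) \
           + ey * ( l1 * math.cos(t1) + l2 * math.cos(t1 + t2))
        g2 = ex * (-l2 * math.sin(t1 + t2)) + ey * (l2 * math.cos(t1 + t2))
        t1 -= rate * g1
        t2 -= rate * g2
    return t1, t2
```

A second-order Newton-Raphson variant would additionally approximate the Hessian, as the paper does via a Gauss-Markov product of first derivatives.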
Recursion equations in predicting band width under gradient elution.
Liang, Heng; Liu, Ying
2004-06-18
The evolution of a solute zone under gradient elution is a typical non-linear continuity-equation problem, since the local diffusion coefficient and local migration velocity of the mass cells of solute zones are functions of position and time due to the space- and time-variable mobile phase composition. In this paper, based on mesoscopic approaches (Lagrangian description, continuity theory, and the local equilibrium assumption), the evolution of solute zones in space- and time-dependent fields is described by the iterative addition of the local probability density of the mass cells of solute zones. Furthermore, on the macroscopic level, recursion equations have been proposed to simulate zone migration and spreading in reversed-phase high-performance liquid chromatography (RP-HPLC) by directly relating the local retention factor and local diffusion coefficient to the local mobile phase concentration. This new approach differs entirely from traditional plate-concept theories with an Eulerian description, since the band-width recursion equation is actually the accumulation of the local diffusion coefficients of solute zones over discrete time slices. The recursion equations and literature equations were applied to the same experimental RP-HPLC data, and the comparison shows that the recursion equations can accurately predict band width under gradient elution.
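The idea of accumulating local diffusion coefficients over discrete time slices can be sketched as a simple variance recursion; `D_local` here is a hypothetical stand-in for the composition-dependent local diffusion coefficient, not the paper's actual expression:

```python
def band_variance(D_local, n_steps, dt, sigma0_sq=0.0):
    """Accumulate zone variance over discrete time slices:
    sigma^2_{k+1} = sigma^2_k + 2 * D_local(k*dt) * dt.
    In gradient elution, D_local would depend on the local
    mobile-phase composition at each time slice."""
    sigma_sq = sigma0_sq
    for k in range(n_steps):
        sigma_sq += 2.0 * D_local(k * dt) * dt
    return sigma_sq
```

For a constant diffusion coefficient this reduces to the familiar Einstein relation sigma^2 = 2*D*t.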
Recursive multibody dynamics and discrete-time optimal control
NASA Technical Reports Server (NTRS)
Deleuterio, G. M. T.; Damaren, C. J.
1989-01-01
A recursive algorithm is developed for the solution of the simulation dynamics problem for a chain of rigid bodies. Arbitrary joint constraints are permitted, that is, joints may allow translational and/or rotational degrees of freedom. The recursive procedure is shown to be identical to that encountered in a discrete-time optimal control problem. For each relevant quantity in the multibody dynamics problem, there exists an analog in the context of optimal control. The performance index that is minimized in the control problem is identified as Gibbs' function for the chain of bodies.
A decoupled recursive approach for constrained flexible multibody system dynamics
NASA Technical Reports Server (NTRS)
Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung
1989-01-01
A variational vector-calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate the deformation kinematics under a small-deformation assumption, and an efficient recursive algorithm for flexible dynamics is developed. Dynamic analysis of a closed-loop robot is performed to illustrate the efficiency of the algorithm.
Contribution of zonal harmonics to gravitational moment
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.
1991-01-01
It is presently demonstrated that a recursive vector-dyadic expression for the contribution of a zonal harmonic of degree n to the gravitational moment about a small body's center-of-mass is obtainable with a procedure that involves twice differentiating a celestial body's gravitational potential with respect to a vector. The recursive property proceeds from taking advantage of a recursion relation for Legendre polynomials which appear in the gravitational potential. The contribution of the zonal harmonic of degree 2 is consistent with the gravitational moment exerted by an oblate spheroid.
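The recursion relation for Legendre polynomials exploited above is Bonnet's three-term formula; a minimal sketch:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via Bonnet's recursion:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x  # P_0(x), P_1(x)
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2*k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr
```

Evaluating all degrees up to n in one sweep like this is what makes the zonal-harmonic contributions recursive in practice.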
Recursive Directional Ligation Approach for Cloning Recombinant Spider Silks.
Dinjaski, Nina; Huang, Wenwen; Kaplan, David L
2018-01-01
Recent advances in genetic engineering have provided a route to produce various types of recombinant spider silks. Different cloning strategies have been applied to achieve this goal (e.g., concatemerization, step-by-step ligation, recursive directional ligation). Here we describe recursive directional ligation as an approach that allows for facile modularity and control over the size of the genetic cassettes. This approach is based on sequential ligation of genetic cassettes (monomers) where the junctions between them are formed without interrupting key gene sequences with additional base pairs.
Recursive Construction of Noiseless Subsystem for Qudits
NASA Astrophysics Data System (ADS)
Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing
2014-03-01
When the environmental noise acting on the system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may introduce further errors to the system. However, utilizing such a subsystem becomes impractical for large systems as the number of qudits increases. A recursive scheme offers a solution to this problem. Here, we review the previously introduced recursive construction, which can asymptotically protect 1/d of the qudits in the system against collective errors.
Parallel scheduling of recursively defined arrays
NASA Technical Reports Server (NTRS)
Myers, T. J.; Gokhale, M. B.
1986-01-01
A new method of automatic generation of concurrent programs which constructs arrays defined by sets of recursive equations is described. It is assumed that the time of computation of an array element is a linear combination of its indices, and integer programming is used to seek a succession of hyperplanes along which array elements can be computed concurrently. The method can be used to schedule equations involving variable length dependency vectors and mutually recursive arrays. Portions of the work reported here have been implemented in the PS automatic program generation system.
Connolly, Patrick; van Deventer, Vasi
2017-01-01
The present paper argues that a systems theory epistemology (and particularly the notion of hierarchical recursive organization) provides the critical theoretical context within which the significance of Friston's (2010a) Free Energy Principle (FEP) for both evolution and psychoanalysis is best understood. Within this perspective, the FEP occupies a particular level of the hierarchical organization of the organism, which is the level of biological self-organization. This form of biological self-organization is in turn understood as foundational and pervasive to the higher levels of organization of the human organism that are of interest to both neuroscience as well as psychoanalysis. Consequently, central psychoanalytic claims should be restated, in order to be located in their proper place within a hierarchical recursive organization of the (situated) organism. In light of the FEP the realization of the psychoanalytic mind by the brain should be seen in terms of the evolution of different levels of systematic organization where the concepts of psychoanalysis describe a level of hierarchical recursive organization superordinate to that of biological self-organization and the FEP. The implication of this formulation is that while “psychoanalytic” mental processes are fundamentally subject to the FEP, they nonetheless also add their own principles of process over and above that of the FEP. A model found in Grobbelaar (1989) offers a recursive bottom-up description of the self-organization of the psychoanalytic ego as dependent on the organization of language (and affect), which is itself founded upon the tendency toward autopoiesis (self-making) within the organism, which is in turn described as formally similar to the FEP. Meaningful consilience between Grobbelaar's model and the hierarchical recursive description available in Friston's (2010a) theory is described. 
The paper concludes that the valuable contribution of the FEP to psychoanalysis underscores the necessity of reengagement with the core concepts of psychoanalytic theory, and the usefulness that a systems theory epistemology—particularly hierarchical recursive description—can have for this goal. PMID:29038652
NASA Astrophysics Data System (ADS)
Lowenthal, Francis
2010-11-01
This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition, and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as used by logicians is presented. The two opposing approaches to the problem of language development are explained. For many authors, such as Chomsky [1], the faculty of language is innate; this is known as the Standard Theory. Other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child; they thus speak of Language Acquisition. It is also shown that in both cases a version of the principle of recursion is relevant for human language. The NVCD approach is defined, and the results obtained in the domain of language with this approach are presented: young subjects using this approach acquire a richer language structure, or re-acquire such a structure in the case of cerebral lesions. Finally, it is shown that the exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored by using recursive structures with the young child. It could also be the case that NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with the psychological observations made by adherents of the Language Acquisition approach, if confirmed by research combining the use of NVCDs, psychometric methods, and neural networks. This paper thus suggests that a research group oriented toward this problem should be organized.
Recursive flexible multibody system dynamics using spatial operators
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1992-01-01
This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for inverse dynamics, mass matrix evaluation, and composite-body forward dynamics. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.
Cache Locality Optimization for Recursive Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lifflander, Jonathan; Krishnamoorthy, Sriram
We present an approach to optimize cache locality for recursive programs by dynamically splicing, i.e., recursively interleaving, the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work-stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
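A scalar recursive-least-squares update of the kind underlying this approach can be sketched as follows; this omits the paper's residual-autocorrelation correction, and the forgetting factor and initial covariance are arbitrary choices:

```python
def rls_scalar(xs, ys, lam=1.0, theta0=0.0, p0=1e6):
    """Recursive least squares for a single parameter theta in y = theta*x.
    lam is the forgetting factor; p tracks the parameter variance scale.
    Returns the estimate history (a sketch, not the full colored-residual
    uncertainty method of the paper)."""
    theta, p = theta0, p0
    history = []
    for x, y in zip(xs, ys):
        denom = lam + p * x * x
        k = p * x / denom              # gain
        theta += k * (y - theta * x)   # innovation update
        p = (p - k * x * p) / lam      # covariance update
        history.append(theta)
    return history
```

The paper's contribution is, in effect, to correct the uncertainty implied by `p` when the residuals of such an update are colored rather than white.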
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions depend on the threshold value and reflect the distribution of pixel values in two classes; thus, the technique minimizes the classification error. The new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The proposed method outperforms the Huang-Wang and Otsu methods when the image consists of a textured background and poor printing quality. All three methods perform well, but yield different binarizations, when the background and foreground of the image have well-separated gray-level ranges.
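The Otsu baseline against which the method is compared selects the threshold that maximizes the between-class variance of the gray-level histogram; a minimal pure-Python sketch (of the baseline, not of the fuzzy-entropy method itself):

```python
def otsu_threshold(hist):
    """Return the gray level t that maximizes between-class variance.
    hist[i] = pixel count at gray level i; pixels <= t are 'background'."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = 0.0  # cumulative weight of the background class
    m0 = 0.0  # cumulative normalized sum of i*hist[i] for the background
    for t, h in enumerate(hist[:-1]):
        w0 += h / total
        m0 += t * h / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = m0 / w0, (total_mean - m0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

On a cleanly bimodal histogram the maximizer falls between the two modes, which is why all three methods agree when foreground and background gray-level ranges are well separated.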
Thermodynamic compensation upon binding to exosite 1 and the active site of thrombin
Treuheit, Nicholas A.; Beach, Muneera A.; Komives, Elizabeth A.
2011-01-01
Several lines of experimental evidence, including amide exchange and NMR, suggest that ligands binding to thrombin cause reduced backbone dynamics. Binding of the covalent inhibitor dPhe-Pro-Arg chloromethylketone to the active-site serine, as well as non-covalent binding of a fragment of the regulatory protein thrombomodulin to exosite 1 on the back side of the thrombin molecule, both cause reduced dynamics. However, the reduced dynamics do not appear to be accompanied by significant conformational changes. In addition, binding of ligands to the active site does not change the affinity of thrombomodulin fragments binding to exosite 1; however, the thermodynamic coupling between exosite 1 and the active site has not been fully explored. We present isothermal titration calorimetry experiments that probe changes in enthalpy and entropy upon formation of binary ligand complexes. The approach relies on stringent thrombin preparation methods and on the use of dansyl-L-arginine-(3-methyl-1,5-pentanediyl)amide (DAPA) and a DNA aptamer as ligands with ideal thermodynamic signatures for binding to the active site and to exosite 1. Using this approach, the binding thermodynamic signatures of each ligand alone, as well as the binding signatures of each ligand when the other binding site was occupied, were measured. Different exosite 1 ligands with widely varied thermodynamic signatures cause the same reduction in ΔH and a concomitantly lower entropy cost upon DAPA binding at the active site. The results suggest a general phenomenon of enthalpy-entropy compensation, consistent with reduction of dynamics/increased folding of thrombin upon ligand binding to either the active site or exosite 1. PMID:21526769
Hydration of dimethyldodecylamine-N-oxide: enthalpy and entropy driven processes.
Kocherbitov, Vitaly; Söderman, Olle
2006-07-13
Dimethyldodecylamine-N-oxide (DDAO) has only one polar atom that is able to interact with water. Still, this surfactant shows very hydrophilic properties: in mixtures with water, it forms normal liquid crystalline phases and micelles. Moreover, there is data in the literature indicating that the hydration of this surfactant is driven by enthalpy while other studies show that hydration of surfactants and lipids typically is driven by entropy. Sorption calorimetry allows resolving enthalpic and entropic contributions to the free energy of hydration at constant temperature and thus directly determines the driving forces of hydration. The results of the present sorption calorimetric study show that the hydration of liquid crystalline phases of DDAO is driven by entropy, except for the hydration of the liquid crystalline lamellar phase which is co-driven by enthalpy. The exothermic heat effect of the hydration of the lamellar phase arises from formation of strong hydrogen bonds between DDAO and water. Another issue is the driving forces of the phase transitions caused by the hydration. The sorption calorimetric results show that the transitions from the lamellar to cubic and from the cubic to the hexagonal phase are driven by enthalpy. Transitions from solid phases to the liquid crystalline lamellar phase are entropically driven, while the formation of the monohydrate from the dry surfactant is driven by enthalpy. The driving forces of the transition from the hexagonal phase to the isotropic solution are close to zero. These sorption calorimetric results are in good agreement with the analysis of the binary phase diagram based on the van der Waals differential equation. The phase diagram of the DDAO-water system determined using DSC and sorption calorimetry is presented.
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
Van De Poll, Matthew N; Zajaczkowski, Esmi L; Taylor, Gavin J; Srinivasan, Mandyam V; van Swinderen, Bruno
2015-11-01
Closed-loop paradigms provide an effective approach for studying visual choice behaviour and attention in small animals. Different flying and walking paradigms have been developed to investigate behavioural and neuronal responses to competing stimuli in insects such as bees and flies. However, the variety of stimulus choices that can be presented over one experiment is often limited. Current choice paradigms are mostly constrained as single binary choice scenarios that are influenced by the linear structure of classical conditioning paradigms. Here, we present a novel behavioural choice paradigm that allows animals to explore a closed geometry of interconnected binary choices by repeatedly selecting among competing objects, thereby revealing stimulus preferences in an historical context. We used our novel paradigm to investigate visual flicker preferences in honeybees (Apis mellifera) and found significant preferences for 20-25 Hz flicker and avoidance of higher (50-100 Hz) and lower (2-4 Hz) flicker frequencies. Similar results were found when bees were presented with three simultaneous choices instead of two, and when they were given the chance to select previously rejected choices. Our results show that honeybees can discriminate among different flicker frequencies and that their visual preferences are persistent even under different experimental conditions. Interestingly, avoided stimuli were more attractive if they were novel, suggesting that novelty salience can override innate preferences. Our recursive virtual reality environment provides a new approach to studying visual discrimination and choice behaviour in animals. © 2015. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Chung, Hye Won; Guha, Saikat; Zheng, Lizhong
2017-07-01
We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
Sc–Zr–Nb–Rh–Pd and Sc–Zr–Nb–Ta–Rh–Pd High-Entropy Alloy Superconductors on a CsCl-Type Lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolze, Karoline; Tao, Jing; von Rohr, Fabian O.
We have synthesized previously unreported High-Entropy Alloys (HEAs) in the pentanary (ScZrNb)1-x[RhPd]x and hexanary (ScZrNbTa)1-x[RhPd]x systems. The materials have CsCl-type structures and mixed site occupancies. Both HEAs are type-II superconductors with strongly varying critical temperatures (Tc) depending on the valence electron count (VEC); the Tcs increase monotonically with decreasing VEC within each series, and do not follow the trends seen for either crystalline or amorphous transition metal superconductors. The (ScZrNb)0.65[RhPd]0.35 HEA with the highest Tc, ~9.3 K, also exhibits the largest µ0Hc2(0) = 10.7 T. The pentanary and hexanary HEAs have higher superconducting transition temperatures than their simple binary intermetallic relatives with the CsCl-type structure and a surprisingly ductile mechanical behavior. The presence of niobium, even at the 20% level, has a positive impact on the Tc. Nevertheless, niobium-free (ScZr)0.50[RhPd]0.50, the mother compound of both superconducting HEAs found here, is itself superconducting, proving that superconductivity is an intrinsic feature of the bulk material.
Dermatas, Evangelos
2015-01-01
A novel method for finger vein pattern extraction from infrared images is presented. The method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image that is both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so, in order to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods, obtained by the local application of the model, and the corresponding windows of the original image are added. After this step, the veins in the enhanced image are sufficiently emphasized, and an accurate segmentation can readily be obtained by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, so a postprocessing step is performed in order to extract a robust finger vein pattern. PMID:26120357
On the Hosoya index of a family of deterministic recursive trees
NASA Astrophysics Data System (ADS)
Chen, Xufeng; Zhang, Jingyuan; Sun, Weigang
2017-01-01
In this paper, we calculate the Hosoya index for a family of deterministic recursive trees, in which new nodes are attached to existing nodes according to a fixed rule. We then obtain a recursive solution for the Hosoya index based on determinant operations. The computational complexity of our proposed algorithm is O(log2 n), with n being the network size, which is lower than that of existing numerical methods. Finally, we give a weighted tree shrinking method as a graphical interpretation of the recurrence formula for the Hosoya index.
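For background, the Hosoya index of a graph counts its matchings (including the empty one), and on trees it satisfies a simple leaf-removal recurrence, Z(T) = Z(T - v) + Z(T - v - u) for a leaf v attached to u. A brute-force sketch of that recurrence (illustrative only; this is not the paper's determinant-based O(log2 n) algorithm, and the function name is an assumption):

```python
# Hosoya index (number of matchings, including the empty matching) of a
# tree or forest, via the leaf-removal recurrence
#   Z(T) = Z(T - v) + Z(T - v - u),  v a leaf attached to u.
def hosoya_tree(adj):
    """adj: dict node -> set of neighbours, describing a tree or forest."""
    adj = {v: set(ns) for v, ns in adj.items()}  # defensive copy

    def z(adj):
        # find a leaf (degree exactly 1); a graph with no edges has Z = 1
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is None:
            return 1
        (u,) = adj[leaf]
        # branch 1: leaf unmatched -> delete the leaf
        a1 = {v: ns - {leaf} for v, ns in adj.items() if v != leaf}
        # branch 2: edge (leaf, u) in the matching -> delete both endpoints
        a2 = {v: ns - {leaf, u} for v, ns in adj.items() if v not in (leaf, u)}
        return z(a1) + z(a2)

    return z(adj)

# path on 4 vertices: matchings are {}, 3 single edges, 1 disjoint pair -> 5
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(hosoya_tree(path4))  # -> 5
```

The exponential branching here is exactly what closed-form recursions like the paper's avoid.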
Selective object encryption for privacy protection
NASA Astrophysics Data System (ADS)
Zhou, Yicong; Panetta, Karen; Cherukuri, Ravindranath; Agaian, Sos
2009-05-01
This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code, and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, and (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services, and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
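The abstract does not spell out the truncation rule, but a common definition of the underlying p-Fibonacci numbers is F_p(n) = F_p(n-1) + F_p(n-p-1) with unit seeds. The sketch below is a generic illustration of such a sequence and a greedy (Zeckendorf-style) digit expansion of the kind used for bit-plane decomposition; the function names and the exact digit rule are assumptions, not the paper's construction:

```python
# p-Fibonacci sequence: F_p(n) = F_p(n-1) + F_p(n-p-1), F_p(0..p) = 1
# (a common convention; p = 1 recovers the classic Fibonacci numbers).
def p_fibonacci(p, count):
    seq = [1] * (p + 1)
    while len(seq) < count:
        seq.append(seq[-1] + seq[-(p + 1)])
    return seq[:count]

def p_code(value, p, count):
    """Greedy binary digits of `value` over the p-Fibonacci base, most
    significant first; one digit per bit plane in a decomposition."""
    digits = []
    for f in reversed(p_fibonacci(p, count)):
        digits.append(1 if value >= f else 0)
        if digits[-1]:
            value -= f
    return digits

print(p_fibonacci(1, 8))   # -> [1, 1, 2, 3, 5, 8, 13, 21]
print(p_code(10, 1, 8))    # greedy decomposition of 10 over that base
```

Decomposing every pixel value this way yields one binary plane per digit position, which is what the shuffling/encryption stage then operates on.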
NASA Astrophysics Data System (ADS)
Tortora, Maxime M. C.; Doye, Jonathan P. K.
2017-12-01
We detail the application of bounding volume hierarchies to accelerate second-virial evaluations for arbitrary complex particles interacting through hard and soft finite-range potentials. This procedure, based on the construction of neighbour lists through the combined use of recursive atom-decomposition techniques and binary overlap search schemes, is shown to scale sub-logarithmically with particle resolution in the case of molecular systems with high aspect ratios. Its implementation within an efficient numerical and theoretical framework based on classical density functional theory enables us to investigate the cholesteric self-assembly of a wide range of experimentally relevant particle models. We illustrate the method through the determination of the cholesteric behavior of hard, structurally resolved twisted cuboids, and report quantitative evidence of the long-predicted phase handedness inversion with increasing particle thread angles near the phenomenological threshold value of 45°. Our results further highlight the complex relationship between microscopic structure and helical twisting power in such model systems, which may be attributed to subtle geometric variations of their chiral excluded-volume manifold.
NASA Astrophysics Data System (ADS)
Tan, Zhi-Zhong
2017-03-01
We study the problem of two-point resistance in a non-regular m × n cylindrical network with a zero-resistor axis and two arbitrary boundaries by means of the Recursion-Transform method. This is a new problem, never solved before: the Green's function technique and the Laplacian matrix approach are invalid in this case. A disordered network with arbitrary boundaries is a basic model in many physical and real-world systems; however, the exact calculation of the resistance of a binary resistor network is important but difficult in the case of arbitrary boundaries, since a boundary acts like a wall or trap that affects the behavior of the finite network. In this paper we obtain a general resistance formula for a non-regular m × n cylindrical network, which is composed of a single summation. Further, the current distribution is given explicitly as a byproduct of the method. As applications, several interesting results are derived as special cases of the general formula. Supported by the Natural Science Foundation of Jiangsu Province under Grant No. BK20161278
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds of the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
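The binary-operator apply on decision diagrams mentioned above is, at its core, a memoized recursion on node pairs. A minimal sketch (using plain multi-terminal nodes for brevity; the edge-valued variant additionally carries an additive offset on each edge, and the node encoding and names here are illustrative assumptions, not the paper's library API):

```python
# Generic recursive apply(op, a, b) on decision diagrams with memoization.
# Terminals are numbers; internal nodes are tuples (level, low, high).
def apply_op(op, a, b, memo=None):
    if memo is None:
        memo = {}
    key = (id(a), id(b))
    if key in memo:
        return memo[key]
    if not isinstance(a, tuple) and not isinstance(b, tuple):
        res = op(a, b)                       # two terminals: apply operator
    else:
        # descend on the topmost (smallest) level present in either operand
        la = a[0] if isinstance(a, tuple) else float("inf")
        lb = b[0] if isinstance(b, tuple) else float("inf")
        lvl = min(la, lb)
        a0, a1 = (a[1], a[2]) if la == lvl else (a, a)
        b0, b1 = (b[1], b[2]) if lb == lvl else (b, b)
        res = (lvl, apply_op(op, a0, b0, memo), apply_op(op, a1, b1, memo))
    memo[key] = res
    return res

# x0 + x1, built from two single-variable diagrams
x0 = (0, 0, 1)
x1 = (1, 0, 1)
print(apply_op(lambda u, v: u + v, x0, x1))  # -> (0, (1, 0, 1), (1, 1, 2))
```

The memo table is what bounds the recursion by the product of the operand sizes, the quantity the paper's complexity results are stated in.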
NASA Astrophysics Data System (ADS)
Krishnan, M.; Bhowmik, B.; Tiwari, A. K.; Hazra, B.
2017-08-01
In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using recursive principal component analysis (RPCA) in conjunction with online damage indicators is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal modes online using the rank-one perturbation method, which are subsequently utilized to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/nonlinear states that indicate damage. The RPCA algorithm iterates the eigenvector and eigenvalue estimates of the sample covariance matrix as each new data point arrives, using the rank-one perturbation method. An online condition indicator (CI) based on the L2 norm of the error between the actual response and the response projected using recursive eigenvector matrix updates over successive iterations is proposed. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data. The proposed CI, named the recursive residual error, is also adopted for simultaneous spatio-temporal damage detection. Numerical simulations performed on a five-degree-of-freedom nonlinear system under white noise and El Centro excitations, with different levels of nonlinearity simulating the damage scenarios, demonstrate the robustness of the proposed algorithm. Successful results obtained from practical case studies involving experiments performed on a cantilever beam subjected to earthquake excitation, for full-sensor and underdetermined cases, and from recorded responses of the UCLA Factor building (full data and its subset) demonstrate the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.
Recursive Vocal Pattern Learning and Generalization in Starlings
ERIC Educational Resources Information Center
Bloomfield, Tiffany Corinna
2012-01-01
Among known communication systems, human language alone exhibits open-ended productivity of meaning. Interest in the psychological mechanisms supporting this ability, and their evolutionary origins, has resurged following the suggestion that the only uniquely human ability underlying language is a mechanism of recursion. This "Unique…
ERIC Educational Resources Information Center
Simons, C. S.; Wright, M.
2007-01-01
With Simson's 1753 paper as a starting point, the current paper reports investigations of Simson's identity (also known as Cassini's) for the Fibonacci sequence as a means to explore some fundamental ideas about recursion. Simple algebraic operations allow one to reduce the standard linear Fibonacci recursion to the nonlinear Simson's recursion…
NASA Astrophysics Data System (ADS)
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) equations is given, as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
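Viewed this way, the RLS update is just the Kalman measurement update with a trivial predict step (constant state, no process noise). A minimal numerical sketch under that interpretation (all names illustrative; a constant unit measurement-noise weight is assumed):

```python
import numpy as np

# One RLS step for the model y = phi @ theta_true + noise:
# Kalman measurement update with state theta and covariance P.
def rls_step(theta, P, phi, y, r=1.0):
    phi = phi.reshape(-1, 1)
    k = P @ phi / (r + phi.T @ P @ phi)                 # gain
    theta = theta + (k * (y - phi.T @ theta)).ravel()   # state update
    P = P - k @ phi.T @ P                               # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta = np.zeros(2)
P = 1e3 * np.eye(2)            # large initial covariance = diffuse prior
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(np.round(theta, 2))      # close to [ 2., -1.]
```

Because the state is modeled as constant, no prediction step appears; each observation only shrinks P, which is how RLS accumulates information over time.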
A Note on Discrete Mathematics and Calculus.
ERIC Educational Resources Information Center
O'Reilly, Thomas J.
1987-01-01
Much of the current literature on the topic of discrete mathematics and calculus during the first two years of an undergraduate mathematics curriculum is cited. A relationship between the recursive integration formulas and recursively defined polynomials is described. A Pascal program is included. (Author/RH)
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
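A logarithmic quantizer of the kind described maps each measurement to the nearest level ±u0·ρ^i, which keeps the quantization error norm-bounded relative to the input, |q(x) − x| ≤ δ|x| with δ = (1 − ρ)/(1 + ρ); this is the property that lets the filter treat quantization as a bounded uncertainty. A minimal sketch (parameter names and values are illustrative assumptions, not the paper's settings):

```python
import math

# Logarithmic quantizer: levels are +-u0 * rho**i for integer i; mapping to
# the nearest level gives the sector bound |q(x) - x| <= delta * |x|,
# delta = (1 - rho) / (1 + rho).
def log_quantize(x, u0=1.0, rho=0.5):
    if x == 0.0:
        return 0.0
    sign, mag = (1.0, x) if x > 0 else (-1.0, -x)
    t = math.log(mag / u0) / math.log(rho)
    # two candidate levels bracketing mag; take the closer one
    i = min((math.floor(t), math.ceil(t)),
            key=lambda k: abs(rho**k * u0 - mag))
    return sign * rho**i * u0

delta = (1 - 0.5) / (1 + 0.5)
for x in (0.03, -0.7, 2.4, 100.0):
    q = log_quantize(x)
    assert abs(q - x) <= delta * abs(x) + 1e-12  # sector bound holds
```

Unlike a uniform quantizer, the relative (not absolute) error is bounded, which is why the filter can fold it into the same norm-bounded uncertainty framework as the linearization error.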
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
Fermionic Approach to Weighted Hurwitz Numbers and Topological Recursion
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Chapuy, G.; Eynard, B.; Harnad, J.
2017-12-01
A fermionic representation is given for all the quantities entering in the generating function approach to weighted Hurwitz numbers and topological recursion. This includes: KP and 2D Toda {τ} -functions of hypergeometric type, which serve as generating functions for weighted single and double Hurwitz numbers; the Baker function, which is expanded in an adapted basis obtained by applying the same dressing transformation to all vacuum basis elements; the multipair correlators and the multicurrent correlators. Multiplicative recursion relations and a linear differential system are deduced for the adapted bases and their duals, and a Christoffel-Darboux type formula is derived for the pair correlator. The quantum and classical spectral curves linking this theory with the topological recursion program are derived, as well as the generalized cut-and-join equations. The results are detailed for four special cases: the simple single and double Hurwitz numbers, the weakly monotone case, corresponding to signed enumeration of coverings, the strongly monotone case, corresponding to Belyi curves and the simplest version of quantum weighted Hurwitz numbers.
First Principles Calculations of Transition Metal Binary Alloys: Phase Stability and Surface Effects
NASA Astrophysics Data System (ADS)
Aspera, Susan Meñez; Arevalo, Ryan Lacdao; Shimizu, Koji; Kishida, Ryo; Kojima, Kazuki; Linh, Nguyen Hoang; Nakanishi, Hiroshi; Kasai, Hideaki
2017-06-01
The phase stability and surface effects of binary transition metal nano-alloy systems were investigated using density functional theory-based first principles calculations. In this study, we evaluated the cohesive and alloying energies of six binary metal alloy bulk systems that sample each type of alloy according to miscibility, i.e., Au-Ag and Pd-Ag for the solid solution-type alloys (SS), Pd-Ir and Pd-Rh for the high-temperature solid solution-type alloys (HTSS), and Au-Ir and Ag-Rh for the phase-separation (PS)-type alloys. Our results and analysis show consistency with experimental observations on the type of materials in the bulk phase. Varying the lattice parameter was also shown to affect the stability of the bulk mixed alloy system: it was observed, particularly for the PS- and HTSS-type materials, that mixing becomes energetically more favorable as the lattice constant increases. We furthermore evaluated the surface effects, an important factor to consider for nanoparticle-sized alloys, through analysis of the (001) and (111) surface facets. We found that the stability of the surface depends on the optimization of atomic positions and the segregation of atoms near/at the surface, particularly for the HTSS- and PS-type metal alloys. Furthermore, the energy cost of mixing atoms at the interface of the atomic boundaries of PS- and HTSS-type materials is low enough to be overcome by the energy gained through entropy. These, therefore, are the main factors favoring the possibility of mixing alloys near the surface.
Jet Precession Driven by a Supermassive Black Hole Binary System in the BL Lac Object PG 1553+113
NASA Astrophysics Data System (ADS)
Caproni, Anderson; Abraham, Zulema; Motter, Juliana Cristina; Monteiro, Hektor
2017-12-01
The recent discovery of a roughly simultaneous periodic variability in the light curves of the BL Lac object PG 1553+113 at several electromagnetic bands represents the first case of such odd behavior reported in the literature. Motivated by this, we analyzed 15 GHz interferometric maps of the parsec-scale radio jet of PG 1553+113 to verify the presence of a possible counterpart of this periodic variability. We used the Cross-entropy statistical technique to obtain the structural parameters of the Gaussian components present in the radio maps of this source. We kinematically identified seven jet components formed coincidentally with flare-like features seen in the γ-ray light curve. From the derived jet component positions in the sky plane and their kinematics (ejection epochs, proper motions, and sky position angles), we modeled their temporal changes in terms of a relativistic jet that is steadily precessing in time. Our results indicate a precession period in the observer’s reference frame of 2.24 ± 0.03 years, compatible with the periodicity detected in the light curves of PG 1553+113. However, the maxima of the jet Doppler boosting factor are systematically delayed relative to the peaks of the main γ-ray flares. We propose two scenarios that could explain this delay, both based on the existence of a supermassive black hole binary system in PG 1553+113. We estimated the characteristics of this putative binary system that also would be responsible for driving the inferred jet precession.
Domańska, Urszula; Królikowska, Marta; Walczak, Klaudia
2014-01-01
The effects of temperature and composition on the density and viscosity of pure benzothiophene and a pure ionic liquid (IL), and those of binary mixtures containing the IL 1-butyl-1-methylpyrrolidinium tricyanomethanide ([BMPYR][TCM] + benzothiophene), are reported at six temperatures (308.15, 318.15, 328.15, 338.15, 348.15 and 358.15) K and ambient pressure. The temperature dependences of the density and viscosity were represented by an empirical second-order polynomial and by the Vogel-Fulcher-Tammann equation, respectively. The density and viscosity variations with composition were described by polynomials. Excess molar volumes and viscosity deviations were calculated and correlated by Redlich-Kister polynomial expansions. The surface tensions of benzothiophene, the pure IL, and binary mixtures of ([BMPYR][TCM] + benzothiophene) were measured at atmospheric pressure at four temperatures (308.15, 318.15, 328.15 and 338.15) K. The surface tension deviations were calculated and correlated by a Redlich-Kister polynomial expansion. The temperature dependence of the interfacial tension was used to evaluate the surface entropy, the surface enthalpy, the critical temperature, the surface energy and the parachor for the pure IL. These measurements complete the information on the influence of temperature and composition on the physicochemical properties of the selected IL, which was chosen as a possible new entrainer for the separation of sulfur compounds from fuels. A qualitative analysis of these quantities in terms of molecular interactions is reported. The obtained results indicate that the interactions of this IL with benzothiophene are strongly dependent on packing effects and on hydrogen bonding of the IL with the polar solvent.
Cho, Pyeong Whan; Szkudlarek, Emily; Tabor, Whitney
2016-01-01
Learning is typically understood as a process in which the behavior of an organism is progressively shaped until it closely approximates a target form. It is easy to comprehend how a motor skill or a vocabulary can be progressively learned—in each case, one can conceptualize a series of intermediate steps which lead to the formation of a proficient behavior. With grammar, it is more difficult to think in these terms. For example, center embedding recursive structures seem to involve a complex interplay between multiple symbolic rules which have to be in place simultaneously for the system to work at all, so it is not obvious how the mechanism could gradually come into being. Here, we offer empirical evidence from a new artificial language (or “artificial grammar”) learning paradigm, Locus Prediction, that, despite the conceptual conundrum, recursion acquisition occurs gradually, at least for a simple formal language. In particular, we focus on a variant of the simplest recursive language, a^nb^n, and find evidence that (i) participants trained on two levels of structure (essentially ab and aabb) generalize to the next higher level (aaabbb) more readily than participants trained on one level of structure (ab) combined with a filler sentence; nevertheless, they do not generalize immediately; (ii) participants trained up to three levels (ab, aabb, aaabbb) generalize more readily to four levels than participants trained on two levels generalize to three; (iii) when we present the levels in succession, starting with the lower levels and including more and more of the higher levels, participants show evidence of transitioning between the levels gradually, exhibiting intermediate patterns of behavior on which they were not trained; (iv) the intermediate patterns of behavior are associated with perturbations of an attractor in the sense of dynamical systems theory.
We argue that all of these behaviors indicate a theory of mental representation in which recursive systems lie on a continuum of grammar systems which are organized so that grammars that produce similar behaviors are near one another, and that people learning a recursive system are navigating progressively through the space of these grammars. PMID:27375543
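For reference, the a^n b^n language used in these experiments (levels ab, aabb, aaabbb, …) is the textbook minimal non-regular language: a single counter, or a direct check, suffices to recognize it. A trivial sketch (function name is illustrative):

```python
# Recognizer for the a^n b^n language, n >= 1: n copies of 'a'
# followed by exactly n copies of 'b'.
def is_anbn(s):
    half, rem = divmod(len(s), 2)
    return rem == 0 and half > 0 and s == "a" * half + "b" * half

print([w for w in ("ab", "aabb", "aaabbb", "abab", "ba") if is_anbn(w)])
# -> ['ab', 'aabb', 'aaabbb']
```

That a mere counter suffices is what makes this language the simplest probe of center-embedding-style recursion in learning experiments.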
Adiabatic Mass Loss Model in Binary Stars
NASA Astrophysics Data System (ADS)
Ge, H. W.
2012-07-01
Rapid mass transfer in interacting binary systems is very complicated. It relates to two basic problems in binary star evolution, i.e., dynamically unstable Roche-lobe overflow and common envelope evolution. Both problems are very important and difficult to model. In this PhD thesis, we focus on the rapid mass loss process of the donor in interacting binary systems. Applications to the criterion for dynamically unstable mass transfer and to common envelope evolution are also included. Our results based on the adiabatic mass loss model could be used to improve binary evolution theory, binary population synthesis methods, and other related aspects. We build up the adiabatic mass loss model. In this model, two approximations are made. The first is that energy generation and heat flow through the stellar interior can be neglected, so the restructuring is adiabatic. The second is that the stellar interior remains in hydrostatic equilibrium. We model this response by constructing model sequences, beginning with a donor star filling its Roche lobe at an arbitrary point in its evolution, holding its specific entropy and composition profiles fixed. These approximations are validated by comparison with time-dependent binary mass transfer calculations and with the polytropic model for low-mass zero-age main-sequence stars. In dynamical time scale mass transfer, the adiabatic response of the donor star drives it to expand beyond its Roche lobe, leading to runaway mass transfer and the formation of a common envelope with its companion star. For donor stars with surface convection zones of any significant depth, this runaway condition is encountered early in mass transfer, if at all; but for main sequence stars with radiative envelopes, it may be encountered after a prolonged phase of thermal time scale mass transfer, the so-called delayed dynamical instability.
We identify the critical binary mass ratio for the onset of dynamical time scale mass transfer; if the ratio of donor to accretor masses exceeds this critical value, dynamical time scale mass transfer ensues. The resulting grid of criteria across stellar types can serve as basic input to binary population synthesis methods and should improve them substantially. In common envelope evolution, the dissipation of the orbital energy of the binary provides the energy to eject the common envelope; the energy budget for this process essentially consists of the initial orbital energy of the binary and the initial binding energies of the binary components. We emphasize that, because stellar core and envelope contribute mutually to each other's gravitational potential energy, proper evaluation of the total energy of a star requires integration over the entire stellar interior, not the ejected envelope alone as commonly assumed. We show that the change in total energy of the donor star, as a function of its remaining mass along an adiabatic mass-loss sequence, can be calculated. This change in total energy, combined with the requirement that both the remnant donor and its companion star fit within their respective Roche lobes, then circumscribes the energetically possible survivors of common envelope evolution. This is the first time the total energy of the donor star in common envelope evolution can be calculated accurately; results obtained with the older method are inconsistent with observations.
Recursions for the exchangeable partition function of the seedbank coalescent.
Kurt, Noemi; Rafler, Mathias
2017-04-01
For the seedbank coalescent with mutation under the infinite alleles assumption, which describes the gene genealogy of a population with a strong seedbank effect subject to mutations, we study the distribution of the final partition with mutation. This generalizes the coalescent with freeze by Dong et al. (2007) to coalescents where ancestral lineages are blocked from coalescing. We derive an implicit recursion which we show to have a unique solution and give an interpretation in terms of absorption problems of a random walk. Moreover, we derive recursions for the distribution of the number of blocks in the final partition. Copyright © 2017 Elsevier Inc. All rights reserved.
Adaptable Iterative and Recursive Kalman Filter Schemes
NASA Technical Reports Server (NTRS)
Zanetti, Renato
2014-01-01
Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The Iterated Kalman filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be applied to the IKF as well.
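The N-updates-per-measurement idea can be illustrated with an iterated, relinearized measurement update in the style of the IKF. The function name, the scalar measurement model in the usage example, and the fixed iteration count are assumptions for this sketch, not the paper's exact RUF recursion.

```python
import numpy as np

def iterated_update(x, P, z, h, H_jac, R, n_iter=5):
    """Process one measurement z with n_iter relinearized updates.

    Each pass relinearizes the measurement function h about the current
    iterate (IKF-style), reducing the error of a single EKF linearization.
    """
    x0 = x.copy()           # prior mean; iterations restart from it
    xi = x.copy()           # current iterate
    for _ in range(n_iter):
        H = H_jac(xi)                               # Jacobian at iterate
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # gain
        # standard IKF iterate, relinearized about xi
        xi = x0 + K @ (z - h(xi) - H @ (x0 - xi))
    P_new = (np.eye(len(x0)) - K @ H) @ P
    return xi, P_new
```

For example, with the quadratic measurement h(x) = x², prior mean 2.0, and measurement 4.41, the iterates converge toward 2.1.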
Tree-manipulating systems and Church-Rosser theorems.
NASA Technical Reports Server (NTRS)
Rosen, B. K.
1973-01-01
Study of a broad class of tree-manipulating systems called subtree replacement systems. The use of this framework is illustrated by general theorems analogous to the Church-Rosser theorem and by applications of these theorems. Sufficient conditions are derived for the Church-Rosser property, and their applications to recursive definitions, the lambda calculus, and parallel programming are discussed. McCarthy's (1963) recursive calculus is extended by allowing a choice between call-by-value and call-by-name. It is shown that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm. It is also shown that these functions solve their defining equations in a 'canonical' manner.
ERIC Educational Resources Information Center
Camp, Dane R.
1991-01-01
After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
The Free Energy in the Derrida-Retaux Recursive Model
NASA Astrophysics Data System (ADS)
Hu, Yueyun; Shi, Zhan
2018-05-01
We are interested in a simple max-type recursive model studied by Derrida and Retaux (J Stat Phys 156:268-290, 2014) in the context of a physics problem, and find a wide range for the exponent in the free energy in the nearly supercritical regime.
Calculation of shear viscosity using Green-Kubo relations within a parton cascade
NASA Astrophysics Data System (ADS)
Wesp, C.; El, A.; Reining, F.; Xu, Z.; Bouras, I.; Greiner, C.
2011-11-01
The shear viscosity of a gluon gas is calculated using the Green-Kubo relation. Time correlations of the energy-momentum tensor in thermal equilibrium are extracted from microscopic simulations using a parton cascade solving various Boltzmann collision processes. We find that the perturbation-QCD- (pQCD-) based gluon bremsstrahlung described by Gunion-Bertsch processes significantly lowers the shear viscosity by a factor of 3 to 8 compared to elastic scatterings. The shear viscosity scales with the coupling as η ~ 1/[α_s^2 log(1/α_s)]. For constant α_s the shear viscosity to entropy density ratio η/s has no dependence on temperature. Replacing the pQCD-based collision angle distribution of binary scatterings by an isotropic form decreases the shear viscosity by a factor of 3.
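The Green-Kubo relation estimates η from the time autocorrelation of an off-diagonal stress component, η = V/(k_B T) ∫₀^∞ ⟨σ_xy(0) σ_xy(t)⟩ dt. A minimal serial sketch follows; the function name, the truncation of the autocorrelation at half the series length, and the trapezoidal integration are illustrative choices, not the authors' cascade implementation.

```python
import numpy as np

def green_kubo_viscosity(sigma_xy, dt, volume, kB_T):
    """Estimate shear viscosity from a stress time series via Green-Kubo:
    eta = V/(kB*T) * integral of <s(0) s(t)> dt."""
    s = np.asarray(sigma_xy, dtype=float)
    s = s - s.mean()
    n = len(s)
    # time-origin-averaged autocorrelation <s(0) s(t)>, lags 0 .. n//2 - 1
    acf = np.array([np.mean(s[:n - k] * s[k:]) for k in range(n // 2)])
    # trapezoidal rule for the time integral
    integral = dt * (0.5 * acf[0] + acf[1:].sum())
    return volume / kB_T * integral
```

In practice the upper truncation lag and noise in the correlation tail dominate the error of such estimates.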
Weak turbulence theory for beam-plasma interaction
NASA Astrophysics Data System (ADS)
Yoon, Peter H.
2018-01-01
The kinetic theory of weak plasma turbulence, of which Ronald C. Davidson was an important early pioneer [R. C. Davidson, Methods in Nonlinear Plasma Theory, (Academic Press, New York, 1972)], is a venerable and valid theory that may be applicable to a large number of problems in both laboratory and space plasmas. This paper applies the weak turbulence theory to the problem of gentle beam-plasma interaction and Langmuir turbulence. It is shown that the beam-plasma interaction undergoes various stages of physical processes starting from linear instability, to quasilinear saturation, to mode coupling that takes place after the quasilinear stage, followed by a state of quasi-static "turbulent equilibrium." The long term quasi-equilibrium stage is eventually perturbed by binary collisional effects in order to bring the plasma to a thermodynamic equilibrium with increased entropy.
NASA Astrophysics Data System (ADS)
Kunsági-Máté, Sándor; Ortmann, Erika; Kollár, László; Nikfardjam, Martin Pour
2008-09-01
The complex formation of malvidin-3- O-glucoside with several polyphenols, the so-called "copigmentation" phenomenon, was studied in aqueous solutions. To simulate the copigmentation process during fermentation, the stability of the formed complexes was examined as a function of the ethanol content of the aqueous solution. Results indicate that stronger and larger complexes are formed when the ethanol content exceeds a critical threshold of 8 vol.%. However, the size of the malvidin/procyanidin and malvidin/epicatechin complexes is drastically reduced above this critical concentration. Fluorescence lifetime and solvent relaxation measurements give insight into the particular processes at the molecular level and will help us comprehend the first important steps during winemaking, in order to recommend an optimized winemaking technology that ensures extraordinary colour stability in red wines.
User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B. P.
1982-01-01
PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2010-01-01
A method, computer readable storage, and apparatus for implementing recursive segmentation of data with spatial characteristics into regions, including splitting-remerging of pixels with contiguous region designations and a user controlled parameter for providing a preference for merging adjacent regions to eliminate window artifacts.
A Recursive Theory for the Mathematical Understanding--Some Elements and Implications.
ERIC Educational Resources Information Center
Pirie, Susan; Kieren, Thomas
There has been considerable interest in mathematical understanding. Both those attempting to build, and those questioning the possibility of building intelligent artificial tutoring systems, struggle with the notions of mathematical understanding. The purpose of this essay is to show a transcendently recursive theory of mathematical understanding…
ERIC Educational Resources Information Center
Kemp, Andy
2007-01-01
"Geomlab" is a functional programming language used to describe pictures that are made up of tiles. The beauty of "Geomlab" is that it introduces students to recursion, a very powerful mathematical concept, through a very simple and enticing graphical environment. Alongside the software is a series of eight worksheets which lead into producing…
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An analytical first order solution has been developed which describes the motion of an artificial satellite perturbed by an arbitrary number of zonal harmonics of the geopotential. A set of recursive relations for the solution, deduced from recursive relations of the geopotential, was derived. The method of solution is based on von Zeipel's technique applied to a canonical set of two-body elements in the extended phase space which incorporates the true anomaly as a canonical element. The elements are of Poincare type, that is, they are regular for vanishing eccentricities and inclinations. Numerical results show that this solution is accurate to within a few meters after 500 revolutions.
Recognition of dual targets by a molecular beacon-based sensor: subtyping of influenza A virus.
Lee, Chun-Ching; Liao, Yu-Chieh; Lai, Yu-Hsuan; Lee, Chang-Chun David; Chuang, Min-Chieh
2015-01-01
A molecular beacon (MB)-based sensor is designed to offer a decisive answer by combining information originating from dual-target inputs. The system harnesses an assistant strand and a thermodynamically favored designation of unpaired nucleotides (UNs) to process the binary targets in "AND-gate" format and report fluorescence in an "off-on" mechanism via the formation of a DNA four-way junction (4WJ). By manipulating the composition of the UNs, the dynamic fluorescence difference between the circumstance in which the binary targets coexist and any other scenario was maximized. The characteristic equilibrium constant (K), change of entropy (ΔS), and association rate constant (k) between the association ("on") and dissociation ("off") states of the 4WJ were evaluated to understand the unfolding behavior of the MB in connection with its sensing capability. A favorable MB and UNs were furthermore designed toward analysis of genuine genetic sequences of hemagglutinin (HA) and neuraminidase (NA) in an influenza A H5N2 isolate. The MB-based sensor was demonstrated to yield a linear calibration range from 1.2 to 240 nM and a detection limit of 120 pM. Furthermore, high-fidelity subtyping of influenza virus was implemented in a sample of unpurified amplicons. The strategy opens an alternative avenue of MB-based sensors for dual targets toward applications in clinical diagnosis.
NASA Astrophysics Data System (ADS)
Sharma, Mohit K.; Yadav, Kavita; Mukherjee, K.
2018-05-01
The binary intermetallic compound Er5Pd2 has been investigated using dc and ac magnetic susceptibilities, magnetic memory effect, isothermal magnetization, non-linear dc susceptibility, heat capacity and magnetocaloric effect studies. Interestingly, even though the compound does not show geometrical frustration, it undergoes a glassy magnetic phase transition below 17.2 K. Investigation of dc magnetization and heat capacity data revealed the absence of long-range magnetic ordering. Magnetic memory effect, time dependent magnetization and ac susceptibility studies revealed that the compound undergoes glass-like freezing below 17.2 K. Analysis of the frequency dependence of this transition temperature through scaling and the Arrhenius law, along with the Mydosh parameter, indicates that the dynamics in Er5Pd2 are due to the presence of strongly interacting superspins rather than individual spins. This phase transition was further investigated by non-linear dc susceptibility and was characterized by the static critical exponents γ and δ. Our results indicate that this compound shows the signature of a superspin glass at low temperature. Additionally, both conventional and inverse magnetocaloric effects were observed, with a large magnetic entropy change and relative cooling power. Our results suggest that Er5Pd2 can be classified as a superspin glass system with a large magnetocaloric effect.
Recursion and the Competence/Performance Distinction in AGL Tasks
ERIC Educational Resources Information Center
Lobina, David J.
2011-01-01
The term "recursion" is used in at least four distinct theoretical senses within cognitive science. Some of these senses in turn relate to the different levels of analysis described by David Marr some 20 years ago; namely, the underlying competence capacity (the "computational" level), the performance operations used in real-time processing (the…
Recursivity: A Working Paper on Rhetoric and "Mnesis"
ERIC Educational Resources Information Center
Stormer, Nathan
2013-01-01
This essay proposes the genealogical study of remembering and forgetting as recursive rhetorical capacities that enable discourse to place itself in an ever-changing present. "Mnesis" is a meta-concept for the arrangements of remembering and forgetting that enable rhetoric to function. Most of the essay defines the materiality of "mnesis", first…
Recursive Optimization of Digital Circuits
1990-12-14
[Extraction residue of the report's front matter: appendix listings covering a non-MDS optimization of SAMPLE and the BORIS Recursive Optimization System software files (DESIGN.S, PARSE.S, TABULAR.S, MDS.S, COST.S).]
ERIC Educational Resources Information Center
Chang, Huo-Tsan; Chi, Nai-Wen; Miao, Min-Chih
2007-01-01
This study explored the relationship between three-component organizational/occupational commitment and organizational/occupational turnover intention, and the reciprocal relationship between organizational and occupational turnover intention with a non-recursive model in collectivist cultural settings. We selected 177 nursing staffs out of 30…
TORTIS (Toddler's Own Recursive Turtle Interpreter System).
ERIC Educational Resources Information Center
Perlman, Radia
TORTIS (Toddler's Own Recursive Turtle Interpreter System) is a device which can be used to study or nurture the cognitive development of preschool children. The device consists of a "turtle" which the child can control by use of buttons on a control panel. The "turtle" can be made to move in prescribed directions, to take a…
Recursive Inversion By Finite-Impulse-Response Filters
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1991-01-01
Recursive approximation gives a least-squares best fit to the exact response. The algorithm yields a finite-impulse-response approximation of an unknown single-input/single-output, causal, time-invariant, linear, real system whose response is a sequence of impulses. Applicable to such system-inversion problems as suppression of echoes and identification of a target from its scatter response to an incident impulse.
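The least-squares FIR inversion idea can be sketched by choosing the taps of g to minimize ||h * g − δ||₂, where h is the measured impulse response and δ is a (possibly delayed) unit impulse. The helper name and the dense convolution-matrix formulation are assumptions for illustration; the paper's recursive formulation avoids forming this matrix explicitly.

```python
import numpy as np

def fir_inverse(h, length, delay=0):
    """Least-squares FIR approximation to the inverse of a system with
    impulse response h: minimize || conv(h, g) - delta_delay ||_2."""
    h = np.asarray(h, dtype=float)
    n = len(h) + length - 1
    # convolution (Toeplitz) matrix: column j holds h shifted down by j
    A = np.zeros((n, length))
    for j in range(length):
        A[j:j + len(h), j] = h
    d = np.zeros(n)
    d[delay] = 1.0                      # target: delayed unit impulse
    g, *_ = np.linalg.lstsq(A, d, rcond=None)
    return g
```

For a minimum-phase response such as h = [1, 0.5], a 16-tap g makes conv(h, g) very close to a unit impulse.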
An Accelerated Recursive Doubling Algorithm for Block Tridiagonal Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K
2014-01-01
Block tridiagonal systems of linear equations arise in a wide variety of scientific and engineering applications. The recursive doubling algorithm is a well-known prefix computation-based numerical algorithm that requires O(M^3(N/P + log P)) work to compute the solution of a block tridiagonal system with N block rows and block size M on P processors. In real-world applications, solutions of tridiagonal systems are most often sought with multiple, often hundreds or thousands, of different right hand sides but with the same tridiagonal matrix. Here, we show that the recursive doubling algorithm is sub-optimal when computing solutions of block tridiagonal systems with multiple right hand sides and present a novel algorithm, called the accelerated recursive doubling algorithm, that delivers O(R) improvement when solving block tridiagonal systems with R distinct right hand sides. Since R is typically about 100-1000, this improvement translates to very significant speedups in practice. Detailed complexity analyses of the new algorithm with empirical confirmation of runtime improvements are presented. To the best of our knowledge, this algorithm has not been reported before in the literature.
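Recursive doubling is built on a parallel prefix computation: at step s, each element combines with the partial result 2^s positions to its left, so the whole prefix finishes in O(log N) steps. A serial sketch of that primitive on ordinary sums (the tridiagonal solver applies the same schedule to 2x2 block-matrix products, which is omitted here):

```python
import numpy as np

def prefix_sum_doubling(a):
    """Inclusive prefix sum via the recursive doubling schedule:
    at step s, element i adds the value at i - 2^s, doubling the
    span of accumulated terms each step (O(log N) steps)."""
    x = np.array(a, dtype=float)
    shift = 1
    while shift < len(x):
        # numpy evaluates the right-hand side before assigning,
        # so all reads see the previous step's values
        x[shift:] = x[shift:] + x[:-shift]
        shift *= 2
    return x
```

On a parallel machine each step is one vector operation per processor plus a communication with the neighbor 2^s away, which is where the log P term in the work bound comes from.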
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing, or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses a standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies demonstrate the improved performance in a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.
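The per-mode building block of such a smoother is a forward Kalman filter followed by a backward Rauch-Tung-Striebel (RTS) recursion. The sketch below shows that single-mode recursion only; the IMM mode-mixing steps and the interacting backward recursion of the paper are omitted, and the function name and interfaces are assumptions.

```python
import numpy as np

def rts_smooth(F, Q, H, R, z_seq, x0, P0):
    """Forward Kalman filter plus backward RTS smoothing for one
    linear-Gaussian mode. Returns smoothed means and covariances."""
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = x0, P0
    for z in z_seq:                         # forward pass
        xp, Pp = F @ x, F @ P @ F.T + Q     # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (z - H @ xp)           # update
        P = (np.eye(len(x0)) - K @ H) @ Pp
        xs_f.append(x); Ps_f.append(P); xs_p.append(xp); Ps_p.append(Pp)
    xs, Ps = [xs_f[-1]], [Ps_f[-1]]
    for k in range(len(z_seq) - 2, -1, -1):  # backward pass
        G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])   # smoother gain
        xs.insert(0, xs_f[k] + G @ (xs[0] - xs_p[k + 1]))
        Ps.insert(0, Ps_f[k] + G @ (Ps[0] - Ps_p[k + 1]) @ G.T)
    return xs, Ps
```

Because the backward pass feeds later measurements into earlier estimates, the smoothed state at early times is far closer to the data than the corresponding filtered state.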
Martins, Mauricio D; Fitch, W Tecumseh
2015-12-15
The relationship between linguistic syntax and action planning is of considerable interest in cognitive science because many researchers suggest that "motor syntax" shares certain key traits with language. In a recent manuscript in this journal, Vicari and Adenzato (henceforth VA) critiqued Hauser, Chomsky and Fitch's 2002 (henceforth HCF's) hypothesis that recursion is language-specific, and that its usage in other domains is parasitic on language resources. VA's main argument is that HCF's hypothesis is falsified by the fact that recursion typifies the structure of intentional action, and recursion in the domain of action is independent of language. Here, we argue that VA's argument is incomplete, and that their formalism can be contrasted with alternative frameworks that are equally consistent with existing data. Therefore their conclusions are premature without further empirical testing and support. In particular, to accept VA's argument it would be necessary to demonstrate both that humans in fact represent self-embedding in the structure of intentional action, and that language is not used to construct these representations. Copyright © 2015 Elsevier Inc. All rights reserved.
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
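The RCB baseline the paper compares against is easy to state: split the point set at the median of its longest coordinate, then recurse on each half. A serial sketch, limited to power-of-two part counts for brevity (function name and interface are illustrative, not Zoltan's API):

```python
import numpy as np

def rcb(points, num_parts):
    """Recursive coordinate bisection: bisect at the median of the
    longest dimension until num_parts parts exist.
    Assumes num_parts is a power of two."""
    if num_parts == 1:
        return [points]
    # choose the dimension with the largest extent
    dim = np.argmax(points.max(axis=0) - points.min(axis=0))
    order = np.argsort(points[:, dim])
    half = len(points) // 2
    left, right = points[order[:half]], points[order[half:]]
    return rcb(left, num_parts // 2) + rcb(right, num_parts // 2)
```

Multi-jagged generalizes the single median cut into several concurrent cuts per dimension, which is what reduces the recursion depth and the associated data movement.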
Is, Yusuf Serhat; Durdagi, Serdar; Aksoydan, Busecan; Yurtsever, Mine
2018-05-07
Monoamine oxidase (MAO) enzymes MAO-A and MAO-B play a critical role in the metabolism of monoamine neurotransmitters. Hence, MAO inhibitors are very important for the treatment of several neurodegenerative diseases such as Parkinson's disease (PD), Alzheimer's disease (AD), and amyotrophic lateral sclerosis (ALS). In this study, 256 750 molecules from the Otava Green Chemical Collection were virtually screened for their binding activities as MAO-B inhibitors. Two hit molecules were identified after applying different filters, such as high docking scores, selectivity to MAO-B, and desired pharmacokinetic profile predictions with binary quantitative structure-activity relationship (QSAR) models. Therapeutic activity prediction as well as pharmacokinetic and toxicity profiles were investigated using the MetaCore/MetaDrug platform, which is based on a manually curated database of molecular interactions, molecular pathways, gene-disease associations, chemical metabolism, and toxicity information. Particular therapeutic activity and toxic effect predictions are based on ChemTree's ability to correlate structural descriptors with a given property using a recursive partitioning algorithm. Molecular dynamics (MD) simulations were also performed to make more detailed assessments beyond docking studies. All these calculations were made not only to determine whether the studied molecules possess the potential to be MAO-B inhibitors but also to find out whether they carry MAO-B selectivity versus MAO-A. The evaluation of docking results and pharmacokinetic profile predictions, together with the MD simulations, enabled us to identify one hit molecule (ligand 1, Otava ID: 3463218) which displayed higher selectivity toward MAO-B than the positive control selegiline, a drug in commercial use for PD therapy.
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method with median replacement to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over a single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We have found the 22 ms value of SDNN and the 36 ms value of the SD2 descriptor of the Poincaré plot to be clinical indicators discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index, as Lyapunov exponents calculated after the proposed pre-processing shift toward the expected, less complex behaviour of diseased states.
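The recursive combination filter can be sketched as two flagging rules applied repeatedly until no beat is flagged: an impulse rejection test (deviation from the median beyond a robust threshold) and a 20% rule (beat differing from its predecessor by more than 20%), with flagged beats replaced by the median rather than removed. This is an illustrative reconstruction of the approach; the thresholds, the MAD-based normalization, and the function name are assumptions, not the authors' exact filter.

```python
import numpy as np

def clean_rr(rr, mad_thresh=4.0, pct=0.2, max_passes=10):
    """Recursively flag and replace anomalous R-R intervals.

    Impulse rejection: |rr - median| in robust (MAD) units > mad_thresh.
    20% rule: beat differs from its predecessor by more than pct.
    Flagged beats are replaced by the median (replacement, not removal),
    and both tests are reapplied until nothing is flagged.
    """
    rr = np.array(rr, dtype=float)
    for _ in range(max_passes):
        med = np.median(rr)
        mad = np.median(np.abs(rr - med)) or 1e-9   # guard against MAD = 0
        bad = np.abs(rr - med) / (1.4826 * mad) > mad_thresh
        bad[1:] |= np.abs(np.diff(rr)) > pct * rr[:-1]
        if not bad.any():
            break
        rr[bad] = med
    return rr
```

Replacement keeps the series length intact, which matters for descriptors such as SDNN and the Poincaré plot that are computed on consecutive beat pairs.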
Statistical physics of multicomponent alloys using KKR-CPA
Khan, Suffian N.; Staunton, Julie B.; Stocks, George Malcolm
2016-02-16
We apply variational principles from statistical physics and the Landau theory of phase transitions to multicomponent alloys using the multiple-scattering theory of Korringa-Kohn-Rostoker (KKR) and the coherent potential approximation (CPA). This theory is a multicomponent generalization of the S^(2) theory of binary alloys developed by G. M. Stocks, J. B. Staunton, D. D. Johnson and others. It is highly relevant to the chemical phase stability of high-entropy alloys as it predicts the kind and size of finite-temperature chemical fluctuations. In doing so it includes the effects of rearranging charge and other electronics due to changing site occupancies. When chemical fluctuations grow without bound, an absolute instability occurs and a second-order order-disorder phase transition may be inferred. The S^(2) theory is predicated on the fluctuation-dissipation theorem; thus we derive the linear response of the CPA medium to perturbations in site-dependent chemical potentials in great detail. The theory lends itself to a natural interpretation in terms of competing effects: entropy driving disorder and favorable pair interactions driving atomic ordering. Moreover, to further clarify interpretation we present results for the representative ternary alloys CuAgAu, NiPdPt, RhPdAg, and CoNiCu within a frozen charge (or band-only) approximation. These results include the so-called Onsager mean field correction that extends the temperature range for which the theory is valid.
Towards elucidation of the mechanism of biological nanomotors
NASA Astrophysics Data System (ADS)
Zhao, Zhengyi
Biological functions such as cell mitosis, bacterial binary fission, DNA replication or repair, homologous recombination, Holliday junction resolution, viral genome packaging, and cell entry all involve biomotor-driven DNA translocation. In the past, the ubiquitous biological nanomotors were classified into two categories: linear and rotation motors. In 2013, we discovered a third type of biomotor, the revolving motor without rotation. The revolving motion has since been found to be widespread among many biological systems. In addition, the detailed sequential action mechanism of the ATPase ring in the phi29 dsDNA packaging motor has been elucidated: ATP binding induces a conformational entropy alteration of the ATPase to a high affinity toward dsDNA; ATP hydrolysis triggers another conformational entropy change in the ATPase to a low DNA affinity, by which the dsDNA substrate is pushed toward an adjacent ATPase subunit. The subunit communication is regulated by an arginine finger that extends from one ATPase subunit to the adjacent unit, resulting in an asymmetrical hexameric organization. Continuation of this process promotes the movement and revolving of the dsDNA within the hexameric ATPase ring. Coordination of all the motor components facilitates the motion direction control of the viral DNA packaging motors, and makes them unusually powerful and effective. KEYWORDS: Phi29 dsDNA Packaging Motor, Bio-nanomotor, RNA Nanotechnology, DNA Translocase, One-Way Revolving, ASCE Superfamily, AAA+ Superfamily.
Jin, K.; Gao, Y. F.; Bei, H.
2017-04-07
Ternary single-phase concentrated solid solution alloys (SP-CSAs), so-called "medium entropy alloys", not only possess notable mechanical and physical properties but also form a model system linking the relatively simple binary alloys to the complex high entropy alloys. Knowledge of their intrinsic properties is vital to understand the material behavior and to prompt future applications. To this end, three model alloys NiCoFe, NiCoCr, and NiFe-20Cr have been selected and grown as single crystals. We measured their elastic constants using an ultrasonic method, and several key materials properties, such as shear modulus, bulk modulus, elastic anisotropy, and Debye temperatures, have been derived. Furthermore, nanoindentation tests have been performed on these three alloys together with Ni, NiCo and NiFe on their (100) surface, to investigate the strengthening mechanisms. NiCoCr has the highest hardness; NiFe, NiCoFe and NiFe-20Cr share a similar hardness that is apparently lower than NiCoCr; NiCo has the lowest hardness of the alloys, similar to elemental Ni. The Labusch-type solid solution model has been applied to interpret the nanoindentation data, with two approaches used to calculate the lattice mismatch. Finally, by adopting an interatomic spacing matrix method, the Labusch model can reasonably predict the hardening effects for the whole set of materials.
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Kreutz, K.
1988-01-01
This report advances a linear operator approach for analyzing the dynamics of systems of joint-connected rigid bodies. It is established that the mass matrix M for such a system can be factored as M = (I + HφL)D(I + HφL)^T. This yields an immediate inversion M^{-1} = (I - HψL)^T D^{-1}(I - HψL), where H and φ are given by known link geometric parameters, and L, ψ and D are obtained recursively by a spatial discrete-step Kalman filter and by the corresponding Riccati equation associated with this filter. The factors (I + HφL) and (I - HψL) are lower triangular matrices which are inverses of each other, and D is a diagonal matrix. This factorization and inversion of the mass matrix leads to recursive algorithms for forward dynamics based on spatially recursive filtering and smoothing. The primary motivation for advancing the operator approach is to provide a better means to formulate, analyze and understand spatial recursions in multibody dynamics. This is achieved because the linear operator notation allows manipulation of the equations of motion using a very high-level analytical framework (a spatial operator algebra) that is easy to understand and use. Detailed lower-level recursive algorithms can readily be obtained for inspection from the expressions involving spatial operators. The report consists of two main sections. In Part 1, the problem of serial chain manipulators is analyzed and solved. Extensions to a closed-chain system formed by multiple manipulators moving a common task object are contained in Part 2. To retain ease of exposition in the report, only these two types of multibody systems are considered. However, the same methods can be easily applied to arbitrary multibody systems formed by a collection of joint-connected rigid bodies.
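The factorization identity can be checked numerically on a generic unit lower triangular factor standing in for (I + HφL). The matrices below are random stand-ins chosen purely to illustrate the linear algebra, not the multibody operators themselves; the key facts are that the inverse of a unit lower triangular matrix is again unit lower triangular (the (I - HψL) factor), and that inverting M then reduces to inverting the diagonal D.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Unit lower-triangular stand-in for (I + H*phi*L), random strictly-lower part
T = np.eye(n) + np.tril(rng.normal(size=(n, n)), k=-1)
D = np.diag(rng.uniform(1.0, 2.0, size=n))      # positive diagonal factor

M = T @ D @ T.T                                 # M = (I + HphiL) D (I + HphiL)^T
S = np.linalg.inv(T)                            # plays the role of (I - HpsiL)
M_inv = S.T @ np.diag(1.0 / np.diag(D)) @ S     # M^-1 = (I-HpsiL)^T D^-1 (I-HpsiL)

assert np.allclose(M @ M_inv, np.eye(n))        # the inversion formula holds
assert np.allclose(np.tril(S), S)               # inverse factor is lower triangular
```

In the report, L, ψ and D are produced recursively by the spatial Kalman filter and Riccati recursion rather than by a dense inverse as above.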
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
ERIC Educational Resources Information Center
Strobl, Carolin; Malley, James; Tutz, Gerhard
2009-01-01
Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and…
ERIC Educational Resources Information Center
Strang, Kenneth David
2009-01-01
This paper discusses how a seldom-used statistical procedure, recursive regression (RR), can numerically and graphically illustrate data-driven nonlinear relationships and interaction of variables. This routine falls into the family of exploratory techniques, yet a few interesting features make it a valuable complement to factor analysis and…
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999) ... developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended
N =4 supergravity next-to-maximally-helicity-violating six-point one-loop amplitude
NASA Astrophysics Data System (ADS)
Dunbar, David C.; Perkins, Warren B.
2016-12-01
We construct the six-point, next-to-maximally-helicity-violating one-loop amplitude in N =4 supergravity using unitarity and recursion. The use of recursion requires the introduction of rational descendants of the cut-constructible pieces of the amplitude and the computation of the nonstandard factorization terms arising from the loop integrals.
On the design of recursive digital filters
NASA Technical Reports Server (NTRS)
Shenoi, K.; Narasimha, M. J.; Peterson, A. M.
1976-01-01
A change of variables is described which transforms the problem of designing a recursive digital filter to that of approximation by a ratio of polynomials on a finite interval. Some analytic techniques for the design of low-pass filters are presented, illustrating the use of the transformation. Also considered are methods for the design of phase equalizers.
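As a concrete instance of a recursive digital filter, a first-order IIR low-pass can be written in a few lines (an illustrative sketch of the filter class, not the designs derived in the paper):

```python
import numpy as np

def recursive_lowpass(x, a=0.9):
    """First-order recursive (IIR) low-pass: y[n] = a*y[n-1] + (1-a)*x[n].
    'Recursive' because each output feeds back into the next step."""
    y = np.empty(len(x), dtype=float)
    acc = 0.0
    for n, xn in enumerate(x):
        acc = a * acc + (1.0 - a) * xn
        y[n] = acc
    return y

# A constant input converges to the input value (unity DC gain).
y = recursive_lowpass(np.ones(200))
assert abs(y[-1] - 1.0) < 1e-6
```

The design problem treated in the paper is choosing the recursion coefficients so that the frequency response approximates a target, which the described change of variables converts into rational approximation on a finite interval.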
1994-03-16
2.10 Decidability ... 116; 3 Declaring Refinements of Recursive Data Types ... 165; 3.1 ... However, when we introduce polymorphic constructors in Chapter 5, tuples will become a polymorphic data type very similar to other polymorphic data types ... terminate. Chapter 3: Declaring Refinements of Recursive Data Types. 3.1 Introduction. The previous chapter defined refinement type inference in terms of
ERIC Educational Resources Information Center
Reinertsen, Anne Beate
2014-01-01
This article is about developing school-based self-assessing recursive pedagogies and case/action research practices and/or approaches in schools, and teachers, teacher researchers and researchers simultaneously producing and theorising their own practices using second-order cybernetics as a thinking tool. It is a move towards pragmatic…
Raymond L. Czaplewski
2010-01-01
Numerous government surveys of natural resources use Post-Stratification to improve statistical efficiency, where strata are defined by full-coverage, remotely sensed data and geopolitical boundaries. Recursive Restriction Estimation, which may be considered a special case of the static Kalman filter, is an attractive alternative. It decomposes a complex estimation...
ERIC Educational Resources Information Center
Mori, Miki
2013-01-01
This article discusses my (recursive) process of theory building and the relationship between research, teaching, and theory development for graduate students. It shows how graduate students can reshape their conceptual frameworks not only through course work, but also through researching classes they teach. Specifically, while analyzing the…
Semantics Boosts Syntax in Artificial Grammar Learning Tasks with Recursion
ERIC Educational Resources Information Center
Fedor, Anna; Varga, Mate; Szathmary, Eors
2012-01-01
Center-embedded recursion (CER) in natural language is exemplified by sentences such as "The malt that the rat ate lay in the house." Parsing center-embedded structures is in the focus of attention because this could be one of the cognitive capacities that make humans distinct from all other animals. The ability to parse CER is usually…
ERIC Educational Resources Information Center
Gibbons, Pamela
1995-01-01
Describes a study that investigated individual differences in the construction of mental models of recursion in LOGO programming. The learning process was investigated from the perspective of Norman's mental models theory and employed diSessa's ontology regarding distributed, functional, and surrogate mental models, and the Luria model of brain…
NASA Technical Reports Server (NTRS)
Kelly, D. A.; Fermelia, A.; Lee, G. K. F.
1990-01-01
An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
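The Kalman filter at the center of such a design reduces, in the scalar case, to a short recursion. A toy sketch (a scalar random-walk model of our own choosing, not the attitude-determination system of the report):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk state with noisy measurements:
#   x_k = x_{k-1} + w_k,   z_k = x_k + v_k
q, r = 0.01, 1.0            # process / measurement noise variances
x_true, x_hat, p = 0.0, 0.0, 1.0
errs = []
for _ in range(500):
    x_true += rng.normal(scale=np.sqrt(q))
    z = x_true + rng.normal(scale=np.sqrt(r))
    # Predict step
    p = p + q
    # Update step
    k = p / (p + r)                 # Kalman gain
    x_hat = x_hat + k * (z - x_hat)
    p = (1.0 - k) * p
    errs.append((x_hat - x_true) ** 2)

# The filtered estimate should beat the raw measurement variance r.
assert np.mean(errs[50:]) < r
```

In the adaptive design discussed above, a recursive maximum likelihood loop would additionally re-estimate parameters such as q and r online; that identification layer is not reproduced here.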
NASA Astrophysics Data System (ADS)
Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2018-07-01
In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound, and this upper bound is then minimized by appropriately choosing the filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.
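An event-based transmission rule of the kind described can be sketched with a simple send-on-delta criterion (our illustrative stand-in; the paper's mechanism and threshold are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)

def send_on_delta(measurements, delta):
    """Transmit a measurement only when it deviates from the last
    transmitted value by more than `delta` (an event-based rule)."""
    sent = []
    last = None
    for z in measurements:
        if last is None or abs(z - last) > delta:
            last = z
            sent.append(z)
    return sent

# Random-walk "sensor" sequence; the event rule thins the traffic.
z = np.cumsum(rng.normal(scale=0.1, size=1000))
sent = send_on_delta(z, delta=0.5)
assert 0 < len(sent) < len(z)
```

Between events the filter must propagate its estimate without fresh data, which is why the covariance bound in the paper must hold under intermittent measurements.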
Deciding Termination for Ancestor Match- Bounded String Rewriting Systems
NASA Technical Reports Server (NTRS)
Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2005-01-01
Termination of a string rewriting system can be characterized by termination on suitable recursively defined languages. This kind of termination criteria has been criticized for its lack of automation. In an earlier paper we have shown how to construct an automated termination criterion if the recursion is aligned with the rewrite relation. We have demonstrated the technique with Dershowitz's forward closure criterion. In this paper we show that a different approach is suitable when the recursion is aligned with the inverse of the rewrite relation. We apply this idea to Kurth's ancestor graphs and obtain ancestor match-bounded string rewriting systems. Termination is shown to be decidable for this class. The resulting method improves upon those based on match-boundedness or inverse match-boundedness.
A Note on Local Stability Conditions for Two Types of Monetary Models with Recursive Utility
NASA Astrophysics Data System (ADS)
Miyazaki, Kenji; Utsunomiya, Hitoshi
2009-09-01
This note explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. Although Chen et al. [Chen, B.-L., M. Hsu, and C.-H. Lin, 2008, Inflation and growth: impatience and a qualitative equivalent, Journal of Money, Credit, and Banking, Vol. 40, No. 6, 1310-1323] investigated the relationship between inflation and growth in MIUF and TC models with recursive utility, they conducted only a comparative static analysis in a steady state. By establishing sufficient conditions for local stability, this note proves that impatience should be increasing in consumption and real balances. Increasing impatience, although less plausible from an empirical point of view, receives more support from a theoretical viewpoint.
Poša, Mihalj; Pilipović, Ana; Bećarević, Mirjana; Farkaš, Zita
2017-01-01
Because bile acid salts are relatively small, their mixed micelles with nonionic surfactants are analysed. Of special interest are real binary mixed micelles, which are thermodynamically more stable than ideal mixed micelles. Thermodynamic stability is expressed through an excess Gibbs energy (G^E) or through an interaction parameter (β_ij). In this paper, sodium salts of cholic (C) and hyodeoxycholic (HD) acid in their mixed micelles with Tween 40 (T40) are analysed by potentiometric titration and their pKa values are determined. The examined bile acids in mixed micelles with T40 have higher pKa values than the free bile acids. The increase ΔpKa of the acid constant of micelle-bound C and HD correlates with the absolute value of the interaction parameter. According to the interaction parameter and the excess Gibbs energy, the mixed micelle HD-T40 is thermodynamically more stable than the mixed micelle C-T40. ΔpKa values are higher for mixed micelles with Tween 40 whose second building unit is HD rather than C. In both micellar systems, ΔpKa increases with the molar fraction of Tween 40 in the binary mixtures of surfactant with the sodium salts of bile acids. This suggests that ΔpKa, like the interaction parameter, can serve as a measure of the thermodynamic stabilization of the analysed binary mixed micelles. The ΔpKa values are confirmed by determining the distribution coefficients of HD and C, as a function of the pH of the water phase, in systems consisting of a water phase with Tween 40 at micellar concentration and 1-octanol. Conformational analysis suggests that the synergistic interactions between the building units of the analysed binary micelles originate from hydrogen bonds formed between steroid OH groups and the polyoxyethylene groups of T40. The relative similarity and spatial orientation of the C3 and C6 OH groups allow cooperative formation of hydrogen bonds between T40 and HD, giving an excess entropy of mixed micelle formation.
If the water solution of the analysed binary surfactant mixtures contains urea at a concentration of 4 M, the interaction parameter decreases significantly, which confirms the importance of hydrogen bonds in the synergistic interactions (urea competes for hydrogen bonds). Copyright © 2016 Elsevier Inc. All rights reserved.
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
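The entropy and uniformity statistics of a Gabor-filtered image are straightforward to compute. A hedged sketch (our own kernel parameters and FFT convolution on random data, not the authors' pipeline, which also involves the DWT and an SVM):

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=4.0, sigma=3.0):
    """Real part of a Gabor kernel at orientation theta (standard form)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def entropy_and_uniformity(img, bins=32):
    """Histogram-based Shannon entropy (bits) and uniformity (energy)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)), np.sum(p**2)

rng = np.random.default_rng(3)
img = rng.random((64, 64))

# Filter by circular (FFT) convolution, then summarize with two statistics.
kern = np.zeros_like(img)
kern[:15, :15] = gabor_kernel()
filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))
H_feat, U_feat = entropy_and_uniformity(filtered)
assert 0.0 < H_feat <= 5.0 and 0.0 < U_feat <= 1.0
```

Entropy is bounded by log2 of the bin count and uniformity by one; in the paper these two numbers, collected over the filter bank, form the SVM feature vector.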
A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.
Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh
2013-01-01
Digital foot scanners have been developed in recent years to give anthropometrists a digital image of the insole with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the gray level spatial correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images are then subjected to anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, our experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.
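Entropy-based thresholding of this kind can be illustrated with the closely related Kapur criterion (a stand-in of ours: Shanbhag's measure, used in the paper, is a fuzzy variant we do not reproduce here):

```python
import numpy as np

def kapur_threshold(image, bins=256):
    """Entropy-based threshold (Kapur et al.): choose t to maximize the
    sum of the Shannon entropies of the below- and above-t histograms."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = (-(q0[q0 > 0] * np.log(q0[q0 > 0])).sum()
             - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum())
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

rng = np.random.default_rng(4)
# Bimodal test "image": dark background plus bright foreground.
img = np.concatenate([rng.normal(0.2, 0.03, 2000), rng.normal(0.8, 0.03, 2000)])
t = kapur_threshold(img)
frac_dark = (img < t).mean()
assert 0.4 < frac_dark < 0.6  # the threshold separates the two modes
```

The evolutionary step in the paper tunes the GLSC and membership-function parameters around such an entropy criterion; here the criterion is applied directly to a raw histogram.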
Energy and enthalpy distribution functions for a few physical systems.
Wu, K L; Wei, J H; Lai, S K; Okabe, Y
2007-08-02
The present work is devoted to extracting the energy or enthalpy distribution function of a physical system from the moments of the distribution using the maximum entropy method. This distribution theory has the salient trait that it utilizes only experimental thermodynamic data. The calculated distribution functions provide invaluable insight into the state or phase behavior of the physical systems under study. As concrete evidence, we demonstrate the elegance of the distribution theory by studying first a test case of a two-dimensional six-state Potts model for which simulation results are available for comparison, then the biphasic behavior of the binary alloy Na-K, whose excess heat capacity, experimentally observed to fall in a narrow temperature range, has yet to be clarified theoretically, and finally, the thermally induced state behavior of a collection of 16 proteins.
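The core idea, a maximum-entropy distribution constrained by measured moments, can be sketched for the simplest case of a single mean constraint, where the solution is a Gibbs-like exponential family (an illustration of the principle, not the authors' procedure):

```python
import numpy as np

def maxent_given_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over discrete `values` with a fixed
    mean: p_i ∝ exp(-beta * x_i), beta found by bisection (the mean is
    monotonically decreasing in beta)."""
    x = np.asarray(values, dtype=float)

    def mean_for(beta):
        w = np.exp(-beta * (x - x.mean()))  # shift for numerical stability
        p = w / w.sum()
        return p @ x, p

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        m, _ = mean_for(mid)
        if m > target_mean:
            lo = mid
        else:
            hi = mid
    return mean_for(0.5 * (lo + hi))[1]

p = maxent_given_mean(np.arange(6), target_mean=1.5)
assert abs(p @ np.arange(6) - 1.5) < 1e-6
assert np.all(p > 0) and abs(p.sum() - 1.0) < 1e-12
```

With more moments (e.g. the heat-capacity-derived moments used in the paper) the same principle yields a multi-parameter exponential family, solved by a multidimensional root find rather than bisection.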
Magnetic properties and large reversible magnetocaloric effect in Er3Pd2
NASA Astrophysics Data System (ADS)
Maji, Bibekananda; Ray, Mayukh K.; Modak, M.; Mondal, S.; Suresh, K. G.; Banerjee, S.
2018-06-01
The magnetic properties and magnetocaloric effect (MCE) of the binary intermetallic compound Er3Pd2 were studied. It exhibits a paramagnetic (PM) to antiferromagnetic (AFM) transition at the Néel temperature TN = 10 K. A large reversible MCE was observed, related to the second-order magnetic transition from the PM to the AFM state. The values of the maximum magnetic entropy change (-ΔS_M^max) and adiabatic temperature change (ΔT_ad^max) reach 8.9 J/kg·K and 2.9 K, respectively, for a field change of 50 kOe, with no obvious hysteresis loss. The effective magnetic moment was determined to be 10.16 μB/Er3+, notably higher than the free-ion value of Er3+ (9.59 μB), suggesting that the Pd ions also carry a considerable magnetic moment in this compound.
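Magnetic entropy changes of this kind are conventionally obtained from magnetization isotherms via the Maxwell relation ΔS_M(T, H) = ∫₀^H (∂M/∂T)_H dH'. A sketch on synthetic Curie-law data (our toy model, not the Er3Pd2 measurements):

```python
import numpy as np

# Synthetic magnetization data M(T, H) for a Curie-law paramagnet,
# M = C*H/T, for which the Maxwell relation gives
#   Delta S_M(T) = int_0^Hmax (dM/dT)_H dH = -C*Hmax^2 / (2*T^2).
C = 10.0
T = np.linspace(5.0, 50.0, 901)      # temperature grid (K)
H = np.linspace(0.0, 50.0, 201)      # field grid (kOe)
M = C * H[None, :] / T[:, None]

# Numerically: differentiate M in T, then trapezoid-integrate over H.
dM_dT = np.gradient(M, T, axis=0, edge_order=2)
dS = 0.5 * ((dM_dT[:, :-1] + dM_dT[:, 1:]) * np.diff(H)).sum(axis=1)

expected = -C * H[-1] ** 2 / (2.0 * T ** 2)
assert np.allclose(dS, expected, rtol=1e-3)
assert np.all(dS < 0)  # entropy decreases on magnetizing a paramagnet
```

For real data the same differentiate-then-integrate procedure is applied to measured M(T, H) isotherms, and -ΔS_M^max is read off near the transition temperature.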
NASA Astrophysics Data System (ADS)
Oechslin, R.; Janka, H.-T.; Marek, A.
2007-05-01
An extended set of binary neutron star (NS) merger simulations is performed with an approximative treatment of general relativity to systematically investigate the influence of the nuclear equation of state (EoS), the NS masses, and the NS spin states prior to merging. The general relativistic hydrodynamics simulations are based on a conformally flat approximation to the Einstein equations and a Smoothed Particle Hydrodynamics code for the gas treatment. We employ the two non-zero temperature EoSs of Shen et al. (1998a, Nucl. Phys. A, 637, 435; 1998b, Prog. Theor. Phys., 100, 1013) and Lattimer & Swesty (1991, Nucl. Phys. A, 535, 331), which represent a "harder" and a "softer" behavior, respectively, with characteristic differences in the incompressibility at supernuclear densities and in the maximum mass of nonrotating, cold neutron stars. In addition, we use the cold EoS of Akmal et al. (1998, Phys. Rev. C, 58, 1804) with a simple ideal-gas-like extension according to Shibata & Taniguchi (2006, Phys. Rev. D, 73, 064027), in order to compare with their results, and an ideal-gas EoS with parameters fitted to the supernuclear part of the Shen-EoS. We estimate the mass sitting in a dilute "torus" around the future black hole (BH) by requiring the specific angular momentum of the torus matter to be larger than the angular momentum of the ISCO around a Kerr BH with the mass and spin parameter of the compact central remnant. The dynamics and outcome of the models is found to depend strongly on the EoS and on the binary parameters. Larger torus masses are found for asymmetric systems (up to 0.3 M_⊙ for a mass ratio of 0.55), for large initial NSs, and for a NS spin state which corresponds to a larger total angular momentum. We find that the postmerger remnant collapses either immediately or after a short time when employing the soft EoS of Lattimer & Swesty, whereas no sign of post-merging collapse is found within tens of dynamical timescales for all other EoSs used. 
The typical temperatures in the torus are found to be about 3-10 MeV depending on the strength of the shear motion at the collision interface between the NSs and thus depending on the initial NS spins. About 10^{-3}-10^{-2} M_⊙ of NS matter become gravitationally unbound during or right after the merging process. This matter consists of a hot/high-entropy component from the collision interface and (only in case of asymmetric systems) of a cool/low-entropy component from the spiral arm tips. Appendices are only available in electronic form at http://www.aanda.org
NASA Astrophysics Data System (ADS)
Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.
2016-06-01
This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel N as the optimal Type II error exponent when discriminating between a large number of independent instances of N and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
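The sandwiched Rényi relative entropy used in the analysis, D̃_α(ρ‖σ) = (1/(α-1)) log Tr[(σ^{(1-α)/2α} ρ σ^{(1-α)/2α})^α], can be evaluated directly for small density matrices (a numerical sketch of ours; for commuting states it reduces to the classical Rényi divergence):

```python
import numpy as np

def mpow(A, t):
    """Fractional power of a Hermitian PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)
    return (V * w**t) @ V.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched Renyi relative entropy (in bits):
    D_a(rho||sigma) = (1/(a-1)) log2 Tr[(s rho s)^a],
    with s = sigma^((1-a)/(2a))."""
    s = mpow(sigma, (1.0 - alpha) / (2.0 * alpha))
    return np.log2(np.trace(mpow(s @ rho @ s, alpha)).real) / (alpha - 1.0)

# For commuting (diagonal) states it equals the classical Renyi divergence.
p = np.array([0.7, 0.3])
q = np.array([0.4, 0.6])
alpha = 2.0
d_quantum = sandwiched_renyi(np.diag(p), np.diag(q), alpha)
d_classical = np.log2(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)
assert abs(d_quantum - d_classical) < 1e-10
```

The channel quantities in the paper are optimizations of this state quantity over inputs; only the state-level definition is sketched here.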
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain superior performance that overshadows all previous handcrafted expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present a network using the pixel recursive super resolution model that synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. This approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process to increase the resolution of data.
ERIC Educational Resources Information Center
Recker, Margaret M.; Pirolli, Peter
Students learning to program recursive LISP functions in a typical school-like lesson on recursion were observed. The typical lesson contains text and examples and involves solving a series of programming problems. The focus of this study is on students' learning strategies in new domains. In this light, a Soar computational model of…
Closed-form recursive formula for an optimal tracker with terminal constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Turner, J. D.; Chun, H. M.
1984-01-01
Feedback control laws are derived for a class of optimal finite time tracking problems with terminal constraints. Analytical solutions are obtained for the feedback gain and the closed-loop response trajectory. Such formulations are expressed in recursive forms so that a real-time computer implementation becomes feasible. Two examples are given to illustrate the validity and usefulness of the formulations.
Relatively Recursive Rational Choice.
1981-11-01
for the decision procedure of recursively representable rational choice. Alternatively phrased, we wish to inquire into its degrees of unsolvability. We ... may first make the observation that there are three classic notions of reducibility of decision procedures for subsets of the natural numbers ... rational choice function, defined as an effectively computable representation of Richter's [1971] concept of rational choice, attains by means of an
Recursive inversion of externally defined linear systems
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1988-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
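Recursive least squares applied to approximate FIR inversion of an impulse-response sequence can be sketched as follows (a generic RLS recursion on a toy system of our choosing; the paper's exact initialization exploiting the triangular Toeplitz structure is not reproduced):

```python
import numpy as np

# Unknown system defined only by its (truncated) impulse response.
h = 0.8 ** np.arange(30)

# Identify a length-8 FIR approximate inverse with recursive least
# squares: rows of the convolution (Toeplitz) matrix arrive one at a time,
# and the target output is a unit impulse.
n_taps = 8
theta = np.zeros(n_taps)
P = 1e6 * np.eye(n_taps)            # large initial covariance
for i in range(len(h) + n_taps - 1):
    a = np.array([h[i - j] if 0 <= i - j < len(h) else 0.0
                  for j in range(n_taps)])
    y = 1.0 if i == 0 else 0.0
    k = P @ a / (1.0 + a @ P @ a)   # RLS gain
    theta += k * (y - a @ theta)
    P -= np.outer(k, a @ P)

# conv(h, theta) should approximate a unit impulse.
e = np.convolve(h, theta)
e[0] -= 1.0
assert np.linalg.norm(e) < 1e-2
```

For this geometric impulse response the ideal inverse is essentially the two-tap filter [1, -0.8], and the recursion recovers it up to the truncation error of h.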
The Recursive Process in and of Critical Literacy: Action Research in an Urban Elementary School
ERIC Educational Resources Information Center
Cooper, Karyn; White, Robert E.
2012-01-01
This paper provides an overview of the recursive process of initiating an action research project on literacy for students-at-risk in a Canadian urban elementary school. As this paper demonstrates, this requires development of a school-wide framework, which frames the action research project and desired outcomes, and a shared ownership of this…
ERIC Educational Resources Information Center
Rey, Arnaud; Perruchet, Pierre; Fagot, Joel
2012-01-01
Influential theories have claimed that the ability for recursion forms the computational core of human language faculty distinguishing our communication system from that of other animals (Hauser, Chomsky, & Fitch, 2002). In the present study, we consider an alternative view on recursion by studying the contribution of associative and working…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstone, H.J.; Moats, R.K.
1981-04-01
With the aim of high-order calculations, a new recursive solution for the degenerate Rayleigh-Schroedinger perturbation-theory wave function and energy has been derived. The final formulas,

χ_σ^(N) = R^(-σ) Σ_{k=0}^{N-1} H_{σ+1}^(σ+1+k) χ^(N-1-k),
E^(N+σ) = ⟨0|H̄_{σ+1}^(N+σ)|0⟩ + ⟨0| Σ_{k=0}^{N-2} H̄_{σ+1}^(σ+1+k) |χ^(N-1-k)⟩,

which involve new Hamiltonian-related operators H_σ^(σ+k) and H̄_σ^(σ+k), strongly resemble the standard nondegenerate recursive formulas. As an illustration, the perturbed energy coefficients for the 3s-3d_0 states of hydrogen in the Zeeman effect have been calculated recursively through 87th order in the square of the magnetic field. Our treatment is compared with that of Hirschfelder and Certain (J. Chem. Phys. 60, 1118 (1974)), and some relative advantages of each are pointed out.
Experimental evaluation of a recursive model identification technique for type 1 diabetes.
Finan, Daniel A; Doyle, Francis J; Palerm, Cesar C; Bevier, Wendy C; Zisser, Howard C; Jovanovic, Lois; Seborg, Dale E
2009-09-01
A model-based controller for an artificial beta cell requires an accurate model of the glucose-insulin dynamics in type 1 diabetes subjects. To ensure the robustness of the controller for changing conditions (e.g., changes in insulin sensitivity due to illnesses, changes in exercise habits, or changes in stress levels), the model should be able to adapt to the new conditions by means of a recursive parameter estimation technique. Such an adaptive strategy will ensure that the most accurate model is used for the current conditions, and thus the most accurate model predictions are used in model-based control calculations. In a retrospective analysis, empirical dynamic autoregressive exogenous input (ARX) models were identified from glucose-insulin data for nine type 1 diabetes subjects in ambulatory conditions. Data sets consisted of continuous (5-minute) glucose concentration measurements obtained from a continuous glucose monitor, basal insulin infusion rates and times and amounts of insulin boluses obtained from the subjects' insulin pumps, and subject-reported estimates of the times and carbohydrate content of meals. Two identification techniques were investigated: nonrecursive, or batch methods, and recursive methods. Batch models were identified from a set of training data, whereas recursively identified models were updated at each sampling instant. Both types of models were used to make predictions of new test data. For the purpose of comparison, model predictions were compared to zero-order hold (ZOH) predictions, which were made by simply holding the current glucose value constant for p steps into the future, where p is the prediction horizon. Thus, the ZOH predictions are model free and provide a base case for the prediction metrics used to quantify the accuracy of the model predictions. In theory, recursive identification techniques are needed only when there are changing conditions in the subject that require model adaptation. 
Thus, the identification and validation techniques were performed with both "normal" data and data collected during conditions of reduced insulin sensitivity. The latter were achieved by having the subjects self-administer a medication, prednisone, for 3 consecutive days. The recursive models were allowed to adapt to this condition of reduced insulin sensitivity, while the batch models were only identified from normal data. Data from nine type 1 diabetes subjects in ambulatory conditions were analyzed; six of these subjects also participated in the prednisone portion of the study. For normal test data, the batch ARX models produced 30-, 45-, and 60-minute-ahead predictions that had average root mean square error (RMSE) values of 26, 34, and 40 mg/dl, respectively. For test data characterized by reduced insulin sensitivity, the batch ARX models produced 30-, 60-, and 90-minute-ahead predictions with average RMSE values of 27, 46, and 59 mg/dl, respectively; the recursive ARX models demonstrated similar performance with corresponding values of 27, 45, and 61 mg/dl, respectively. The identified ARX models (batch and recursive) produced more accurate predictions than the model-free ZOH predictions, but only marginally. For test data characterized by reduced insulin sensitivity, RMSE values for the predictions of the batch ARX models were 9, 5, and 5% more accurate than the ZOH predictions for prediction horizons of 30, 60, and 90 minutes, respectively. In terms of RMSE values, the 30-, 60-, and 90-minute predictions of the recursive models were more accurate than the ZOH predictions, by 10, 5, and 2%, respectively. In this experimental study, the recursively identified ARX models resulted in predictions of test data that were similar, but not superior, to the batch models. Even for the test data characteristic of reduced insulin sensitivity, the batch and recursive models demonstrated similar prediction accuracy. 
The predictions of the identified ARX models were only marginally more accurate than the model-free ZOH predictions. Given the simplicity of the ARX models and the computational ease with which they are identified, however, even modest improvements may justify the use of these models in a model-based controller for an artificial beta cell. 2009 Diabetes Technology Society.
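The zero-order-hold baseline and an iterated one-step linear predictor are easy to compare on surrogate data (our synthetic trace and an AR(2) stand-in for the ARX models, which in the study also use insulin and meal inputs):

```python
import numpy as np

rng = np.random.default_rng(6)

# Surrogate "glucose" trace: a smoothed random walk sampled every 5 min.
x = np.convolve(np.cumsum(rng.normal(size=600)), np.ones(12) / 12, mode="valid")

p = 6  # six 5-minute steps = a 30-minute prediction horizon

# Zero-order-hold (ZOH) baseline: hold the current value for p steps.
zoh_rmse = np.sqrt(np.mean((x[p:] - x[:-p]) ** 2))

# AR(2) one-step model fitted by least squares, then iterated p steps.
A = np.column_stack([x[1:-1], x[:-2]])
a, *_ = np.linalg.lstsq(A, x[2:], rcond=None)
preds = []
for t in range(1, len(x) - p):
    y1, y2 = x[t], x[t - 1]
    for _ in range(p):
        y1, y2 = a[0] * y1 + a[1] * y2, y1
    preds.append(y1)
ar_rmse = np.sqrt(np.mean((np.array(preds) - x[1 + p:]) ** 2))

# By least-squares optimality, the fitted one-step model cannot be worse
# in-sample than the one-step ZOH predictor (which is a = [1, 0]).
one_step_ar = np.sqrt(np.mean((A @ a - x[2:]) ** 2))
one_step_zoh = np.sqrt(np.mean((x[1:-1] - x[2:]) ** 2))
assert one_step_ar <= one_step_zoh + 1e-12
```

The study's finding, that identified models beat the model-free ZOH baseline only marginally at multi-step horizons, mirrors what such a comparison typically shows on slowly varying signals.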
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
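The recursive subdivision that generates these grids can be sketched in a few lines; the toy below is an illustrative quadtree variant (the papers store the grid in a binary tree), with a circular body standing in for the clipped geometry and a simple straddle test standing in for polygon clipping:

```python
import math

class Cell:
    """Square Cartesian cell; refinement recursively subdivides cut cells."""
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def cut_by_circle(self, cx, cy, r):
        # Nearest and farthest points of the cell from the circle centre:
        # the cell straddles the circular boundary iff d_near <= r <= d_far.
        nx = min(max(cx, self.x), self.x + self.size)
        ny = min(max(cy, self.y), self.y + self.size)
        d_near = math.hypot(nx - cx, ny - cy)
        fx = self.x if cx > self.x + self.size / 2 else self.x + self.size
        fy = self.y if cy > self.y + self.size / 2 else self.y + self.size
        d_far = math.hypot(fx - cx, fy - cy)
        return d_near <= r <= d_far

    def refine(self, cx, cy, r, max_depth):
        # Subdivide only cells cut by the body, down to a depth limit.
        if self.depth >= max_depth or not self.cut_by_circle(cx, cy, r):
            return
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.depth + 1)
                         for i in (0, 1) for j in (0, 1)]
        for child in self.children:
            child.refine(cx, cy, r, max_depth)

def leaves(cell):
    # Depth-first collection of leaf cells from the tree.
    if not cell.children:
        return [cell]
    return [leaf for c in cell.children for leaf in leaves(c)]

root = Cell(0.0, 0.0, 1.0, 0)        # single cell spanning the whole domain
root.refine(0.5, 0.5, 0.3, max_depth=4)
print(len(leaves(root)))             # leaves cluster along the circular body
```

As in the abstracts, the tree itself supplies cell-to-cell connectivity, so solution-adaptive refinement is just a deeper call to `refine` on selected leaves.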
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
Recursive utility in a Markov environment with stochastic growth
Hansen, Lars Peter; Scheinkman, José A.
2012-01-01
Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428
NASA Astrophysics Data System (ADS)
Burke, Mark E.
2010-11-01
Dubois coined the term incursion, for an inclusive or implicit recursion, to describe a discrete-time anticipatory system which computes its future states by reference to its future states as well as its current and past states. In this paper, we look at a model which has been proposed in the context of a social system which has functionally differentiated subsystems. The model is derived from a discrete-time compartmental SIS epidemic model. We analyse a low order instance of the model both in its form as a recursion with no anticipatory capacity, and also as an incursion with associated anticipatory capacity. The properties of the incursion are compared and contrasted with those of the underlying recursion.
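Dubois' standard incursive logistic map (an illustrative example, not this paper's SIS-derived model) makes the recursion/incursion contrast concrete: the incursive update references x(t+1) on both sides, but can be solved in closed form.

```python
def recursive_step(x, a):
    # Ordinary recursion: the future state depends only on the present.
    return a * x * (1.0 - x)

def incursive_step(x, a):
    # Incursion: x(t+1) = a*x(t)*(1 - x(t+1)); solving for x(t+1) gives
    # a closed form, so the future state effectively references itself.
    return a * x / (1.0 + a * x)

def trajectory(step, x0, a, n):
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], a))
    return xs

rec = trajectory(recursive_step, 0.5, 3.8, 50)   # chaotic at a = 3.8
inc = trajectory(incursive_step, 0.5, 3.8, 50)   # converges to (a-1)/a
print(inc[-1])
```

The anticipatory (incursive) version is stable where the plain recursion is chaotic, which is the kind of qualitative contrast the paper analyses for its social-system model.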
An iterative approach to region growing using associative memories
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Cowart, A.
1983-01-01
Region growing is often cited as a classical example of the recursive control structures used in image processing, structures which are awkward to implement in hardware when the intent is to segment an image at raster-scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier which signifies the region to which it belongs. Difficulties otherwise requiring recursion are handled by maintaining an equivalence table in hardware transparent to the computer, which reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
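The iterative scheme described, a raster scan plus an equivalence table, is essentially two-pass connected-component labeling. A minimal software sketch (a union-find table plays the role of the hardware equivalence table; 4-connectivity assumed):

```python
def label_regions(img):
    """Two-pass raster-scan region labeling of a binary image.
    An equivalence table (union-find) replaces explicit recursion."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                    # parent[i] = representative of label i
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up == 0 and left == 0:
                parent.append(next_label)   # new provisional label
                labels[y][x] = next_label
                next_label += 1
            elif up and left:
                a, b = find(up), find(left)
                labels[y][x] = min(a, b)
                parent[max(a, b)] = min(a, b)   # record equivalence
            else:
                labels[y][x] = up or left
    # Second pass: replace each provisional label by its representative.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
out = label_regions(img)   # two regions: labels 1 and 2
```

U-shaped regions, the classic case that forces recursion in a single pass, are handled purely by the equivalence table, as the abstract describes.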
Lim, Jun-Seok; Pang, Hee-Suk
2016-01-01
In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms degrades when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS-like iteration using [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the inversion matrix. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS in the sparse system identification setting.
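The RLS baseline that the abstract builds on can be sketched as follows; this is plain recursive least squares for FIR system identification (the paper's regularized total-least-squares extension is not reproduced here, and the system, forgetting factor, and initialization are illustrative):

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=1e3):
    """Standard recursive least squares: sequentially update the weight
    vector w and the inverse correlation matrix P from (input, output)."""
    w = np.zeros(order)
    P = delta * np.eye(order)          # large initial P = weak prior
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]       # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)  # gain vector
        e = d[n] - w @ u               # a-priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
h = np.array([1.0, 0.0, -0.5, 0.0])    # sparse system, referenced to x[n-1]
x = rng.standard_normal(2000)
d = np.convolve(x, np.r_[0.0, h[:3]])[:len(x)]  # noiseless output
w = rls_identify(x, d, order=4)        # converges to h
```

With noise on both `x` and `d` this estimate becomes biased, which is exactly the error-in-variables failure mode that motivates the RTLS formulation.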
Recursive utility in a Markov environment with stochastic growth.
Hansen, Lars Peter; Scheinkman, José A
2012-07-24
Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.
A spatial operator algebra for manipulator modeling and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Kreutz, K.; Jain, A.
1989-01-01
A spatial operator algebra for modeling the control and trajectory design of manipulators is discussed, with emphasis on its analytical formulation and implementation in the Ada programming language. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of the manipulator. Inversion is obtained using techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator and control and trajectory design algorithms. Implementable recursive algorithms can be immediately derived from the abstract operator expressions by inspection, thus greatly simplifying the transition from an abstract problem formulation and solution to the detailed mechanization of a specific algorithm.
A Recursive Approach to Compute Normal Forms
NASA Astrophysics Data System (ADS)
Hsu, L.; Min, L. J.; Favretto, L.
2001-06-01
Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.
Report to the High Order Language Working Group (HOLWG)
1977-01-14
as running, runnable, suspended or dormant, may be synchronized by semaphore variables, may be scheduled using clock and duration data types and mpy...Recursive and non-recursive routines G6. Parallel processes, synchronization, critical regions G7. User defined parameterized exception handling G8...typed and lacks extensibility, parallel processing, synchronization and real-time features. Overall Evaluation IBM strongly recommended PL/I as a
Computation of transform domain covariance matrices
NASA Technical Reports Server (NTRS)
Fino, B. J.; Algazi, V. R.
1975-01-01
It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
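The idea can be illustrated in the spirit of the abstract with the Hadamard transform, whose Sylvester recursion induces a matching recursion on the transformed covariance: partitioning K into blocks A, B, C, D gives H K Hᵀ from four half-size transforms of block combinations, with no full matrix product. This is an illustrative special case, not the paper's general derivation.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of the orthonormal Hadamard matrix (n = 2^k).
    if n == 1:
        return np.array([[1.0]])
    h = hadamard(n // 2)
    return np.block([[h, h], [h, -h]]) / np.sqrt(2.0)

def transform_cov(K):
    """Covariance of the Hadamard-transformed process, H K H^T, computed
    by recursion on block combinations instead of explicit products."""
    n = K.shape[0]
    if n == 1:
        return K.copy()
    m = n // 2
    A, B = K[:m, :m], K[:m, m:]
    C, D = K[m:, :m], K[m:, m:]
    return 0.5 * np.block([
        [transform_cov(A + B + C + D), transform_cov(A - B + C - D)],
        [transform_cov(A + B - C - D), transform_cov(A - B - C + D)],
    ])

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
K = M @ M.T                      # a valid covariance matrix
H = hadamard(8)
assert np.allclose(transform_cov(K), H @ K @ H.T)
```

The block identity follows directly by multiplying out H K Hᵀ with H in its recursive two-by-two form, which is the mechanism behind the operation-count savings the abstract mentions.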
Recursive inversion of externally defined linear systems by FIR filters
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1989-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.
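The triangular Toeplitz structure mentioned in the abstract admits an exact solution by forward substitution: requiring (h * g)[k] = δ[k] for the first N lags yields a lower-triangular Toeplitz system in the FIR inverse g. A minimal sketch of that substitution (not the paper's recursive least-squares machinery; h[0] must be nonzero):

```python
def fir_inverse(h, n_taps):
    """FIR approximate inverse of an impulse response h, from the
    lower-triangular Toeplitz system (h * g)[k] = delta[k], k < n_taps."""
    g = [0.0] * n_taps
    g[0] = 1.0 / h[0]
    for k in range(1, n_taps):
        # Accumulate the already-determined taps, then solve for g[k].
        acc = sum(h[j] * g[k - j] for j in range(1, min(k, len(h) - 1) + 1))
        g[k] = -acc / h[0]
    return g

h = [1.0, 0.5]              # simple minimum-phase system
g = fir_inverse(h, 6)       # g[k] = (-0.5)**k, the true stable inverse
```

For non-minimum-phase systems this exact truncated inverse can blow up, which is why a least-squares FIR inverse of the kind the paper studies is used instead.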
Recursive search method for the image elements of functionally defined surfaces
NASA Astrophysics Data System (ADS)
Vyatkin, S. I.
2017-05-01
This paper touches upon the synthesis of high-quality images in real time and the technique for specifying three-dimensional objects on the basis of perturbation functions. A recursive search method for the image elements of functionally defined objects using graphics processing units is proposed. The advantages of this approach over the frame-buffer visualization method are shown.
ERIC Educational Resources Information Center
Keeney, Hillary; Keeney, Bradford
2013-01-01
The Ju/'hoan Bushman origin myth is depicted as a contextual frame for their healing and transformative ways. Using Recursive Frame Analysis, these performances are shown to be an enactment of the border crossing between First and Second Creation, that is, pre-linguistic and linguistic domains of experience. Here n/om, or the presumed creative…
Aesthetic Responses to Exact Fractals Driven by Physical Complexity
Bies, Alexander J.; Blanc-Goldhammer, Daryn R.; Boydston, Cooper R.; Taylor, Richard P.; Sereno, Margaret E.
2016-01-01
Fractals are physically complex due to their repetition of patterns at multiple size scales. Whereas the statistical characteristics of the patterns repeat for fractals found in natural objects, computers can generate patterns that repeat exactly. Are these exact fractals processed differently, visually and aesthetically, than their statistical counterparts? We investigated the human aesthetic response to the complexity of exact fractals by manipulating fractal dimensionality, symmetry, recursion, and the number of segments in the generator. Across two studies, a variety of fractal patterns were visually presented to human participants to determine the typical response to exact fractals. In the first study, we found that preference ratings for exact midpoint displacement fractals can be described by a linear trend with preference increasing as fractal dimension increases. For the majority of individuals, preference increased with dimension. We replicated these results for other exact fractal patterns in a second study. In the second study, we also tested the effects of symmetry and recursion by presenting asymmetric dragon fractals, symmetric dragon fractals, and Sierpinski carpets and Koch snowflakes, which have radial and mirror symmetry. We found a strong interaction among recursion, symmetry and fractal dimension. Specifically, at low levels of recursion, the presence of symmetry was enough to drive high preference ratings for patterns with moderate to high levels of fractal dimension. Most individuals required a much higher level of recursion to recover this level of preference in a pattern that lacked mirror or radial symmetry, while others were less discriminating. This suggests that exact fractals are processed differently than their statistical counterparts. 
We propose a set of four factors that influence complexity and preference judgments in fractals that may extend to other patterns: fractal dimension, recursion, symmetry and the number of segments in a pattern. Conceptualizations such as Berlyne’s and Redies’ theories of aesthetics also provide a suitable framework for interpretation of our data with respect to the individual differences that we detect. Future studies that incorporate physiological methods to measure the human aesthetic response to exact fractal patterns would further elucidate our responses to such timeless patterns. PMID:27242475
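The exact midpoint-displacement fractals used as stimuli in the first study can be generated with a short recursion; this 1-D sketch is illustrative (the studies' stimuli, dimensions, and noise model are not reproduced), with the amplitude decay per level controlling the fractal dimension:

```python
import random

def midpoint_displacement(levels, roughness, seed=0):
    """1-D midpoint displacement: each pass inserts midpoints displaced by
    noise whose amplitude shrinks by 2**(-roughness) per recursion level.
    Smaller roughness -> rougher profile -> higher fractal dimension."""
    random.seed(seed)
    pts = [0.0, 0.0]
    amp = 1.0
    for _ in range(levels):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt.append(a)
            nxt.append((a + b) / 2 + random.uniform(-amp, amp))
        nxt.append(pts[-1])
        pts = nxt
        amp *= 2 ** (-roughness)
    return pts

profile = midpoint_displacement(levels=8, roughness=0.5)
print(len(profile))   # 2**8 + 1 = 257 points
```

Here `levels` plays the role of the recursion depth manipulated in the studies, and `roughness` tunes fractal dimension, two of the four factors the authors propose.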
Guan, Yue; Li, Weifeng; Jiang, Zhuoran; Chen, Ying; Liu, Song; He, Jian; Zhou, Zhengyang; Ge, Yun
2016-12-01
This study aimed to develop whole-lesion apparent diffusion coefficient (ADC)-based entropy-related parameters of cervical cancer to preliminarily assess intratumoral heterogeneity of this lesion in comparison to adjacent normal cervical tissues. A total of 51 women (mean age, 49 years) with cervical cancers confirmed by biopsy prospectively underwent 3-T pelvic diffusion-weighted magnetic resonance imaging with b values of 0 and 800 s/mm². ADC-based entropy-related parameters including first-order entropy and second-order entropies were derived from the whole tumor volume as well as adjacent normal cervical tissues. Intraclass correlation coefficient, Wilcoxon test with Bonferroni correction, Kruskal-Wallis test, and receiver operating characteristic curve were used for statistical analysis. All the parameters showed excellent interobserver agreement (all intraclass correlation coefficients > 0.900). Entropy, entropy(H)0, entropy(H)45, entropy(H)90, entropy(H)135, and entropy(H)mean were significantly higher, whereas entropy(H)range and entropy(H)std were significantly lower in cervical cancers compared to adjacent normal cervical tissues (all P < .0001). The Kruskal-Wallis test showed no significant differences among the values of the various second-order entropies, including entropy(H)0, entropy(H)45, entropy(H)90, entropy(H)135, and entropy(H)mean. All second-order entropies had a larger area under the receiver operating characteristic curve than first-order entropy in differentiating cervical cancers from adjacent normal cervical tissues. Further, entropy(H)45, entropy(H)90, entropy(H)135, and entropy(H)mean had the same largest area under the receiver operating characteristic curve of 0.867. Whole-lesion ADC-based entropy-related parameters of cervical cancers were developed successfully, showing initial potential for characterizing intratumoral heterogeneity in comparison to adjacent normal cervical tissues.
Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
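The first-order entropy used above is the Shannon entropy of the histogram of voxel values; a minimal sketch (the bin count and toy "lesions" are assumptions, not the study's protocol):

```python
import math
from collections import Counter

def first_order_entropy(values, n_bins=32):
    """First-order (histogram) entropy in bits: bin the values, then
    apply -sum(p * log2 p). More heterogeneous values -> higher entropy."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0          # guard constant input
    counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform_lesion = [1.0] * 100                  # homogeneous ADC values
mixed_lesion = [i / 100 for i in range(100)]  # heterogeneous ADC values
print(first_order_entropy(uniform_lesion))    # 0.0: no heterogeneity
print(first_order_entropy(mixed_lesion))      # ~5 bits: 32 occupied bins
```

The second-order entropy(H) parameters differ in that they are computed from gray-level co-occurrence statistics at each offset angle (0, 45, 90, 135 degrees) rather than from the plain histogram.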
Lu, Ruipeng; Mucaki, Eliseos J; Rogan, Peter K
2017-03-17
Data from ChIP-seq experiments can derive the genome-wide binding specificities of transcription factors (TFs) and other regulatory proteins. We analyzed 765 ENCODE ChIP-seq peak datasets of 207 human TFs with a novel motif discovery pipeline based on recursive, thresholded entropy minimization. This approach, while obviating the need to compensate for skewed nucleotide composition, distinguishes true binding motifs from noise, quantifies the strengths of individual binding sites based on computed affinity and detects adjacent cofactor binding sites that coordinate with the targets of primary, immunoprecipitated TFs. We obtained contiguous and bipartite information theory-based position weight matrices (iPWMs) for 93 sequence-specific TFs, discovered 23 cofactor motifs for 127 TFs and revealed six high-confidence novel motifs. The reliability and accuracy of these iPWMs were determined via four independent validation methods, including the detection of experimentally proven binding sites, explanation of effects of characterized SNPs, comparison with previously published motifs and statistical analyses. We also predict previously unreported TF coregulatory interactions (e.g. TF complexes). These iPWMs constitute a powerful tool for predicting the effects of sequence variants in known binding sites, performing mutation analysis on regulatory SNPs and predicting previously unrecognized binding sites and target genes. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.
2014-04-01
Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology, previously developed by the authors, for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead either to an extension of the RFL (with a consequent economic gain without compromising the minimum safety requirements) or to an increase in safety by detecting a premature fault, thereby avoiding a very costly catastrophic failure.
NASA Astrophysics Data System (ADS)
Zheng, Lianqing; Yang, Wei
2008-07-01
Recently, the accelerated molecular dynamics (AMD) technique was generalized to realize essential energy space random walks so that further sampling enhancement and effective localized enhanced sampling could be achieved. This method is especially meaningful when the essential coordinates of the target events are not known a priori; moreover, the energy space metadynamics method was also introduced so that biasing free energy functions can be robustly generated. Despite the promising features of this method, the nonequilibrium nature of the metadynamics recursion makes it challenging to rigorously use the data obtained at the recursion stage for equilibrium analysis, such as free energy surface mapping; a large amount of data would therefore be wasted. To resolve this problem and further improve simulation convergence, as promised in our original paper, we report an alternative approach: the adaptive-length self-healing (ALSH) strategy for AMD simulations; this development is based on a recent self-healing umbrella sampling method. Here, the unit simulation length for each self-healing recursion is increased based on the Wang-Landau flattening judgment. When the unit simulation length for each update is long enough, all the following unit simulations naturally run into the equilibrium regime. Thereafter, these unit simulations can serve the dual purposes of recursion and equilibrium analysis. As demonstrated in our model studies, applying ALSH balances fast recursion against minimal waste of nonequilibrium data. As a result, combining the data obtained from all the unit simulations that are in the equilibrium regime via the weighted histogram analysis method robustly ensures efficient convergence, especially for the purpose of free energy surface mapping.
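The Wang-Landau flattening judgment that drives the adaptive length update is simple to state: a visit histogram is "flat" when every bin count is within some fraction of the mean count. A schematic sketch (the tolerance, growth factor, and update loop are illustrative assumptions, not the paper's settings):

```python
def is_flat(histogram, tolerance=0.8):
    """Wang-Landau style flatness check: every bin count must reach at
    least `tolerance` times the mean count."""
    mean = sum(histogram) / len(histogram)
    return mean > 0 and min(histogram) >= tolerance * mean

def adaptive_lengths(initial, flat_flags, grow=2):
    """Sketch of the adaptive-length idea: grow the unit simulation
    length each time the visit histogram fails the flatness test."""
    length, lengths = initial, []
    for flat in flat_flags:
        if not flat:
            length *= grow
        lengths.append(length)
    return lengths

print(is_flat([10, 9, 11, 10]))   # True: all bins near the mean
print(is_flat([2, 20, 10, 8]))    # False: one bin far below the mean
print(adaptive_lengths(1000, [False, False, True, True]))
```

Once the length stops growing, subsequent unit simulations are treated as equilibrated and feed both the recursion and the free energy analysis, which is the dual use described above.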
Thermodynamic properties of model CdTe/CdSe mixtures
van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...
2015-02-20
We report on the thermodynamic properties of binary compound mixtures of model groups II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that the CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition. It roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases stability, even for nanoparticles.
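The regular-solution construction can be sketched numerically: combine an excess enthalpy with the ideal entropy of mixing and watch a double well appear below the consolute temperature. The symmetric x(1-x) enthalpy and the interaction parameter below are illustrative assumptions, not the cubic CdTe/CdSe fit from the abstract.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs_mix(x, T, a=10_000.0):
    """Regular-solution Gibbs free energy of mixing (J/mol): a positive
    excess enthalpy a*x*(1-x) plus the ideal entropy of mixing.
    `a` is an illustrative interaction parameter."""
    if x in (0.0, 1.0):
        return 0.0
    h_mix = a * x * (1 - x)                                  # positive deviation
    s_mix = -R * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return h_mix - T * s_mix

# Below the consolute temperature (a / 2R here) the curve has two minima,
# signalling separation into two coexisting compositions.
print(gibbs_mix(0.5, 300.0) > gibbs_mix(0.1, 300.0))   # True: double well
print(gibbs_mix(0.5, 700.0) < gibbs_mix(0.1, 700.0))   # True: single well
```

For this symmetric model the consolute temperature is a/2R; the paper obtains its 335 K figure from the fitted, composition-dependent enthalpy instead.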
On the structural and thermodynamic properties of the ?-hydrogen (?, Ce, Nd and Sm) systems
NASA Astrophysics Data System (ADS)
Blazina, Z.; Drasner, A.
1998-06-01
The 0953-8984/10/22/006/img3 (0953-8984/10/22/006/img4, Ce, Nd and Sm) intermetallic compounds were prepared and studied by means of x-ray powder diffraction. All compounds are single phase and exhibit the same hexagonal symmetry (0953-8984/10/22/006/img5 type; space group 0953-8984/10/22/006/img6) as do their prototype 0953-8984/10/22/006/img7 binaries. The interaction with hydrogen was also studied. It was found that all ternary intermetallics react readily and reversibly with hydrogen to form hydrides with high hydrogen contents of up to four hydrogen atoms per alloy formula unit. The pressure composition desorption isotherms were measured. The entropy, the enthalpy and the Gibbs free energy of formation have been extracted from the equilibrium plateau in the pressure-composition desorption isotherms. The hydrogen capacity and the equilibrium pressure of the 0953-8984/10/22/006/img3-hydrogen systems were compared with the corresponding values for their aluminium analogues and with the values for the 0953-8984/10/22/006/img7-hydrogen systems and briefly discussed. The hydride properties of gallium containing and aluminium containing compounds show great similarities whereby both series of ternary compounds form more stable hydrides and exhibit smaller hydrogen capacities than do the corresponding binaries.
Low-temperature specific heat of uranium germanides
NASA Astrophysics Data System (ADS)
Pikul, A.; Troć, R.; Czopnik, A.; Noël, H.
2014-06-01
We report measurements of the specific heat down to the lowest temperature of 2 K for the paramagnetic binaries U5Ge4 (Ti5Ga4-type) and UGe (ThIn-type) as well as for the ferromagnetic binaries U3Ge5-x (x = 0.2) and UGe2-x (x = 0.3) (with TC = 94 and 47 K), having defect crystal structures of the AlB2- and ThSi2-type, respectively. The obtained data were compared to those of other uranium germanides which have been studied earlier: UGe2 (ZrGa2) and UGe3 (Cu3Au). Among all these germanides, only UGe exhibits an enhanced electronic specific heat coefficient, γ(0), equal to 137 mJ/(mol U K²). This value can be compared to that derived for the best-known spin fluctuator, UAl2 (143 mJ/(mol U K²)). The other uranium germanides have less enhanced γ(0) values (27-65 mJ/(mol U K²)). The lowest value, of about 20 mJ/(mol U K²), was reported earlier for the typical temperature-independent paramagnet UGe3. For the ferromagnetic new phase UGe2-x the inferred magnetic entropy, Sm, reaches at the Curie temperature, TC, a value of R ln 2, which corresponds to a doublet ground state of the uranium ion in this defect digermanide.
Donor impurity incorporation during layer growth of Zn II-VI semiconductors
NASA Astrophysics Data System (ADS)
Barlow, D. A.
2017-12-01
The maximum halogen donor concentration in Zn II-VI semiconductors during layer growth is studied using a standard model from statistical mechanics. Here the driving force for incorporation is an increase in entropy upon mixing of the donor impurity into the available anion lattice sites in the host binary. A formation energy opposes this increase and thus equilibrium is attained at some maximum concentration. Considering the halogen donor impurities within the Zn II-VI binary semiconductors ZnO, ZnS, ZnSe and ZnTe, a heat of reaction obtained from reported diatomic bond strengths is shown to be directly proportional to the log of maximum donor concentration. The formation energy can then be estimated and an expression for maximum donor concentration derived. Values for the maximum donor concentration with each of the halogen impurities, within the Zn II-VI compounds, are computed. This model predicts that the halogens will serve as electron donors in these compounds in order of increasing effectiveness as: F, Br, I, Cl. Finally, this result is taken to be equivalent to an alternative model where donor concentration depends upon impurity diffusion and the conduction band energy shift due to a depletion region at the growing crystal's surface. From this, we are able to estimate the diffusion activation energy for each of the impurities mentioned above. Comparisons are made with reported values and relevant conclusions presented.
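The equilibrium described, entropy of mixing opposed by a formation energy, has a standard closed form: minimizing f(x) = x·E_f + kT[x ln x + (1-x) ln(1-x)] over the site fraction x gives x = 1/(1 + exp(E_f/kT)), so log x_max is nearly linear in E_f, consistent with the correlation the abstract reports. A sketch with illustrative formation energies (not the paper's fitted values):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def max_fraction(e_form_ev, T):
    """Equilibrium impurity site fraction from minimizing
    f(x) = x*E_f + kT*[x ln x + (1-x) ln(1-x)]:
    setting df/dx = 0 yields x = 1/(1 + exp(E_f / kT))."""
    return 1.0 / (1.0 + math.exp(e_form_ev / (K_B * T)))

# Lower formation energy (stronger driving force) -> higher maximum
# donor concentration; higher growth temperature also raises it.
for e_f in (0.2, 0.4, 0.6):
    print(e_f, max_fraction(e_f, 600.0))
```

For E_f well above kT this reduces to the Arrhenius form x ≈ exp(-E_f/kT), which is the proportionality between log concentration and formation energy used in the abstract's bond-strength argument.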
On quantum Rényi entropies: A new generalization and some properties
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco
2013-12-01
The Rényi entropies constitute a family of information measures that generalizes the well-known Shannon entropy, inheriting many of its properties. They appear in the form of unconditional and conditional entropies, relative entropies, or mutual information, and have found many applications in information theory and beyond. Various generalizations of Rényi entropies to the quantum setting have been proposed, most prominently Petz's quasi-entropies and Renner's conditional min-, max-, and collision entropy. However, these quantum extensions are incompatible and thus unsatisfactory. We propose a new quantum generalization of the family of Rényi entropies that contains the von Neumann entropy, min-entropy, collision entropy, and the max-entropy as special cases, thus encompassing most quantum entropies in use today. We show several natural properties for this definition, including data-processing inequalities, a duality relation, and an entropic uncertainty relation.
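For a single (unconditional) state the quantum Rényi entropies proposed here reduce to functions of the eigenvalue spectrum, with the von Neumann, collision, and min-entropies as the α → 1, α = 2, and α → ∞ cases. A minimal NumPy sketch of that spectral formula (conditional and relative versions, where the sandwiched definition differs, are not covered):

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """Renyi entropy S_a(rho) = log2(Tr rho^a) / (1 - a) of a density
    matrix, computed from its eigenvalues; a -> 1 gives von Neumann."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    if np.isclose(alpha, 1.0):
        return float(-np.sum(evals * np.log2(evals)))   # von Neumann limit
    return float(np.log2(np.sum(evals ** alpha)) / (1.0 - alpha))

rho = np.diag([0.5, 0.25, 0.25])          # a diagonal density matrix
print(renyi_entropy(rho, 1.0))            # von Neumann: 1.5 bits
print(renyi_entropy(rho, 2.0))            # collision entropy
print(renyi_entropy(rho, 100.0))          # approaches min-entropy = 1 bit
```

The family is monotonically non-increasing in α, which the three printed values illustrate.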
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Jackson, Jennifer N.; McCreedy, Evan S.; Gandler, William; Eijkenboom, J. J. F. A.; van Middelkoop, M.; McAuliffe, Matthew J.; Sheehan, Frances T.
2016-03-01
The paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient recalled echo and gradient recalled echo with fat suppression magnetic resonance images. Constricted search space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to between an outer and an inner search region. A section-based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23 mm) than the current state-of-the-art methods, with an average Dice similarity coefficient of 96.0% (SD 1.3%) between the auto-segmented and ground-truth surfaces.
Expansion of all multitrace tree level EYM amplitudes
NASA Astrophysics Data System (ADS)
Du, Yi-Jian; Feng, Bo; Teng, Fei
2017-12-01
In this paper, we investigate the expansion of tree level multitrace Einstein-Yang-Mills (EYM) amplitudes. First, we propose two types of recursive expansions of tree level EYM amplitudes with an arbitrary number of gluons, gravitons and traces in terms of amplitudes with fewer traces and/or gravitons. We then provide extensive supporting evidence, including proofs using the Cachazo-He-Yuan (CHY) formula and the Britto-Cachazo-Feng-Witten (BCFW) recursion relation. As a byproduct, two types of generalized BCJ relations for multitrace EYM are further proposed, which will be useful in the BCFW proof. After one applies the recursive expansions repeatedly, any multitrace EYM amplitude can be given in the Kleiss-Kuijf (KK) basis of tree level color ordered Yang-Mills (YM) amplitudes. Thus the Bern-Carrasco-Johansson (BCJ) numerators, as the expansion coefficients, for all multitrace EYM amplitudes are naturally constructed.
Cooperating reduction machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kluge, W.E.
1983-11-01
This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well-suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
Health monitoring system for transmission shafts based on adaptive parameter identification
NASA Astrophysics Data System (ADS)
Souflas, I.; Pezouvanis, A.; Ebrahimi, K. M.
2018-05-01
A health monitoring system for a transmission shaft is proposed. The solution is based on the real-time identification of the physical characteristics of the transmission shaft, i.e. its stiffness and damping coefficients, by using a physically oriented model and linear recursive identification. The efficacy of the suggested condition monitoring system is demonstrated on a prototype transient engine testing facility equipped with a transmission shaft capable of varying its physical properties. Simulation studies reveal that coupling shaft faults can be detected and isolated using the proposed condition monitoring system. In addition, the performance of various recursive identification algorithms is assessed. The results of this work suggest that the health status of engine dynamometer shafts can be monitored using a simple lumped-parameter shaft model and a linear recursive identification algorithm, which makes the concept practically viable.
NASA Technical Reports Server (NTRS)
Mcclain, W. D.
1977-01-01
A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
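The recursive information filter described above combines models by summing information matrices; a minimal sketch of that fusion step (generic information-form combination, not the authors' geomagnetic code):

```python
import numpy as np

def fuse_information(estimates, covariances):
    """Combine independent estimates in information form: each model
    contributes an information matrix Y_i = P_i^{-1} and an information
    vector y_i = Y_i @ x_i; the fused estimate is (sum Y_i)^{-1} sum y_i."""
    dim = len(estimates[0])
    y_total = np.zeros((dim, dim))  # accumulated information matrix
    v_total = np.zeros(dim)         # accumulated information vector
    for x, p in zip(estimates, covariances):
        y = np.linalg.inv(np.asarray(p, dtype=float))
        y_total += y
        v_total += y @ np.asarray(x, dtype=float)
    p_fused = np.linalg.inv(y_total)
    return p_fused @ v_total, p_fused

# Two 1-D estimates with equal uncertainty fuse to their mean:
x_hat, p_hat = fuse_information([[1.0], [3.0]], [[[1.0]], [[1.0]]])
print(x_hat, p_hat)
```

With equal covariances the fusion reduces to a plain average; the advantage of the information form is that adding a new model is a simple matrix sum rather than a covariance-domain update.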
NASA Technical Reports Server (NTRS)
Zhou, Ye; Vahala, George
1993-01-01
The advection of a passive scalar by incompressible turbulence is considered using recursive renormalization group procedures in the differential subgrid shell thickness limit. It is shown explicitly that the higher order nonlinearities induced by the recursive renormalization group procedure preserve Galilean invariance. Differential equations, valid for the entire resolvable wave number k range, are determined for the eddy viscosity and eddy diffusivity coefficients, and it is shown that higher order nonlinearities do not contribute as k goes to 0, but have an essential role as k goes to k(sub c), the cutoff wave number separating the resolvable scales from the subgrid scales. The recursive renormalization transport coefficients and the associated eddy Prandtl number are in good agreement with the k-dependent transport coefficients derived from closure theories and experiments.
A recursive linear predictive vocoder
NASA Astrophysics Data System (ADS)
Janssen, W. A.
1983-12-01
A non-real-time 10-pole recursive autocorrelation linear predictive coding vocoder was created for use in studying the effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between updates of the filter coefficients is allowed to vary from 0.125 msec to 20 msec. The best quality was found using 0.125 msec between updates. The greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch period plots for the center-clipping autocorrelation pitch detector and the simplified inverse filtering technique are provided. Plots of speech into and out of the vocoder are given. Formant-versus-time three-dimensional plots are shown. Effects of noise on pitch detection and formants are shown. Noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
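An autocorrelation LPC analyzer of the kind described typically solves the Yule-Walker equations with the Levinson-Durbin recursion; a minimal sketch (the standard textbook algorithm, not the vocoder's actual implementation):

```python
def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for LPC coefficients a[1..p]
    from autocorrelation values r[0..p] via the Levinson-Durbin
    recursion. Returns (a, e) where x[n] ~ sum_k a[k-1] * x[n-k]
    and e is the final prediction error power."""
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                  # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):        # update previous coefficients
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)           # shrink prediction error power
    return a[1:], e

# Autocorrelation of an ideal AR(1) process with coefficient 0.5:
coeffs, err = levinson_durbin([1.0, 0.5, 0.25], 2)
print(coeffs, err)
```

For this AR(1)-shaped autocorrelation the second coefficient comes out zero, as expected: the recursion stops "using" model orders beyond the true one.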
Upper entropy axioms and lower entropy axioms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jin-Li, E-mail: phd5816@163.com; Suo, Qi
2015-04-15
The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of Shannon–Khinchin axioms and Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zhenggang; Gao, Yanfei; Bei, Hongbin
2016-11-01
To understand the underlying strengthening mechanisms, thermal activation processes are investigated from stress-strain measurements with varying temperatures and strain rates for a family of equiatomic quinary, quaternary, ternary, and binary, face-center-cubic-structured, single phase solid-solution alloys, which are all subsystems of the FeNiCoCrMn high-entropy alloy. Our analysis suggests that the Labusch-type solution strengthening mechanism, rather than the lattice friction (or lattice resistance), governs the deformation behavior in equiatomic alloys. First, upon excluding the Hall-Petch effects, the activation volumes for these alloys are found to range from 10 to 1000 times the cubic power of the Burgers vector, which are much larger than that required for kink pairs (i.e., the thermal activation process for the lattice resistance mechanism in body-center-cubic-structured metals). Second, the Labusch-type analysis for an N-element alloy is conducted by treating M elements (M < N) as an effective medium and summing the strengthening contributions from the rest of the N-M elements as individual solute species. For all equiatomic alloys investigated, a qualitative agreement exists between the measured strengthening effect and the Labusch strengthening factor from arbitrary M to N elements based on the lattice and modulus mismatches. Furthermore, the Labusch strengthening factor provides a practical critique to understand and design such compositionally complex but structurally simple alloys.
Identifying functional thermodynamics in autonomous Maxwellian ratchets
NASA Astrophysics Data System (ADS)
Boyd, Alexander B.; Mandal, Dibyendu; Crutchfield, James P.
2016-02-01
We introduce a family of Maxwellian Demons for which correlations among information bearing degrees of freedom can be calculated exactly and in compact analytical form. This allows one to precisely determine Demon functional thermodynamic operating regimes, when previous methods either misclassify or simply fail due to approximations they invoke. This reveals that these Demons are more functional than previous candidates. They too behave either as engines, lifting a mass against gravity by extracting energy from a single heat reservoir, or as Landauer erasers, consuming external work to remove information from a sequence of binary symbols by decreasing their individual uncertainty. Going beyond these, our Demon exhibits a new functionality that erases bits not by simply decreasing individual-symbol uncertainty, but by increasing inter-bit correlations (that is, by adding temporal order) while increasing single-symbol uncertainty. In all cases, but especially in the new erasure regime, exactly accounting for informational correlations leads to tight bounds on Demon performance, expressed as a refined Second Law of thermodynamics that relies on the Kolmogorov-Sinai entropy for dynamical processes and not on changes purely in system configurational entropy, as previously employed. We rigorously derive the refined Second Law under minimal assumptions and so it applies quite broadly—for Demons with and without memory and input sequences that are correlated or not. We note that general Maxwellian Demons readily violate previously proposed, alternative such bounds, while the current bound still holds. As such, it broadly describes the minimal energetic cost of any computation by a thermodynamic system.
Roll Angle Estimation Using Thermopiles for a Flight Controlled Mortar
2012-06-01
Using Xilinx's System Generator, the entire design was implemented at a relatively high level within Matlab's Simulink. This allowed VHDL code to... The roll angle was accurately estimated by processing the thermopile data with a Recursive Least Squares (RLS) filter implemented on a field programmable gate array (FPGA).
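The exponentially weighted RLS filter referenced above has a standard update; a minimal sketch (textbook form, not the FPGA implementation described in the report):

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially weighted RLS estimator: w tracks the
    least-squares fit of y ~ w @ x as samples arrive."""

    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = np.eye(n) * delta  # inverse correlation estimate
        self.lam = lam              # forgetting factor

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        px = self.P @ x
        k = px / (self.lam + x @ px)          # gain vector
        err = y - self.w @ x                  # a priori error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, px)) / self.lam
        return err

# Identify a known two-parameter linear map from noiseless samples:
rng = np.random.default_rng(0)
rls = RecursiveLeastSquares(2)
w_true = np.array([2.0, -1.0])
for _ in range(200):
    x = rng.standard_normal(2)
    rls.update(x, w_true @ x)
print(rls.w)  # converges toward w_true
```

The forgetting factor lam < 1 lets the estimate track slowly varying parameters, which is what makes RLS suitable for in-flight roll-angle estimation rather than a one-shot batch fit.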
Closed-form recursive formula for an optimal tracker with terminal constraints
NASA Technical Reports Server (NTRS)
Juang, J. N.; Turner, J. D.; Chun, H. M.
1986-01-01
Feedback control laws are derived for a class of optimal finite time tracking problems with terminal constraints. Analytical solutions are obtained for the feedback gain and the closed-loop response trajectory. Such formulations are expressed in recursive forms so that a real-time computer implementation becomes feasible. An example involving the feedback slewing of a flexible spacecraft is given to illustrate the validity and usefulness of the formulations.
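The backward recursion underlying such finite-time optimal feedback laws can be illustrated with the plain LQ regulator; a sketch of the Riccati recursion (the paper's tracker adds reference and terminal-constraint terms to this same recursive structure, omitted here):

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, horizon):
    """Backward Riccati recursion for the finite-horizon LQ problem.
    Returns time-varying feedback gains K_k with u_k = -K_k x_k,
    ordered so that gains[k] applies at step k."""
    P = Qf
    gains = []
    for _ in range(horizon):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # gain at this step
        P = Q + A.T @ P @ A - A.T @ P @ B @ K  # cost-to-go update
        gains.append(K)
    gains.reverse()
    return gains

# Scalar example: A = B = Q = R = 1; the early gains approach the
# infinite-horizon value (sqrt(5) - 1) / 2 ~ 0.618.
g = lqr_gains(np.array([[1.0]]), np.array([[1.0]]),
              np.array([[1.0]]), np.array([[1.0]]),
              np.array([[1.0]]), 50)
print(g[0][0, 0])
```

Because the gains are computed offline by a fixed recursion, they can be stored and applied in real time, which is the feasibility point the abstract makes.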
EEG entropy measures in anesthesia
Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli
2015-01-01
Highlights: ► Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression.► Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states.► Approximate Entropy and Sample Entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthesia drugs' effect is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP), in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), Fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets from sevoflurane-induced and isoflurane-induced anesthesia respectively were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. The multifractal detrended fluctuation analysis (MDFA), a non-entropy measure, was included for comparison. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. Three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R2) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best.
Additionally, these entropy measures showed an advantage in computation efficiency compared with MDFA. Conclusion: Each entropy index has its advantages and disadvantages in estimating DoA. Overall, it is suggested that the RPE index was a superior measure. Investigating the advantages and disadvantages of these entropy indices could help improve current clinical indices for monitoring DoA. PMID:25741277
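Of the indices compared above, permutation entropy admits a particularly compact implementation; a minimal Shannon permutation entropy sketch (Bandt-Pompe ordinal patterns, with ties broken by index; an illustration, not the study's code):

```python
import math
from collections import Counter

def permutation_entropy(signal, order=3, normalize=True):
    """Shannon permutation entropy: entropy of the distribution of
    ordinal patterns of length `order` over sliding windows."""
    patterns = Counter()
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        # Ordinal pattern: argsort of the window, ties broken by index.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    if normalize:
        h /= math.log(math.factorial(order))  # scale into [0, 1]
    return h

print(permutation_entropy(list(range(50))))         # fully ordered: 0
print(permutation_entropy([0, 1, 0, 1, 0, 1, 0, 1]))  # partly ordered
```

A monotone signal yields a single ordinal pattern and hence zero entropy, while irregular signals spread mass over more patterns, which is why PE tracks the EEG regularity changes induced by anesthetics.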
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects between HSWO region growing iterations merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering), constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
Toward using games to teach fundamental computer science concepts
NASA Astrophysics Data System (ADS)
Edgington, Jeffrey Michael
Video and computer games have become an important area of study in the field of education. Games have been designed to teach mathematics, physics, raise social awareness, teach history and geography, and train soldiers in the military. Recent work has created computer games for teaching computer programming and understanding basic algorithms. We present an investigation where computer games are used to teach two fundamental computer science concepts: boolean expressions and recursion. The games are intended to teach the concepts and not how to implement them in a programming language. For this investigation, two computer games were created. One is designed to teach basic boolean expressions and operators and the other to teach fundamental concepts of recursion. We describe the design and implementation of both games. We evaluate the effectiveness of these games using before and after surveys. The surveys were designed to ascertain basic understanding, attitudes and beliefs regarding the concepts. The boolean game was evaluated with local high school students and students in a college level introductory computer science course. The recursion game was evaluated with students in a college level introductory computer science course. We present the analysis of the collected survey information for both games. This analysis shows a significant positive change in student attitude towards recursion and modest gains in student learning outcomes for both topics.
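The base-case/recursive-case decomposition such a game must convey can be stated in a few lines; a generic teaching example of the concept (not taken from the games in the study):

```python
def countdown(n):
    """Classic illustration of recursion: a base case stops the
    calls, and each recursive call shrinks the problem toward it."""
    if n == 0:                       # base case
        return ["liftoff"]
    return [n] + countdown(n - 1)    # recursive case: smaller input

print(countdown(3))  # [3, 2, 1, 'liftoff']
```

The pedagogical point the study targets is exactly this structure: students must identify the base case and trust that the recursive call handles the smaller subproblem.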
Recursive linearization of multibody dynamics equations of motion
NASA Technical Reports Server (NTRS)
Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
The equations of motion of a multibody system are nonlinear in nature, and thus pose a difficult problem in linear control design. One approach is to form a first-order approximation through numerical perturbations at a given configuration, and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the steps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then, they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate, because analytical perturbation circumvents numerical differentiation and other associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison to a numerical perturbation method, with a two-link manipulator and a seven-degrees-of-freedom robotic manipulator. Its application to control design is also demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baykara, N. A.
Recent studies on quantum evolutionary problems in Demiralp's group have arrived at a stage where the construction of an expectation value formula for a given algebraic function operator depending only on the position operator becomes possible. It has also been shown that this formula turns into an algebraic recursion amongst some finite number of consecutive elements in a set of expectation values of an appropriately chosen basis set over the natural number powers of the position operator, as long as the function under consideration and the system Hamiltonian are both autonomous. This recursion corresponds to a denumerably infinite number of algebraic equations whose solutions may or may not be obtainable analytically. This idea is not completely original. There are many recursive relations amongst the expectation values of the natural number powers of the position operator. However, those recursions may not always be efficient for obtaining the system energy values and especially the eigenstate wavefunctions. The present approach is a somewhat improved and generalized form of those expansions. We focus on this issue for a specific system where the Hamiltonian is defined on the coordinate of a curved space instead of the Cartesian one.
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
Electric field-induced reorganization of two-component supported bilayer membranes
Groves, Jay T.; Boxer, Steven G.; McConnell, Harden M.
1997-01-01
Application of electric fields tangent to the plane of a confined patch of fluid bilayer membrane can create lateral concentration gradients of the lipids. A thermodynamic model of this steady-state behavior is developed for binary systems and tested with experiments in supported lipid bilayers. The model uses Flory’s approximation for the entropy of mixing and allows for effects arising when the components have different molecular areas. In the special case of equal area molecules the concentration gradient reduces to a Fermi–Dirac distribution. The theory is extended to include effects from charged molecules in the membrane. Calculations show that surface charge on the supporting substrate substantially screens electrostatic interactions within the membrane. It also is shown that concentration profiles can be affected by other intermolecular interactions such as clustering. Qualitative agreement with this prediction is provided by comparing phosphatidylserine- and cardiolipin-containing membranes. PMID:9391034
Infrared dim small target segmentation method based on ALI-PCNN model
NASA Astrophysics Data System (ADS)
Zhao, Shangnan; Song, Yong; Zhao, Yufei; Li, Yun; Li, Xu; Jiang, Yurong; Li, Lin
2017-10-01
Pulse Coupled Neural Network (PCNN) is improved by Adaptive Lateral Inhibition (ALI), and a method of infrared (IR) dim small target segmentation based on the ALI-PCNN model is proposed in this paper. Firstly, the feeding input signal is modulated by a lateral inhibition network to suppress the background. Then, the linking input is modulated by ALI, and the linking weight matrix is generated adaptively by calculating the ALI coefficient of each pixel. Finally, the binary image is generated through the nonlinear modulation and the pulse generator in PCNN. The experimental results show that the segmentation quality of the proposed method, as measured by contrast across region and uniformity across region, is better than that of the OTSU method, the maximum entropy method, and methods based on conventional PCNN and visual attention, and the proposed method performs well in extracting IR dim small targets from complex backgrounds.
Methods for improved forewarning of condition changes in monitoring physical processes
Hively, Lee M.
2013-04-09
This invention teaches further improvements in methods for forewarning of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves objective determination of a forewarning threshold (U.sub.FW), together with a failure-onset threshold (U.sub.FAIL) corresponding to a normalized value of a composite measure (C) of dissimilarity, and providing a visual or audible indication to a human observer of failure forewarning and/or failure onset. Another improvement relates to symbolization of the data according to the binary numbers representing the slope between adjacent data points. Another improvement relates to adding measures of dissimilarity based on state-to-state dynamical changes of the system. And still another improvement relates to using a Shannon entropy as the measure of condition change in lieu of a connected or unconnected phase space.
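The slope-based binary symbolization and Shannon entropy measure mentioned above can be sketched as follows (a simplified illustration of the idea, not the patented method):

```python
import math
from collections import Counter

def slope_symbols(series):
    """Symbolize data as binary digits from the sign of the slope
    between adjacent points: 1 for rising, 0 otherwise."""
    return [1 if b > a else 0 for a, b in zip(series, series[1:])]

def shannon_entropy(symbols, word_len=3):
    """Shannon entropy (bits) of overlapping binary words; a simple
    stand-in for a condition-change measure over symbol streams."""
    words = Counter(tuple(symbols[i:i + word_len])
                    for i in range(len(symbols) - word_len + 1))
    total = sum(words.values())
    return -sum((c / total) * math.log2(c / total)
                for c in words.values())

steady = slope_symbols([1, 2, 3, 4, 5, 6, 7, 8])   # all rising
wobbly = slope_symbols([1, 3, 2, 4, 3, 5, 4, 6])   # alternating
print(shannon_entropy(steady), shannon_entropy(wobbly))
```

A drift in this entropy over successive data windows is the kind of condition-change signal the invention monitors against its forewarning threshold.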
Refined two-index entropy and multiscale analysis for complex system
NASA Astrophysics Data System (ADS)
Bian, Songhan; Shang, Pengjian
2016-10-01
As a fundamental concept in describing complex systems, the entropy measure has been proposed in various forms, such as Boltzmann-Gibbs (BG) entropy, one-index entropy, two-index entropy, sample entropy, permutation entropy, etc. This paper proposes a new two-index entropy Sq,δ, and we find the new two-index entropy is applicable to measuring the complexity of a wide range of systems in terms of randomness and fluctuation range. For a more complex system, the value of the two-index entropy is smaller and the correlation between the parameter δ and the entropy Sq,δ is weaker. By combining the refined two-index entropy Sq,δ with the scaling exponent h(δ), this paper analyzes the complexities of simulated series and effectively classifies several financial markets in various regions of the world.
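The one-index family that such two-index measures refine is the Tsallis entropy; a minimal sketch of that simpler case (the paper's two-index definition Sq,δ is not reproduced here):

```python
import math

def tsallis_entropy(probs, q):
    """One-index (Tsallis) entropy S_q = (1 - sum p_i^q) / (q - 1);
    recovers the Boltzmann-Gibbs/Shannon entropy in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        # Shannon limit, computed directly to avoid 0/0.
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

p = [0.5, 0.5]
print(tsallis_entropy(p, 1.0))  # Shannon: ln 2
print(tsallis_entropy(p, 2.0))  # 0.5
```

The index q tunes how strongly rare events are weighted, which is the kind of extra degree of freedom the paper's second index δ extends to fluctuation range.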
A fast recursive algorithm for molecular dynamics simulation
NASA Technical Reports Server (NTRS)
Jain, A.; Vaidehi, N.; Rodriguez, G.
1993-01-01
The present recursive algorithm for solving molecular systems' dynamical equations of motion employs internal variable models that reduce such simulations' computation time by an order of magnitude, relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N-cubed) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.
A Scalable Distributed Syntactic, Semantic, and Lexical Language Model
2012-09-01
Here pa(τ) denotes the set of parent states of τ. If the recursive factorization refers to a graph, then we have a Bayesian network (Lauritzen 1996)... Broadly speaking, however, the recursive factorization can refer to a representation more complicated than a graph with a fixed set of nodes and edges... The factored language (FL) model (Bilmes and Kirchhoff 2003) is close to the smoothing technique we propose here; the major difference is that FL...
A recursive algorithm for Zernike polynomials
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
The analysis of a function defined on a rotationally symmetric system, with either a circular or annular pupil, is discussed. In order to numerically analyze such systems it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm that can be used to generate the Zernike polynomials up to a given order is developed. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (epsilon, 1). The terms in the preceding row - the (J-1) row - up to the (N+1)th term are needed for generating the (J,N)th term. Thus, the algorithm generates an upper left-triangular table. The algorithm was implemented on a computer, together with the necessary support programs.
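The paper's scheme builds each row of the table by orthogonalization; as a rough illustration of recursive generation of Zernike radial polynomials, the sketch below instead uses a standard three-term recurrence, R_n^m = r(R_{n-1}^{|m-1|} + R_{n-1}^{m+1}) - R_{n-2}^m. The function name and conventions are ours, not the paper's.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def zernike_R(n, m, r):
    """Zernike radial polynomial R_n^m(r) via the three-term recurrence
    R_n^m = r*(R_{n-1}^{|m-1|} + R_{n-1}^{m+1}) - R_{n-2}^m."""
    m = abs(m)
    if m > n or (n - m) % 2:
        return 0.0          # vanishes by definition
    if n == 0:
        return 1.0          # R_0^0 = 1 anchors the recursion
    return (r * (zernike_R(n - 1, abs(m - 1), r) + zernike_R(n - 1, m + 1, r))
            - zernike_R(n - 2, m, r))
```

For example, zernike_R(4, 0, r) reproduces the classical 6r^4 - 6r^2 + 1, and all R_n^m evaluate to 1 at r = 1.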
A note on NMHV form factors from the Graßmannian and the twistor string
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meidinger, David; Nandan, Dhritiman; Penante, Brenda
In this note we investigate Graßmannian formulas for form factors of the chiral part of the stress-tensor multiplet in N = 4 superconformal Yang-Mills theory. We present an all-n contour for the G(3, n + 2) Graßmannian integral of NMHV form factors derived from on-shell diagrams and the BCFW recursion relation. In addition, we study other G(3, n + 2) formulas obtained from the connected prescription introduced recently. We find a recursive expression for all n and study its properties. For n ≥ 6, our formula has the same recursive structure as its amplitude counterpart, making its soft behaviour manifest. Finally, we explore the connection between the two Graßmannian formulations, using the global residue theorem, and find that it is much more intricate compared to scattering amplitudes.
A recursive vesicle-based model protocell with a primitive model cell cycle
NASA Astrophysics Data System (ADS)
Kurihara, Kensuke; Okura, Yusaku; Matsuo, Muneyuki; Toyota, Taro; Suzuki, Kentaro; Sugawara, Tadashi
2015-09-01
Self-organized lipid structures (protocells) have been proposed as an intermediate between nonliving material and cellular life. Synthetic production of model protocells can demonstrate the potential processes by which living cells first arose. While we have previously described a giant vesicle (GV)-based model protocell in which amplification of DNA was linked to self-reproduction, the ability of a protocell to recursively self-proliferate for multiple generations has not been demonstrated. Here we show that newborn daughter GVs can be restored to the status of their parental GVs by pH-induced vesicular fusion of daughter GVs with conveyer GVs filled with depleted substrates. We describe a primitive model cell cycle comprising four discrete phases (ingestion, replication, maturity and division), each of which is selectively activated by a specific external stimulus. The production of recursive self-proliferating model protocells represents a step towards eventual production of model protocells that are able to mimic evolution.
The TAR effect: when the ones who dislike become the ones who are disliked.
Gawronski, Bertram; Walther, Eva
2008-09-01
Four studies tested whether a source's evaluations of other individuals can recursively transfer to the source, such that people who like others acquire a positive valence, whereas people who dislike others acquire a negative valence (Transfer of Attitudes Recursively; TAR). Experiment 1 provides first evidence for TAR effects, showing recursive transfers of evaluations regardless of whether participants did or did not have prior knowledge about the (dis)liking source. Experiment 2 shows that previously but not subsequently acquired knowledge about targets that were (dis)liked by a source overrode TAR effects in a manner consistent with cognitive balance. Finally, Experiments 3 and 4 demonstrate that TAR effects are mediated by higher order propositional inferences (in contrast to lower order associative processes), in that TAR effects on implicit attitude measures were fully mediated by TAR effects on explicit attitude measures. Commonalities and differences between the TAR effect and previously established phenomena are discussed.
Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach
NASA Astrophysics Data System (ADS)
Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.
2011-03-01
We study condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical-ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations, as well as thermodynamic potentials and heat capacity, analytically and numerically in the whole temperature range.
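The paper's exact recursion concerns the interacting gas within the Bogoliubov model; for orientation, here is a minimal sketch of the analogous and well-known canonical recursion for an ideal Bose gas, Z_N = (1/N) Σ_{k=1}^{N} Z_1(kβ) Z_{N-k}(β). Function names are ours, and the single-particle levels are supplied by the caller.

```python
import math

def canonical_Z(N, beta, energies):
    """Canonical partition functions Z_0..Z_N for N ideal bosons with the
    given single-particle energy levels, via the standard recursion
    Z_N = (1/N) * sum_{k=1}^{N} Z_1(k*beta) * Z_{N-k}(beta)."""
    def Z1(b):
        # single-particle partition function at inverse temperature b
        return sum(math.exp(-b * e) for e in energies)
    Z = [1.0]  # Z_0 = 1 by convention
    for n in range(1, N + 1):
        Z.append(sum(Z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
    return Z
```

With two levels {0, 1} and two bosons, the recursion reproduces the direct state sum 1 + e^{-β} + e^{-2β}.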
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on the recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward-facing step is chosen as a test case, and the results obtained from detailed computations demonstrate that the proposed recursion-RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.
A note on NMHV form factors from the Graßmannian and the twistor string
Meidinger, David; Nandan, Dhritiman; Penante, Brenda; ...
2017-09-06
In this note we investigate Graßmannian formulas for form factors of the chiral part of the stress-tensor multiplet in N = 4 superconformal Yang-Mills theory. We present an all-n contour for the G(3, n + 2) Graßmannian integral of NMHV form factors derived from on-shell diagrams and the BCFW recursion relation. In addition, we study other G(3, n + 2) formulas obtained from the connected prescription introduced recently. We find a recursive expression for all n and study its properties. For n ≥ 6, our formula has the same recursive structure as its amplitude counterpart, making its soft behaviour manifest. Finally, we explore the connection between the two Graßmannian formulations, using the global residue theorem, and find that it is much more intricate compared to scattering amplitudes.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equations as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
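The invention runs several such estimators concurrently, one per linearly isolable parameter group. As a minimal illustration of the building block involved, here is a sketch of a scalar recursive least squares update; the function name and the one-parameter simplification are ours, not the patent's.

```python
def rls_scalar(data, lam=1.0, p0=1e6):
    """Scalar recursive least squares: estimate theta in y = theta * x from a
    stream of (x, y) pairs. lam is the forgetting factor; p0 is the initial
    covariance (large value = weak prior on theta)."""
    theta, P = 0.0, p0
    for x, y in data:
        K = P * x / (lam + x * P * x)   # gain
        theta += K * (y - x * theta)    # innovation update
        P = (P - K * x * P) / lam       # covariance update
    return theta
```

On noise-free data generated with theta = 2, the estimate converges to 2 after a few samples.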
EEG and MEG source localization using recursively applied (RAP) MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
1996-12-31
The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
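As a minimal illustration of the projection metric mentioned above, the sketch below computes the principal correlation for the simplest case of one-dimensional subspaces (the absolute cosine of the angle between two vectors). RAP-MUSIC uses the multidimensional generalization over subspace bases; the function name is ours.

```python
import math

def principal_correlation(a, b):
    """Principal correlation between the 1-D subspaces spanned by vectors a
    and b: |<a, b>| / (|a| |b|), the cosine of the principal angle."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return abs(dot) / (na * nb)
```

Identical subspaces give a correlation of 1; orthogonal subspaces give 0, which is why peaks of this metric flag candidate source locations.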
Rehbein, Pia; Brügemann, Kerstin; Yin, Tong; V Borstel, U König; Wu, Xiao-Lin; König, Sven
2013-10-01
A dataset of test-day records, fertility traits, and one health trait including 1275 Brown Swiss cows kept in 46 small-scale organic farms was used to infer relationships among these traits based on recursive Gaussian-threshold models. Test-day records included milk yield (MY), protein percentage (PROT-%), fat percentage (FAT-%), somatic cell score (SCS), the ratio of FAT-% to PROT-% (FPR), lactose percentage (LAC-%), and milk urea nitrogen (MUN). Female fertility traits were defined as the interval from calving to first insemination (CTFS) and success of a first insemination (SFI), and the health trait was clinical mastitis (CM). First, a tri-trait model was used which postulated the recursive effect of a test-day observation in the early period of lactation on liability to CM (LCM), and further the recursive effect of LCM on the following test-day observation. For CM and female fertility traits, a bi-trait recursive Gaussian-threshold model was employed to estimate the effects from CM to CTFS and from CM on SFI. The recursive effects from CTFS and SFI onto CM were not relevant, because CM was recorded prior to the measurements for CTFS and SFI. Results show that the posterior heritability for LCM was 0.05, and for all other traits, heritability estimates were in reasonable ranges, each with a small posterior SD. Lowest heritability estimates were obtained for female reproduction traits, i.e., h^2 = 0.02 for SFI and h^2 ≈ 0 for CTFS. Posterior estimates of genetic correlations between LCM and production traits (MY and MUN), and between LCM and somatic cell score (SCS), were large and positive (0.56-0.68). Results confirm the genetic antagonism between MY and LCM, and the suitability of SCS as an indicator trait for CM. Structural equation coefficients describe the impact of one trait on a second trait on the phenotypic pathway. Higher values for FAT-% and FPR were associated with a higher LCM.
The rate of change in FAT-% and in FPR in the ongoing lactation with respect to the previous LCM was close to zero. Estimated recursive effects between SCS and CM were positive, implying strong phenotypic impacts between both traits. Structural equation coefficients explained a detrimental impact of CM on female fertility traits CTFS and SFI. The cow-specific CM treatment had no significant impact on performance traits in the ongoing lactation. For most treatments, beta-lactam-antibiotics were used, but test-day SCS and production traits after the beta-lactam-treatment were comparable to those after other antibiotic as well as homeopathic treatments. Copyright © 2013 Elsevier B.V. All rights reserved.
Microcanonical entropy for classical systems
NASA Astrophysics Data System (ADS)
Franzosi, Roberto
2018-03-01
The entropy definition in the microcanonical ensemble is revisited. We propose a novel definition for the microcanonical entropy that resolves the debate on the correct definition of the microcanonical entropy. In particular, we show that this entropy definition fixes the problem inherent in the exact extensivity of the caloric equation. Furthermore, this entropy reproduces results which are in agreement with those predicted with the standard Boltzmann entropy when applied to macroscopic systems. On the contrary, the predictions obtained with the standard Boltzmann entropy and with the entropy we propose are different for small system sizes. Thus, we conclude that the Boltzmann entropy provides a correct description for macroscopic systems, whereas extremely small systems should be better described with the entropy that we propose here.
On S-mixing entropy of quantum channels
NASA Astrophysics Data System (ADS)
Mukhamedov, Farrukh; Watanabe, Noboru
2018-06-01
In this paper, an S-mixing entropy of quantum channels is introduced as a generalization of Ohya's S-mixing entropy. We investigate several properties of the introduced entropy. Moreover, certain relations between the S-mixing entropy and the existing map and output entropies of quantum channels are investigated as well. These relations allow us to find certain connections between separable states and the introduced entropy, and hence a sufficient condition to detect entangled states. Besides, entropies of qubit and phase-damping channels are calculated.
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
Moderate deviations-based importance sampling for stochastic recursive equations
Dupuis, Paul; Johnson, Dane
2017-11-17
Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
Dissociative Electron Attachment to Rovibrationally Excited Molecules
1987-08-31
obtained in some recent papers. In Sec. IV of the present paper we will obtain some general recursion relations among these matrix... From the generating function of Hermite polynomials, a general five-term recursion relation (32) is obtained which is valid for the matrix elements of... for the generation of the functions for increasing l. One convenient way to evaluate a Q_l function is to write it in terms of Gaussian hypergeometric
NASA Technical Reports Server (NTRS)
Nikravesh, Parviz E.; Gim, Gwanghum; Arabyan, Ara; Rein, Udo
1989-01-01
The formulation of a method known as the joint coordinate method for automatic generation of the equations of motion for multibody systems is summarized. For systems containing open or closed kinematic loops, the equations of motion can be reduced systematically to a minimum number of second order differential equations. The application of recursive and nonrecursive algorithms to this formulation, computational considerations and the feasibility of implementing this formulation on multiprocessor computers are discussed.
Quantile based Tsallis entropy in residual lifetime
NASA Astrophysics Data System (ADS)
Khammar, A. H.; Jahanshahi, S. M. A.
2018-02-01
Tsallis entropy is a generalization of type α of the Shannon entropy; it is a nonadditive entropy, unlike the Shannon entropy. Shannon entropy may be negative for some distributions, but Tsallis entropy can always be made nonnegative by choosing an appropriate value of α. In this paper, we derive the quantile form of this nonadditive entropy function in the residual lifetime, namely the residual quantile Tsallis entropy (RQTE), and obtain bounds for it in terms of Rényi's residual quantile entropy. Also, we obtain a relationship between RQTE and the concept of the proportional hazards model in the quantile setup. Based on the new measure, we propose a stochastic order and aging classes, and study their properties. Finally, we prove characterization theorems for some well-known lifetime distributions. It is shown that RQTE uniquely determines the parent distribution, unlike the residual Tsallis entropy.
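For reference, a minimal sketch of the (non-quantile) Tsallis entropy itself, S_q(p) = (1 - Σ p_i^q)/(q - 1) with index q playing the role of the abstract's α, showing that the Shannon entropy is recovered as q → 1. The paper works with the quantile residual form; the function name is ours.

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1) for a discrete
    distribution p; returns the Shannon entropy (natural log) at q = 1."""
    if abs(q - 1.0) < 1e-12:
        # limiting case: Shannon entropy
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

For a fair coin, S_2 = 0.5 while the q = 1 value is ln 2, illustrating how the index reshapes the measure.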
Recursive regularization step for high-order lattice Boltzmann methods
NASA Astrophysics Data System (ADS)
Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre
2017-09-01
A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step allows one to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
Recursive-operator method in vibration problems for rod systems
NASA Astrophysics Data System (ADS)
Rozhkova, E. V.
2009-12-01
Using linear differential equations with constant coefficients describing one-dimensional dynamical processes as an example, we show that the solutions of these equations and systems are related to the solution of the corresponding numerical recursion relations and one does not have to compute the roots of the corresponding characteristic equations. The arbitrary functions occurring in the general solution of the homogeneous equations are determined by the initial and boundary conditions or are chosen from various classes of analytic functions. The solutions of the inhomogeneous equations are constructed in the form of integro-differential series acting on the right-hand side of the equation, and the coefficients of the series are determined from the same recursion relations. The convergence of formal solutions as series of a more general recursive-operator construction was proved in [1]. In the special case where the solutions of the equation can be represented in separated variables, the power series can be effectively summed, i.e., expressed in terms of elementary functions, and coincide with the known solutions. In this case, to determine the natural vibration frequencies, one obtains algebraic rather than transcendental equations, which permits exactly determining the imaginary and complex roots of these equations without using the graphic method [2, pp. 448-449]. The correctness of the obtained formulas (differentiation formulas, explicit expressions for the series coefficients, etc.) can be verified directly by appropriate substitutions; therefore, we do not prove them here.
Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker-Jarvis, James
This paper is a study of entropy and its evolution in the time and frequency domains upon application of electromagnetic fields to materials. An understanding of entropy and its evolution in electromagnetic interactions bridges the boundaries between electromagnetism and thermodynamics. The approach used here is a Liouville-based statistical-mechanical theory. I show that the microscopic entropy is reversible and the macroscopic entropy satisfies an H theorem. The spectral entropy development can be very useful for studying the frequency response of materials. Using a projection-operator based nonequilibrium entropy, different equations are derived for the entropy and entropy production and are applied to the polarization, magnetization, and macroscopic fields. I begin by proving an exact H theorem for the entropy, progress to application of time-dependent entropy in electromagnetics, and then apply the theory to relevant applications in electromagnetics. The paper concludes with a discussion of the relationship of the frequency-domain form of the entropy to the permittivity, permeability, and impedance.
Entropy flow and entropy production in the human body in basal conditions.
Aoki, I
1989-11-08
Entropy inflow and outflow for the naked human body in basal conditions in the respiration calorimeter due to infrared radiation, convection, evaporation of water and mass-flow are calculated by use of the energetic data obtained by Hardy & Du Bois. Also, the change of entropy content in the body is estimated. The entropy production in the human body is obtained as the change of entropy content minus the net entropy flow into the body. The entropy production thus calculated becomes positive. The magnitude of entropy production per effective radiating surface area does not show any significant variation with subjects. The entropy production is nearly constant at calorimeter temperatures of 26-32 degrees C; the average in this temperature range is 0.172 J m^-2 s^-1 K^-1. The forced air currents around the human body and also clothing have almost no effect in changing the entropy production. Thus, the entropy production of the naked human body in basal conditions does not depend on its environmental factors.
Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando
2018-01-01
Objective: Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design: Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting: IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants: Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures: We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variabilities in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results: 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components. Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion: MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
Fragomeni, B O; Lourenco, D A L; Tsuruta, S; Masuda, Y; Aguilar, I; Misztal, I
2015-10-01
The purpose of this study was to examine accuracy of genomic selection via single-step genomic BLUP (ssGBLUP) when the direct inverse of the genomic relationship matrix (G) is replaced by an approximation of G^-1 based on recursions for young genotyped animals conditioned on a subset of proven animals, termed algorithm for proven and young animals (APY). With the efficient implementation, this algorithm has a cubic cost with proven animals and linear with young animals. Ten duplicate data sets mimicking a dairy cattle population were simulated. In a first scenario, genomic information for 20k genotyped bulls, divided in 7k proven and 13k young bulls, was generated for each replicate. In a second scenario, 5k genotyped cows with phenotypes were included in the analysis as young animals. Accuracies (average for the 10 replicates) in regular EBV were 0.72 and 0.34 for proven and young animals, respectively. When genomic information was included, they increased to 0.75 and 0.50. No differences between genomic EBV (GEBV) obtained with the regular G^-1 and the approximated G^-1 via the recursive method were observed. In the second scenario, accuracies in GEBV (0.76, 0.51 and 0.59 for proven bulls, young males and young females, respectively) were also higher than those in EBV (0.72, 0.35 and 0.49). Again, no differences between GEBV with the regular G^-1 and with recursions were observed. With the recursive algorithm, the number of iterations to achieve convergence was reduced from 227 to 206 in the first scenario and from 232 to 209 in the second scenario. Cows can be treated as young animals in APY without reducing the accuracy. The proposed algorithm can be implemented to reduce computing costs and to overcome current limitations on the number of genotyped animals in the ssGBLUP method. © 2015 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
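A minimal sketch of the degenerate functional H(p) = -Σ p_i log p_i together with a toy Pólya urn sampler, one of the self-reinforcing, history-dependent processes the paper analyzes. This illustrates only the ingredients; it does not compute the paper's S_EXT, S_IT, or S_MEP, and the names are ours.

```python
import math
import random

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i, the functional form shared by the
    Boltzmann-Gibbs, Shannon, and Jaynes notions for multinomial processes."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def polya_urn(steps, colors=2, seed=0):
    """Self-reinforcing Polya urn: draw a ball, return it with one extra of
    the same color; returns the final color fractions after `steps` draws."""
    rng = random.Random(seed)
    counts = [1] * colors
    for _ in range(steps):
        i = rng.choices(range(colors), weights=counts)[0]
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]
```

Because each draw reinforces its own color, repeated runs with different seeds converge to different limiting fractions, the hallmark of history dependence.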
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
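The recursive subdivision step described above can be sketched as follows. This toy version tests only cell corners against an inside(x, y) predicate, so it misses features smaller than a cell, and it omits the polygon clipping and binary-tree storage of the actual scheme; names are ours.

```python
def subdivide(cell, inside, max_depth, depth=0):
    """Recursively subdivide a square Cartesian cell (x, y, size) into four
    children wherever its corners straddle the body boundary given by
    inside(x, y); returns the list of leaf cells."""
    x, y, s = cell
    corners = [(x, y), (x + s, y), (x, y + s), (x + s, y + s)]
    flags = [inside(cx, cy) for cx, cy in corners]
    if depth == max_depth or all(flags) or not any(flags):
        return [cell]   # leaf: fully inside, fully outside, or depth cap hit
    h = s / 2.0
    leaves = []
    for dx in (0.0, h):
        for dy in (0.0, h):
            leaves += subdivide((x + dx, y + dy, h), inside, max_depth, depth + 1)
    return leaves
```

Starting from a single cell covering the domain, refinement concentrates automatically along the boundary, mirroring how the grid in the paper is generated from one root Cartesian cell.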
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2018-03-01
This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the prescribed requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
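A common form of such an event-triggered mechanism is a send-on-delta rule: a sensor transmits only when its measurement deviates from the last transmitted value by more than a threshold. The sketch below is illustrative of that general idea, not the paper's specific triggering condition:

```python
def event_triggered_stream(measurements, threshold):
    """Yield (index, value) only when the measurement deviates from the last
    transmitted value by more than `threshold` (a send-on-delta trigger;
    the paper's exact event condition may differ)."""
    last = None
    for k, y in enumerate(measurements):
        if last is None or abs(y - last) > threshold:
            last = y          # update the last-transmitted value
            yield k, y
```

With `threshold=0.1`, the stream `[0.0, 0.05, 0.2, 0.21, 1.0]` triggers only at indices 0, 2, and 4, so three of five samples are transmitted.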
Generating highly accurate prediction hypotheses through collaborative ensemble learning
NASA Astrophysics Data System (ADS)
Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco
2017-03-01
Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination.
Kleinhans, Sonja; Herrmann, Eva; Kohnen, Thomas; Bühren, Jens
2017-08-15
Background: Iatrogenic keratectasia is one of the most dreaded complications of refractive surgery. In most cases, keratectasia develops after refractive surgery of eyes suffering from subclinical stages of keratoconus with few or no signs. Unfortunately, there has been no reliable procedure for the early detection of keratoconus. In this study, we used binary decision trees (recursive partitioning) to assess their suitability for discrimination between normal eyes and eyes with subclinical keratoconus. Patients and Methods: The method of decision tree analysis was compared with discriminant analysis, which has shown good results in previous studies. Input data were 32 eyes of 32 patients with newly diagnosed keratoconus in the contralateral eye and preoperative data of 10 eyes of 5 patients with keratectasia after laser in-situ keratomileusis (LASIK). The control group was made up of 245 normal eyes after LASIK and 12-month follow-up without any signs of iatrogenic keratectasia. Results: Decision trees gave better accuracy and specificity than did discriminant analysis. The sensitivity of decision trees was lower than the sensitivity of discriminant analysis. Conclusion: On the basis of the patient population of this study, decision trees did not prove to be superior to linear discriminant analysis for the detection of subclinical keratoconus. Georg Thieme Verlag KG Stuttgart · New York.
Classical space-times from the S-matrix
NASA Astrophysics Data System (ADS)
Neill, Duff; Rothstein, Ira Z.
2013-12-01
We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin two particle. As an explicit example we derive the Schwarzschild space-time as a series in GN. At no point of the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other static space-times such as Kerr follow in a similar manner. Furthermore, given that the procedure is action independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint our methodology can also be utilized to calculate quantities relevant for the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
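The recursive binary partition idea can be sketched as follows: repeatedly bisect the point set at the median of alternating coordinates until the desired number of strata is reached, then draw one point per stratum. This is a simplified illustration under our own assumptions (median splits, one draw per stratum, at least `n` points available); the paper's partition rule and error bounds are more refined:

```python
import random

def stratified_sample(points, n, dim=0, rng=None):
    """Draw n points from `points` (tuples of equal length) by recursive
    binary partition: bisect at the median of coordinate `dim`, alternate
    dimensions, and pick one point per resulting stratum.
    Assumes len(points) >= n."""
    rng = rng or random.Random(0)
    if n <= 1:
        return [rng.choice(points)]
    pts = sorted(points, key=lambda p: p[dim])
    mid = len(pts) // 2
    nxt = (dim + 1) % len(pts[0])  # cycle through dimensions
    return (stratified_sample(pts[:mid], n // 2, nxt, rng) +
            stratified_sample(pts[mid:], n - n // 2, nxt, rng))
```

Because each stratum covers a region holding roughly `len(points)/n` of the data and contributes exactly one sample, the sample's empirical distribution tracks the original data's distribution, which is the property the paper formalizes via distances between probabilities.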
Liu, Zhigang; Han, Zhiwei; Zhang, Yang; Zhang, Qiaoge
2014-11-01
Multiwavelets possess better properties than traditional wavelets. Multiwavelet packet transformation has more high-frequency information. Spectral entropy can be applied as an analysis index to the complexity or uncertainty of a signal. This paper tries to define four multiwavelet packet entropies to extract the features of different transmission line faults, and uses a radial basis function (RBF) neural network to recognize and classify 10 fault types of power transmission lines. First, the preprocessing and postprocessing problems of multiwavelets are presented. Shannon entropy and Tsallis entropy are introduced, and their difference is discussed. Second, multiwavelet packet energy entropy, time entropy, Shannon singular entropy, and Tsallis singular entropy are defined as the feature extraction methods of transmission line fault signals. Third, the plan of transmission line fault recognition using multiwavelet packet entropies and an RBF neural network is proposed. Finally, the experimental results show that the plan with the four multiwavelet packet energy entropies defined in this paper achieves better performance in fault recognition. The performance with SA4 (symmetric antisymmetric) multiwavelet packet Tsallis singular entropy is the best among the combinations of different multiwavelet packets and the four multiwavelet packet entropies.
Quantification of fetal heart rate regularity using symbolic dynamics
NASA Astrophysics Data System (ADS)
van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.
2007-03-01
Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2^8 = 256 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns.
Using symbolic dynamics to examine the cardiovascular system may thus lead to new insight with respect to fetal development.
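For symbol strings such as the 8-bit sequences above, the approximate entropy can be computed with exact template matching (tolerance r = 0). A minimal sketch of the standard ApEn definition, not the study's specific pipeline:

```python
import math

def approximate_entropy(seq, m=2, r=0):
    """ApEn(m, r) for a symbolic (here binary) sequence; r=0 means templates
    must match exactly, the natural choice for symbol strings."""
    def phi(m):
        n = len(seq) - m + 1
        counts = []
        for i in range(n):
            t = seq[i:i + m]
            # fraction of length-m templates within tolerance r of template t
            c = sum(1 for j in range(n)
                    if max(abs(a - b) for a, b in zip(t, seq[j:j + m])) <= r)
            counts.append(c / n)
        return sum(math.log(c) for c in counts) / n
    return phi(m) - phi(m + 1)
```

A perfectly regular sequence (e.g. all zeros) has ApEn = 0, while irregular sequences score higher, which is what allows ApEn values to separate regular from irregular heart rate patterns.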
Structural modeling of carbonaceous mesophase amphotropic mixtures under uniaxial extensional flow.
Golmohammadi, Mojdeh; Rey, Alejandro D
2010-07-21
The extended Maier-Saupe model for binary mixtures of model carbonaceous mesophases (uniaxial discotic nematogens) under externally imposed flow, formulated in previous studies [M. Golmohammadi and A. D. Rey, Liquid Crystals 36, 75 (2009); M. Golmohammadi and A. D. Rey, Entropy 10, 183 (2008)], is used to characterize the effect of uniaxial extensional flow and concentration on phase behavior and structure of these mesogenic blends. The generic thermorheological phase diagram of the single-phase binary mixture, given in terms of temperature (T) and Deborah (De) number, shows the existence of four T-De transition lines that define regions that correspond to the following quadrupolar tensor order parameter structures: (i) oblate (perpendicular, parallel), (ii) prolate (perpendicular, parallel), (iii) scalene O(perpendicular, parallel), and (iv) scalene P(perpendicular, parallel), where the symbols (perpendicular, parallel) indicate alignment of the tensor order ellipsoid with respect to the extension axis. It is found that with increasing T the dominant component of the mixture exhibits weak deviations from the well-known pure species response to uniaxial extensional flow (uniaxial perpendicular nematic-->biaxial nematic-->uniaxial parallel paranematic). In contrast, the slaved component shows a strong deviation from the pure species response. This deviation is dictated by the asymmetric viscoelastic coupling effects emanating from the dominant component. Changes in conformation (oblate <==> prolate) and orientation (perpendicular <==> parallel) are effected through changes in pairs of eigenvalues of the quadrupolar tensor order parameter. The complexity of the structural sensitivity to temperature and extensional flow is a reflection of the dual lyotropic/thermotropic nature (amphotropic nature) of the mixture and their cooperation/competition. 
The analysis demonstrates that the simple structures (biaxial nematic and uniaxial paranematic) observed in pure discotic mesogens under uniaxial extensional flow are significantly enriched by the interaction of the lyotropic/thermotropic competition with the binary molecular architectures and with the quadrupolar nature of the flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Y; Zou, J; Murillo, P
Purpose: Chemo-radiation therapy (CRT) is widely used in treating patients with locally advanced non-small cell lung cancer (NSCLC). Determination of the likelihood of patient response to treatment and optimization of treatment regime is of clinical significance. Up to date, no imaging biomarker has reliably correlated to NSCLC patient survival rate. This pilot study is to extract CT texture information from tumor regions for patient survival prediction. Methods: Thirteen patients with stage II-III NSCLC were treated using CRT with a median dose of 6210 cGy. Non-contrast-enhanced CT images were acquired for treatment planning and retrospectively collected for this study. Texture analysis was applied in segmented tumor regions using the Local Binary Pattern method (LBP). By comparing its HU with neighboring voxels, the LBPs of a voxel were measured in multiple scales with different group radiuses and numbers of neighbors. The LBP histograms formed a multi-dimensional texture vector for each patient, which was then used to establish and test a Support Vector Machine (SVM) model to predict patients' one-year survival. The leave-one-out cross validation strategy was used recursively to enlarge the training set and derive a reliable predictor. The predictions were compared with the true clinical outcomes. Results: A 10-dimensional LBP histogram was extracted from the 3D segmented tumor region for each of the 13 patients. Using the SVM model with the leave-one-out strategy, only 1 out of 13 patients was misclassified. The experiments showed an accuracy of 93%, sensitivity of 100%, and specificity of 86%. Conclusion: Within the framework of a Support Vector Machine based model, the Local Binary Pattern method is able to extract a quantitative imaging biomarker in the prediction of NSCLC patient survival. More patients are to be included in the study.
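The basic LBP operation compares each pixel with its 8 neighbors to form an 8-bit code, then histograms the codes. A minimal 2-D sketch of that classic variant (the study used multi-scale LBPs with varying radii and neighbor counts, which this does not reproduce):

```python
def lbp_histogram(image):
    """8-neighbor local binary pattern histogram for a 2-D grayscale image
    given as a list of lists. Border pixels are skipped."""
    hist = [0] * 256
    h, w = len(image), len(image[0])
    # neighbor offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = image[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if image[i + di][j + dj] >= c:  # neighbor at-or-above center
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

On a uniform image every neighbor equals the center, so every interior pixel produces code 255; texture shows up as mass spread across other bins, and the resulting histogram is the feature vector fed to a classifier such as an SVM.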
Uniqueness and characterization theorems for generalized entropies
NASA Astrophysics Data System (ADS)
Enciso, Alberto; Tempesta, Piergiulio
2017-12-01
The requirement that an entropy function be composable is key: it means that the entropy of a compound system can be calculated in terms of the entropy of its independent components. We prove that, under mild regularity assumptions, the only composable generalized entropy in trace form is the Tsallis one-parameter family (which contains Boltzmann-Gibbs as a particular case). This result leads to the use of generalized entropies that are not of trace form, such as Rényi’s entropy, in the study of complex systems. In this direction, we also present a characterization theorem for a large class of composable non-trace-form entropy functions with features akin to those of Rényi’s entropy.
The rid-redundant procedure in C-Prolog
NASA Technical Reports Server (NTRS)
Chen, Huo-Yan; Wah, Benjamin W.
1987-01-01
C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as with many search methods that use backward chaining, a large number of redundant computations may be produced by recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate all redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the Vax 11/780 computer show that there is an order of magnitude improvement in the running time and solvable problem size.
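The redundancy problem in multi-recursive procedures is the same one that memoization solves in modern languages: cache each subproblem's result so it is computed only once. A sketch in Python (illustrative of the general idea, not of the 'rid-redundant' procedure itself):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem is computed once and cached; without the cache the
    # multi-recursive call tree recomputes fib(k) exponentially many times.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

`fib(30)` makes only 31 distinct calls with the cache, versus over a million without it, which is the same order-of-magnitude class of saving the abstract reports.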
Adaptive Control and Parameter Identification of a Doubly-Fed Induction Generator for Wind Power
2011-09-01
Computer Controlled Systems, Theory and Design, Third Edition, Prentice Hall, New Jersey, 1997. [27] R. G. Brown and P. Y. C. Hwang, Introduction to… with T_s as the sampling interval. From [26], the recursive estimate can be interpreted as a Kalman filter for the process… by substituting t with n. The recursive equations for the RLS can then be derived from the Kalman filter equations used in [27].
Attitude estimation of earth orbiting satellites by decomposed linear recursive filters
NASA Technical Reports Server (NTRS)
Kou, S. R.
1975-01-01
Attitude estimation of earth orbiting satellites (including Large Space Telescope) subjected to environmental disturbances and noises was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time systems and discrete-time systems are derived. By using this accurate estimation of the attitude of spacecrafts, state variable feedback controller may be designed to achieve (or satisfy) high requirements of system performance.
Efficient method for computing the electronic transport properties of a multiterminal system
NASA Astrophysics Data System (ADS)
Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio
2018-04-01
We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
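The innermost step of a two-probe recursive Green's function calculation can be illustrated on the simplest possible system: a single site coupled to two semi-infinite 1-D tight-binding leads, with the lead self-energy obtained by fixed-point iteration and the Landauer transmission from the Fisher-Lee formula. This is a toy sketch under our own assumptions (nearest-neighbor chain, hopping t, illustrative parameters), not the paper's multiprobe adaptive-partition method:

```python
def surface_g(E, t=1.0, eta=1e-6, mix=0.5, tol=1e-12, itmax=10000):
    """Retarded surface Green's function of a semi-infinite 1-D chain,
    from the self-consistency g = 1/(E + i*eta - t^2 g), with linear mixing."""
    z = E + 1j * eta
    g = -1j  # starting guess with the retarded sign, Im g < 0
    for _ in range(itmax):
        g_new = 1.0 / (z - t * t * g)
        if abs(g_new - g) < tol:
            return g_new
        g = mix * g_new + (1.0 - mix) * g
    return g

def transmission(E, eps=0.0, t=1.0):
    """Landauer transmission through one site of energy eps coupled to two
    identical 1-D leads: T = Gamma_L |G|^2 Gamma_R."""
    sigma = t * t * surface_g(E, t)            # lead self-energy
    G = 1.0 / (E - eps - 2.0 * sigma)          # central-region Green's function
    gamma = 1j * (sigma - sigma.conjugate())   # level broadening Gamma
    return (gamma * abs(G) ** 2 * gamma).real
```

At the band center (E = 0) the clean chain transmits perfectly, T = 1, while outside the band (|E| > 2t) the broadening vanishes and T drops to zero; a recursive sweep chains this step across many slices of a real device.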
Corona graphs as a model of small-world networks
NASA Astrophysics Data System (ADS)
Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi
2015-11-01
We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as Kirchhoff index. Furthermore, we study the spectra for the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.
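The corona operation that underlies such constructions can be illustrated with its simplest case, G ∘ K_1, which attaches one new pendant vertex to every vertex of G; iterating an operation of this kind yields a recursive corona family. This sketch is illustrative of the corona product in general, not the paper's exact recursive construction:

```python
def corona_with_k1(adj):
    """Corona product G ∘ K_1: attach one new pendant vertex to every vertex
    of G. `adj` is an adjacency list {vertex: set(neighbors)} with vertices
    labeled 0..n-1; new vertices get labels n..2n-1."""
    new = {v: set(nb) for v, nb in adj.items()}
    n = len(adj)
    for i, v in enumerate(list(adj)):
        u = n + i        # fresh pendant vertex for v
        new[u] = {v}
        new[v].add(u)
    return new
```

Starting from a triangle, one application doubles the vertex count from 3 to 6 and doubles the edge count from 3 to 6; repeated application is what produces the small-world scaling of path length and clustering studied analytically in the paper.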
Event Compression Using Recursive Least Squares Signal Processing.
1980-07-01
decimation of the Burst1 signal with and without all-pole prefiltering to reduce aliasing. Figures 3.32a-c and 3.33a-c show the same examples but with 4/1… When prefiltering was applied to reduce aliasing, we found that it did not improve the quality of the event-compressed signals. If filtering must be performed, all-pole filtering… (AD-A089 785, Massachusetts Inst of Tech, Cambridge, Research Lab of…)
The Lehmer Matrix and Its Recursive Analogue
2010-01-01
LU factorization of matrix A by considering det A = det U = ∏_{i=1}^{n} (2i−1)/i². The nth Catalan number is given in terms of binomial coefficients by C_n…
NASA Astrophysics Data System (ADS)
Schliesser, Jacob M.; Huang, Baiyu; Sahu, Sulata K.; Asplund, Megan; Navrotsky, Alexandra; Woodfield, Brian F.
2018-03-01
We have measured the heat capacities of several well-characterized bulk and nanophase Fe3O4-Co3O4 and Fe3O4-Mn3O4 spinel solid solution samples from which magnetic properties of transitions and third-law entropies have been determined. The magnetic transitions show several features common to effects of particle and magnetic domain sizes. From the standard molar entropies, excess entropies of mixing have been generated for these solid solutions and compared with configurational entropies determined previously by assuming appropriate cation and valence distributions. The vibrational and magnetic excess entropies for bulk materials are comparable in magnitude to the respective configurational entropies indicating that excess entropies of mixing must be included when analyzing entropies of mixing. The excess entropies for nanophase materials are even larger than the configurational entropies. Changes in valence, cation distribution, bonding and microstructure between the mixing ions are the likely sources of the positive excess entropies of mixing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ribeiro, T.; Baptista, R.; Kafka, S.
We present a multi-epoch time-resolved high-resolution optical spectroscopy study of the short-period (P_orb = 11.2 hr) eclipsing M0V+M5V RS CVn binary V405 Andromeda. By means of indirect imaging techniques, namely Doppler imaging, we study the surface activity features of the M0V component of the system. A modified version of a Doppler imaging code, which takes into account the tidal distortion of the surface of the star, is applied to the multi-epoch data set in order to provide indirect images of the stellar surface. The multi-epoch surface brightness distributions show a low intensity 'belt' of spots at latitudes ±40° and a noticeable absence of high latitude features or polar spots on the primary star of V405 Andromeda. They also reveal slow evolution of the spot distribution over ~4 yr. An entropy landscape procedure is used in order to find the set of binary parameters that lead to the smoothest surface brightness distributions. As a result, we find M_1 = 0.51 ± 0.03 M_⊙, M_2 = 0.21 ± 0.01 M_⊙, R_1 = 0.71 ± 0.01 R_⊙, and an inclination i = 65° ± 1°. The resulting systemic velocity is distinct for different epochs, raising the possibility of the existence of a third body in the system.
Naima, Z; Siro, T; Juan-Manuel, G D; Chantal, C; René, C; Jerome, D
2001-02-01
The influence of a hydrophilic carrier (PEG 6000) on the polymorphism of carbamazepine, an antiepileptic drug, was investigated in binary physical mixtures and solid dispersions by means of differential scanning calorimetry (DSC), thermal gravimetry, hot-stage microscopy (HSM), and X-ray diffractometry, respectively. This study also attempts to develop a method to calculate the eutectic composition more precisely. In rather ideal physical mixtures, carbamazepine was found as monoclinic Form III. In solid dispersions, the drug was found to crystallize as trigonal Form II; a eutectic invariant in the PEG 6000-rich composition domain (6% of carbamazepine mass) was evidenced by DSC experiments and confirmed by HSM observations. In the binary phase diagram the ideal carbamazepine liquidus curve was located at temperatures higher than the respective experimental ones. This suggests that the drug can be maintained in the liquid state in the temperature-mass fraction (T-x) region between the two carbamazepine liquidus curves. This indicates in turn that attractive interactions occur between carbamazepine and PEG 6000 chains. These interactions have also been claimed to prevent carbamazepine from degradation into iminostilbene (a compound resulting from the chemical degradation of carbamazepine which is postulated to be responsible for the idiosyncratic toxicity of the drug) and thought to lead to the crystallization of metastable Carbamazepine II from melt. The negative excess entropy for eutectic mixtures indicated that the drug crystals are finely dispersed in the bulk of polymer chains.
On the definition of a Monte Carlo model for binary crystal growth.
Los, J H; van Enckevort, W J P; Meekes, H; Vlieg, E
2007-02-01
We show that consistency of the transition probabilities in a lattice Monte Carlo (MC) model for binary crystal growth with the thermodynamic properties of a system does not guarantee the MC simulations near equilibrium to be in agreement with the thermodynamic equilibrium phase diagram for that system. The deviations remain small for systems with small bond energies, but they can increase significantly for systems with large melting entropy, typical for molecular systems. These deviations are attributed to the surface kinetics, which is responsible for a metastable zone below the liquidus line where no growth occurs, even in the absence of a 2D nucleation barrier. Here we propose an extension of the MC model that introduces a freedom of choice in the transition probabilities while staying within the thermodynamic constraints. This freedom can be used to eliminate the discrepancy between the MC simulations and the thermodynamic equilibrium phase diagram. Agreement is achieved for that choice of the transition probabilities yielding the fastest decrease of the free energy (i.e., largest growth rate) of the system at a temperature slightly below the equilibrium temperature. An analytical model is developed, which reproduces quite well the MC results, enabling a straightforward determination of the optimal set of transition probabilities. Application of both the MC and analytical model to conditions well away from equilibrium, giving rise to kinetic phase diagrams, shows that the effect of kinetics on segregation is even stronger than that predicted by previous models.
Trotta-Moreu, Nuria; Lobo, Jorge M
2010-02-01
Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking into account 19 climatic variables as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 ± 29% versus 18 ± 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
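The threshold-and-sum step described above is straightforward to sketch: each species' continuous suitability grid is binarized at a chosen threshold, and the binary maps are summed cell-wise. A minimal illustration with plain lists (grid shapes and the function name are ours):

```python
def richness_map(suitability_maps, threshold):
    """Convert each species' continuous suitability grid (values 0-100) into
    a presence/absence map at `threshold`, then sum the binary maps cell-wise
    to obtain a predicted species-richness grid."""
    rows = len(suitability_maps[0])
    cols = len(suitability_maps[0][0])
    rich = [[0] * cols for _ in range(rows)]
    for grid in suitability_maps:
        for i in range(rows):
            for j in range(cols):
                if grid[i][j] >= threshold:   # presence at this threshold
                    rich[i][j] += 1
    return rich
```

Running this over the 21 thresholds the study considered yields 21 candidate richness maps; the bias toward overestimation arises because commission errors in the individual binary maps accumulate in the sum.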
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
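The three candidate entropies differ only in how they aggregate the power sum Σ p_i^q. A minimal sketch of the Rényi and Tsallis families (the normalized Tsallis entropy is the Tsallis form divided by Σ p_i^q; function names are ours):

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy S_q^R = log(sum_i p_i^q) / (1 - q); q -> 1 gives Shannon."""
    if q == 1:
        return -sum(x * math.log(x) for x in p if x > 0)
    return math.log(sum(x ** q for x in p if x > 0)) / (1 - q)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q^T = (1 - sum_i p_i^q) / (q - 1); q -> 1 gives Shannon."""
    if q == 1:
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1 - sum(x ** q for x in p if x > 0)) / (q - 1)
```

For the uniform distribution on four outcomes at q = 2, the Rényi entropy is log 4 while the Tsallis entropy is 3/4; both maximize the same q-exponential family, which is why fitting data alone cannot distinguish them and a stability analysis is needed.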
On the entropy variation in the scenario of entropic gravity
NASA Astrophysics Data System (ADS)
Xiao, Yong; Bai, Shi-Yang
2018-05-01
In the scenario of entropic gravity, entropy varies as a function of the location of the matter, while the tendency to increase entropy appears as gravity. We concentrate on studying the entropy variation of a typical gravitational system with different relative positions between the mass and the gravitational source. The result is that the entropy of the system doesn't increase when the mass is displaced closer to the gravitational source. In this way it disproves the proposal of entropic gravity from thermodynamic entropy. It doesn't exclude the possibility that gravity originates from non-thermodynamic entropy like entanglement entropy.
Entropy and climate. I - ERBE observations of the entropy production of the earth
NASA Technical Reports Server (NTRS)
Stephens, G. L.; O'Brien, D. M.
1993-01-01
An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
Logarithmic black hole entropy corrections and holographic Rényi entropy
NASA Astrophysics Data System (ADS)
Mahapatra, Subhash
2018-01-01
The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.
Thermodynamic properties of tungsten
NASA Astrophysics Data System (ADS)
Grimvall, Göran; Thiessen, Maria; Guillermet, Armando Fernández
1987-11-01
Tungsten has several unusual thermodynamic properties, e.g., very high values of the melting point, the entropy of fusion, the expansion on melting and the lattice anharmonicity. These features are given a semiquantitative explanation, based on the electron density of states N(E). Our treatment includes a numerical calculation of the electronic heat capacity from N(E) and a calculation of the entropy Debye temperature Θ_S(T) from the vibrational part of the experimental heat capacity. Θ_S(T) decreases by 36% from 300 K to the melting temperature 3695 K, the largest drop in Θ_S for elemental metals. Recent quantum-mechanical ab initio calculations of the difference, Hβ/α, in Gibbs energy at T=0 K between the metastable fcc tungsten and the stable bcc phase yield Hβ/α=50±5 kJ/mol, which is much larger than the "experimental" values Hβ/α=10 and 19 kJ/mol derived from previous semiempirical analyses [the so-called calculation of phase diagrams ("CALPHAD") method] of binary phase diagrams containing tungsten. We have reanalyzed CALPHAD data, using the results of the first part of this paper. Because of the shapes of N(E) of α-W and β-W, some usually acceptable CALPHAD procedures give misleading results. We give several estimates of Hβ/α, using different assumptions about the hypothetical melting temperature Tβf of fcc W. The more realistic of our estimates gives Hβ/α=30 kJ/mol or larger, thus reducing considerably the previous discrepancy between CALPHAD and ab initio results. The physical picture emerging from this work should be of importance in refinements of the CALPHAD method.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
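Both quantifiers above converge asymptotically to the relative entropy of coherence C_r(ρ) = S(Δ(ρ)) − S(ρ), where Δ is full dephasing in the incoherent basis. For a pure state S(ρ) = 0, so C_r is just the Shannon entropy of the populations in that basis. A minimal sketch of that special case (not the paper's smoothed max/min quantities):

```python
import math

def shannon(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 1e-15)

def coherence_rel_entropy_pure(amplitudes):
    """Relative entropy of coherence C_r = S(Delta(rho)) - S(rho) for a
    pure state (S(rho) = 0): the Shannon entropy of the populations
    |c_i|^2 in the incoherent reference basis."""
    probs = [abs(c) ** 2 for c in amplitudes]
    total = sum(probs)
    return shannon([p / total for p in probs])

# Maximally coherent qubit |+> = (|0> + |1>)/sqrt(2): one bit of coherence.
c_plus = coherence_rel_entropy_pure([1 / math.sqrt(2), 1 / math.sqrt(2)])
# Incoherent basis state |0>: zero coherence.
c_zero = coherence_rel_entropy_pure([1.0, 0.0])
```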
Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie
2017-04-27
Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
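The conformational entropy of a single dihedral can be estimated from an MD angle histogram via S = −R Σ p_i ln p_i. The sketch below illustrates that standard estimator; the bin width and synthetic samples are arbitrary choices, and this is not the Dynameomics pipeline itself.

```python
import math

R_GAS = 8.314  # gas constant, J / (mol K)

def dihedral_entropy(angles_deg, bin_width=10.0):
    """Entropy S = -R * sum p_i ln p_i from a histogram of sampled
    dihedral angles (a common MD post-processing estimator)."""
    nbins = int(round(360.0 / bin_width))
    counts = [0] * nbins
    for a in angles_deg:
        counts[int((a % 360.0) / bin_width) % nbins] += 1
    n = len(angles_deg)
    return -R_GAS * sum(c / n * math.log(c / n) for c in counts if c > 0)

# A dihedral locked in one rotamer carries no entropy; a near-uniform
# sample approaches the maximum R * ln(36) for 10-degree bins.
locked = dihedral_entropy([60.0] * 1000)
uniform = dihedral_entropy([i * 0.36 for i in range(1000)])
```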
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches, which we combine via joint entropy for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N. symbolic entropy and base-scale entropy, and two local ones, namely the symbolizations of permutation and differential entropy, constitute four double symbolic joint entropies that achieve accurate complexity detection in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of healthy young subjects have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results prove that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
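The construction can be sketched as: apply one global static and one local dynamic symbolization to the same series, then take the Shannon entropy of the joint symbol pairs. The two symbolizations below (equal-width amplitude bins and the sign of the first difference) are simplified stand-ins for the base-scale and differential schemes named in the abstract, not their published definitions.

```python
import math
from collections import Counter

def static_symbols(x, n_levels=4):
    """Global static symbolization: equal-width amplitude quantization."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_levels or 1.0
    return [min(int((v - lo) / width), n_levels - 1) for v in x]

def dynamic_symbols(x):
    """Local dynamic symbolization: sign of the first difference."""
    return [1 if b > a else 0 for a, b in zip(x, x[1:])]

def joint_entropy(x):
    """Shannon entropy (bits) of the joint (static, dynamic) symbol pairs."""
    pairs = Counter(zip(static_symbols(x), dynamic_symbols(x)))
    n = sum(pairs.values())
    return -sum(c / n * math.log2(c / n) for c in pairs.values())

# Chaotic logistic-map series vs. a strictly periodic signal.
x, chaotic = 0.4, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)
periodic = [float(i % 2) for i in range(2000)]

jc, jp = joint_entropy(chaotic), joint_entropy(periodic)
```

As expected, the chaotic series yields a markedly higher joint entropy than the periodic one.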
Effect of entropy on anomalous transport in ITG-modes of magneto-plasma
NASA Astrophysics Data System (ADS)
Yaqub Khan, M.; Qaiser Manzoor, M.; Haq, A. ul; Iqbal, J.
2017-04-01
The ideal gas equation and S = c_v log(P/ρ) (where S is entropy, P is pressure and ρ is the mass density) define the interconnection of entropy with the temperature and density of plasma. Therefore, different phenomena relating plasma and entropy need to be investigated. By employing the Braginskii transport equations for a nonuniform electron-ion magnetoplasma, two new parameters, the entropy distribution function and the entropy gradient drift, are defined, a new dispersion relation is obtained, and the dependence of anomalous transport on entropy is also proved. Some results, like monotonicity, the entropy principle and the second law of thermodynamics, are proved with a new definition of entropy. This work will open new horizons in fusion processes, not only by controlling entropy in tokamak plasmas (particularly in the pedestal regions of the H-mode) and in space plasmas, but also in engineering sciences.
Quantifying and minimizing entropy generation in AMTEC cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, T.J.; Huang, C.
1997-12-31
Entropy generation in an AMTEC cell represents inherent power loss to the AMTEC cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is on-going at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum power cell designs. In many cases, various sources of entropy generation are interrelated such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
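The FOCUSS half of such hybrid methods is an iteratively reweighted minimum-norm solver: each pass solves x = W(AW)^+ b with the weight matrix W built from the previous solution, which progressively concentrates energy on a few sources. A small-scale sketch of that idea, with a toy lead-field matrix and no LORETA smoothing or shrinking step:

```python
import numpy as np

def focuss(A, b, iters=15, eps=1e-12):
    """FOCUSS-style reweighting for an underdetermined system A x = b:
    start from the minimum-norm solution, then repeatedly solve the
    weighted minimum-norm problem x = W (A W)^+ b, W = diag(|x_prev|)."""
    x = np.linalg.pinv(A) @ b                 # minimum-norm start
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)          # eps keeps W invertible
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

# Toy 3-sensor, 5-source problem (invented numbers, not a head model).
A = np.array([[1.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 2.0, 0.0])
x = focuss(A, b)
residual = np.linalg.norm(A @ x - b)   # each iterate still fits the data
```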
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process using about 400 data points.
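At the core of any ERLS scheme is the standard recursive least-squares update: gain computation, parameter correction, and covariance propagation with a forgetting factor. A plain sketch of that generic core on a noiseless linear system, not the paper's Wiener-model identifier:

```python
import random

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step with forgetting factor lam.
    theta, phi are lists of length n; P is an n x n list of lists."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [v / denom for v in Pphi]                         # gain vector
    err = y - sum(phi[i] * theta[i] for i in range(n))    # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P

# Identify y = 2*u1 - 3*u2 from noiseless regressor data.
random.seed(0)
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]        # large initial covariance
for _ in range(200):
    phi = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 2.0 * phi[0] - 3.0 * phi[1]
    theta, P = rls_update(theta, P, phi, y)
```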
Parsing recursive sentences with a connectionist model including a neural stack and synaptic gating.
Fedor, Anna; Ittzés, Péter; Szathmáry, Eörs
2011-02-21
It is supposed that humans are genetically predisposed to be able to recognize sequences of context-free grammars with centre-embedded recursion while other primates are restricted to the recognition of finite state grammars with tail-recursion. Our aim was to construct a minimalist neural network that is able to parse artificial sentences of both grammars in an efficient way without using the biologically unrealistic backpropagation algorithm. The core of this network is a neural stack-like memory where the push and pop operations are regulated by synaptic gating on the connections between the layers of the stack. The network correctly categorizes novel sentences of both grammars after training. We suggest that the introduction of the neural stack memory will turn out to be substantial for any biological 'hierarchical processor' and the minimalist design of the model suggests a quest for similar, realistic neural architectures.
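The grammatical distinction at stake can be made concrete with two tiny recognizers: the centre-embedded language a^n b^n requires a stack, while the tail-recursive language (ab)^n needs only a two-state finite automaton. These symbolic recognizers merely illustrate the two language classes; they are not the neural model.

```python
def accepts_center_embedded(s):
    """Recognize the context-free language a^n b^n (n >= 1) with an
    explicit stack, the role played by the neural stack above."""
    stack, seen_b = [], False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False      # an 'a' after a 'b' is illegal
            stack.append('a')
        elif ch == 'b':
            seen_b = True
            if not stack:
                return False      # more b's than a's
            stack.pop()
        else:
            return False
    return not stack and seen_b

def accepts_tail_recursive(s):
    """Recognize the regular language (ab)^n (n >= 1) with a two-state
    finite automaton: no stack needed."""
    state = 0                     # 0: expect 'a', 1: expect 'b'
    for ch in s:
        if state == 0 and ch == 'a':
            state = 1
        elif state == 1 and ch == 'b':
            state = 0
        else:
            return False
    return state == 0 and len(s) > 0
```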
A recursive vesicle-based model protocell with a primitive model cell cycle
Kurihara, Kensuke; Okura, Yusaku; Matsuo, Muneyuki; Toyota, Taro; Suzuki, Kentaro; Sugawara, Tadashi
2015-01-01
Self-organized lipid structures (protocells) have been proposed as an intermediate between nonliving material and cellular life. Synthetic production of model protocells can demonstrate the potential processes by which living cells first arose. While we have previously described a giant vesicle (GV)-based model protocell in which amplification of DNA was linked to self-reproduction, the ability of a protocell to recursively self-proliferate for multiple generations has not been demonstrated. Here we show that newborn daughter GVs can be restored to the status of their parental GVs by pH-induced vesicular fusion of daughter GVs with conveyer GVs filled with depleted substrates. We describe a primitive model cell cycle comprising four discrete phases (ingestion, replication, maturity and division), each of which is selectively activated by a specific external stimulus. The production of recursive self-proliferating model protocells represents a step towards eventual production of model protocells that are able to mimic evolution. PMID:26418735
Orhan, U.; Erdogmus, D.; Roark, B.; Oken, B.; Purwar, S.; Hild, K. E.; Fowler, A.; Fried-Oken, M.
2013-01-01
RSVP Keyboard™ is an electroencephalography (EEG) based brain computer interface (BCI) typing system, designed as an assistive technology for the communication needs of people with locked-in syndrome (LIS). It relies on rapid serial visual presentation (RSVP) and does not require precise eye gaze control. Existing BCI typing systems which use event-related potentials (ERPs) in EEG suffer from low accuracy due to low signal-to-noise ratio. Hence, RSVP Keyboard™ utilizes context-based decision making by incorporating a language model to improve the accuracy of letter decisions. To further improve the contribution of the language model, we propose recursive Bayesian estimation, which relies on non-committing string decisions, and conduct an offline analysis comparing it with the existing naïve Bayesian fusion approach. The results indicate the superiority of the recursive Bayesian fusion, and in the next generation of RSVP Keyboard™ we plan to incorporate this new approach. PMID:23366432
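Recursive Bayesian fusion of a language-model prior with successive evidence likelihoods amounts to repeated multiply-and-renormalize updates. The symbol set and all numbers below are invented for illustration; the real system derives its likelihoods from ERP classifier scores.

```python
def recursive_bayes_fusion(prior, likelihood_sequence):
    """Start from a language-model prior over candidate symbols and fold
    in one evidence likelihood vector per presentation, renormalizing
    after each update."""
    posterior = dict(prior)
    for likelihood in likelihood_sequence:
        for sym in posterior:
            posterior[sym] *= likelihood[sym]
        z = sum(posterior.values())
        posterior = {sym: p / z for sym, p in posterior.items()}
    return posterior

lm_prior = {'A': 0.5, 'B': 0.3, 'C': 0.2}        # language-model prior
evidence = [{'A': 0.2, 'B': 0.5, 'C': 0.3},      # two presentations, both
            {'A': 0.3, 'B': 0.6, 'C': 0.1}]      # favouring 'B'
posterior = recursive_bayes_fusion(lm_prior, evidence)
```

Even though the prior favours 'A', two consistent evidence updates shift the posterior to 'B'.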
Recursion Relations for Double Ramification Hierarchies
NASA Astrophysics Data System (ADS)
Buryak, Alexandr; Rossi, Paolo
2016-03-01
In this paper we study various properties of the double ramification hierarchy, an integrable hierarchy of Hamiltonian PDEs introduced in Buryak (Commun. Math. Phys. 336(3):1085-1107, 2015) using intersection theory of the double ramification cycle in the moduli space of stable curves. In particular, we prove a recursion formula that recovers the full hierarchy starting from just one of the Hamiltonians, the one associated to the first descendant of the unit of a cohomological field theory. Moreover, we introduce analogues of the topological recursion relations and the divisor equation both for the Hamiltonian densities and for the string solution of the double ramification hierarchy. This machinery is very efficient and we apply it to various computations for the trivial and Hodge cohomological field theories, and for the r-spin Witten classes. Moreover, we prove the Miura equivalence between the double ramification hierarchy and the Dubrovin-Zhang hierarchy for the Gromov-Witten theory of the complex projective line (extended Toda hierarchy).
A probabilistic, distributed, recursive mechanism for decision-making in the brain
Gurney, Kevin N.
2018-01-01
Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics. PMID:29614077
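A single-equation recursive Bayesian decision rule can be sketched as posterior updating plus a commitment threshold, in the spirit of multi-hypothesis sequential tests. The Poisson rates, observations, and threshold below are illustrative assumptions, not the paper's cortico-basal-ganglia model.

```python
import math

def sequential_decision(log_lik_fns, observations, threshold=0.95):
    """Update a log-posterior over hypotheses after each observation and
    commit once one posterior probability crosses the threshold.
    Returns (chosen hypothesis index or None, number of steps used)."""
    n = len(log_lik_fns)
    log_post = [math.log(1.0 / n)] * n            # uniform prior
    for t, obs in enumerate(observations, 1):
        log_post = [lp + f(obs) for lp, f in zip(log_post, log_lik_fns)]
        m = max(log_post)                          # normalize in log space
        z = m + math.log(sum(math.exp(lp - m) for lp in log_post))
        log_post = [lp - z for lp in log_post]
        best = max(range(n), key=lambda i: log_post[i])
        if math.exp(log_post[best]) >= threshold:
            return best, t
    return None, len(observations)

# Two Poisson-rate hypotheses for a spike count per window: 2 Hz vs 8 Hz.
def poisson_loglik(rate):
    return lambda k: k * math.log(rate) - rate - math.lgamma(k + 1)

obs = [7, 9, 8, 6, 10]     # counts clearly favouring the 8 Hz hypothesis
choice, steps = sequential_decision([poisson_loglik(2.0), poisson_loglik(8.0)], obs)
```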
NASA Technical Reports Server (NTRS)
Shareef, N. H.; Amirouche, F. M. L.
1991-01-01
A computational algorithmic procedure is developed and implemented for the dynamic analysis of a multibody system with rigid/flexible interconnected bodies. The algorithm takes into consideration the large rotation/translation and small elastic deformations associated with the rigid-body degrees of freedom and the flexibility of the bodies in the system respectively. Versatile three-dimensional isoparametric brick elements are employed for the modeling of the geometric configurations of the bodies. The formulation of the recursive dynamical equations of motion is based on the recursive Kane's equations, strain energy concepts, and the techniques of component mode synthesis. In order to minimize CPU-intensive matrix multiplication operations and speed up the execution process, the concepts of indexed arrays is utilized in the formulation of the equations of motion. A spin-up maneuver of a space robot with three flexible links carrying a solar panel is used as an illustrative example.
Face recognition using tridiagonal matrix enhanced multivariance products representation
NASA Astrophysics Data System (ADS)
Özay, Evrim Korkmaz
2017-01-01
This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm is constructed from recursions of certain arithmetic operations without requiring any iteration. The algorithm has been trained and tested with the ORL face image database, which comprises 400 grayscale images of 40 different people. Finally, TMEMPR's performance is compared with that of SVD.
WKB solutions of difference equations and reconstruction by the topological recursion
NASA Astrophysics Data System (ADS)
Marchal, Olivier
2018-01-01
The purpose of this article is to analyze the connection between Eynard-Orantin topological recursion and formal WKB solutions of an ħ-difference equation: Ψ(x+ħ) = e^{ħ d/dx} Ψ(x) = L(x;ħ)Ψ(x), with L(x;ħ) ∈ GL_2((C(x))[ħ]). In particular, we extend the notion of determinantal formulas and the topological type property proposed for formal WKB solutions of ħ-differential systems to this setting. We apply our results to a specific ħ-difference system associated to the quantum curve of the Gromov-Witten invariants of P^1, for which we are able to prove that the correlation functions are reconstructed from the Eynard-Orantin differentials computed from the topological recursion applied to the spectral curve y = cosh^{-1}(x/2). Finally, identifying the large-x expansion of the correlation functions proves a recent conjecture made by Dubrovin and Yang regarding a new generating series for the Gromov-Witten invariants of P^1.
Three applications of a bonus relation for gravity amplitudes
NASA Astrophysics Data System (ADS)
Spradlin, Marcus; Volovich, Anastasia; Wen, Congkao
2009-04-01
Arkani-Hamed et al. have recently shown that all tree-level scattering amplitudes in maximal supergravity exhibit exceptionally soft behavior when two supermomenta are taken to infinity in a particular complex direction, and that this behavior implies new non-trivial relations amongst amplitudes in addition to the well-known on-shell recursion relations. We consider the application of these new 'bonus relations' to MHV amplitudes, showing that they can be used quite generally to relate (n-2)!-term formulas typically obtained from recursion relations to (n-3)!-term formulas related to the original BGK conjecture. Specifically we provide (1) a direct proof of a formula presented by Elvang and Freedman, (2) a new formula based on one due to Bedford et al., and (3) an alternate proof of a formula recently obtained by Mason and Skinner. Our results also provide the first direct proof that the conjectured BGK formula, only very recently proven via completely different methods, satisfies the on-shell recursion.
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and '80s. Although Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, since they need an infinite number of steps to decide the exact stability of a filter, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order non-symmetric half-plane (NSHP) filters. Enough examples are given for both these types of filters as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.
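A condition underlying all such tests is that the denominator polynomial B(z1, z2) of the recursive filter have no zeros on the unit bicircle. The numeric sampling check below illustrates that necessary condition on two toy filters; it is not the two-step algebraic method the paper proposes.

```python
import cmath
import math

def min_abs_on_bicircle(b_coeffs, n=64):
    """Sample |B(z1, z2)| on the unit bicircle |z1| = |z2| = 1, where
    B(z1, z2) = sum_ij b[i][j] z1^-i z2^-j is the filter denominator.
    A (near-)zero minimum flags an unstable filter; a minimum bounded
    away from zero is consistent with stability (necessary condition
    only, and only up to the sampling resolution n)."""
    best = float('inf')
    for p in range(n):
        z1 = cmath.exp(-2j * math.pi * p / n)      # z1^-1 sweep
        for q in range(n):
            z2 = cmath.exp(-2j * math.pi * q / n)  # z2^-1 sweep
            val = sum(b * z1 ** i * z2 ** j
                      for i, row in enumerate(b_coeffs)
                      for j, b in enumerate(row))
            best = min(best, abs(val))
    return best

stable = [[1.0, -0.2], [-0.2, 0.1]]    # sum of |b_ij| beyond b00 is 0.5 < 1
unstable = [[1.0, -0.5], [-0.5, 0.0]]  # B(1, 1) = 0: zero on the bicircle
```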
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1990-01-01
The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
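The nondeterministic divide can be sketched as a recursive reduction whose split point is chosen arbitrarily; as the abstract notes, the result is independent of the split choices exactly when the merge operation is associative. In the sketch below the `choose` callback plays the role of the nondeterministic divide (this is an illustration of the idea, not the paper's formal diva-call semantics):

```python
import random

def diva_reduce(v, op, lo=0, hi=None, choose=None):
    """Divide-and-conquer reduction of v[lo:hi]: split at a point chosen
    by `choose` (or at the midpoint), reduce both halves recursively,
    and merge with op. Vector elements are only read at unit length."""
    if hi is None:
        hi = len(v)
    if hi - lo == 1:
        return v[lo]                      # unit-length access only
    mid = choose(lo + 1, hi - 1) if choose else (lo + hi) // 2
    return op(diva_reduce(v, op, lo, mid, choose),
              diva_reduce(v, op, mid, hi, choose))

data = list(range(1, 11))
rng = random.Random(42)
# Associative op (+): any split sequence yields the same result.
nondet = diva_reduce(data, lambda a, b: a + b, choose=rng.randint)
det = diva_reduce(data, lambda a, b: a + b)
```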
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of input and output variables are used as training samples to construct the model, which can reduce the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To reduce the high updating frequency of the model, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively and improve prediction accuracy by making use of process information, reflecting the process characteristics accurately.
Statistical learning and the challenge of syntax: Beyond finite state automata
NASA Astrophysics Data System (ADS)
Elman, Jeff
2003-10-01
Over the past decade, it has been clear that even very young infants are sensitive to the statistical structure of language input presented to them, and use the distributional regularities to induce simple grammars. But can such statistically-driven learning also explain the acquisition of more complex grammar, particularly when the grammar includes recursion? Recent claims (e.g., Hauser, Chomsky, and Fitch, 2002) have suggested that the answer is no, and that at least recursion must be an innate capacity of the human language acquisition device. In this talk evidence will be presented that indicates that, in fact, statistically-driven learning (embodied in recurrent neural networks) can indeed enable the learning of complex grammatical patterns, including those that involve recursion. When the results are generalized to idealized machines, it is found that the networks are at least equivalent to Push Down Automata. Perhaps more interestingly, with limited and finite resources (such as are presumed to exist in the human brain) these systems demonstrate patterns of performance that resemble those in humans.
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
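Two-level recursive co-kriging can be sketched as f_hi(x) ≈ ρ·f_lo(x) + δ(x): fit one Gaussian process to plentiful low-fidelity data and a second to the residuals at the scarce high-fidelity points. The functions, ρ, and sample sizes below are invented, and this scalar sketch omits the paper's vector-valued and non-separable covariance structure.

```python
import numpy as np

def rbf(X1, X2, ls=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = X1.reshape(-1, 1) - X2.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_fit_predict(X, y, Xs, jitter=1e-8):
    """Noise-free GP regression (interpolates the training data)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return rbf(Xs, X) @ alpha

# Invented low/high-fidelity pair with a known scale factor rho.
f_lo = lambda x: np.sin(8 * x)
f_hi = lambda x: 1.5 * np.sin(8 * x) + 0.2 * x

X_lo = np.linspace(0, 1, 25)     # plentiful cheap data
X_hi = np.linspace(0, 1, 6)      # scarce expensive data
rho = 1.5                        # assumed known here; usually estimated

# Level 1: GP on low-fidelity data; level 2: GP on the residual delta.
delta_hi = f_hi(X_hi) - rho * gp_fit_predict(X_lo, f_lo(X_lo), X_hi)
Xs = np.linspace(0, 1, 101)
pred = (rho * gp_fit_predict(X_lo, f_lo(X_lo), Xs)
        + gp_fit_predict(X_hi, delta_hi, Xs))
err = np.max(np.abs(pred - f_hi(Xs)))
```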
Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization
Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
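All LSTD variants share the same fixed point: solve A·theta = b with A = Σ φ(s)(φ(s) − γ·φ(s'))^T and b = Σ φ(s)·r. The batch tabular sketch below shows that fixed point on a tiny deterministic chain; the paper's contribution is the kernelized, recursive, regularized route to the same solution, which is not reproduced here.

```python
import numpy as np

def lstd(transitions, n_states, gamma=0.9):
    """Batch LSTD with tabular (one-hot) features: accumulate
    A = sum phi(s) (phi(s) - gamma*phi(s'))^T and b = sum phi(s) r,
    then solve A theta = b for the value-function weights."""
    A = np.zeros((n_states, n_states))
    b = np.zeros(n_states)
    for s, r, s_next in transitions:
        phi = np.eye(n_states)[s]
        phi_next = (np.eye(n_states)[s_next]
                    if s_next is not None else np.zeros(n_states))
        A += np.outer(phi, phi - gamma * phi_next)
        b += phi * r
    return np.linalg.solve(A, b)

# Deterministic chain 0 -> 1 -> 2 -> terminal, rewards 0, 0, 1:
# V(2) = 1, V(1) = gamma, V(0) = gamma^2.
data = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)] * 5
theta = lstd(data, 3)
```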
NASA Astrophysics Data System (ADS)
Chair, Noureddine
2014-02-01
We have recently developed methods for obtaining the exact two-point resistance of the complete graph minus N edges. We use these methods to obtain closed formulas for certain trigonometric sums that arise in connection with the one-dimensional lattice, in proving Scott's conjecture on the permanent of the Cauchy matrix, and in the perturbative chiral Potts model. The generalized trigonometric sums of the chiral Potts model are shown to satisfy recursion formulas that are transparent and direct, and that differ from those of Gervois and Mehta. By making a change of variables in these recursion formulas, the dimension of the space of conformal blocks of the SU(2) and SO(3) WZW models may be computed recursively. Our methods are then extended to compute the corner-to-corner resistance and the Kirchhoff index of the first non-trivial two-dimensional resistor network, 2×N. Finally, we obtain new closed formulas for variants of trigonometric sums, some of which appear in connection with number theory.
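Two-point resistances of the kind these trigonometric sums evaluate in closed form can be checked numerically from the graph Laplacian: R_ij = G_ii + G_jj − 2·G_ij with G = L^+. For the N-cycle with unit resistors, the closed form is R = k(N − k)/N for nodes a distance k apart; the sketch below verifies one instance (a generic check, not the paper's 2×N network).

```python
import numpy as np

def two_point_resistance(L, i, j):
    """Effective resistance from the Moore-Penrose pseudoinverse of the
    graph Laplacian: R_ij = G_ii + G_jj - 2 G_ij, G = L^+."""
    G = np.linalg.pinv(L)
    return G[i, i] + G[j, j] - 2 * G[i, j]

def cycle_laplacian(n):
    """Laplacian of the n-cycle with unit resistors on every edge."""
    L = 2 * np.eye(n)
    for k in range(n):
        L[k, (k + 1) % n] -= 1
        L[k, (k - 1) % n] -= 1
    return L

N = 6
r = two_point_resistance(cycle_laplacian(N), 0, 2)  # k = 2: R = 2*(6-2)/6
```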