Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
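The shift-based idea behind multiplication-free coding can be seen in a few lines: if the probability of the less probable symbol (LPS) is approximated by a power of two, the interval subdivision needs only a right shift instead of a multiply. The C sketch below illustrates that general principle under an assumed fixed P(LPS) of 2^-3 and with no renormalization (so the message must be short); it is not the quantization scheme of the report itself.

```c
/* Toy multiplication-free binary arithmetic coder.  Interval subdivision
   uses a shift: P(LPS) is approximated by 2^-K, so rLPS = range >> K
   needs no multiplier.  No renormalization is done, so the message must
   be short enough to fit in 64-bit precision.  This is a sketch of the
   shift-based idea only, not the cited report's quantization scheme. */
#include <stdint.h>
#include <stdio.h>

#define K 3                 /* P(LPS) ~ 2^-K = 1/8 */
#define N 16                /* message length in bits */

int main(void) {
    const int msg[N] = {0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0};  /* 1 = LPS */
    uint64_t low = 0, range = (uint64_t)1 << 62;

    /* ---- encode ---- */
    for (int i = 0; i < N; i++) {
        uint64_t rlps = range >> K;      /* LPS sub-range, shift only   */
        if (msg[i]) {                    /* LPS: upper part of interval */
            low  += range - rlps;
            range = rlps;
        } else {                         /* MPS: lower part of interval */
            range -= rlps;
        }
    }
    uint64_t code = low;                 /* any value in [low, low+range) */
    printf("code word: 0x%016llx\n", (unsigned long long)code);

    /* ---- decode: replay the same subdivision ---- */
    uint64_t dlow = 0, drange = (uint64_t)1 << 62;
    for (int i = 0; i < N; i++) {
        uint64_t rlps = drange >> K;
        if (code - dlow >= drange - rlps) {    /* falls in LPS sub-range */
            printf("1");
            dlow  += drange - rlps;
            drange = rlps;
        } else {
            printf("0");
            drange -= rlps;
        }
    }
    printf("  <- decoded message\n");
    return 0;
}
```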
Binary Arithmetic From Hariot (CA, 1600 A.D.) to the Computer Age.
ERIC Educational Resources Information Center
Glaser, Anton
This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Bezout, Barlow,…
Arithmetic operations in optical computations using a modified trinary number system.
Datta, A K; Basuray, A; Mukhopadhyay, S
1989-05-01
A modified trinary number (MTN) system is proposed in which any binary number can be expressed with the help of trinary digits (1, 0, -1). Arithmetic operations can be performed in parallel without the need for carry and borrow steps when binary digits are converted to the MTN system. An optical implementation of the proposed scheme that uses spatial light modulators and color-coded light signals is described.
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of AVS 2.0 [1] for two entropy-coding engines: the Context-based Binary Arithmetic Coding (CBAC) [2] used in AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC) [3] used in HEVC [4]. For a fair comparison, the CABAC is embedded in the AVS 2.0 reference code RD10.1, mirroring our previous work in which the CBAC was embedded in HEVC [5]. In the RD code, the rate estimation table is employed only for RDOQ; to reduce the computational complexity of the video encoder, we modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we simplified the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% relative to the CBAC; the CBAC thus appears slightly more efficient than the CABAC in AVS 2.0.
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefitting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing is a more time-consuming process, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory space, our FBC achieves an average speedup of more than 80% without any loss of search accuracy compared with state-of-the-art Manhattan hashing methods.
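The reason XOR can stand in for decimal Manhattan arithmetic is that per-dimension codes can be chosen so that Hamming distance equals level difference. A classical code with this property is the unary (thermometer) code, used in the sketch below; the paper's actual FBC construction may differ in detail.

```c
/* Manhattan distance via XOR + popcount.  Each dimension's quantization
   level q (0..8 here) is encoded as a thermometer (unary) code with q
   low-order ones; the Hamming distance between two such codes equals
   |q1 - q2|, so summing popcounts of XORs reproduces Manhattan distance
   with bit operations only.  A sketch of the principle, not necessarily
   the paper's FBC construction. */
#include <stdint.h>
#include <stdio.h>

static uint8_t thermometer(int q) {       /* q ones in the low bits */
    return (uint8_t)((1u << q) - 1u);
}

static int popcount8(uint8_t x) {
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

int main(void) {
    int a[3] = {2, 7, 4};                 /* quantized 3-D points */
    int b[3] = {5, 6, 0};

    int manhattan = 0, hamming = 0;
    for (int d = 0; d < 3; d++) {
        manhattan += a[d] > b[d] ? a[d] - b[d] : b[d] - a[d];
        hamming   += popcount8(thermometer(a[d]) ^ thermometer(b[d]));
    }
    printf("decimal Manhattan = %d, XOR/popcount = %d\n",
           manhattan, hamming);           /* both print 8 */
    return 0;
}
```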
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
Algorithm XXX: functions to support the IEEE standard for binary floating-point arithmetic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cody, W. J.; Mathematics and Computer Science
1993-12-01
This paper describes C programs for the support functions copysign(x,y), logb(x), scalb(x,n), nextafter(x,y), finite(x), and isnan(x) recommended in the Appendix to the IEEE Standard for Binary Floating-Point Arithmetic. In the case of logb, the modified definition given in the later IEEE Standard for Radix-Independent Floating-Point Arithmetic is followed. These programs should run without modification on most systems conforming to the binary standard.
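For readers unfamiliar with these support functions, the sketch below shows illustrative bit-level versions of two of them for IEEE single precision. These are expository reimplementations, not Cody's published code; in modern C the real functions are available in <math.h>.

```c
/* Illustrative bit-level versions of two of the Appendix functions,
   assuming a 32-bit IEEE single-precision float.  Sketches only; the
   standard library provides copysign() and isnan(). */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static float copysign_f(float x, float y) {
    uint32_t ux, uy;
    memcpy(&ux, &x, 4); memcpy(&uy, &y, 4);
    ux = (ux & 0x7fffffffu) | (uy & 0x80000000u);  /* splice y's sign bit */
    memcpy(&x, &ux, 4);
    return x;
}

static int isnan_f(float x) {
    uint32_t ux;
    memcpy(&ux, &x, 4);
    /* NaN: all exponent bits set and a nonzero fraction field */
    return (ux & 0x7f800000u) == 0x7f800000u && (ux & 0x007fffffu) != 0;
}

int main(void) {
    printf("%g\n", copysign_f(3.0f, -0.0f));   /* -3 */
    printf("%d\n", isnan_f(0.0f / 0.0f));      /* 1  */
    return 0;
}
```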
Redundant binary number representation for an inherently parallel arithmetic on optical computers.
De Biase, G A; Massini, A
1993-02-10
A simple redundant binary number representation suitable for digital-optical computers is presented. By means of this representation it is possible to build an arithmetic with carry-free parallel algebraic sums carried out in constant time and parallel multiplication in log N time. This redundant number representation naturally fits the 2's complement binary number system and permits the construction of inherently parallel arithmetic units that are used in various optical technologies. Some properties of this number representation and several examples of computation are presented.
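The carry-free property of redundant (signed-digit) binary can be demonstrated with the classic two-step addition rule on the digit set {-1, 0, 1}: each position emits a transfer chosen by inspecting only the position below it, so all positions can be processed in parallel. The sketch below implements the textbook rule, which may differ from the paper's exact algorithm.

```c
/* Carry-free addition in binary signed-digit form (digits -1, 0, +1).
   Position i emits a transfer t into position i+1 and keeps an interim
   digit u, chosen by peeking at position i-1 so that z_i = u_i + t_i
   never leaves {-1,0,1}.  No carry chain: all positions are independent. */
#include <stdio.h>

#define N 8   /* digit positions, LSB first; result has N+1 digits */

static void sd_add(const int x[N], const int y[N], int z[N + 1]) {
    int t[N + 1] = {0}, u[N + 1] = {0};
    for (int i = 0; i < N; i++) {              /* parallelizable loop */
        int p = x[i] + y[i];
        int neg_below = (i > 0) && (x[i-1] == -1 || y[i-1] == -1);
        if      (p ==  2) { t[i+1] =  1; u[i] =  0; }
        else if (p == -2) { t[i+1] = -1; u[i] =  0; }
        else if (p ==  1) { if (neg_below) { t[i+1] =  0; u[i] =  1; }
                            else           { t[i+1] =  1; u[i] = -1; } }
        else if (p == -1) { if (neg_below) { t[i+1] = -1; u[i] =  1; }
                            else           { t[i+1] =  0; u[i] = -1; } }
        /* p == 0: transfer and interim digit both stay 0 */
    }
    for (int i = 0; i <= N; i++) z[i] = u[i] + t[i];   /* no propagation */
}

static long value(const int *d, int n) {       /* digits -> integer */
    long v = 0;
    for (int i = n - 1; i >= 0; i--) v = 2 * v + d[i];
    return v;
}

int main(void) {
    int x[N] = { 1, 1, 0, -1, 0, 1, 0, 0};     /* 1+2-8+32  = 27 */
    int y[N] = {-1, 0, 1,  1, 1, 0, 0, 0};     /* -1+4+8+16 = 27 */
    int z[N + 1];
    sd_add(x, y, z);
    printf("%ld + %ld = %ld\n", value(x, N), value(y, N), value(z, N + 1));
    return 0;
}
```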
Two-bit trinary full adder design based on restricted signed-digit numbers
NASA Astrophysics Data System (ADS)
Ahmed, J. U.; Awwal, A. A. S.; Karim, M. A.
1994-08-01
A 2-bit trinary full adder using a restricted set of a modified signed-digit trinary numeric system is designed. When cascaded together to design a multi-bit adder machine, the resulting system is able to operate at a speed independent of the size of the operands. An optical non-holographic content addressable memory based on binary coded arithmetic is considered for implementing the proposed adder.
Extreme D'Hondt and round-off effects in voting computations
NASA Astrophysics Data System (ADS)
Konstantinov, M. M.; Pelova, G. B.
2015-11-01
The D'Hondt (or Jefferson) method and the Hare-Niemeyer (or Hamilton) method are widely used worldwide for seat allocation in proportional systems. Everything seems to be well known in this area, but this is not the case. For example, the D'Hondt method can violate the quota rule from above, yet this effect has not been analyzed as a function of the number of parties and/or the threshold used. Also, allocation methods are often implemented automatically as computer codes in machine arithmetic, in the belief that following the IEEE standards for double precision binary arithmetic guarantees correct results. Unfortunately, correct results may fail to be obtained not only in double precision arithmetic (which usually yields 15-16 true decimal digits) but for any relative precision of the underlying binary machine arithmetic. This paper deals with the following new issues: (1) finding conditions (the threshold in particular) under which the D'Hondt seat allocation maximally violates the quota rule, and (2) analyzing the possible influence of rounding errors in the automatic implementation of the Hare-Niemeyer method in machine arithmetic. Concerning the first issue, it is known that the maximal deviation of the D'Hondt allocation from upper quota for the Bulgarian proportional system (240 MPs and a 4% threshold) is 5; this fact was established in 1991. A classical treatment of voting issues is the monograph [1], while electoral problems specific to Bulgaria are treated in [2, 4]. The effect of the threshold on extreme seat allocations is also analyzed in [3]. Finally, we would like to stress that voting theory may sometimes be mathematically trivial but always has great political impact, which is a strong motivation for further investigations in this area.
Towards constructing multi-bit binary adder based on Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Zhang, Guo-Mao; Wong, Ieong; Chou, Meng-Ta; Zhao, Xin
2012-04-01
It has been proposed that the spatial excitable media can perform a wide range of computational operations, from image processing, to path planning, to logical and arithmetic computations. The realizations in the field of chemical logical and arithmetic computations are mainly concerned with single simple logical functions in experiments. In this study, based on Belousov-Zhabotinsky reaction, we performed simulations toward the realization of a more complex operation, the binary adder. Combining with some of the existing functional structures that have been verified experimentally, we designed a planar geometrical binary adder chemical device. Through numerical simulations, we first demonstrated that the device can implement the function of a single-bit full binary adder. Then we show that the binary adder units can be further extended in plane, and coupled together to realize a two-bit, or even multi-bit binary adder. The realization of chemical adders can guide the constructions of other sophisticated arithmetic functions, ultimately leading to the implementation of chemical computer and other intelligent systems.
Trinary optical logic processors using shadow casting with polarized light
NASA Astrophysics Data System (ADS)
Ghosh, Amal K.; Basuray, A.
1990-10-01
An optical implementation is proposed of the modified trinary number (MTN) system (Datta et al., 1989) in which any binary number can have arithmetic operations performed on it in parallel without the need for carry and borrow steps. The present method extends the lensless shadow-casting technique of Tanida and Ichioka (1983, 1985). Three kinds of spatial coding are used for encoding the trinary input states, whereas in the decoding plane three states are identified by no light and light with two orthogonal states of polarization.
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
Design of Arithmetic Circuits for Complex Binary Number System
NASA Astrophysics Data System (ADS)
Jamil, Tariq
2011-08-01
Complex numbers play an important role in various engineering applications. To represent these numbers efficiently for storage and manipulation, a (-1+j)-base complex binary number system (CBNS) has been proposed in the literature. In this paper, designs of nibble-size arithmetic circuits (adder, subtractor, multiplier, divider) are presented. These circuits can be incorporated within von Neumann and associative dataflow processors to achieve higher performance in both sequential and parallel computing paradigms.
QCA Gray Code Converter Circuits Using LTEx Methodology
NASA Astrophysics Data System (ADS)
Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan
2018-07-01
Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm for continuing computation into the deep sub-micron regime. QCA realizations of several multilevel arithmetic logic unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, no attention has been paid to QCA instantiations of the Gray code converters that are anticipated for use in 8-bit, 16-bit, 32-bit, or wider addressable machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. A defect-tolerant analysis of the two-input LTEx module is carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area- and delay-efficient B2G and G2B converters that can be used in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay, and Cost α for the n-bit converter layouts.
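The conversion logic that such B2G and G2B converters realize with Exclusive-OR modules has a compact bitwise form, shown in the software sketch below (this illustrates the function computed, not the QCA layout itself).

```c
/* Binary <-> Gray code conversion, the logic B2G/G2B converters realize
   with XOR gates.  b ^ (b >> 1) is the standard Binary-to-Gray mapping;
   the inverse accumulates a prefix XOR from the most significant bit. */
#include <stdint.h>
#include <stdio.h>

static uint32_t bin2gray(uint32_t b) { return b ^ (b >> 1); }

static uint32_t gray2bin(uint32_t g) {
    uint32_t b = g;
    for (int s = 1; s < 32; s <<= 1) b ^= b >> s;  /* prefix XOR */
    return b;
}

int main(void) {
    for (uint32_t b = 0; b < 8; b++)               /* 3-bit table */
        printf("bin %u -> gray %u -> bin %u\n",
               b, bin2gray(b), gray2bin(bin2gray(b)));
    return 0;
}
```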
Concurrent error detecting codes for arithmetic processors
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo-m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
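The residue-checking principle is simple to state: with check base m = 2^l - 1, the residue of a correct result must equal the mod-m combination of the operand residues. The sketch below illustrates concurrent checking of an adder for l = 3 (m = 7); it shows the principle only, not the paper's hardware design.

```c
/* Concurrent checking of an adder with a low-cost residue code:
   check length l = 3, check base m = 2^3 - 1 = 7.  A small mod-7 unit
   running alongside the adder predicts the residue of the sum; a
   mismatch signals a fault.  (Faults that alter the result by a
   multiple of m escape detection.) */
#include <stdint.h>
#include <stdio.h>

#define M 7u   /* check base 2^l - 1 */

static uint32_t checked_add(uint32_t a, uint32_t b, int inject_fault) {
    uint32_t sum = a + b;
    if (inject_fault) sum ^= 1u << 4;          /* flip one result bit  */
    uint32_t predicted = (a % M + b % M) % M;  /* residue of operands  */
    if (sum % M != predicted)
        printf("fault detected: residue %u != predicted %u\n",
               sum % M, predicted);
    return sum;
}

int main(void) {
    checked_add(1234, 5678, 0);   /* silent: residues agree */
    checked_add(1234, 5678, 1);   /* reports a fault        */
    return 0;
}
```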
Trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
One-step trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultra fast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
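The modeling step can be made concrete: if each bit position of the fixed-length codeword is treated as an independent binary source, the achievable rate is the sum of per-position binary entropies, which an arithmetic coder driven by those bit probabilities can approach. The sketch below computes that rate for an assumed toy symbol distribution and compares it with the true symbol entropy (the gap is the price of treating the bits as independent).

```c
/* Bit-wise modeling: each quantizer output gets a fixed-length code
   word, and each bit position is modeled independently.  The achievable
   rate is the sum of per-position binary entropies.  Toy geometric-like
   histogram assumed for illustration; compile with -lm. */
#include <math.h>
#include <stdio.h>

#define BITS 4

int main(void) {
    double p[16], z = 0;                       /* 16 quantizer levels */
    for (int s = 0; s < 16; s++) { p[s] = pow(0.7, s); z += p[s]; }
    for (int s = 0; s < 16; s++) p[s] /= z;

    double rate = 0;
    for (int j = 0; j < BITS; j++) {           /* per-bit-position model */
        double p1 = 0;
        for (int s = 0; s < 16; s++)
            if (s & (1 << j)) p1 += p[s];      /* P(bit j == 1) */
        if (p1 > 0 && p1 < 1)
            rate += -p1 * log2(p1) - (1 - p1) * log2(1 - p1);
    }
    double h = 0;                              /* true symbol entropy */
    for (int s = 0; s < 16; s++) h += -p[s] * log2(p[s]);
    printf("bit-wise rate %.3f bits/symbol vs entropy %.3f\n", rate, h);
    return 0;
}
```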
Error-correcting codes in computer arithmetic.
NASA Technical Reports Server (NTRS)
Massey, J. L.; Garcia, O. N.
1972-01-01
Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
NASA Astrophysics Data System (ADS)
Maiti, Anup Kumar; Nath Roy, Jitendra; Mukhopadhyay, Sourangshu
2007-08-01
In the field of optical computing and parallel information processing, several number systems have been used for different arithmetic and algebraic operations, so an efficient conversion scheme from one number system to another is very important. The modified trinary number (MTN) system has already taken on a significant role in carry- and borrow-free arithmetic operations. In this communication, we propose a tree-net-architecture-based all-optical scheme for converting a binary number to its MTN form. An optical switch using nonlinear material (NLM) plays an important role in the scheme.
An Experimental Comparison of an Intrinsically Programed Text and a Narrative Text.
ERIC Educational Resources Information Center
Senter, R. J.; And Others
The study compared three methods of instruction in binary and octal arithmetic, i.e., (1) Norman Crowder's branched programed text, "The Arithmetic of Computers," (2) another version of this text modified so that subjects could not see the instructional material while answering "branching" questions, and (3) a narrative text…
Exploring Hill Ciphers with Graphing Calculators.
ERIC Educational Resources Information Center
St. John, Dennis
1998-01-01
Explains how to code and decode messages using Hill ciphers which combine matrix multiplication and modular arithmetic. Discusses how a graphing calculator can facilitate the matrix and modular arithmetic used in the coding and decoding procedures. (ASK)
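A minimal version of the cipher fits in a few lines: encoding multiplies each digraph by a key matrix mod 26, and decoding multiplies by the inverse matrix mod 26, which exists only when the key determinant is coprime to 26. The sketch below uses the common textbook key [[3,3],[2,5]]; it is an illustration, not material from the article.

```c
/* Minimal 2x2 Hill cipher over a 26-letter alphabet.  Encoding is a
   matrix-vector product mod 26; decoding applies the inverse matrix
   mod 26 (det(K) = 9 is coprime to 26, so the inverse exists). */
#include <stdio.h>

static const int K[2][2]    = {{ 3,  3}, { 2, 5}};  /* key matrix       */
static const int Kinv[2][2] = {{15, 17}, {20, 9}};  /* K*Kinv = I mod 26 */

static void apply(const int m[2][2], const char *in, char *out, int n) {
    for (int i = 0; i < n; i += 2) {                /* process digraphs */
        int v0 = in[i] - 'A', v1 = in[i + 1] - 'A';
        out[i]     = (char)('A' + (m[0][0] * v0 + m[0][1] * v1) % 26);
        out[i + 1] = (char)('A' + (m[1][0] * v0 + m[1][1] * v1) % 26);
    }
    out[n] = '\0';
}

int main(void) {
    const char *msg = "HELP";                   /* even length, A..Z only */
    char enc[8], dec[8];
    apply(K, msg, enc, 4);
    apply(Kinv, enc, dec, 4);
    printf("%s -> %s -> %s\n", msg, enc, dec);  /* HELP -> HIAT -> HELP */
    return 0;
}
```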
Fault tolerant computing: A preamble for assuring viability of large computer systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1977-01-01
The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
Bit-Wise Arithmetic Coding For Compression Of Data
NASA Technical Reports Server (NTRS)
Kiely, Aaron
1996-01-01
Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.
The design of dual-mode complex signal processors based on quadratic modular number codes
NASA Astrophysics Data System (ADS)
Jenkins, W. K.; Krogmeier, J. V.
1987-04-01
It has been known for a long time that quadratic modular number codes admit an unusual representation of complex numbers which leads to complete decoupling of the real and imaginary channels, thereby simplifying complex multiplication and providing error isolation between the real and imaginary channels. This paper first presents a tutorial review of the theory behind the different types of complex modular rings (fields) that result from particular parameter selections, and then presents a theory for a 'dual-mode' complex signal processor based on the choice of augmented power-of-2 moduli. It is shown how a diminished-1 binary code, used by previous designers for the realization of Fermat number transforms, also leads to efficient realizations for dual-mode complex arithmetic for certain augmented power-of-2 moduli. Then a design is presented for a recursive complex filter based on a ROM/ACCUMULATOR architecture and realized in an augmented power-of-2 quadratic code, and a computer-generated example of a complex recursive filter is shown to illustrate the principles of the theory.
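The decoupling the paper builds on can be sketched with a small prime modulus: if p = 1 (mod 4), some j satisfies j*j = -1 (mod p), and the map (a, b) -> (a + jb, a - jb) mod p turns complex multiplication into two independent modular multiplications. The example below uses p = 13 and j = 5 for illustration; the paper's augmented power-of-2 moduli and diminished-1 coding are not reproduced.

```c
/* Quadratic residue number system (QRNS) sketch: with p = 13 and j = 5
   (5*5 = 25 = -1 mod 13), the complex residue (a, b) maps to the pair
   (a + j*b, a - j*b) mod p.  Multiplication is then channel-wise, with
   no cross-terms between the two channels. */
#include <stdio.h>

#define P 13
#define J 5

static int md(int x) { return ((x % P) + P) % P; }

int main(void) {
    int a = 2, b = 1, c = 3, d = 4;          /* (2+i) * (3+4i) */

    /* forward map into the two independent channels */
    int A = md(a + J * b), B = md(a - J * b);
    int C = md(c + J * d), D = md(c - J * d);

    /* channel-wise products: the only multiplications needed */
    int E = md(A * C), F = md(B * D);

    /* inverse map: re = (E+F)/2, im = (E-F)/(2j), all mod p */
    int inv2  = 7;                           /* 2*7  = 14 = 1 (mod 13) */
    int inv2j = 4;                           /* 10*4 = 40 = 1 (mod 13) */
    int re = md((E + F) * inv2), im = md((E - F) * inv2j);

    /* reference: (2+i)(3+4i) = 2 + 11i -> (2, 11) mod 13 */
    printf("QRNS product = (%d, %d) mod %d\n", re, im, P);
    return 0;
}
```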
Siemann, Julia; Petermann, Franz
2018-01-01
This review reconciles past findings on numerical processing with key assumptions of the most predominant model of arithmetic in the literature, the Triple Code Model (TCM). This is implemented by reporting diverse findings in the literature ranging from behavioral studies on basic arithmetic operations over neuroimaging studies on numerical processing to developmental studies concerned with arithmetic acquisition, with a special focus on developmental dyscalculia (DD). We evaluate whether these studies corroborate the model and discuss possible reasons for contradictory findings. A separate section is dedicated to the transfer of TCM to arithmetic development and to alternative accounts focusing on developmental questions of numerical processing. We conclude with recommendations for future directions of arithmetic research, raising questions that require answers in models of healthy as well as abnormal mathematical development. This review assesses the leading model in the field of arithmetic processing (Triple Code Model) by presenting knowledge from interdisciplinary research. It assesses the observed contradictory findings and integrates the resulting opposing viewpoints. The focus is on the development of arithmetic expertise as well as abnormal mathematical development. The original aspect of this article is that it points to a gap in research on these topics and provides possible solutions for future models.
One-Time Pad as a nonlinear dynamical system
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin
2012-11-01
The One-Time Pad (OTP) is the only known unbreakable cipher, proved mathematically by Shannon in 1949. In spite of several practical drawbacks of using the OTP, it continues to be used in quantum cryptography, DNA cryptography and even in classical cryptography when the highest form of security is desired (other popular algorithms like RSA, ECC, AES are not even proven to be computationally secure). In this work, we prove that the OTP encryption and decryption is equivalent to finding the initial condition on a pair of binary maps (Bernoulli shift). The binary map belongs to a family of 1D nonlinear chaotic and ergodic dynamical systems known as Generalized Luröth Series (GLS). Having established these interesting connections, we construct other perfect secrecy systems on the GLS that are equivalent to the One-Time Pad, generalizing for larger alphabets. We further show that OTP encryption is related to Randomized Arithmetic Coding - a scheme for joint compression and encryption.
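The mechanics of the cipher itself are one XOR per symbol, as in the sketch below with a hardcoded pad for illustration (a real pad must be truly random, as long as the message, and never reused). The paper's Bernoulli-shift/GLS reformulation of this operation is not reproduced here.

```c
/* One-Time Pad on bytes: ciphertext = message XOR key; decryption XORs
   the same key.  Illustrative only: the pad here is hardcoded, whereas
   perfect secrecy requires a truly random, never-reused pad. */
#include <stdio.h>
#include <string.h>

int main(void) {
    const unsigned char msg[] = "ATTACK";
    const unsigned char key[] = {0x3a, 0x91, 0x5c, 0x07, 0xe2, 0x48};
    size_t n = strlen((const char *)msg);
    unsigned char ct[16], pt[16];

    for (size_t i = 0; i < n; i++) ct[i] = msg[i] ^ key[i];  /* encrypt */
    for (size_t i = 0; i < n; i++) pt[i] = ct[i] ^ key[i];   /* decrypt */

    pt[n] = '\0';
    printf("recovered: %s\n", pt);   /* ATTACK */
    return 0;
}
```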
File compression and encryption based on LLS and arithmetic coding
NASA Astrophysics Data System (ADS)
Yu, Changzhi; Li, Hengjian; Wang, Xiyu
2018-03-01
We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. To achieve encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model achieves data encryption while attaining almost the same compression efficiency as arithmetic coding.
Secret Codes, Remainder Arithmetic, and Matrices.
ERIC Educational Resources Information Center
Peck, Lyman C.
This pamphlet is designed for use as enrichment material for able junior and senior high school students who are interested in mathematics. No more than a clear understanding of basic arithmetic is expected. Students are introduced to ideas from number theory and modern algebra by learning mathematical ways of coding and decoding secret messages.…
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
NASA Astrophysics Data System (ADS)
Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.
2016-02-01
The double random phase encryption (DRPE) method is a well-known all-optical architecture which has many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The compression and encryption technique consists of an innovative randomized arithmetic coder (RAC) that can compress the DRPE output planes well and at the same time enhance the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique can process video content and is standard compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
Multiplier Architecture for Coding Circuits
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
Multipliers based on new algorithm for Galois-field (GF) arithmetic regular and expandable. Pipeline structures used for computing both multiplications and inverses. Designs suitable for implementation in very-large-scale integrated (VLSI) circuits. This general type of inverter and multiplier architecture especially useful in performing finite-field arithmetic of Reed-Solomon error-correcting codes and of some cryptographic algorithms.
How Math Anxiety Relates to Number-Space Associations.
Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine
2016-01-01
Given the considerable prevalence of math anxiety, it is important to identify the factors contributing to it in order to improve mathematical learning. Research on math anxiety typically focusses on the effects of more complex arithmetic skills. Recent evidence, however, suggests that deficits in basic numerical processing and spatial skills also constitute potential risk factors of math anxiety. Given these observations, we determined whether math anxiety also depends on the quality of spatial-numerical associations. Behavioral evidence for a tight link between numerical and spatial representations is given by the SNARC (spatial-numerical association of response codes) effect, characterized by faster left-/right-sided responses for small/large digits respectively in binary classification tasks. We compared the strength of the SNARC effect between high and low math anxious individuals using the classical parity judgment task in addition to evaluating their spatial skills, arithmetic performance, working memory and inhibitory control. Greater math anxiety was significantly associated with stronger spatio-numerical interactions. This finding adds to the recent evidence supporting a link between math anxiety and basic numerical abilities and strengthens the idea that certain characteristics of low-level number processing such as stronger number-space associations constitute a potential risk factor of math anxiety.
Real time pipelined system for forming the sum of products in the processing of video data
NASA Technical Reports Server (NTRS)
Wilcox, Brian (Inventor)
1988-01-01
A 3-by-3 convolver utilizes 9 binary arithmetic units connected in cascade for multiplying 12-bit binary pixel values P_i, which are positive or two's complement binary numbers, by 5-bit magnitude (plus sign) weights W_i, which may be positive or negative. The weights are stored in registers including the sign bits. For a negative weight, the one's complement of the pixel value to be multiplied is formed at each unit by a bank of 17 exclusive-OR gates G_i under control of the sign of the corresponding weight W_i, and a correction is made by adding the sum of the absolute values of all the negative weights for each 3-by-3 kernel. Since this correction value remains constant as long as the weights are constant, it can be precomputed and stored in a register as a value to be added to the product PW of the first arithmetic unit.
Connaughton, Veronica M; Amiruddin, Azhani; Clunies-Ross, Karen L; French, Noel; Fox, Allison M
2017-05-01
A major model of the cerebral circuits that underpin arithmetic calculation is the triple-code model of numerical processing. This model proposes that the lateralization of mathematical operations is organized across three circuits: a left-hemispheric dominant verbal code; a bilateral magnitude representation of numbers and a bilateral Arabic number code. This study simultaneously measured the blood flow of both middle cerebral arteries using functional transcranial Doppler ultrasonography to assess hemispheric specialization during the performance of both language and arithmetic tasks. The propositions of the triple-code model were assessed in a non-clinical adult group by measuring cerebral blood flow during the performance of multiplication and subtraction problems. Participants were 17 adults aged between 18-27 years. We obtained laterality indices for each type of mathematical operation and compared these in participants with left-hemispheric language dominance. It was hypothesized that blood flow would lateralize to the left hemisphere during the performance of multiplication operations, but would not lateralize during the performance of subtraction operations. Hemispheric blood flow was significantly left lateralized during the multiplication task, but was not lateralized during the subtraction task. Compared to high spatial resolution neuroimaging techniques previously used to measure cerebral lateralization, functional transcranial Doppler ultrasonography is a cost-effective measure that provides a superior temporal representation of arithmetic cognition. These results provide support for the triple-code model of arithmetic processing and offer complementary evidence that multiplication operations are processed differently in the adult brain compared to subtraction operations.
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
Making the Tent Function Complex
ERIC Educational Resources Information Center
Sprows, David J.
2010-01-01
This note can be used to illustrate to the student such concepts as periodicity in the complex plane. The basic construction makes use of the Tent function which requires only that the student have some working knowledge of binary arithmetic.
Zhao, Hong-Quan; Kasai, Seiya; Shiratori, Yuta; Hashizume, Tamotsu
2009-06-17
A two-bit arithmetic logic unit (ALU) was successfully fabricated on a GaAs-based regular nanowire network with hexagonal topology. This fundamental building block of central processing units can be implemented on a regular nanowire network structure with simple circuit architecture based on graphical representation of logic functions using a binary decision diagram and topology control of the graph. The four-instruction ALU was designed by integrating subgraphs representing each instruction, and the circuitry was implemented by transferring the logical graph structure to a GaAs-based nanowire network formed by electron beam lithography and wet chemical etching. A path switching function was implemented in nodes by Schottky wrap gate control of nanowires. The fabricated circuit integrating 32 node devices exhibits the correct output waveforms at room temperature allowing for threshold voltage variation.
Defining the IEEE-854 floating-point standard in PVS
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
A significant portion of the ANSI/IEEE-854 Standard for Radix-Independent Floating-Point Arithmetic is defined in PVS (Prototype Verification System). Since IEEE-854 is a generalization of the ANSI/IEEE-754 Standard for Binary Floating-Point Arithmetic, the definition of IEEE-854 in PVS also formally defines much of IEEE-754. This collection of PVS theories provides a basis for machine-checked verification of floating-point systems. This formal definition illustrates that formal specification techniques are sufficiently advanced that it is reasonable to consider their use in the development of future standards.
Translation of one high-level language to another: COBOL to ADA, an example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.A.
1986-01-01
This dissertation discusses the difficulties encountered in, and explores possible solutions to, the task of automatically converting programs written in one HLL, COBOL, into programs written in another HLL, Ada, and still maintain readability. This paper presents at least one set of techniques and algorithms to solve many of the problems that were encountered. The differing view of records is solved by isolating those instances where it is a problem, then using the RENAMES option of Ada. Several solutions to doing the decimal-arithmetic translation are discussed. One method used is to emulate COBOL arithmetic in an arithmetic package. Another partial solution suggested is to convert the values to decimal-scaled integers and use modular arithmetic. Conversion to fixed-point type and floating-point type are the third and fourth methods. The work of another researcher, Bobby Othmer, is utilized to correct any unstructured code, to remap statements not directly translatable such as ALTER, and to pull together isolated code sections. Algorithms are then presented to convert this restructured COBOL code into Ada code with local variables, parameters, and packages. The input/output requirements are partially met by mapping them to a series of procedure calls that interface with Ada's standard input-output package. Several examples are given of hand translations of COBOL programs. In addition, a possibly new method is shown for measuring the readability of programs.
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes satisfy these requirements and have the lowest maximum sidelobes; however, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much longer codes. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
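The figure of merit driving such searches is the aperiodic autocorrelation sidelobe level. The sketch below computes it for the length-13 Barker code, whose peak is 13 and whose off-peak magnitudes never exceed 1, the benchmark that longer searched codes try to approach.

```c
/* Aperiodic autocorrelation of the length-13 Barker code.  The peak is
   13 and every off-peak magnitude is at most 1. */
#include <stdio.h>
#include <stdlib.h>

#define N 13

int main(void) {
    const int barker[N] = {1,1,1,1,1,-1,-1,1,1,-1,1,-1,1};
    int worst = 0;
    for (int shift = 0; shift < N; shift++) {
        int r = 0;
        for (int i = 0; i + shift < N; i++)
            r += barker[i] * barker[i + shift];
        printf("lag %2d: %3d\n", shift, r);
        if (shift > 0 && abs(r) > worst) worst = abs(r);
    }
    printf("peak = %d, max sidelobe = %d\n", N, worst);
    return 0;
}
```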
Pre-Algebra Groups. Concepts & Applications.
ERIC Educational Resources Information Center
Montgomery County Public Schools, Rockville, MD.
Discussion material and exercises related to pre-algebra groups are provided in this five chapter manual. Chapter 1 (mappings) focuses on restricted domains, order of operations (parentheses and exponents), rules of assignment, and computer extensions. Chapter 2 considers finite number systems, including binary operations, clock arithmetic,…
Moura, Octávio; Simões, Mário R; Pereira, Marcelino
2014-02-01
This study analysed the usefulness of the Wechsler Intelligence Scale for Children-Third Edition in identifying specific cognitive impairments that are linked to developmental dyslexia (DD) and the diagnostic utility of the most common profiles in a sample of 100 Portuguese children (50 dyslexic and 50 normal readers) between the ages of 8 and 12 years. Children with DD exhibited significantly lower scores in the Verbal Comprehension Index (except the Vocabulary subtest), Freedom from Distractibility Index (FDI) and Processing Speed Index subtests, with larger effect sizes than normal readers in Information, Arithmetic and Digit Span. The Verbal-Performance IQs discrepancies, Bannatyne pattern and the presence of FDI; Arithmetic, Coding, Information and Digit Span subtests (ACID) and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profiles (full or partial) in the lowest subtests revealed a low diagnostic utility. However, the receiver operating characteristic curve and the optimal cut-off score analyses of the composite ACID; FDI and SCAD profiles scores showed moderate accuracy in correctly discriminating dyslexic readers from normal ones. These results suggested that in the context of a comprehensive assessment, the Wechsler Intelligence Scale for Children-Third Edition provides some useful information about the presence of specific cognitive disabilities in DD. Practitioner Points. Children with developmental dyslexia revealed significant deficits in the Wechsler Intelligence Scale for Children-Third Edition subtests that rely on verbal abilities, processing speed and working memory. The composite Arithmetic, Coding, Information and Digit Span subtests (ACID); Freedom from Distractibility Index and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profile scores showed moderate accuracy in correctly discriminating dyslexics from normal readers. Wechsler Intelligence Scale for Children-Third Edition may provide some useful information about the presence of specific cognitive disabilities in developmental dyslexia.
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using gray and inverse gray codes: an n-bit gray code is appended with its n-bit inverse gray code to construct a 2n-length binary user code. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and its even multiples are available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct sequence CDMA communication over the AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
Optimization of Particle-in-Cell Codes on RISC Processors
NASA Technical Reports Server (NTRS)
Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.
1996-01-01
General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.
1981-08-01
meaning of dimensionality of achievement data. It also shows the importance of construct validity, even in criterion referenced testing of the cognitive ... aspect of performance, and that the traditional means of item analysis that are based on taking the variances of binary scores and content analysis
Fundamentals of Digital Logic.
ERIC Educational Resources Information Center
Noell, Monica L.
This course is designed to prepare electronics personnel for further training in digital techniques, presenting need to know information that is basic to any maintenance course on digital equipment. It consists of seven study units: (1) binary arithmetic; (2) boolean algebra; (3) logic gates; (4) logic flip-flops; (5) nonlogic circuits; (6)…
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content has become an urgent problem to content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermark, reversible watermark (which is also called lossless watermark, invertible watermark, erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit to expandable wavelet coefficient. The location map of all expanded coefficients will be coded by JBIG2 compression and these coefficient values will be losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image will also be embedded for authentication purpose.
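The expandable-coefficient embedding step can be sketched on a single integer: the value is doubled and the payload bit placed in the freed least significant bit, an exactly invertible operation. The sketch below shows only this step; the location map, the JBIG2 and arithmetic coding stages, and the SHA-256 hash are not reproduced.

```c
/* Expansion embedding on one integer (e.g., a wavelet coefficient):
   v' = 2v + b frees the LSB for the payload bit and is exactly
   invertible, which is what makes the watermark reversible. */
#include <stdio.h>

static int embed(int v, int b)    { return 2 * v + b; }          /* expand  */
static int extract_bit(int v)     { return v & 1; }              /* payload */
static int restore(int v)         { return (v - (v & 1)) / 2; }  /* original */

int main(void) {
    int coeff = -37;                /* toy integer wavelet coefficient */
    int w = embed(coeff, 1);
    printf("embedded: %d, bit: %d, restored: %d\n",
           w, extract_bit(w), restore(w));   /* -73, 1, -37 */
    return 0;
}
```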
Foundational numerical capacities and the origins of dyscalculia.
Butterworth, Brian
2010-12-01
One important cause of very low attainment in arithmetic (dyscalculia) seems to be a core deficit in an inherited foundational capacity for numbers. According to one set of hypotheses, arithmetic ability is built on an inherited system responsible for representing approximate numerosity. One account holds that this is supported by a system for representing exactly a small number (less than or equal to four) of individual objects. In these approaches, the core deficit in dyscalculia lies in either of these systems. An alternative proposal holds that the deficit lies in an inherited system for sets of objects and operations on them (numerosity coding) on which arithmetic is built. I argue that a deficit in numerosity coding, not in the approximate number system or the small number system, is responsible for dyscalculia. Nevertheless, critical tests should involve both longitudinal studies and intervention, and these have yet to be carried out.
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
An Input Routine Using Arithmetic Statements for the IBM 704 Digital Computer
NASA Technical Reports Server (NTRS)
Turner, Don N.; Huff, Vearl N.
1961-01-01
An input routine has been designed for use with FORTRAN or SAP coded programs which are to be executed on an IBM 704 digital computer. All input to be processed by the routine is punched on IBM cards as declarative statements of the arithmetic type resembling the FORTRAN language. The routine is 850 words in length. It is capable of loading fixed- or floating-point numbers, octal numbers, and alphabetic words, and of performing simple arithmetic as indicated on input cards. Provisions have been made for rapid loading of arrays of numbers in consecutive memory locations.
The Power of 2: How an Apparently Irregular Numeration System Facilitates Mental Arithmetic
ERIC Educational Resources Information Center
Bender, Andrea; Beller, Sieghard
2017-01-01
Mangarevan traditionally contained two numeration systems: a general one, which was highly regular, decimal, and extraordinarily extensive; and a specific one, which was restricted to specific objects, based on diverging counting units, and interspersed with binary steps. While most of these characteristics are shared by numeration systems in…
Rounding Technique for High-Speed Digital Signal Processing
NASA Technical Reports Server (NTRS)
Wechsler, E. R.
1983-01-01
Arithmetic technique facilitates high-speed rounding of 2's complement binary data. Conventional rounding of 2's complement numbers presents problems in high-speed digital circuits. Proposed technique consists of truncating K + 1 bits, then attaching a 1 in the least significant position. Mean output error is zero, eliminating the need to introduce a voltage offset at the input.
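A quick empirical check of the technique, under the assumption that the attached bit is a 1 and K = 3: plain truncation of K + 1 bits is always biased negative, while truncate-and-attach-1 centers the error (a mean of half an input LSB, essentially zero at the output scale).

```c
/* Truncate-and-set-LSB rounding: drop K+1 low bits, then make the new
   least significant bit a 1.  Measured over all K+1-bit residues, plain
   truncation is strongly biased; the trick centers the error. */
#include <stdio.h>

#define K 3

int main(void) {
    long drop = 1L << (K + 1);            /* 16 residue patterns     */
    double err_trunc = 0, err_trick = 0;
    for (long r = 0; r < drop; r++) {
        long x = 1024 + r;                /* arbitrary base value    */
        long t = (x >> (K + 1)) << (K + 1);        /* truncate       */
        long u = t | (1L << K);           /* attach 1 as new LSB     */
        err_trunc += (double)(t - x);
        err_trick += (double)(u - x);
    }
    printf("mean truncation error: %+.2f\n", err_trunc / drop);  /* -7.50 */
    printf("mean trick error:      %+.2f\n", err_trick / drop);  /* +0.50 */
    return 0;
}
```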
ERIC Educational Resources Information Center
Marine Corps, Washington, DC.
Targeted for grades 10 through adult, these military-developed curriculum materials consist of a student lesson book with text readings and review exercises designed to prepare electronic personnel for further training in digital techniques. Covered in the five lessons are binary arithmetic (number systems, decimal systems, the mathematical form…
Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.
Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong
2017-05-01
Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that capture the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise by solving a relaxed problem with quantization to obtain an approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash functions and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
Application of grammar-based codes for lossless compression of digital mammograms
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah
2006-01-01
A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar G variables. For the first issue, we discovered a feature that can simplify the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose using double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes, Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding, in compression ratio, and has error tolerance capabilities similar to LZW coding under similar circumstances.
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary code learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes under the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
Learning Compact Binary Face Descriptor for Face Recognition.
Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie
2015-10-01
Binary feature descriptors such as local binary patterns (LBP) and their variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, requiring strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes distribute evenly at each learned bin, so that the redundant information in the PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method that reduces the modality gap of heterogeneous faces at the feature level, making our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., code lengths usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
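For reference, the MacWilliams identity driving such computations is the standard one for a binary linear [n, k] code C with weight enumerator W_C (a textbook fact, stated here to make the method concrete):

    W_C(x, y) = \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)},
    \qquad
    W_{C^\perp}(x, y) = \frac{1}{|C|}\, W_C(x + y,\; x - y)

Applying the identity to the binary image of an RS code and reading off the smallest exponent of y with a nonzero coefficient gives the binary minimum distance compared across the symbol-to-bit mappings.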
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose a non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.; Collins, Stuart A., Jr.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
Habiby, S F; Collins, S A
1987-11-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
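The residue arithmetic representation underlying both versions of this work replaces an integer by its residues modulo pairwise-coprime bases, so multiplication proceeds digit-wise with no carries, which is what makes per-digit optical look-up tables possible. A minimal Python sketch (the moduli are chosen by us for illustration, not taken from the papers):

    # Residue number system (RNS) sketch: carry-free digit-wise arithmetic,
    # with reconstruction via the Chinese Remainder Theorem.
    from math import prod

    MODULI = (5, 7, 9, 11)                         # pairwise coprime, range = 3465

    def to_rns(x: int) -> tuple:
        return tuple(x % m for m in MODULI)

    def rns_mul(a: tuple, b: tuple) -> tuple:
        return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

    def from_rns(r: tuple) -> int:                 # Chinese Remainder Theorem
        M = prod(MODULI)
        x = 0
        for ri, mi in zip(r, MODULI):
            Mi = M // mi
            x += ri * Mi * pow(Mi, -1, mi)         # modular inverse of Mi mod mi
        return x % M

    assert from_rns(rns_mul(to_rns(42), to_rns(57))) == 42 * 57  # 2394 < 3465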
Cipora, Krzysztof; Nuerk, Hans-Christoph
2013-01-01
The SNARC effect (spatial-numerical association of response codes) describes the finding that larger numbers are responded to faster with the right hand and smaller numbers with the left hand. It is held in the literature that arithmetically skilled and nonskilled adults differ in the SNARC. However, the respective data are descriptive, and the decisive tests are nonsignificant. Possible reasons for this nonsignificance could be that in previous studies (a) very small samples were used, (b) there were too few repetitions, producing too little power and, consequently, reliabilities too small to reach conventional significance levels for the descriptive skill differences in the SNARC, and (c) general mathematical ability was assessed by students' field of study, while individual arithmetic skills were not examined. Therefore we used a much bigger sample, many more repetitions, and direct assessment of arithmetic skills to explore relations between the SNARC effect and arithmetic skills. Nevertheless, a difference in the SNARC effect between arithmetically skilled and nonskilled participants was not obtained. Bayesian analysis showed positive evidence for a true null effect, not just a power problem. Hence we conclude that the idea that arithmetically skilled and nonskilled participants generally differ in the SNARC effect is not warranted by our data.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong
2018-07-01
We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to our best knowledge, encodes a binary image into a QR code of identical size, and therefore may open a new way of extending the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, append an additional security level to the JTC. We present digital results that confirm our approach.
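Run-length encoding, the first preprocessing step above, is simple enough to state exactly. A minimal Python sketch of the RLE stage alone (the chaos scrambling, QR translation, and JTC encryption are omitted):

    # Minimal lossless RLE for a binary sequence, e.g. one image row.
    def rle_encode(bits):
        runs, current, length = [], bits[0], 0
        for b in bits:
            if b == current:
                length += 1
            else:
                runs.append((current, length))
                current, length = b, 1
        runs.append((current, length))
        return runs

    def rle_decode(runs):
        return [b for b, n in runs for _ in range(n)]

    row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
    assert rle_decode(rle_encode(row)) == row      # lossless, as required

Binary images with long runs of identical pixels compress strongly under this scheme, which is what lets the compressed data fit into a QR code of the same size as the input.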
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using the bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. Then, a binary look-up table is made to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding. The look-up table is encoded by Huffman coding. The results show that the LWT gives the best result compared to the SLT evaluated in this article. This transform is then considered to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing them by N (where N is the number of samples) to reduce the range of the TC. The normalised coefficients (NC) are then thresholded. After that, the procedure is the same as in the case of coefficients without normalisation. The results show that the compression ratio (CR) in the case of the LWT with normalisation is improved compared to that without normalisation.
An algorithm for the arithmetic classification of multilattices.
Indelicato, Giuliana
2013-01-01
A procedure for the construction and the classification of monoatomic multilattices in arbitrary dimension is developed. The algorithm allows one to determine the location of the points of all monoatomic multilattices with a given symmetry, or to determine whether two assigned multilattices are arithmetically equivalent. This approach is based on ideas from integral matrix theory, in particular the reduction to the Smith normal form, and can be coded to provide a classification software package.
Multi-phase SPH modelling of violent hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.
2015-11-01
This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
NASA Astrophysics Data System (ADS)
Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep
2017-05-01
A binary-to-octal and octal-to-binary code converter is a device that routes digital information from many inputs to many outputs. Any combinational logic circuit can be implemented by using external gates. In this paper, a binary-to-octal and octal-to-binary code converter is proposed using the electro-optic effect inside lithium-niobate-based Mach-Zehnder interferometers (MZIs). The MZI structures have a powerful capability to switch an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device and thereafter a simulation using MATLAB. The study is verified using the beam propagation method (BPM).
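Stripped of the optics, the logic being implemented is ordinary base conversion: each octal digit corresponds to exactly three binary digits. A reference model in Python (our illustration of the conversion logic only; the MZI switching itself is not modelled):

    # Binary <-> octal conversion: group bits in threes, one octal digit each.
    def binary_to_octal(bits: str) -> str:
        bits = bits.zfill(-(-len(bits) // 3) * 3)  # left-pad to a multiple of 3
        return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

    def octal_to_binary(digits: str) -> str:
        return "".join(format(int(d, 8), "03b") for d in digits)

    assert binary_to_octal("110101") == "65"
    assert octal_to_binary("65") == "110101"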
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds of the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
Ramanujan sums for signal processing of low-frequency noise.
Planat, Michel; Rosu, Haret; Perrine, Serge
2002-11-01
An aperiodic (low-frequency) spectrum may originate from the error term in the mean value of an arithmetical function such as the Möbius function or the Mangoldt function, which are coding sequences for prime numbers. In the discrete Fourier transform the analyzing wave is periodic and not well suited to represent the low-frequency regime. In its place we introduce a different signal processing tool based on the Ramanujan sums c_q(n), well adapted to the analysis of arithmetical sequences with many resonances p/q. The sums are quasiperiodic versus the time n and aperiodic versus the order q of the resonance. Different results arise from the use of this Ramanujan-Fourier transform in the context of arithmetical and experimental signals.
Ramanujan sums for signal processing of low-frequency noise
NASA Astrophysics Data System (ADS)
Planat, Michel; Rosu, Haret; Perrine, Serge
2002-11-01
An aperiodic (low-frequency) spectrum may originate from the error term in the mean value of an arithmetical function such as the Möbius function or the Mangoldt function, which are coding sequences for prime numbers. In the discrete Fourier transform the analyzing wave is periodic and not well suited to represent the low-frequency regime. In its place we introduce a different signal processing tool based on the Ramanujan sums c_q(n), well adapted to the analysis of arithmetical sequences with many resonances p/q. The sums are quasiperiodic versus the time n and aperiodic versus the order q of the resonance. Different results arise from the use of this Ramanujan-Fourier transform in the context of arithmetical and experimental signals.
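For reference, the Ramanujan sum appearing in both versions of this work is the standard arithmetical function (a known definition, stated here for convenience):

    c_q(n) \;=\; \sum_{\substack{a = 1 \\ \gcd(a, q) = 1}}^{q} e^{2\pi i\, a n / q}

Its quasiperiodicity in n and aperiodicity in the order q are what suit it to sequences with many resonances p/q.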
NASA Astrophysics Data System (ADS)
Schudlo, Larissa C.; Chau, Tom
2014-02-01
Objective. Near-infrared spectroscopy (NIRS) has recently gained attention as a modality for brain-computer interfaces (BCIs), which may serve as an alternative access pathway for individuals with severe motor impairments. For NIRS-BCIs to be used as a real communication pathway, reliable online operation must be achieved. Yet, only a limited number of studies have been conducted online to date. These few studies were carried out under a synchronous paradigm and did not accommodate an unconstrained resting state, precluding their practical clinical implication. Furthermore, the potentially discriminative power of spatiotemporal characteristics of activation has yet to be considered in an online NIRS system. Approach. In this study, we developed and evaluated an online system-paced NIRS-BCI which was driven by a mental arithmetic activation task and accommodated an unconstrained rest state. With a dual-wavelength, frequency domain near-infrared spectrometer, measurements were acquired over nine sites of the prefrontal cortex, while ten able-bodied participants selected letters from an on-screen scanning keyboard via intentionally controlled brain activity (using mental arithmetic). Participants were provided dynamic NIR topograms as continuous visual feedback of their brain activity as well as binary feedback of the BCI's decision (i.e. if the letter was selected or not). To classify the hemodynamic activity, temporal features extracted from the NIRS signals and spatiotemporal features extracted from the dynamic NIR topograms were used in a majority vote combination of multiple linear classifiers. Main results. An overall online classification accuracy of 77.4 ± 10.5% was achieved across all participants. The binary feedback was found to be very useful during BCI use, while not all participants found value in the continuous feedback provided. Significance. These results demonstrate that mental arithmetic is a potent mental task for driving an online system-paced NIRS-BCI. BCI feedback that reflects the classifier's decision has the potential to improve user performance. The proposed system can provide a framework for future online NIRS-BCI development and testing.
An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
Mal-Xtract: Hidden Code Extraction using Memory Analysis
NASA Astrophysics Data System (ADS)
Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah
2017-01-01
Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and extract the original code from a packed binary executable using memory analysis running in a software-emulated environment. Our experimental results show that at least 97% of the original code from various binary executables packed with different software packers could be extracted. The proposed method has also successfully extracted hidden code from recent malware family samples.
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs better than its prior-art binary counterpart: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
Lithium Niobate Arithmetic Logic Unit
1991-03-01
[Boot51] A. D. Booth, "A Signed Binary Multiplication Technique," Quarterly Journal of Mechanics and Applied Mathematics, Vol. IV, Part 2, 1951. [ChWi79] … Trans. Computers, Vol. C-26, No. 7, July 1977, pp. 681-687. [Wake81] John F. Wakerly, "Microcomputer Architecture and Programming," John Wiley and … The report reviews different division methods and discusses their applicability to simple bit-serial implementation; several different designs are then presented …
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate a pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval tasks.
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transformation, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these cases of images with different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one million to 75 million GIST, BoW, and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvement over the second-best tested method. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
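As we read the abstract, the spherical Hamming distance normalizes the usual XOR count by the number of commonly set bits (a set bit meaning the point falls inside the corresponding hypersphere); the exact form below is our assumption based on that description, not a verified transcription from the paper:

    # Assumed form of the spherical Hamming distance between two binary codes.
    def spherical_hamming(b1: int, b2: int) -> float:
        common = bin(b1 & b2).count("1")       # spheres containing both points
        differ = bin(b1 ^ b2).count("1")       # spheres containing exactly one
        return differ / common if common else float("inf")

    print(spherical_hamming(0b1101, 0b1011))   # 2 differing bits / 2 common ones = 1.0

The intuition is that sharing many enclosing hyperspheres is itself evidence of proximity, so the same raw Hamming distance counts for less when the common-ones count is high.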
DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information
ERIC Educational Resources Information Center
McCallister, Gary
2005-01-01
The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
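The purine/pyrimidine reading described above maps directly to a one-bit-per-base encoding. A tiny Python sketch (our illustration of the idea; the 1-for-purine convention is arbitrary):

    # Read a DNA sequence as a binary string: double-ring purines (A, G) vs
    # single-ring pyrimidines (C, T), per the classification in the abstract.
    PURINES = {"A", "G"}

    def to_binary(seq: str) -> str:
        return "".join("1" if base in PURINES else "0" for base in seq.upper())

    print(to_binary("ATGCCA"))                 # -> "101001"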
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits computational efficiency, since similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
NASA Astrophysics Data System (ADS)
Nasution, A. B.; Efendi, S.; Suwilo, S.
2018-04-01
The amount of data inserted in the form of audio samples that use 8 bits with the LSB algorithm affects the PSNR value, which reflects the change in image quality caused by the insertion (fidelity). In this research, audio samples are therefore inserted using 5 bits with the MLSB algorithm to reduce the amount of inserted data; beforehand, the audio samples are compressed with the Arithmetic Coding algorithm to reduce file size. In this research, encryption with the Triple DES algorithm is also applied to better secure the audio samples. The result of this research is a PSNR value above 50 dB, so it can be concluded that the image quality is still good because the PSNR exceeds 40 dB.
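The PSNR figure quoted above is the standard peak-signal-to-noise-ratio measure of stego-image fidelity. A minimal Python sketch of the computation (the 40 dB and 50 dB thresholds come from the abstract; the sample pixel values are ours):

    # PSNR between a cover image and its stego version (flat 8-bit pixel lists).
    import math

    def psnr(original, stego, peak: float = 255.0) -> float:
        mse = sum((o - s) ** 2 for o, s in zip(original, stego)) / len(original)
        return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

    cover = [120, 130, 140, 150]
    stego = [121, 130, 139, 150]               # small LSB-style perturbations
    print(f"{psnr(cover, stego):.1f} dB")      # ~51.1 dB: above the 50 dB mark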
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing, or binary code learning, has been recognized to accomplish efficient near-neighbor search, and has thus attracted broad interest in recent retrieval, vision, and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The resulting problem is then efficiently solved by an iterative procedure in which each iteration admits an analytical discrete solution, and which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing loss, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets, and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
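In proximal-linearized form, the reformulation sketched in the abstract can be written as follows (our paraphrase, with F the smooth loss, \delta_\Omega the indicator of the binary set \Omega = \{-1, 1\}^{n \times r}, and \rho a step parameter); since the proximal map of \delta_\Omega is the element-wise sign, each iteration has the analytical discrete solution the authors mention:

    \min_{B}\; F(B) + \delta_\Omega(B), \qquad
    B^{(t+1)} = \operatorname{sgn}\!\Big(B^{(t)} - \tfrac{1}{\rho}\,\nabla F\big(B^{(t)}\big)\Big)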
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.
NASA Astrophysics Data System (ADS)
Gao, Jian; Wang, Yongkang
2018-01-01
Structural properties of u-constacyclic codes over the ring F_p+u{F}_p are given, where p is an odd prime and u^2=1. Under a special Gray map from F_p+u{F}_p to F_p^2, some new non-binary quantum codes are obtained by this class of constacyclic codes.
Efficient algorithms for dilated mappings of binary trees
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf
1990-01-01
The problem addressed is to find a 1-1 mapping of the vertices of a binary tree onto those of a target binary tree such that the son of a node in the first binary tree is mapped onto a descendant of the image of that node in the second binary tree. There are two natural measures of the cost of this mapping. The first is the dilation cost, i.e., the maximum distance in the target binary tree between the images of vertices that are adjacent in the original tree. The other measure, the expansion cost, is defined as the number of extra nodes/edges to be added to the target binary tree in order to ensure a 1-1 mapping. An efficient algorithm to find a mapping of one binary tree onto another is described. It is shown that it is possible to minimize one cost of the mapping at the expense of the other. This problem arises when designing pipelined arithmetic logic units (ALUs) for special purpose computers. The pipeline is composed of ALU chips connected in the form of a binary tree. The operands to the pipeline can be supplied to the leaf nodes of the binary tree, which then process and pass the results up to their parents. The final result is available at the root. As each new application may require a distinct nesting of operations, it is useful to be able to find a good mapping of a new binary tree over an existing ALU tree. Another problem arises if every distinct required binary tree is known beforehand. Here it is useful to hardwire the pipeline in the form of a minimal supertree that contains all required binary trees.
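The dilation cost is easy to state concretely for complete binary trees stored heap-style (node i has children 2i and 2i + 1). The sketch below (representation and names are ours, for illustration only) computes the maximum target-tree distance between the images of adjacent source-tree vertices:

    # Dilation cost of a source->target node mapping between binary trees.
    def tree_distance(u: int, v: int) -> int:
        d = 0
        while u != v:                          # climb toward the common ancestor
            if u > v:
                u //= 2
            else:
                v //= 2
            d += 1
        return d

    def dilation_cost(mapping: dict) -> int:
        cost = 0
        for node, image in mapping.items():
            for child in (2 * node, 2 * node + 1):
                if child in mapping:           # edge (node, child) in source tree
                    cost = max(cost, tree_distance(image, mapping[child]))
        return cost

    print(dilation_cost({1: 1, 2: 2, 3: 3}))   # 1: sons map to sons
    print(dilation_cost({1: 1, 2: 4, 3: 6}))   # 2: sons map to grandchildren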
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary:
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References: [1] The CADNA library, URL address: http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
Numbers and Codes in Ancient Peru: The Quipu.
ERIC Educational Resources Information Center
Zepp, Raymond A.
1992-01-01
Describes the Quipu, a mathematical device invented by the Incas of Peru that used the base-10 number system to store information. Suggests ways of incorporating material on the quipu into the arithmetic class. (MDH)
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the intensity image retrieved with our proposed method is superior to that of the state-of-the-art work reported.
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
Efficiency of high-performance computing is in high demand, for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology which produces high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input. RQL has a set of three basic gates. Series of reciprocal transmission lines are placed between each gate to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. A major drawback of Reciprocal Quantum Logic is area, because of the lack of a proper power supply; to achieve a proper power supply, splitters are needed, which occupy a large area. Distributed arithmetic uses vector-vector multiplication in which one vector is constant and the other is a signed variable; each word acts as a binary number, and the words are rearranged and mixed to form a distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
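The substring pigeonhole argument behind this construction can be sketched briefly: if two codes differ in at most r bits and are split into m disjoint substrings with r < m, at least one substring must match exactly. A simplified Python sketch (parameters and the exact-probe simplification are ours; the paper generalizes to probing within a small substring radius):

    # Multi-index hashing sketch: index each of M disjoint substrings in its
    # own table; probe all tables, then verify candidates by full distance.
    from collections import defaultdict

    M, BITS = 4, 32                            # four tables over 8-bit substrings
    CHUNK = BITS // M
    tables = [defaultdict(list) for _ in range(M)]

    def substrings(code):
        return [(code >> (i * CHUNK)) & ((1 << CHUNK) - 1) for i in range(M)]

    def insert(code):
        for i, s in enumerate(substrings(code)):
            tables[i][s].append(code)

    def query(code, r):                        # exact r-neighbors when r < M
        cand = set()
        for i, s in enumerate(substrings(code)):
            cand.update(tables[i][s])          # one substring must match exactly
        return {c for c in cand if bin(c ^ code).count("1") <= r}

    insert(0x1234ABCD)
    insert(0xFFFFFFFF)
    print([hex(c) for c in query(0x1234ABCC, 3)])   # ['0x1234abcd']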
Vectors a Fortran 90 module for 3-dimensional vector and dyadic arithmetic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, B.C.
1998-02-01
A major advance contained in the new Fortran 90 language standard is the ability to define new data types and the operators associated with them. Writing computer code to implement computations with real and complex three-dimensional vectors and dyadics is greatly simplified if the equations can be implemented directly, without the need to code the vector arithmetic explicitly. The Fortran 90 module described here defines new data types for real and complex 3-dimensional vectors and dyadics, along with the common operations needed to work with these objects. Routines to allow convenient initialization and output of the new types are also included. In keeping with the philosophy of data abstraction, the details of the implementation of the data types are maintained private, and the functions and operators are made generic to simplify the combining of real, complex, single- and double-precision vectors and dyadics.
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training, linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
Interactive Exploration for Continuously Expanding Neuron Databases.
Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting
2017-02-15
This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, the continuously expanding neuron databases provide a rich source of information to associate neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when adding new neurons. To solve this problem, we extend binary coding with online updating schemes, which only consider the newly added neurons and update the model on-the-fly, without accessing the whole neuron database. At the fine-grained level, we introduce domain experts/users into the framework, who can give relevance feedback on the binary-coding-based retrieval results. This interactive strategy can improve retrieval performance by re-ranking the above coarse results, where we design a new similarity measure and take the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit-plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable to those of existing coders, and superior for some images with low correlation.
Modified signed-digit arithmetic based on redundant bit representation.
Huang, H; Itoh, M; Yatagai, T
1994-09-10
Fully parallel modified signed-digit arithmetic operations are realized based on the proposed redundant bit representation of the digits. A new truth-table minimization technique is presented based on redundant-bit-representation coding. It is shown that only 34 minterms are sufficient for implementing one-step modified signed-digit addition and subtraction with this new representation. Two optical implementation schemes, correlation and matrix multiplication, are described. Experimental demonstrations of the correlation architecture are presented. Both architectures use fixed minterm masks for arbitrary-length operands, taking full advantage of the parallelism of the modified signed-digit number system and of optics.
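The redundancy that enables carry-free operation comes from the signed digit set {-1, 0, 1} with binary weights: most values have several encodings, so sums can be rewritten locally without propagating carries. A small Python illustration of the representation itself (the optical truth-table realization is not modelled):

    # Value of a modified signed-digit string (most significant digit first),
    # digits drawn from {-1, 0, 1} with binary place weights.
    def msd_value(digits) -> int:
        value = 0
        for d in digits:
            value = 2 * value + d
        return value

    # Two redundant encodings of the number 3:
    assert msd_value([0, 1, 1]) == 3           # 0*4 + 1*2 + 1*1
    assert msd_value([1, 0, -1]) == 3          # 1*4 + 0*2 - 1*1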
Digital plus analog output encoder
NASA Technical Reports Server (NTRS)
Hafle, R. S. (Inventor)
1976-01-01
The disclosed encoder is adapted to produce both digital and analog output signals corresponding to the angular position of a rotary shaft, or the position of any other movable member. The digital signals comprise a series of binary signals constituting a multidigit code word which defines the angular position of the shaft with a degree of resolution which depends upon the number of digits in the code word. The basic binary signals are produced by photocells actuated by a series of binary tracks on a code disc or member. The analog signals are in the form of a series of ramp signals which are related in length to the least significant bit of the digital code word. The analog signals are derived from sine and cosine tracks on the code disc.
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64^3 are contained within the core of a 2-million-word (64-bit) memory. Timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
PopCORN: Hunting down the differences between binary population synthesis codes
NASA Astrophysics Data System (ADS)
Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.
2014-02-01
Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org
ERIC Educational Resources Information Center
Weisberg, Phyllis G.
1987-01-01
The article offers practical games and "tricks" to help remediate deficits in arithmetic and mathematics at the elementary school level. Games include using the fingers to calculate, lattice multiplication, dividing paper into equal columns, square games, code cracking games, and fraction dominoes. (Author/DB)
On the existence of binary simplex codes. [using combinatorial construction
NASA Technical Reports Server (NTRS)
Taylor, H.
1977-01-01
Using a simple combinatorial construction, the existence of a binary simplex code with m codewords is proved for all m ≥ 1. The problem of the shortest possible length is left open.
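One elementary combinatorial construction of an equidistant (simplex-style) binary code, not necessarily the one in the paper and certainly not length-optimal, indexes coordinates by unordered pairs of codeword labels. A sketch:

```python
from itertools import combinations

def equidistant_code(m: int) -> list[tuple[int, ...]]:
    """m binary codewords of length C(m, 2), pairwise Hamming distance 2(m-2).

    Coordinate {j, k} of codeword i is 1 exactly when i is in {j, k}:
    codewords i and j then differ on the 2(m-2) pairs containing
    exactly one of them.
    """
    pairs = list(combinations(range(m), 2))
    return [tuple(1 if i in p else 0 for p in pairs) for i in range(m)]

code = equidistant_code(5)
dists = {sum(a != b for a, b in zip(u, v))
         for u, v in combinations(code, 2)}
assert dists == {6}     # 2 * (5 - 2)
```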
Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P
2017-09-01
Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems, with its inherent massive parallelism and extremely high data density. It would be much more powerful and general-purpose when combined with the well-known algorithmic solutions that exist for conventional computing architectures, using a suitable ALU. Thus, a specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can bridge the gap between the two. An ALU must be able to perform all possible logic operations, including NOT, OR, AND, XOR, NOR, NAND, and XNOR, as well as compare and shift operations, and integer and floating-point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU using a sticker-based DNA model is proposed, with an experimental feasibility analysis. The novelty of this paper is manifold. First, the integer arithmetic operations performed here use 2's complement arithmetic, and the floating-point operations follow the IEEE 754 floating-point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any subsequent operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model are automated, the implementations proposed in this paper become highly suitable for designing a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.
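For reference, the 2's complement convention the ALU adopts can be captured in a few lines; a sketch of the arithmetic itself (ours, not the paper's DNA encoding):

```python
def to_twos_complement(value: int, bits: int) -> int:
    """Encode a signed integer into an unsigned bits-wide word."""
    return value & ((1 << bits) - 1)

def from_twos_complement(word: int, bits: int) -> int:
    """Decode an unsigned bits-wide word back to a signed integer."""
    return word - (1 << bits) if word & (1 << (bits - 1)) else word

def add(a: int, b: int, bits: int = 8) -> int:
    """Add two signed integers with 2's complement wrap-around."""
    word = (to_twos_complement(a, bits) + to_twos_complement(b, bits)) & ((1 << bits) - 1)
    return from_twos_complement(word, bits)

assert add(-3, 5) == 2
assert add(127, 1) == -128   # 8-bit overflow wraps, as in hardware
```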
POPCORN: A comparison of binary population synthesis codes
NASA Astrophysics Data System (ADS)
Claeys, J. S. W.; Toonen, S.; Mennekens, N.
2013-01-01
We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result, we find that when the assumptions are equalized, the results are similar. The main differences arise from differing physical input.
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
NASA Astrophysics Data System (ADS)
Kumar, Santosh
2017-07-01
A binary to binary-coded-decimal (BCD) converter is a basic building block for BCD processing. The last few decades have witnessed an exponential rise in applications of binary-coded data processing in the field of optical computing, and thus a growing demand for a suitable hardware platform. With this motivation, a novel design exploiting the salient features of the Mach-Zehnder interferometer (MZI) is presented in this paper. An optical 4-bit binary to binary-coded-decimal (BCD) converter utilizing the electro-optic effect in lithium niobate based MZIs is demonstrated. The MZI exhibits the property of switching the optical signal from one port to the other when an appropriate voltage is applied to its electrodes. The projected scheme is implemented using combinations of cascaded electro-optic (EO) switches. A theoretical description along with a mathematical formulation of the device is provided, and the operation is analyzed through the finite-difference beam propagation method (FD-BPM). The fabrication techniques to develop the device are also discussed.
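In the electronic domain, binary-to-BCD conversion is commonly done with the shift-and-add-3 ("double dabble") algorithm; a software sketch for orientation (the paper's optical design is a different, switch-based realization):

```python
def binary_to_bcd(value: int, digits: int = 2) -> list[int]:
    """Convert an unsigned integer to BCD digits (most significant first)
    using the shift-and-add-3 ('double dabble') algorithm."""
    reg = 0
    for i in range(value.bit_length() - 1, -1, -1):
        for d in range(digits):                  # add 3 to any digit >= 5
            if (reg >> (4 * d)) & 0xF >= 5:
                reg += 3 << (4 * d)
        reg = (reg << 1) | ((value >> i) & 1)    # shift in the next bit
    return [(reg >> (4 * d)) & 0xF for d in range(digits - 1, -1, -1)]

assert binary_to_bcd(0b1111) == [1, 5]           # 15 -> digits 1, 5
```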
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound quantifies the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes, using a multilevel concatenation scheme, for the frequency non-selective Rayleigh fading channel.
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised, rotation-invariant, compact, discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer representing latent concepts that dominate the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and of its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
Optimal periodic binary codes of lengths 28 to 64
NASA Technical Reports Server (NTRS)
Tyler, S.; Keston, R.
1980-01-01
Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
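Both figures of merit are immediate to compute for any candidate ±1 sequence; a small sketch in our notation, not the original search code:

```python
def periodic_autocorrelation(seq: list[int]) -> list[int]:
    """Periodic autocorrelation R(k) of a +/-1 sequence."""
    n = len(seq)
    return [sum(seq[i] * seq[(i + k) % n] for i in range(n)) for k in range(n)]

seq = [1, 1, 1, -1, 1, -1, -1]           # length-7 sequence in +/-1 form
r = periodic_autocorrelation(seq)
peak_sidelobe = max(abs(v) for v in r[1:])
sidelobe_energy = sum(v * v for v in r[1:])
assert r[0] == 7 and peak_sidelobe == 1  # ideal two-level autocorrelation
```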
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
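A toy illustration of the syndrome idea using the (7,4) Hamming parity-check matrix; this example is ours, and it assumes a sparse binary source in which at most one of every seven digits is 1, so each block is a correctable "error pattern":

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(block: np.ndarray) -> np.ndarray:
    """Treat the 7-bit source block as an error pattern; keep its syndrome."""
    return H @ block % 2                   # 7 bits -> 3 bits

def decompress(syndrome: np.ndarray) -> np.ndarray:
    """Recover the coset leader: the syndrome spells the position of the 1."""
    block = np.zeros(7, dtype=int)
    idx = 4 * syndrome[0] + 2 * syndrome[1] + syndrome[2]
    if idx:
        block[idx - 1] = 1
    return block

block = np.array([0, 0, 0, 0, 1, 0, 0])   # weight <= 1: perfectly recoverable
assert np.array_equal(decompress(compress(block)), block)
```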
INSPECTION MEANS FOR INDUCTION MOTORS
Williams, A.W.
1959-03-10
An apparatus is described for inspecting electric motors, and more especially for detecting faulty end rings in squirrel-cage induction motors while the motor is running. An electronic circuit for conversion of excess-3 binary coded serial decimal numbers to straight binary coded serial decimal numbers is also reported. The converter of the invention in its basic form accepts coded pulse words of a type having an algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance. A switching matrix is coupled to the input circuit and is internally connected to produce serial straight binary coded pulse groups indicative of the excess-3 coded input. A stepping circuit is coupled to the switching matrix and to a synchronous counter having a plurality of x decimal digit and a plurality of y decimal digit indicator terminals. The stepping circuit steps the counter in synchronism with the serial binary pulse group output from the switching matrix to successively produce pulses at corresponding ones of the x and y decimal digit indicator terminals. The combinations of straight binary coded pulse groups and corresponding decimal digit indicator signals so produced comprise a basic output suitable for application to a variety of output apparatus.
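Excess-3 is BCD with a bias of 3, so each decimal digit d is stored as d + 3; conversion back to straight binary is digit-wise subtraction followed by the usual positional accumulation. A sketch of the arithmetic (ours, unrelated to the patented serial circuit):

```python
def excess3_to_int(digits: list[int]) -> int:
    """Decode excess-3 digits (most significant first) to an integer."""
    value = 0
    for d in digits:
        value = value * 10 + (d - 3)    # remove the +3 bias per digit
    return value

# 4, 12, 5 encode the decimal digits 1, 9, 2
assert excess3_to_int([4, 12, 5]) == 192
```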
ERIC Educational Resources Information Center
Myerscough, Don; And Others
1996-01-01
Describes an activity whose objectives are to encode and decode messages using linear functions and their inverses; to use modular arithmetic, including use of the reciprocal for simple equation solving; to analyze patterns and make and test conjectures; to communicate procedures and algorithms; and to use problem-solving strategies. (ASK)
FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.
Zou, Ding; Djordjevic, Ivan B
2015-06-01
In this paper, we propose a soft-decision-based FEC scheme that is the concatenation of a non-binary LDPC code and a hard-decision FEC code. The proposed NB-LDPC + RS scheme with an overhead of 27.06% provides a superior NCG of 11.9 dB at a post-FEC BER of 10^-15. As a result, the proposed NB-LDPC codes represent a strong soft-decision FEC candidate for beyond-100Gb/s optical transmission systems.
A Taxonomy-Based Approach to Shed Light on the Babel of Mathematical Models for Rice Simulation
NASA Technical Reports Server (NTRS)
Confalonieri, Roberto; Bregaglio, Simone; Adam, Myriam; Ruget, Francoise; Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Buis, Samuel;
2016-01-01
For most biophysical domains, differences in model structures are seldom quantified. Here, we used a taxonomy-based approach to characterise thirteen rice models. Classification keys and binary attributes for each key were identified, and models were categorised into five clusters using a binary similarity measure and the unweighted pair-group method with arithmetic mean. Principal component analysis was performed on model outputs at four sites. Results indicated that (i) differences in structure often resulted in similar predictions and (ii) similar structures can lead to large differences in model outputs. User subjectivity during calibration may have hidden expected relationships between model structure and behaviour. This explanation, if confirmed, highlights the need for shared protocols to reduce the degrees of freedom during calibration, and to limit, in turn, the risk that user subjectivity influences model performance.
The algebraic decoding of the (41, 21, 9) quadratic residue code
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Truong, T. K.; Chen, Xuemin; Yin, Xiaowei
1992-01-01
A new algebraic approach for decoding the quadratic residue (QR) codes, in particular the (41, 21, 9) QR code, is presented. The key ideas behind this decoding technique are a systematic application of the Sylvester resultant method to the Newton identities associated with the code syndromes to find the error-locator polynomial, and next a method for determining error locations by solving certain quadratic, cubic, and quartic equations over GF(2^m) in a new way which uses Zech's logarithms for the arithmetic. The algorithms developed here are suitable for implementation in a programmable microprocessor or special-purpose VLSI chip. It is expected that the algebraic methods developed here can apply generally to other codes such as the BCH and Reed-Solomon codes.
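Zech's logarithms reduce field addition to index arithmetic: with α primitive, 1 + α^n = α^{Z(n)}, so α^a + α^b = α^{b + Z(a-b)}. A small table-building sketch for GF(2^4) with the primitive polynomial x^4 + x + 1 (our choice of field for illustration; the paper works over larger GF(2^m)):

```python
def gf16_tables():
    """Power and Zech-logarithm tables for GF(16), with alpha^4 = alpha + 1."""
    power = [1]
    for _ in range(14):                      # alpha^0 .. alpha^14
        x = power[-1] << 1
        if x & 0b10000:
            x ^= 0b10011                     # reduce modulo x^4 + x + 1
        power.append(x)
    log = {v: i for i, v in enumerate(power)}
    zech = {n: log[1 ^ power[n]] for n in range(15) if power[n] != 1}
    return power, log, zech

power, log, zech = gf16_tables()

def gf16_add_by_zech(a: int, b: int) -> int:
    """Exponent of alpha^a + alpha^b, via Zech logs (requires a != b mod 15)."""
    return (b + zech[(a - b) % 15]) % 15

# alpha^1 + alpha^4 = alpha + (alpha + 1) = 1 = alpha^0
assert gf16_add_by_zech(1, 4) == 0
```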
Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent
2013-12-01
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
NASA Astrophysics Data System (ADS)
Sison, Virgilio; Remillion, Monica
2017-10-01
Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
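For intuition, the classical Gray map on ℤ4 (0→00, 1→01, 2→11, 3→10) is the simplest such isometry: it carries the Lee weight on ℤ4 to the Hamming weight on F_2^2. A quick check (the paper's maps on ℤ4 + uℤ4 compose isometries of this kind):

```python
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
LEE = {0: 0, 1: 1, 2: 2, 3: 1}          # Lee weight on Z4

def gray_image(word):
    """Componentwise Gray map from Z4^n to F_2^(2n)."""
    return tuple(bit for symbol in word for bit in GRAY[symbol])

word = (3, 2, 0, 1)
image = gray_image(word)
assert sum(LEE[s] for s in word) == sum(image)   # Lee weight -> Hamming weight
```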
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by reducing the algorithm itself. Designs employing new architectural techniques now exist; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes, based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first.
Binary translation using peephole translation rules
Bansal, Sorav; Aiken, Alex
2010-05-04
An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distance at least 3 are found. For those codes for which only upper bounds on their unequal error protection capabilities could be computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
Cognitive Code-Division Channelization
2011-04-01
…receiver pair coexisting with a primary code-division multiple-access (CDMA) system. Our objective is to find the optimum transmitting power and code…
ERIC Educational Resources Information Center
Kucera, Miloš
2010-01-01
Writing is often considered secondary to the spoken language, as it is only coded sound-by-sound. But other scholars have demonstrated that writing is similar to "arithmetic": a cognitive structuring, a shift to the meta-level ("for the eye"). "Handwriting" (referred to here as the cursive writing in the sense of…
Working Memory and Short-Term Memory Abilities in Accomplished Multilinguals
ERIC Educational Resources Information Center
Biedron, Adriana; Szczepaniak, Anna
2012-01-01
The role of short-term memory and working memory in accomplished multilinguals was investigated. Twenty-eight accomplished multilinguals were compared to 36 mainstream philology students. The following instruments were used in the study: three memory subtests of the Wechsler Intelligence Scale (Digit Span, Digit-Symbol Coding, and Arithmetic,…
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
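The Hadamard S-matrix baseline mentioned above is easy to reproduce: row i of S selects which sources are on for measurement i, and decoding inverts S. A sketch for seven sources (our simulation, far simpler than the paper's mixed-noise model and genetic search):

```python
import numpy as np
from scipy.linalg import hadamard

n = 7
S = (1 - hadamard(n + 1)[1:, 1:]) // 2      # 7x7 S-matrix, 4 ones per row
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, size=n)           # per-source intensities
y = S @ x + rng.normal(0, 0.05, size=n)     # multiplexed, noisy measurements
x_hat = np.linalg.solve(S.astype(float), y) # decode by inverting the code
assert np.abs(x_hat - x).max() < 0.2        # small reconstruction error
```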
Image compression using quad-tree coding with morphological dilation
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Jiang, Weiwei; Jiao, Licheng; Wang, Lei
2007-11-01
In this paper, we propose a new algorithm which integrates a morphological dilation operation into quad-tree coding; the purpose is to have the two compensate for each other's drawbacks. The new algorithm can not only quickly find the seed significant coefficient for dilation but also overcome the block-boundary limitation of quad-tree coding. We also make full use of both within-subband and cross-subband correlation to avoid the expensive cost of representing insignificant coefficients. Experimental results show that our algorithm outperforms SPECK and SPIHT. Without using any arithmetic coding, our algorithm achieves good performance with low computational cost, making it more suitable for mobile devices or scenarios with strict real-time requirements.
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure and hence their higher pel correlation leads to greater removal of image redundancy.
Improvements to the construction of binary black hole initial data
NASA Astrophysics Data System (ADS)
Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald P.; Boyle, Michael; Szilágyi, Béla
2015-12-01
Construction of binary black hole initial data is a prerequisite for numerical evolutions of binary black holes. This paper reports improvements to the binary black hole initial data solver in the spectral Einstein code, to allow robust construction of initial data for mass-ratio above 10:1, and for dimensionless black hole spins above 0.9, while improving efficiency for lower mass-ratios and spins. We implement a more flexible domain decomposition, adaptive mesh refinement and an updated method for choosing free parameters. We also introduce a new method to control and eliminate residual linear momentum in initial data for precessing systems, and demonstrate that it eliminates gravitational mode mixing during the evolution. Finally, the new code is applied to construct initial data for hyperbolic scattering and for binaries with very small separation.
Nonlinear, nonbinary cyclic group codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.
Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter
2016-01-01
We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.
Logical NAND and NOR Operations Using Algorithmic Self-assembly of DNA Molecules
NASA Astrophysics Data System (ADS)
Wang, Yanfeng; Cui, Guangzhao; Zhang, Xuncai; Zheng, Yan
DNA self-assembly is the most advanced and versatile system that has been experimentally demonstrated for programmable construction of patterned systems on the molecular scale. It has been demonstrated that simple binary arithmetic and logical operations can be computed by the self-assembly of DNA tiles. Here we report a one-dimensional algorithmic self-assembly of DNA triple-crossover molecules that can be used to execute five steps of logical NAND and NOR operations on a string of binary bits. To achieve this, abstract tiles were translated into DNA tiles based on triple-crossover motifs. Serving as input for the computation, long single-stranded DNA molecules were used to nucleate growth of tiles into algorithmic crystals. Our method shows that engineered DNA self-assembly can be treated as a bottom-up design technique capable of supporting the design of DNA computer organization and architecture.
Entropy coders for image compression based on binary forward classification
NASA Astrophysics Data System (ADS)
Yoo, Hoon; Jeong, Jechang
2000-12-01
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders in which the BFC is followed by Golomb-Rice coders (BFC+GR) or by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
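As a reminder of the second stage, a Golomb-Rice code with parameter k writes n as a unary quotient followed by k remainder bits. A minimal encoder (ours, independent of the BFC front end):

```python
def golomb_rice_encode(n: int, k: int) -> str:
    """Encode a nonnegative integer: unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')

assert golomb_rice_encode(9, 2) == '110' + '01'   # q = 2, r = 1
```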
GENPLOT: A formula-based Pascal program for data manipulation and plotting
NASA Astrophysics Data System (ADS)
Kramer, Matthew J.
Geochemical processes involving alteration, differentiation, fractionation, or migration of elements may be elucidated by a number of discrimination or variation diagrams (e.g., AFM, Harker, Pearce, and many others). The construction of these diagrams involves arithmetic combination of selected elements (major, minor, or trace). GENPLOT utilizes a formula-based algorithm (an expression parser) which enables the program to manipulate multiparameter databases and plot XY, ternary, tetrahedron, and REE-type plots without changing the source code or rearranging databases. Formulae may be any quadratic expression whose variables are the column headings of the data matrix. A full-screen editor with limited equation and arithmetic functions (a spreadsheet) has been incorporated into the program to aid data entry and editing. Data are stored as ASCII files to facilitate interchange of data between other programs and computers. GENPLOT was developed in Turbo Pascal for the IBM and compatible computers but is also available in Apple Pascal for the Apple IIe and III. Because the source code is too extensive to list here (about 5200 lines of Pascal code), the expression parsing routine, which is central to GENPLOT's flexibility, is incorporated into a smaller demonstration program named SOLVE. The following paper includes a discussion of how the expression parser works and a detailed description of GENPLOT's capabilities.
BHDD: Primordial black hole binaries code
NASA Astrophysics Data System (ADS)
Kavanagh, Bradley J.; Gaggero, Daniele; Bertone, Gianfranco
2018-06-01
BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.
Learning Rotation-Invariant Local Binary Descriptor.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
Constructing binary black hole initial data with high mass ratios and spins
NASA Astrophysics Data System (ADS)
Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald; Szilagyi, Bela; Simulating Extreme Spacetimes Collaboration
2015-04-01
Binary black hole systems have now been successfully modelled in full numerical relativity by many groups. In order to explore high-mass-ratio (larger than 1:10), high-spin systems (above 0.9 of the maximal BH spin), we revisit the initial-data problem for binary black holes. The initial-data solver in the Spectral Einstein Code (SpEC) was not able to solve for such initial data reliably and robustly. I will present recent improvements to this solver, among them adaptive mesh refinement and control of motion of the center of mass of the binary, and will discuss the much larger region of parameter space this code can now address.
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
Golay sequences coded coherent optical OFDM for long-haul transmission
NASA Astrophysics Data System (ADS)
Qin, Cui; Ma, Xiangrong; Hua, Tao; Zhao, Jing; Yu, Huilong; Zhang, Jian
2017-09-01
We propose to use binary Golay sequences in coherent optical orthogonal frequency division multiplexing (CO-OFDM) to improve long-haul transmission performance. The Golay sequences are generated from binary Reed-Muller codes, which have low peak-to-average power ratio and certain error correction capability. A low-complexity decoding algorithm for the Golay sequences is then proposed to recover the signal. Under the same spectral efficiency, QPSK-modulated OFDM with binary Golay sequence coding, with and without discrete Fourier transform (DFT) spreading (DFTS-QPSK-GOFDM and QPSK-GOFDM), is compared with normal BPSK-modulated OFDM with and without DFT spreading (DFTS-BPSK-OFDM and BPSK-OFDM) after long-haul transmission. At a 7% forward error correction code threshold (Q^2 factor of 8.5 dB), it is shown that DFTS-QPSK-GOFDM outperforms DFTS-BPSK-OFDM by extending the transmission distance by 29% and 18% in non-dispersion-managed and dispersion-managed links, respectively.
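Binary Golay complementary pairs can be generated by the classic doubling recursion (a, b) → (a·b, a·(-b)); their aperiodic autocorrelations cancel off-peak, which is what bounds the PAPR. A sketch of the recursion and a check of the complementary property (ours; the paper derives its sequences from Reed-Muller codewords):

```python
def golay_pair(m: int) -> tuple[list[int], list[int]]:
    """Length-2^m binary (+/-1) Golay complementary pair via doubling."""
    a, b = [1], [1]
    for _ in range(m):
        a, b = a + b, a + [-x for x in b]   # concatenation doubling step
    return a, b

def acorr(s: list[int], k: int) -> int:
    """Aperiodic autocorrelation of s at shift k."""
    return sum(s[i] * s[i + k] for i in range(len(s) - k))

a, b = golay_pair(4)                        # length-16 pair
for k in range(1, 16):
    assert acorr(a, k) + acorr(b, k) == 0   # sidelobes cancel exactly
```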
Composite hot subdwarf binaries - I. The spectroscopically confirmed sdB sample
NASA Astrophysics Data System (ADS)
Vos, Joris; Németh, Péter; Vučković, Maja; Østensen, Roy; Parsons, Steven
2018-01-01
Hot subdwarf-B (sdB) stars in long-period binaries are found to be on eccentric orbits, even though current binary-evolution theory predicts that these objects are circularized before the onset of Roche lobe overflow (RLOF). To increase our understanding of binary interaction processes during the RLOF phase, we started a long-term observing campaign to study wide sdB binaries. In this paper, we present a sample of composite binary sdBs, and the results of the spectral analysis of nine such systems. The grid search in stellar parameters (GSSP) code is used to derive atmospheric parameters for the cool companions. To cross-check our results and also to characterize the hot subdwarfs, we used the independent XTGRID code, which employs TLUSTY non-local thermodynamic equilibrium models to derive atmospheric parameters for the sdB component and PHOENIX synthetic spectra for the cool companions. The independent GSSP and XTGRID codes are found to show good agreement for three test systems that have atmospheric parameters available in the literature. Based on the rotational velocity of the companions, we make an estimate for the mass accreted during the RLOF phase and the minimum duration of that phase. We find that the mass transfer to the companion is minimal during the subdwarf formation.
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Choi, Han; Kim, Jeong-Youn; Chang, Won-Du; Kim, Do-Won; Kim, Kiwoong; Jo, Sungho; Im, Chang-Hwan
2016-09-01
In traditional brain-computer interface (BCI) studies, binary communication systems have generally been implemented using two mental tasks arbitrarily assigned to "yes" or "no" intentions (e.g., mental arithmetic calculation for "yes"). A recent pilot study performed with one paralyzed patient showed the possibility of a more intuitive paradigm for binary BCI communications, in which the patient's internal yes/no intentions were directly decoded from functional near-infrared spectroscopy (fNIRS). We investigated whether such an "fNIRS-based direct intention decoding" paradigm can be reliably used for practical BCI communications. Eight healthy subjects participated in this study, and each participant was administered 70 disjunctive questions. Brain hemodynamic responses were recorded using a multichannel fNIRS device, while the participants were internally expressing "yes" or "no" intentions to each question. Different feature types, feature numbers, and time window sizes were tested to investigate optimal conditions for classifying the internal binary intentions. About 75% of the answers were correctly classified when the individual best feature set was employed (75.89% ±1.39 and 74.08% ±2.87 for oxygenated and deoxygenated hemoglobin responses, respectively), which was significantly higher than a random chance level (68.57% for p<0.001). The kurtosis feature showed the highest mean classification accuracy among all feature types. The grand-averaged hemodynamic responses showed that wide brain regions are associated with the processing of binary implicit intentions. Our experimental results demonstrated that direct decoding of internal binary intention has the potential to be used for implementing more intuitive and user-friendly communication systems for patients with motor disabilities.
NASA Astrophysics Data System (ADS)
Ma, Fanghui; Gao, Jian; Fu, Fang-Wei
2018-06-01
Let R = F_q + vF_q + v^2 F_q be a finite non-chain ring, where q is an odd prime power and v^3 = v. In this paper, we propose two methods of constructing quantum codes from (α + βv + γv^2)-constacyclic codes over R. The first one is obtained via the Gray map and the Calderbank-Shor-Steane construction from Euclidean dual-containing (α + βv + γv^2)-constacyclic codes over R. The second one is obtained via the Gray map and the Hermitian construction from Hermitian dual-containing (α + βv + γv^2)-constacyclic codes over R. As an application, some new non-binary quantum codes are obtained.
A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations
Motl, Patrick M.; Frank, Juhan; Staff, Jan; ...
2017-03-29
There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms: a finite-volume "grid" code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.
Post-Newtonian Dynamical Modeling of Supermassive Black Holes in Galactic-scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rantala, Antti; Pihajoki, Pauli; Johansson, Peter H.
We present KETJU, a new extension of the widely used smoothed particle hydrodynamics simulation code GADGET-3. The key feature of the code is the inclusion of algorithmically regularized regions around every supermassive black hole (SMBH). This allows for simultaneously following global galactic-scale dynamical and astrophysical processes, while solving the dynamics of SMBHs, SMBH binaries, and surrounding stellar systems at subparsec scales. The KETJU code includes post-Newtonian terms in the equations of motion of the SMBHs, which enables a new SMBH merger criterion based on the gravitational wave coalescence timescale, pushing the merger separation of SMBHs down to ~0.005 pc. We test the performance of our code by comparison to NBODY7 and rVINE. We set up dynamically stable multicomponent merger progenitor galaxies to study the SMBH binary evolution during galaxy mergers. In our simulation sample the SMBH binaries do not suffer from the final-parsec problem, which we attribute to the nonspherical shape of the merger remnants. For bulge-only models, the hardening rate decreases with increasing resolution, whereas for models that in addition include massive dark matter halos, the SMBH binary hardening rate becomes practically independent of the mass resolution of the stellar bulge. The SMBHs coalesce on average 200 Myr after the formation of the SMBH binary. However, small differences in the initial SMBH binary eccentricities can result in large differences in the SMBH coalescence times. Finally, we discuss the future prospects of KETJU, which allows for a straightforward inclusion of gas physics in the simulations.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
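Residue arithmetic is what makes the fixed-time lookup approach work: each operation acts digit-wise on residues, with no carries between moduli. A small sketch with illustrative moduli (3, 5, 7) of our choosing, reconstructing results by the Chinese remainder theorem:

```python
from math import prod

MODULI = (3, 5, 7)                 # pairwise coprime; dynamic range = 105

def to_rns(x: int) -> tuple[int, ...]:
    return tuple(x % m for m in MODULI)

def rns_mul(a: tuple, b: tuple) -> tuple[int, ...]:
    """Carry-free multiply: each residue channel is independent."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r: tuple) -> int:
    """Chinese remainder theorem reconstruction."""
    M = prod(MODULI)
    total = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        total += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return total % M

assert from_rns(rns_mul(to_rns(8), to_rns(9))) == 72
```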
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information on the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as that of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
NASA Astrophysics Data System (ADS)
Pavlichin, Dmitri S.; Mabuchi, Hideo
2014-06-01
Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g., statistical classification and some machine learning algorithms.
van den Tillaart-Haverkate, Maj; de Ronde-Brons, Inge; Dreschler, Wouter A; Houben, Rolph
2017-01-01
Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): -5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
Binary Neutron Stars with Arbitrary Spins in Numerical Relativity
NASA Astrophysics Data System (ADS)
Pfeiffer, Harald; Tacik, Nick; Foucart, Francois; Haas, Roland; Kaplan, Jeffrey; Muhlberger, Curran; Duez, Matt; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela
2015-04-01
We present a code to construct initial data for binary neutron stars in which the stars are rotating. Our code, based on the formalism developed by Tichy, allows for arbitrary rotation axes of the neutron stars and is able to achieve rotation rates near rotational breakup. We demonstrate that the orbital eccentricity of the binary neutron stars can be controlled to ~0.1%. Preliminary evolutions show that the spin and orbit precession of the neutron stars is well described by the post-Newtonian approximation. The neutron stars show quasi-normal mode oscillations at an amplitude which increases with the rotation rate of the stars.
Binary neutron stars with arbitrary spins in numerical relativity
NASA Astrophysics Data System (ADS)
Tacik, Nick; Foucart, Francois; Pfeiffer, Harald P.; Haas, Roland; Ossokine, Serguei; Kaplan, Jeff; Muhlberger, Curran; Duez, Matt D.; Kidder, Lawrence E.; Scheel, Mark A.; Szilágyi, Béla
2015-12-01
We present a code to construct initial data for binary neutron star systems in which the stars are rotating. Our code, based on a formalism developed by Tichy, allows for arbitrary rotation axes of the neutron stars and is able to achieve rotation rates near rotational breakup. We compute the neutron star angular momentum through quasilocal angular momentum integrals. When constructing irrotational binary neutron stars, we find a very small residual dimensionless spin of ~2x10^-4. Evolutions of rotating neutron star binaries show that the magnitude of the stars' angular momentum is conserved, and that the spin and orbit precession of the stars is well described by post-Newtonian approximation. We demonstrate that orbital eccentricity of the binary neutron stars can be controlled to ~0.1%. The neutron stars show quasinormal mode oscillations at an amplitude which increases with the rotation rate of the stars.
Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.
ERIC Educational Resources Information Center
Pradels, Jean Louis
Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
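For concreteness, when the weights sit on the leaves the weighted path length is the sum of weight times leaf depth, and Huffman's algorithm minimizes it. A small sketch for orientation only (ours; the report's bounds concern the more general weighted-node setting):

```python
import heapq

def min_weighted_path_length(weights: list[int]) -> int:
    """Minimum sum of weight * leaf depth over binary trees (Huffman)."""
    heap = list(weights)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b                  # each merge deepens a and b by one level
        heapq.heappush(heap, a + b)
    return total

assert min_weighted_path_length([1, 1, 2, 4]) == 14
```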
ERIC Educational Resources Information Center
Haro, Elizabeth K.; Haro, Luis S.
2014-01-01
The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute force binary search methods scale as O(N²) while full hierarchy searches can be as expensive as O(N³), making analysis highly inefficient for multiple data sets with N ≳ 10⁴. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
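The heart of any such bound-system search is a pairwise boundedness test. Below is a minimal Python sketch of that test only (not the companion code itself, whose tree construction gives the O(N log N) scaling): two particles form a simple binary when their two-body orbital energy is negative. Names and units are illustrative.

import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def mutually_bound(m1, r1, v1, m2, r2, v2):
    """Return True if particles 1 and 2 form a bound pair (orbital energy < 0)."""
    r_rel = np.asarray(r1) - np.asarray(r2)
    v_rel = np.asarray(v1) - np.asarray(v2)
    mu = m1 * m2 / (m1 + m2)                   # reduced mass
    kinetic = 0.5 * mu * np.dot(v_rel, v_rel)
    potential = -G * m1 * m2 / np.linalg.norm(r_rel)
    return kinetic + potential < 0.0

A brute-force search applies this test to all N(N-1)/2 pairs, which is the quadratic cost the abstract contrasts with the tree-based approach.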
A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2013-02-01
Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
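For intuition about the separability property, the Python sketch below contrasts BRGC with a thermometer-style code whose Hamming distance between labels i and j grows linearly as |i - j|; treating the thermometer code as a stand-in for the fully separable LSSC idea is an assumption made here, not a detail taken from the paper.

def brgc(i, nbits):
    g = i ^ (i >> 1)                           # binary reflected Gray code
    return [(g >> b) & 1 for b in reversed(range(nbits))]

def thermometer(i, nlabels):
    return [1] * i + [0] * (nlabels - 1 - i)   # codeword length nlabels - 1

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

nlabels = 8
for j in range(nlabels):
    print(j, hamming(brgc(0, 3), brgc(j, 3)),
          hamming(thermometer(0, nlabels), thermometer(j, nlabels)))

The BRGC distances stay small and non-monotonic in the label gap, while the thermometer distances equal |i - j| exactly, at the cost of longer codewords: the entropy-redundancy tradeoff the abstract describes.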
Characteristic Evolution and Matching
NASA Astrophysics Data System (ADS)
Winicour, Jeffrey
2012-01-01
I review the development of numerical evolution codes for general relativity based upon the characteristic initial-value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D-axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black-hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black-hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.
A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motl, Patrick M.; Frank, Juhan; Clayton, Geoffrey C.
2017-04-01
There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume “grid” code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.
A flexible surface wetness sensor using a RFID technique.
Yang, Cheng-Hao; Chien, Jui-Hung; Wang, Bo-Yan; Chen, Ping-Hei; Lee, Da-Sheng
2008-02-01
This paper presents a flexible wetness sensor whose detection signal, converted to a binary code, is transmitted through radio-frequency (RF) waves from a radio-frequency identification integrated circuit (RFID IC) to a remote reader. The flexible sensor, with a fixed operating frequency of 13.56 MHz, contains an RFID IC and a sensor circuit that is fabricated on a flexible printed circuit board (FPCB) using a Micro-Electro-Mechanical-System (MEMS) process. The sensor circuit contains a comb-shaped sensing area surrounded by an octagonal antenna with a width of 2.7 cm. The binary code transmitted from the RFID IC to the reader changes if the surface condition of the detector changes from dry to wet. This variation in the binary code can be observed on a digital oscilloscope connected to the reader.
Hwang, Han-Jeong; Choi, Han; Kim, Jeong-Youn; Chang, Won-Du; Kim, Do-Won; Kim, Kiwoong; Jo, Sungho; Im, Chang-Hwan
2016-09-01
In traditional brain-computer interface (BCI) studies, binary communication systems have generally been implemented using two mental tasks arbitrarily assigned to “yes” or “no” intentions (e.g., mental arithmetic calculation for “yes”). A recent pilot study performed with one paralyzed patient showed the possibility of a more intuitive paradigm for binary BCI communications, in which the patient’s internal yes/no intentions were directly decoded from functional near-infrared spectroscopy (fNIRS). We investigated whether such an “fNIRS-based direct intention decoding” paradigm can be reliably used for practical BCI communications. Eight healthy subjects participated in this study, and each participant was administered 70 disjunctive questions. Brain hemodynamic responses were recorded using a multichannel fNIRS device, while the participants were internally expressing “yes” or “no” intentions to each question. Different feature types, feature numbers, and time window sizes were tested to investigate optimal conditions for classifying the internal binary intentions. About 75% of the answers were correctly classified when the individual best feature set was employed (75.89% ± 1.39 and 74.08% ± 2.87 for oxygenated and deoxygenated hemoglobin responses, respectively), which was significantly higher than a random chance level (68.57% for p < 0.001). The kurtosis feature showed the highest mean classification accuracy among all feature types. The grand-averaged hemodynamic responses showed that wide brain regions are associated with the processing of binary implicit intentions. Our experimental results demonstrated that direct decoding of internal binary intention has the potential to be used for implementing more intuitive and user-friendly communication systems for patients with motor disabilities.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher.
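As a toy illustration of a single binary dichotomic question, the Python sketch below splits the 64 codons into two classes of 32; the particular predicate (purine in the first position) is an arbitrary example, not one of the paper's optimized BDAs.

from itertools import product

codons = [''.join(c) for c in product('ACGU', repeat=3)]

def dichotomy(codon):
    return codon[0] in 'AG'   # example yes/no question: purine in first position?

class_yes = [c for c in codons if dichotomy(c)]
class_no = [c for c in codons if not dichotomy(c)]
assert len(class_yes) == len(class_no) == 32

Applying k such questions in sequence refines the 64 codons into as many as 2**k classes, which is how the models in the paper reach class counts between 2 and 64.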
Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan
2018-01-01
The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs ( p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
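The one-versus-one decomposition with shrinkage LDA described above is easy to sketch with scikit-learn; the snippet below uses synthetic features in place of EEG/NIRS data, and the make_classification parameters are arbitrary stand-ins.

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier

# Stand-in for extracted hBCI features: 3 classes (arithmetic, imagery, idle)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
slda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')  # shrinkage LDA
ovo = OneVsOneClassifier(slda)   # three binary problems: (0,1), (0,2), (1,2)
print(cross_val_score(ovo, X, y, cv=10).mean())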
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
NASA Astrophysics Data System (ADS)
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme that estimates the time-varying noise variance using prior information from the watermark symbols, thereby improving the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 10⁻⁶ and a 36.8-81% reduction in the number of decoding iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.
Interactive Software For Astrodynamical Calculations
NASA Technical Reports Server (NTRS)
Schlaifer, Ronald S.; Skinner, David L.; Roberts, Phillip H.
1995-01-01
QUICK computer program provides the user with the facilities of a sophisticated desk calculator that performs scalar, vector, and matrix arithmetic; propagates conic-section orbits; determines planetary and satellite coordinates; and performs other related astrodynamic calculations within a FORTRAN-like software environment. QUICK is an interpreter, so no compiler or linker is needed to run QUICK code. Outputs can be plotted in a variety of formats on a variety of terminals. Written in RATFOR.
ERIC Educational Resources Information Center
Kyriakidou-Christofidou, Athina
2016-01-01
The present mixed-methods quasi-experimental study (embedding a case study and a mixed factorial within-between ANOVA test), conducted in a private English school in Limassol, Cyprus, investigated how the use of the schematic learning aids (researcher-made color-coded flash-cards and grids) influence year-2 children's ability to read, write and…
Vector-matrix-quaternion, array and arithmetic packages: All HAL/S functions implemented in Ada
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Kwong, David D.
1986-01-01
The HAL/S avionics programmers have enjoyed a variety of tools built into a language tailored to their special requirements. Ada is designed for a broader group of applications. Rather than providing built-in tools, Ada provides the elements with which users can build their own. Standard avionic packages remain to be developed. These must enable programmers to code in Ada as they have coded in HAL/S. The packages under development at JPL will provide all of the vector-matrix, array, and arithmetic functions described in the HAL/S manuals. In addition, the linear algebra package will provide all of the quaternion functions used in Shuttle steering and Galileo attitude control. Furthermore, using Ada's extensibility, many quaternion functions are being implemented as infix operations; equivalent capabilities were never implemented in HAL/S because doing so would entail modifying the compiler and expanding the language. With these packages, many HAL/S expressions will compile and execute in Ada, unchanged. Others can be converted simply by replacing the implicit HAL/S multiply operator with the Ada *. Errors will be trapped and identified. Input/output will be convenient and readable.
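The infix-operator point above carries over to any language with operator overloading. A minimal Python sketch of quaternion composition written as q1 * q2, analogous to the Ada infix operations described; the (w, x, y, z) component order and the Hamilton product convention are assumptions.

from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float; x: float; y: float; z: float
    def __mul__(self, q):
        # Hamilton product: composition of the two rotations
        return Quaternion(
            self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
            self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
            self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
            self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w)

# 90-degree rotation about x composed with 90 degrees about y, written infix
q = Quaternion(0.7071, 0.7071, 0.0, 0.0) * Quaternion(0.7071, 0.0, 0.7071, 0.0)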
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, Kenta; Department of Chemistry, Biology, and Biotechnology, University of Perugia, 06123 Perugia; Gotoda, Hiroshi
2016-05-15
The convective motions within a solution of a photochromic spiro-oxazine irradiated by UV light only on the bottom part of its volume give rise to aperiodic spectrophotometric dynamics. In this paper, we study three nonlinear properties of the aperiodic time series: permutation entropy; short-term predictability and long-term unpredictability; and the degree distribution of the visibility-graph networks. After ascertaining the extracted chaotic features, we show how the aperiodic time series can be exploited to implement all the fundamental two-input binary logic functions (AND, OR, NAND, NOR, XOR, and XNOR) and some basic arithmetic operations (half-adder, full-adder, half-subtractor). This is possible due to the wide range of states a nonlinear system accesses in the course of its evolution. Therefore, the solution of the convective photochemical oscillator results in hardware for chaos computing, an alternative to conventional complementary metal-oxide-semiconductor-based integrated circuits.
Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin
2009-01-01
Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (11), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
Distinguishing Fast and Slow Processes in Accuracy - Response Time Data
Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L. J.; Maris, Gunter
2016-01-01
We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two ‘one-process’ models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a ‘two-process’ model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518
Finger Vein Recognition Based on Local Directional Code
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-01-01
Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
Finger vein recognition based on local directional code.
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-11-05
Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.
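The core idea named in the abstract, coding local gradient orientation as an octal digit, can be sketched briefly in Python; the neighborhood handling and thresholds of the actual LDC construction are not reproduced here, and directional_code is a hypothetical helper name.

import numpy as np

def directional_code(image):
    """Quantize local gradient orientation into 8 sectors: one octal digit per pixel."""
    gy, gx = np.gradient(image.astype(float))
    angle = np.arctan2(gy, gx)                            # orientation in (-pi, pi]
    return ((angle + np.pi) / (2 * np.pi / 8)).astype(int) % 8

img = np.random.rand(64, 64)     # stand-in for a finger-vein image
codes = directional_code(img)    # one 3-bit code (octal digit) per pixel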
A m-ary linear feedback shift register with binary logic
NASA Technical Reports Server (NTRS)
Perlman, M. (Inventor)
1973-01-01
A family of m-ary linear feedback shift registers with binary logic is disclosed. Each m-ary linear feedback shift register with binary logic generates a binary representation of a nonbinary recurring sequence producible with an m-ary linear feedback shift register without binary logic, in which m is greater than 2. The state table of an m-ary linear feedback shift register without binary logic, utilizing sum-modulo-m feedback, is first tabulated for a given initial state. The entries in the state table are coded in binary, and the binary entries are used to set the initial states of the stages of a plurality of binary shift registers. A single feedback logic unit is employed which provides a separate feedback binary digit to each binary register as a function of the states of corresponding stages of the binary registers.
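A software analogue of the patent's idea, offered as a hedged sketch: run an m-ary (here ternary) LFSR with sum-modulo-m feedback and encode each ternary digit in two bits. The tap positions, initial state, and 2-bit encoding are illustrative choices, not the patent's.

def ternary_lfsr(state, taps, steps, m=3):
    """Generate `steps` output digits from an m-ary LFSR with sum-mod-m feedback."""
    seq = []
    for _ in range(steps):
        fb = sum(state[t] for t in taps) % m   # sum-modulo-m feedback
        seq.append(state[-1])                  # output the last stage
        state = [fb] + state[:-1]              # shift, feeding back fb
    return seq

binary_of = {0: '00', 1: '01', 2: '10'}        # 2-bit binary code per ternary digit
seq = ternary_lfsr(state=[1, 0, 2], taps=[0, 2], steps=10)
print(seq, [binary_of[d] for d in seq])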
Design of Efficient Mirror Adder in Quantum- Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Mishra, Prashant Kumar; Chattopadhyay, Manju K.
2018-03-01
Lower power consumption is an essential demand for portable multimedia systems using digital signal processing algorithms and architectures. Quantum-dot cellular automata (QCA) is an emerging nanotechnology for the development of high-performance, ultra-dense, low-power digital circuits. Several efficient QCA-based binary and decimal arithmetic circuits have been implemented; however, important improvements are still possible. This paper demonstrates a mirror adder circuit design in QCA. We present a comparative study of mirror adder cells designed using the conventional CMOS technique and mirror adder cells designed using quantum-dot cellular automata. QCA-based mirror adders are better in terms of area by roughly a factor of three.
Dynamical genetic programming in XCSF.
Preen, Richard J; Bull, Larry
2013-01-01
A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to artificial neural networks. This paper presents results from an investigation into using a temporally dynamic symbolic representation within the XCSF learning classifier system. In particular, dynamical arithmetic networks are used to represent the traditional condition-action production system rules to solve continuous-valued reinforcement learning problems and to perform symbolic regression, finding competitive performance with traditional genetic programming on a number of composite polynomial tasks. In addition, the network outputs are later repeatedly sampled at varying temporal intervals to perform multistep-ahead predictions of a financial time series.
NASA Astrophysics Data System (ADS)
Nikmehr, Hooman; Phillips, Braden; Lim, Cheng-Chew
2005-02-01
Recently, decimal arithmetic has become attractive in the financial and commercial world including banking, tax calculation, currency conversion, insurance and accounting. Although computers are still carrying out decimal calculation using software libraries and binary floating-point numbers, it is likely that in the near future, all processors will be equipped with units performing decimal operations directly on decimal operands. One critical building block for some complex decimal operations is the decimal carry-free adder. This paper discusses the mathematical framework of the addition, introduces a new signed-digit format for representing decimal numbers and presents an efficient architectural implementation. Delay estimation analysis shows that the adder offers improved performance over earlier designs.
A decoding procedure for the Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lim, R. S.
1978-01-01
A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
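The syndrome step described above is compact enough to sketch in Python. The snippet works over GF(2^5), the field of the (31,15) code (t = 8, so 16 syndromes), using log/antilog tables in the spirit of a microprogrammed Galois-field unit; the primitive polynomial x^5 + x^2 + 1 is an assumption.

PRIM = 0b100101                      # x^5 + x^2 + 1, assumed primitive polynomial
exp, log = [0] * 62, [0] * 32
x = 1
for i in range(31):                  # the multiplicative group of GF(2^5) has order 31
    exp[i] = exp[i + 31] = x
    log[x] = i
    x <<= 1
    if x & 0b100000:
        x ^= PRIM

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def eval_poly(coeffs, x):            # Horner's rule over GF(2^5)
    y = 0
    for c in coeffs:
        y = gf_mul(y, x) ^ c
    return y

def syndromes(received, t=8):
    # S_i = r(alpha^i) for i = 1..2t; all zero iff the received word is a codeword
    return [eval_poly(received, exp[i]) for i in range(1, 2 * t + 1)]

The nonzero syndromes would then feed the error-locator computation (Lin's table / Berlekamp's algorithm) and the Chien search described in the abstract.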
2012-03-01
[Fragment of an acronym list and link-specification summary: AAS, advanced antenna systems; AMC, adaptive modulation and coding; AWGN, additive white Gaussian noise; BPSK, binary phase-shift keying; BS, base station; BTC, block turbo coding. Modulation types include QPSK, QAM-16, and QAM-64; coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), and zero-terminating…]
Compact binary hashing for music retrieval
NASA Astrophysics Data System (ADS)
Seo, Jin S.
2014-03-01
With the huge volume of music clips available for protection, browsing, and indexing, increased attention is being paid to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing increases, the storage cost in retrieval systems becomes a serious problem. This paper addresses the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding to generate a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation that improves retrieval performance.
A new technique for calculations of binary stellar evolution, with application to magnetic braking
NASA Technical Reports Server (NTRS)
Rappaport, S.; Joss, P. C.; Verbunt, F.
1983-01-01
The development of appropriate computer programs has made it possible to conduct studies of stellar evolution which are more detailed and accurate than the investigations previously feasible. However, the use of such programs can also entail some serious drawbacks related to the time and expense required for the work. One approach for overcoming these drawbacks involves the employment of simplified stellar evolution codes which incorporate the essential physics of the problem of interest without attempting either great generality or maximal accuracy. Rappaport et al. (1982) developed a simplified code to study the evolution of close binary stellar systems composed of a collapsed object and a low-mass secondary. The present investigation is concerned with a more general, but still simplified, technique for calculating the evolution of close binary systems with collapsed primaries and mass-losing secondaries.
Be discs in coplanar circular binaries: Phase-locked variations of emission lines
NASA Astrophysics Data System (ADS)
Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo
2018-01-01
In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.
Gene-specific cell labeling using MiMIC transposons
Gnerer, Joshua P.; Venken, Koen J. T.; Dierick, Herman A.
2015-01-01
Binary expression systems such as GAL4/UAS, LexA/LexAop and QF/QUAS have greatly enhanced the power of Drosophila as a model organism by allowing spatio-temporal manipulation of gene function as well as cell and neural circuit function. Tissue-specific expression of these heterologous transcription factors relies on random transposon integration near enhancers or promoters that drive the binary transcription factor embedded in the transposon. Alternatively, gene-specific promoter elements are directly fused to the binary factor within the transposon followed by random or site-specific integration. However, such insertions do not consistently recapitulate endogenous expression. We used Minos-Mediated Integration Cassette (MiMIC) transposons to convert host loci into reliable gene-specific binary effectors. MiMIC transposons allow recombinase-mediated cassette exchange to modify the transposon content. We developed novel exchange cassettes to convert coding intronic MiMIC insertions into gene-specific binary factor protein-traps. In addition, we expanded the set of binary factor exchange cassettes available for non-coding intronic MiMIC insertions. We show that binary factor conversions of different insertions in the same locus have indistinguishable expression patterns, suggesting that they reliably reflect endogenous gene expression. We show the efficacy and broad applicability of these new tools by dissecting the cellular expression patterns of the Drosophila serotonin receptor gene family. PMID:25712101
Principles of the Machine Arithmetic of Complex Numbers,
1981-03-31
[Garbled OCR fragment of a translated report on the machine arithmetic of complex numbers; equations (2.1)-(2.3) and the surrounding discussion of positional number codes and range limitations are not reliably recoverable.]
Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.
Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting
2018-02-12
Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data, in which the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on techniques of augmented reality (AR), which can help users take a deep exploration of neuron morphologies in an interactive and immersive manner.
Chopper-stabilized phase detector
NASA Technical Reports Server (NTRS)
Hopkins, P. M.
1978-01-01
Phase-detector circuit for binary-tracking loops and other binary-data acquisition systems minimizes effects of drift, gain imbalance, and voltage offset in detector circuitry. Input signal passes simultaneously through two channels where it is mixed with early and late codes that are alternately switched between channels. Code switching is synchronized with polarity switching of the detector output of each channel so that each channel uses each detector for half the time. Net result is that dc offset errors are canceled, and the effect of gain imbalance is simply a change in sensitivity.
Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data
NASA Astrophysics Data System (ADS)
Palumbo, Francesco; D'Enza, Alfonso Iodice
Attention to binary data coding has increased considerably in the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with the empirical evidence.
The use of imprecise processing to improve accuracy in weather & climate prediction
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, T. N.
2014-08-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
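The bit-flip fault model mentioned above can be emulated in a few lines of Python; the fault rate and the restriction of flips to low mantissa bits in this sketch are assumptions, not the paper's exact configuration.

import random
import struct

def flip_random_bit(value, fault_rate=0.01, low_bits_only=32):
    """With probability fault_rate, flip one of the low mantissa bits of a float64."""
    if random.random() >= fault_rate:
        return value
    bits = struct.unpack('<Q', struct.pack('<d', value))[0]
    bits ^= 1 << random.randrange(low_bits_only)   # fault confined to low-order bits
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

x = flip_random_bit(3.14159, fault_rate=1.0)  # force a fault to observe the effect

Confining faults to low-order bits mirrors the paper's strategy of letting inexactness act only near the truncation scales while the large scales stay exact.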
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
…punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit… likely to be isolated and be correctable by the convolutional decoder. [Residue of a link-parameter table: data rate (Mbps), modulation, coding rate, coded bits per subcarrier.] …binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data…
Universal Noiseless Coding Subroutines
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A. P.; Rice, R. F.
1986-01-01
Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. Routines are universal because they apply to virtually any "real-world" data source.
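One member of the Rice universal-coding family behind such subroutines is simple enough to show. The Python sketch below implements a fixed-order Rice code for nonnegative integers; the adaptive selection of the order k that makes the full scheme "universal" is omitted here.

def rice_encode(n, k):
    """Rice code of order k: quotient n >> k in unary, then the low k bits of n."""
    q = n >> k
    return '1' * q + '0' + format(n & ((1 << k) - 1), f'0{k}b')

def rice_decode(bits, k):
    q = bits.index('0')                  # unary quotient
    r = int(bits[q + 1:q + 1 + k], 2)    # k-bit remainder
    return (q << k) | r

# Round-trip check over a range of small integers
assert all(rice_decode(rice_encode(n, 3), 3) == n for n in range(100))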
GPU accelerated manifold correction method for spinning compact binaries
NASA Astrophysics Data System (ADS)
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm based on the compute unified device architecture (CUDA) technology is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same code executed on the central processing unit (CPU) alone. The acceleration gained on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the CPU code for a phase-space scan covering 314 × 314 orbits. In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
2001-09-01
“Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE… In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with…
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ are reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.
The Definition and Implementation of a Computer Programming Language Based on Constraints.
1980-08-01
though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say… and detecting and resolving conflicts, just as LISP provides certain services such as automatic storage management, which records given data in a… defined: it permits the statement of equalities and some simple arithmetic relationships. An implementation representation is chosen, and LISP code for a
Data Compression Using the Dictionary Approach Algorithm
1990-12-01
Compression Technique: The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel. A slightly modified… [Residue of a reference list and title page: Witten I. H., Neal R. M., and Cleary J. G., “Arithmetic Coding for Data Compression,” Communications of the ACM, June 1987; Ziv J. and Lempel A.…; “Data Compression Using the Dictionary Approach Algorithm,” thesis, Naval Postgraduate School, Monterey, California (AD-A242 539).]
Synergism and Combinatorial Coding for Binary Odor Mixture Perception in Drosophila
Chakraborty, Tuhin Subhra; Siddiqi, Obaid
2016-01-01
Most odors in the natural environment are mixtures of several compounds. Olfactory receptors housed in the olfactory sensory neurons detect these odors and transmit the information to the brain, leading to decision-making. But whether the olfactory system detects the ingredients of a mixture separately or treats mixtures as different entities is not well understood. Using Drosophila melanogaster as a model system, we have demonstrated that fruit flies perceive binary odor mixtures in a manner that is heavily dependent on both the proportion and the degree of dilution of the components, suggesting a combinatorial coding at the peripheral level. This coding strategy appears to be receptor specific and is independent of interneuronal interactions. PMID:27588303
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and prematurity. Improving the performance of GA is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that the CAC performs quite well and that the CAC10-GA outperforms the AC10-GA.
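For reference, the plain real-coded arithmetic crossover that the compound operator builds on looks as follows; this Python sketch shows only the baseline operator, not the paper's compound variant.

import random

def arithmetic_crossover(p1, p2):
    """Blend two real-coded parents into two children with a random mixing weight."""
    lam = random.random()   # mixing coefficient in [0, 1]
    c1 = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    c2 = [(1 - lam) * a + lam * b for a, b in zip(p1, p2)]
    return c1, c2

child1, child2 = arithmetic_crossover([1.0, 4.0], [3.0, 0.0])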
An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.
1989-01-01
An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite-precision arithmetic.
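The machinery underneath is the classical three-term recurrence for the characteristic polynomial of a symmetric tridiagonal matrix. The Python sketch below is the sequential Sturm-count version only; the paper's contribution is evaluating this recurrence in parallel on a binary tree.

def sturm_count(d, e, lam):
    """Number of eigenvalues of tridiag(diagonal d, off-diagonal e) below lam."""
    count, p_prev, p = 0, 1.0, d[0] - lam
    if p < 0:
        count += 1
    for k in range(1, len(d)):
        # p_k(lam) = (d_k - lam) p_{k-1}(lam) - e_{k-1}^2 p_{k-2}(lam)
        p_prev, p = p, (d[k] - lam) * p - e[k - 1] ** 2 * p_prev
        if p == 0 or p * p_prev < 0:   # crude sign-change test; production codes guard over/underflow
            count += 1
    return count

def bisect_eigenvalue(d, e, lo, hi, index, tol=1e-12):
    """Bisect on the Sturm count to isolate the (index+1)-th smallest eigenvalue in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) > index:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)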
Ando, S; Sekine, S; Mita, M; Katsuo, S
1989-12-15
An architecture and algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed, based on residue arithmetic. The proposed system is capable of processing all elements of the matrices in parallel, utilizing the information-retrieving ability of optical Fourier processors. The employment of OFFs enables bidirectional data flow, leading to a simpler architecture, and the burden of residue-to-decimal (or residue-to-binary) conversion on operation time can be largely reduced by processing all elements in parallel. The calculated characteristics of operation time suggest a promising use of the system for real-time 2-D linear transforms.
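The residue-arithmetic idea translates directly into code: multiply channelwise modulo pairwise-coprime moduli (each modulus playing the role of one parallel optical channel) and reconstruct with the Chinese Remainder Theorem. A Python sketch with illustrative moduli:

from math import prod

MODULI = (5, 7, 9, 11)           # pairwise coprime; dynamic range M = 3465

def to_residues(x):
    return tuple(x % m for m in MODULI)

def residue_mul(a, b):
    # carry-free: each channel multiplies independently modulo its own modulus
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_residues(r):
    M = prod(MODULI)
    total = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        total += ri * Mi * pow(Mi, -1, m)   # CRT reconstruction
    return total % M

assert from_residues(residue_mul(to_residues(53), to_residues(41))) == 53 * 41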
Numerical ‘health check’ for scientific codes: the CADNA approach
NASA Astrophysics Data System (ADS)
Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.
2007-04-01
Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
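CADNA itself implements stochastic (CESTAC-style) arithmetic; the Python sketch below is only a crude illustration of the failure mode such a health check targets, plus a naive sampling of input perturbations, and should not be read as the library's method.

import random

print((1e16 + 1.0) - 1e16)          # prints 0.0, not 1.0: the added 1.0 is silently lost

ULP = 2.220446049250313e-16         # machine epsilon for float64

def perturbed_samples(f, x, n=10):
    # Re-run f with random last-bit-scale input perturbations; a wide spread
    # in the results flags digits that do not survive round-off.
    return [f(x * (1.0 + random.choice((-1.0, 1.0)) * ULP)) for _ in range(n)]

samples = perturbed_samples(lambda x: (x + 1.0) - x, 1e16)
print(min(samples), max(samples))   # the spread exposes the unstable digits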
Context-Aware Local Binary Feature Learning for Face Recognition.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2018-05-01
In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.
Gene-specific cell labeling using MiMIC transposons.
Gnerer, Joshua P; Venken, Koen J T; Dierick, Herman A
2015-04-30
Binary expression systems such as GAL4/UAS, LexA/LexAop and QF/QUAS have greatly enhanced the power of Drosophila as a model organism by allowing spatio-temporal manipulation of gene function as well as cell and neural circuit function. Tissue-specific expression of these heterologous transcription factors relies on random transposon integration near enhancers or promoters that drive the binary transcription factor embedded in the transposon. Alternatively, gene-specific promoter elements are directly fused to the binary factor within the transposon followed by random or site-specific integration. However, such insertions do not consistently recapitulate endogenous expression. We used Minos-Mediated Integration Cassette (MiMIC) transposons to convert host loci into reliable gene-specific binary effectors. MiMIC transposons allow recombinase-mediated cassette exchange to modify the transposon content. We developed novel exchange cassettes to convert coding intronic MiMIC insertions into gene-specific binary factor protein-traps. In addition, we expanded the set of binary factor exchange cassettes available for non-coding intronic MiMIC insertions. We show that binary factor conversions of different insertions in the same locus have indistinguishable expression patterns, suggesting that they reliably reflect endogenous gene expression. We show the efficacy and broad applicability of these new tools by dissecting the cellular expression patterns of the Drosophila serotonin receptor gene family.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), in which the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
Binary Multidimensional Scaling for Hashing.
Huang, Yameng; Lin, Zhouchen
2017-10-04
Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted this task, the results are not satisfactory, mainly due to oversimplified models. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online modes. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of the hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between the current query example and those we have already learned, rather than only focusing on the current data chunk. This is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups; each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run-length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
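The significance-map stage is easy to make concrete. Below is a minimal Python sketch (not the author's code) of generating a binary significance map from thresholded coefficients and run-length encoding it; the threshold here is a fixed assumption rather than one derived from an energy packing efficiency.

```python
import numpy as np

def significance_map_rle(coeffs, threshold):
    # Binary significance map: 1 where |coefficient| >= threshold.
    sig_map = (np.abs(coeffs) >= threshold).astype(np.uint8)
    # Run-length encode the map as (bit, run_length) pairs.
    runs, prev, count = [], sig_map[0], 1
    for bit in sig_map[1:]:
        if bit == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            prev, count = bit, 1
    runs.append((int(prev), count))
    significant = coeffs[sig_map == 1]  # kept in direct binary form
    return runs, significant

coeffs = np.array([0.02, 0.9, -1.4, 0.01, 0.0, 2.2])
runs, sig = significance_map_rle(coeffs, threshold=0.5)
print(runs)  # [(0, 1), (1, 2), (0, 2), (1, 1)]
print(sig)   # [ 0.9 -1.4  2.2]
```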
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Coding/modulation trade-offs for Shuttle wideband data links
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.; Trumpis, B. D.
1974-01-01
This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.
Direct-Sequence Spread Spectrum System
1990-06-01
by directly modulating a conventional narrowband frequency-modulated (FM) carrier by a high rate digital code. The direct modulation is binary phase ...specification of the DSSS system will not be developed. The results of the evaluation phase of this research will be compared against theoretical...spread spectrum is called binary phase-shift keying (BPSK). BPSK is a modulation in which a binary "1" represents a 0-degree relative phase
Numerical Simulations of Dynamical Mass Transfer in Binaries
NASA Astrophysics Data System (ADS)
Motl, P. M.; Frank, J.; Tohline, J. E.
1999-05-01
We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow, and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.
Fault-Tolerant Coding for State Machines
NASA Technical Reports Server (NTRS)
Naegle, Stephanie Taft; Burke, Gary; Newell, Michael
2008-01-01
Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state number in sequence; an eight-state machine, for example, needs a three-bit count. In one-hot encoding, N bits are used to represent N states: all except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
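A hedged sketch of the two encodings under comparison, in Python for brevity (the schemes themselves target flip-flop registers in FPGAs and ASICs, not software):

```python
def binary_encoding(n_states):
    """Binary count encoding: ceil(log2(N)) bits per state."""
    width = max(1, (n_states - 1).bit_length())
    return [format(i, f'0{width}b') for i in range(n_states)]

def one_hot_encoding(n_states):
    """One-hot encoding: N bits per state, exactly one bit set."""
    return [format(1 << i, f'0{n_states}b') for i in range(n_states)]

# Eight-state example: binary uses 3 bits, one-hot needs 8.
print(binary_encoding(8))   # ['000', '001', ..., '111']
print(one_hot_encoding(8))  # ['00000001', '00000010', ...]

# With binary encoding of a fully defined 2^k-state machine, any
# single-event upset still lands on a defined state. With one-hot,
# a flipped bit yields a word with zero or two ones, an undefined
# pattern that the decode logic must detect to avoid a false state.
```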
NASA Astrophysics Data System (ADS)
Bhattachryya, Arunava; Kumar Gayen, Dilip; Chattopadhyay, Tanay
2013-04-01
An all-optical 4-bit binary to binary-coded-decimal (BCD) converter is proposed and described in this manuscript, built from semiconductor optical amplifier (SOA)-assisted Sagnac interferometric switches. The paper describes an all-optical conversion scheme using a set of all-optical switches. BCD is common in computer systems that display numeric values, especially in those consisting solely of digital logic with no microprocessor. In many personal computers, the basic input/output system (BIOS) keeps the date and time in BCD format. The operation of the circuit is studied theoretically and analyzed through numerical simulations. The model accounts for the SOA small-signal gain, linewidth enhancement factor and carrier lifetime, the switching pulse energy and width, and the Sagnac loop asymmetry. By undertaking a detailed numerical simulation, the influence of these key parameters on the metrics that determine the quality of switching is thoroughly investigated.
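In software, the underlying binary-to-BCD conversion can be sketched with the classic shift-and-add-3 ("double dabble") algorithm. The Python sketch below is only a reference model for the conversion logic, not the optical SOA-based implementation described in the paper.

```python
def binary_to_bcd(value, digits=2):
    """Convert an unsigned integer to packed BCD via the classic
    shift-and-add-3 ("double dabble") algorithm: before each left
    shift, add 3 to any BCD digit that is 5 or more."""
    width = max(value.bit_length(), 1)
    bcd = 0
    for i in range(width - 1, -1, -1):
        for d in range(digits):
            if (bcd >> (4 * d)) & 0xF >= 5:
                bcd += 3 << (4 * d)
        bcd = (bcd << 1) | ((value >> i) & 1)  # shift in the next bit
    return bcd

# 4-bit example: 0b1111 (decimal 15) -> packed BCD 0001 0101.
print(format(binary_to_bcd(0b1111), '08b'))  # 00010101
```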
Thermal Timescale Mass Transfer In Binary Population Synthesis
NASA Astrophysics Data System (ADS)
Justham, S.; Kolb, U.
2004-07-01
Studies of binary evolution have, until recently, neglected thermal timescale mass transfer (TTMT). Recent work has suggested that this previously poorly studied area is crucial in the understanding of systems across the compact binary spectrum. We use the state-of-the-art binary population synthesis code BiSEPS (Willems and Kolb, 2002, MNRAS 337, 1004-1016). However, the present treatment of TTMT is incomplete due to the nonlinear behaviour of stars in their departure from gravothermal 'equilibrium'. Here we show work that should update the ultrafast stellar evolution algorithms within BiSEPS to make it the first pseudo-analytic code that can follow TTMT properly. We have generated fits to a set of over 300 Case B TTMT sequences with a range of intermediate-mass donors. These fits produce very good first approximations to both HR diagrams and mass-transfer rates (see figures 1 and 2), which we hope to improve and extend later. They are already a significant improvement over the previous fits.
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader, which was originally developed for workstations. This paper provides a high-level description of dld and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it describes how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
Can Binary Population Synthesis Models Be Tested With Hot Subdwarfs ?
NASA Astrophysics Data System (ADS)
Kopparapu, Ravi Kumar; Wade, R. A.; O'Shaughnessy, R.
2007-12-01
Models of binary star interactions have been successful in explaining the origin of field hot subdwarf (sdB) stars in short-period systems. The hydrogen envelopes around these core He-burning stars are removed in a "common envelope" evolutionary phase. Reasonably clean samples of short-period sdB+WD or sdB+dM systems exist that allow the common envelope ejection efficiency to be estimated for wider use in binary population synthesis (BPS) codes. About one-third of known sdB stars, however, are found in longer-period systems with a cool G or K star companion. These systems may have formed through Roche-lobe overflow (RLOF) mass transfer from the present sdB to its companion. They have received less attention, because the existing catalogues are believed to have severe selection biases against these systems, and because their long, slow orbits are difficult to measure. Are these known sdB + cool-star systems worth intense observational effort? That is, can they be used to make a valid and useful test of the RLOF process in BPS codes? We use the Binary Stellar Evolution (BSE) code of Hurley et al. (2002), mapping sets of initial binaries into present-day binaries that include sdBs, and distinguishing "observable" sdBs from "hidden" ones. We aim to find out whether (1) the existing catalogues of sdBs are sufficiently fair samples of the kinds of sdB binaries that theory predicts, to allow testing or refinement of RLOF models; or instead whether (2) large predicted hidden populations mandate the construction of new catalogues, perhaps using wide-field imaging surveys such as 2MASS, SDSS, and GALEX. This work has been partially supported by NASA grant NNG05GE11G and NSF grants PHY 03-26281, PHY 06-00953 and PHY 06-53462. This work is also supported by the Center for Gravitational Wave Physics, which is supported by the National Science Foundation under cooperative agreement PHY 01-14375.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end MATLAB packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low density parity check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel Coding: Binary Convolution Code or LDPC; Packet Length: 0 - 2^16-1 bytes; Coding Rate: 1/2, 2/3, 3/4, 5/6; MIMO Channel Training Length: 0 - 4 symbols
Formation of the first three gravitational-wave observations through isolated binary evolution
Stevenson, Simon; Vigna-Gómez, Alejandro; Mandel, Ilya; Barrett, Jim W.; Neijssel, Coenraad J.; Perkins, David; de Mink, Selma E.
2017-01-01
During its first four months of taking data, Advanced LIGO has detected gravitational waves from two binary black hole mergers, GW150914 and GW151226, along with the statistically less significant binary black hole merger candidate LVT151012. Here we use the rapid binary population synthesis code COMPAS to show that all three events can be explained by a single evolutionary channel—classical isolated binary evolution via mass transfer including a common envelope phase. We show all three events could have formed in low-metallicity environments (Z=0.001) from progenitor binaries with typical total masses ≳160M⊙, ≳60M⊙ and ≳90M⊙, for GW150914, GW151226 and LVT151012, respectively. PMID:28378739
Block-based scalable wavelet image codec
NASA Astrophysics Data System (ADS)
Bao, Yiliang; Kuo, C.-C. Jay
1999-10-01
This paper presents a high performance block-based wavelet image coder which is designed to have very low implementation complexity yet rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block only consists of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has very good coding performance even when the block size is (16,16).
Analysis and Defense of Vulnerabilities in Binary Code
2008-09-29
We demonstrate our techniques by automatically generating input filters from vulnerable binary programs. [Record fragment: the remaining extracted text is front matter, consisting of acknowledgments and table-of-contents entries covering the Vine intermediate language, normalized memory, and the traditional weakest-precondition semantics over a guarded command language.]
Distribution of compact object mergers around galaxies
NASA Astrophysics Data System (ADS)
Bulik, T.; Belczyński, K.; Zbijewski, W.
1999-09-01
Compact object mergers are one of the favoured models of gamma ray bursts (GRB). Using a binary population synthesis code we calculate properties of the population of compact object binaries; e.g. lifetimes and velocities. We then propagate them in galactic potentials and find their distribution in relation to the host.
A Biosequence-based Approach to Software Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oehmen, Christopher S.; Peterson, Elena S.; Phillips, Aaron R.
For many applications, it is desirable to have some process for recognizing when software binaries are closely related without relying on them to be identical or have identical segments. Some examples include monitoring utilization of high performance computing centers or service clouds, detecting freeware in licensed code, and enforcing application whitelists. But doing so in a dynamic environment is a nontrivial task because most approaches to software similarity require extensive and time-consuming analysis of a binary, or they fail to recognize executables that are similar but nonidentical. Presented herein is a novel biosequence-based method for quantifying similarity of executable binaries. Using this method, it is shown in an example application on large-scale multi-author codes that 1) the biosequence-based method has a statistical performance in recognizing and distinguishing between a collection of real-world high performance computing applications better than 90% of ideal; and 2) an example of using family tree analysis to tune identification for a code subfamily can achieve better than 99% of ideal performance.
Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results
NASA Astrophysics Data System (ADS)
Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.
2005-12-01
We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and M-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15, with even-weight words congruent to zero modulo four. Hard- and soft-decision decoding is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, while all the rest have weights greater than or equal to 16.
Advances in Black-Hole Mergers: Spins and Unequal Masses
NASA Technical Reports Server (NTRS)
Kelly, Bernard
2007-01-01
The last two years have seen incredible development in numerical relativity: from fractions of an orbit, evolutions of an equal-mass binary have reached multiple orbits, and convergent gravitational waveforms have been produced by several research groups and numerical codes. We are now able to move our attention from pure numerics to astrophysics, and address scenarios relevant to current and future gravitational-wave detectors. Over the last 12 months at NASA Goddard, we have extended the accuracy of our Hahndol code, and used it to move toward these goals. We have achieved high-accuracy simulations of black-hole binaries of low initial eccentricity, with enough orbits of inspiral before merger to allow us to produce hybrid waveforms that accurately reflect the entire lifetime of the BH binary. We are extending this work, looking at the effects of unequal masses and spins.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^−9 at the expense of a rate of read losses just in the order of 10^−6. PMID:26492348
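As a toy illustration of barcode demultiplexing with error correction, the Python sketch below uses a simple nearest-codeword decoder over a tiny hypothetical codebook; the paper's actual scheme decodes quaternary LDPC codes, which this does not attempt to reproduce.

```python
def hamming(a, b):
    """Hamming distance between two equal-length barcode strings."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(read_barcode, codebook, max_dist=1):
    """Assign a sequenced barcode to the nearest codeword; reject the
    read if the best match is ambiguous or exceeds max_dist (a toy
    stand-in for LDPC decoding of quaternary barcodes)."""
    dists = sorted((hamming(read_barcode, c), c) for c in codebook)
    best, codeword = dists[0]
    if best > max_dist or (len(dists) > 1 and dists[1][0] == best):
        return None  # read loss instead of risking misidentification
    return codeword

codebook = ["ACGT", "TTAG", "GGCC"]  # hypothetical sample barcodes
print(demultiplex("ACGA", codebook))  # 'ACGT' (one mismatch corrected)
print(demultiplex("AAAA", codebook, max_dist=1))  # None (rejected)
```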
NASA Astrophysics Data System (ADS)
Yakut, Kadri
2015-08-01
We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component. Archival Kepler satellite data were combined with newly obtained spectroscopic data from the 4.2 m William Herschel Telescope (WHT). This allowed us to determine rather precise orbital and physical parameters of this long-period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals. We modelled the detached binary system assuming non-conservative evolution models with the Cambridge STARS (TWIN) code.
Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
Report presents an extension of the theory of the redundancy of binary prefix codes of Huffman type, including derivation of a variety of bounds expressed in terms of the entropy of the source and the size of the alphabet. Recent developments yielded bounds on the redundancy of a Huffman code in terms of the probabilities of various components in the source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
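For concreteness, the following Python sketch builds an optimal binary Huffman code and computes its redundancy (average codeword length minus source entropy); the dyadic example attains redundancy exactly 0, while the bounds discussed in the report govern how far from 0 the redundancy can stray for general sources. The sketch is illustrative and not taken from the report.

```python
import heapq, math

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary Huffman code."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        # Merge the two least probable subtrees; every symbol in them
        # moves one level deeper, so its codeword grows by one bit.
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]
lengths = huffman_lengths(probs)
entropy = -sum(p * math.log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
print(avg_len - entropy)  # redundancy; 0.0 for this dyadic source
```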
Predicting Arithmetic Abilities: The Role of Preparatory Arithmetic Markers and Intelligence
ERIC Educational Resources Information Center
Stock, Pieter; Desoete, Annemie; Roeyers, Herbert
2009-01-01
Arithmetic abilities acquired in kindergarten are found to be strong predictors for later deficient arithmetic abilities. This longitudinal study (N = 684) was designed to examine if it was possible to predict the level of children's arithmetic abilities in first and second grade from their performance on preparatory arithmetic abilities in…
Contamination of RR Lyrae stars from Binary Evolution Pulsators
NASA Astrophysics Data System (ADS)
Karczmarek, Paulina; Pietrzyński, Grzegorz; Belczyński, Krzysztof; Stępień, Kazimierz; Wiktorowicz, Grzegorz; Iłkiewicz, Krystian
2016-06-01
A Binary Evolution Pulsator (BEP) is an extremely low-mass member of a binary system which pulsates as a result of a former mass transfer to its companion. A BEP mimics RR Lyrae-type pulsations but has a different internal structure and evolution history. We present possible evolution channels to produce BEPs, and evaluate the contamination value, i.e., how many objects classified as RR Lyrae stars can be undetected BEPs. In this analysis we use the population synthesis code StarTrack.
Photometric Mapping of Two Kepler Eclipsing Binaries: KIC11560447 and KIC8868650
NASA Astrophysics Data System (ADS)
Senavci, Hakan Volkan; Özavci, I.; Isik, E.; Hussain, G. A. J.; O'Neal, D. O.; Yilmaz, M.; Selam, S. O.
2018-04-01
We present the surface maps of two eclipsing binary systems, KIC11560447 and KIC8868650, using Kepler light curves covering approximately 4 years. We use the code DoTS, which is based on the maximum entropy method, to reconstruct the surface maps. We also perform numerical tests of DoTS to check the ability of the code to track phase migration of spot clusters. The resulting latitudinally averaged maps of KIC11560447 show that spots drift towards increasing orbital longitudes, while spots on KIC8868650 show an overall drift towards decreasing latitudes.
Simulations of binary black hole mergers
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2017-01-01
Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
Polar codes for achieving the classical capacity of a quantum channel
NASA Astrophysics Data System (ADS)
Guha, Saikat; Wilde, Mark
2012-02-01
We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by "channel combining" and "channel splitting," in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
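The classical polar transform underlying these codes is compact enough to sketch. Below is an assumed minimal Python implementation of Arikan's encoder, x = u·F^{⊗n} over GF(2) with F = [[1,0],[1,1]], realized as a butterfly network with O(N log N) XORs; the quantum successive cancellation decoder is, of course, beyond a few lines.

```python
def polar_encode(u):
    """Polar encoding over GF(2): apply the kernel F = [[1,0],[1,1]]
    tensored with itself n times, via an in-place butterfly.
    The block length len(u) must be a power of two."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # XOR-combine paired sub-blocks
        step *= 2
    return x

print(polar_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```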
Optical programmable Boolean logic unit.
Chattopadhyay, Tanay
2011-11-10
Logic units are the building blocks of many important computational operations like arithmetic, multiplexing-demultiplexing, radix conversion, parity checking and generation, etc. Multifunctional logic operation is very much essential in this respect. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input, without changing the circuit design. This circuit has two outputs; one output is complementary to the other, hence no loss of data can occur. The circuit is designed around a 2×2 polarization-independent optical crossbar switch. The performance of the proposed circuit has been evaluated through numerical simulations. The binary logical states (0, 1) are represented by the absence of light (null) and the presence of light, respectively.
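The programming idea can be mirrored in software: a 4-bit program word is itself the truth table of the selected two-input Boolean function. A hypothetical Python sketch (names and encodings are assumptions, not the paper's optical design):

```python
def programmable_logic_unit(a, b, program):
    """Select one of the 16 two-input Boolean functions with a 4-bit
    program word: bit (2*a + b) of `program` is the output, so the
    program word is exactly the function's truth table."""
    out = (program >> (2 * a + b)) & 1
    return out, 1 - out  # the proposed circuit also emits the complement

AND, OR, XOR = 0b1000, 0b1110, 0b0110  # truth tables as program words
for a in (0, 1):
    for b in (0, 1):
        print(a, b, programmable_logic_unit(a, b, XOR)[0])  # XOR table
```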
Development of common neural representations for distinct numerical problems
Chang, Ting-Ting; Rosenberg-Lee, Miriam; Metcalfe, Arron W. S.; Chen, Tianwen; Menon, Vinod
2015-01-01
How the brain develops representations for abstract cognitive problems is a major unaddressed question in neuroscience. Here we tackle this fundamental question using arithmetic problem solving, a cognitive domain important for the development of mathematical reasoning. We first examined whether adults demonstrate common neural representations for addition and subtraction problems, two complementary arithmetic operations that manipulate the same quantities. We then examined how the common neural representations for the two problem types change with development. Whole-brain multivoxel representational similarity (MRS) analysis was conducted to examine common coding of addition and subtraction problems in children and adults. We found that adults exhibited significant levels of MRS between the two problem types, not only in the intra-parietal sulcus (IPS) region of the posterior parietal cortex (PPC), but also in ventral temporal-occipital, anterior temporal and dorsolateral prefrontal cortices. Relative to adults, children showed significantly reduced levels of MRS in these same regions. In contrast, no brain areas showed significantly greater MRS between problem types in children. Our findings provide novel evidence that the emergence of arithmetic problem solving skills from childhood to adulthood is characterized by maturation of common neural representations between distinct numerical operations, and involve distributed brain regions important for representing and manipulating numerical quantity. More broadly, our findings demonstrate that representational analysis provides a powerful approach for uncovering fundamental mechanisms by which children develop proficiencies that are a hallmark of human cognition. PMID:26160287
Error Correcting Codes and Related Designs
1990-09-30
Theory, IT-37 (1991), 1222-1224. 6. Codes and designs, existence and uniqueness, Discrete Math., to appear. 7. (with R. Brualdi and N. Cai), Orphan structure of the first-order Reed-Muller codes, Discrete Math., to appear. 8. (with J. H. Conway and N. J. A. Sloane), The binary self-dual codes of length up...18, 1988. 4. "Codes and Designs," Mathematics Colloquium, Technion, Haifa, Israel, March 6, 1989. 5. "On the Covering Radius of Codes," Discrete Math. Group
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
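A minimal worked example of the idea, using the (7,4) Hamming code's parity-check matrix (chosen here for illustration; the report's analysis is general): the source block is treated as an error pattern, and only its syndrome is stored.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the
# binary representation of i+1 (row 0 is the least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(source_block):
    """Treat the 7-bit source block as an error pattern; its 3-bit
    syndrome is the compressed output (7 bits -> 3 bits)."""
    return H @ source_block % 2

def decompress(syndrome):
    """Recover the source block as the minimum-weight pattern with
    that syndrome (here at most one bit set); this is distortionless
    only for sparse sources whose blocks are correctable patterns."""
    block = np.zeros(7, dtype=int)
    s = int(''.join(map(str, syndrome[::-1])), 2)  # column index + 1
    if s:
        block[s - 1] = 1
    return block

x = np.array([0, 0, 0, 0, 1, 0, 0])  # sparse source block
print(compress(x), decompress(compress(x)))  # [1 0 1] and x again
```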
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such an embedding into binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown the superior performance of the proposed DLLH over state-of-the-art approaches.
Design of RISC Processor Using VHDL and Cadence
NASA Astrophysics Data System (ADS)
Moslehpour, Saeid; Puliroju, Chandrasekhar; Abu-Aisheh, Akram
This project deals with the development of a basic RISC processor. The processor is designed with a basic architecture consisting of internal modules: clock generator, memory, program counter, instruction register, accumulator, arithmetic and logic unit, and decoder. This processor is mainly intended for simple general-purpose tasks such as arithmetic operations, and it can be further developed into a general-purpose processor by increasing the size of the instruction register. The processor is designed in VHDL using Xilinx 8.1i. The present project also serves as an application of the knowledge gained from past studies of the PSPICE program. The study will show how PSPICE can be used to simplify massive complex circuits designed in VHDL synthesis. The purpose of the project is to explore the designed RISC model piece by piece, examine and understand the input/output pins, and show how the VHDL synthesis code can be converted to a simplified PSPICE model. The project will also serve as a collection of various research materials about the pieces of the circuit.
A novel architecture of non-volatile magnetic arithmetic logic unit using magnetic tunnel junctions
NASA Astrophysics Data System (ADS)
Guo, Wei; Prenat, Guillaume; Dieny, Bernard
2014-04-01
Complementary metal-oxide-semiconductor (CMOS) technology is facing increasingly difficult obstacles such as power consumption and interconnection delay. Novel hybrid technologies and architectures are being investigated with the aim of circumventing some of these limits. In particular, hybrid CMOS/magnetic technology based on magnetic tunnel junctions (MTJs) is considered a very promising approach thanks to the full compatibility of MTJs with CMOS technology. By tightly merging conventional electronics with magnetism, both logic and memory functions can be implemented in the same device. As a result, non-volatility is brought directly into logic circuits, yielding significant improvements in device performance as well as new functionalities. We have conceived an innovative methodology to construct non-volatile magnetic arithmetic logic units (MALUs) combining spin-transfer torque MTJs with MOS transistors. The present 4-bit MALU utilizes 4 MTJ pairs to store its operation code (opcode). Its operation and performance have been confirmed and evaluated through electrical simulations.
The fast decoding of Reed-Solomon codes using number theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Welch, L. R.; Truong, T. K.
1976-01-01
It is shown that Reed-Solomon (RS) codes can be encoded and decoded by using a fast Fourier transform (FFT) algorithm over finite fields. The arithmetic utilized to perform these transforms requires only integer additions, circular shifts, and a minimum number of integer multiplications. The computing time of this transform encoder-decoder for RS codes is less than that of the standard method for RS codes. More generally, the field GF(q) is also considered, where q is a prime of the form K·2^n + 1 with K and n integers. GF(q) can be used to decode very long RS codes by an efficient FFT algorithm with an improvement in the number of symbols. It is shown that a radix-8 FFT algorithm over GF(q^2) can be utilized to encode and decode very long RS codes with a large number of symbols. For eight symbols in GF(q^2), this transform over GF(q^2) can be made simpler than any other known number theoretic transform with a similar capability. Of special interest is the decoding of a 16-tuple RS code with four errors.
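A small self-contained sketch of such a number theoretic transform, over GF(257) with 257 = 1·2^8 + 1 (a naive O(N²) version for clarity; the radix-2 and radix-8 FFT factorizations the paper relies on apply because the transform length divides q − 1). The choice of modulus and root is an assumption for illustration.

```python
# Number theoretic transform over GF(q), q = K*2^n + 1 (here q = 257,
# K = 1, n = 8): only integer multiplies and adds mod q are needed.
q = 257  # prime of the form K*2^n + 1
g = 3    # a primitive root mod 257

def ntt(a, root, q):
    """Naive O(N^2) transform; FFT factorizations apply since the
    transform length divides q - 1 = 2^8."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
            for i in range(n)]

n = 8
w = pow(g, (q - 1) // n, q)   # primitive n-th root of unity mod q
a = [1, 2, 3, 4, 0, 0, 0, 0]
A = ntt(a, w, q)
w_inv = pow(w, q - 2, q)      # inverses via Fermat's little theorem
n_inv = pow(n, q - 2, q)
back = [(x * n_inv) % q for x in ntt(A, w_inv, q)]
print(back)  # [1, 2, 3, 4, 0, 0, 0, 0]
```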
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablimit, Iminhaji; Maeda, Keiichi; Li, Xiang-Dong
Binary population synthesis (BPS) studies provide a comprehensive way to understand the evolution of binaries and their end products. Close white dwarf (WD) binaries have crucial characteristics for examining the influence of unresolved physical parameters on binary evolution. In this paper, we perform Monte Carlo BPS simulations, investigating the population of WD/main-sequence (WD/MS) binaries and double WD binaries using a publicly available binary star evolution code under 37 different assumptions for key physical processes and binary initial conditions. We considered different combinations of the binding energy parameter (λ_g: considering gravitational energy only; λ_b: considering both gravitational energy and internal energy; and λ_e: considering gravitational energy, internal energy, and entropy of the envelope, with values derived from the MESA code), CE efficiency, critical mass ratio, initial primary mass function, and metallicity. We find that a larger number of post-CE WD/MS binaries in tight orbits are formed when the binding energy parameters are set by λ_e than in those cases where other prescriptions are adopted. We also determine the effects of the other input parameters on the orbital periods and mass distributions of post-CE WD/MS binaries. As they contain at least one CO WD, double WD systems that evolved from WD/MS binaries may explode as type Ia supernovae (SNe Ia) via merging. In this work, we also investigate the frequency of two-WD mergers and compare it to the SNe Ia rate. The calculated Galactic SNe Ia rate with λ = λ_e is comparable to the observed SNe Ia rate, ∼8.2 × 10^−5 yr^−1 to ∼4 × 10^−3 yr^−1 depending on the other BPS parameters, if a DD system does not require a mass ratio higher than ∼0.8 to become an SN Ia. On the other hand, a violent merger scenario, which requires the combined mass of two CO WDs ≥ 1.6 M_⊙ and a mass ratio > 0.8, results in a much lower SNe Ia rate than is observed.
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.
1977-01-01
An upper bound on the rate of a binary code as a function of minimum code distance (using the Hamming metric) is derived from the Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of the Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
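The per-observation kernel is just Kepler's equation, E − e·sin(E) = M. Below is a minimal CPU-side Python sketch of a Newton-iteration solver for it; the GPU code evaluates equivalent logic in CUDA across thousands of models in parallel, and the tolerance and starting guess here are illustrative assumptions.

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric
    anomaly E by Newton iteration; this is the per-observation kernel
    that a GPU implementation evaluates in parallel across models."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

M = 2.0  # mean anomaly, from the observation time and orbital period
e = 0.3  # orbital eccentricity
E = solve_kepler(M, e)
print(E, E - e * math.sin(E) - M)  # residual ~ 0
```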
SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
Coding Local and Global Binary Visual Features Extracted From Video Sequences.
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
Coding Local and Global Binary Visual Features Extracted From Video Sequences
NASA Astrophysics Data System (ADS)
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.
The NASA Neutron Star Grand Challenge: The coalescences of Neutron Star Binary System
NASA Astrophysics Data System (ADS)
Suen, Wai-Mo
1998-04-01
NASA funded a Grand Challenge Project (9/1996-1999) for the development of a multi-purpose numerical treatment for relativistic astrophysics and gravitational wave astronomy. The coalescence of binary neutron stars is chosen as the model problem for the code development. The institutions involved are Argonne Lab, Livermore Lab, the Max Planck Institute at Potsdam, Stony Brook, the University of Illinois, and Washington University. We have recently succeeded in constructing a highly optimized parallel code which is capable of solving the full Einstein equations coupled with relativistic hydrodynamics, running at over 50 GFLOPS on a T3E (the second milestone point of the project). We are presently working on the head-on collisions of two neutron stars, and the inclusion of realistic equations of state into the code. The code will be released to the relativity and astrophysics community in April of 1998. With the full dynamics of the spacetime, relativistic hydrodynamics and microphysics all combined into a unified 3D code for the first time, many interesting large scale calculations in general relativistic astrophysics can now be carried out on massively parallel computers.
Design of a Microprogram Control Unit with Concurrent Error Detection.
1984-08-01
Office of Naval Research, contract N00039-80-C-0556. …However, the CED concept is mainly applied to various codes, data transmission, and simple functional units, such as arithmetic units. Little work has been done in the control unit area. Previous work is primarily in the use of classical self-checking circuits, using bit slicing, parity, and m-out-of-n…
BIT BY BIT: A Game Simulating Natural Language Processing in Computers
ERIC Educational Resources Information Center
Kato, Taichi; Arakawa, Chuichi
2008-01-01
BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
Quality of Arithmetic Education for Children with Cerebral Palsy
ERIC Educational Resources Information Center
Jenks, Kathleen M.; de Moor, Jan; van Lieshout, Ernest C. D. M.; Withagen, Floortje
2010-01-01
The aim of this exploratory study was to investigate the quality of arithmetic education for children with cerebral palsy. The use of individual educational plans, amount of arithmetic instruction time, arithmetic instructional grouping, and type of arithmetic teaching method were explored in three groups: children with cerebral palsy (CP) in…
Wong, Terry Tin-Yau
2017-12-01
The current study examined the unique and shared contributions of arithmetic operation understanding and numerical magnitude representation to children's mathematics achievement. A sample of 124 fourth graders was tested on their arithmetic operation understanding (as reflected by their understanding of arithmetic principles and the knowledge about the application of arithmetic operations) and their precision of rational number magnitude representation. They were also tested on their mathematics achievement and arithmetic computation performance as well as the potential confounding factors. The findings suggested that both arithmetic operation understanding and numerical magnitude representation uniquely predicted children's mathematics achievement. The findings highlight the significance of arithmetic operation understanding in mathematics learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Träff, Ulf; Olsson, Linda; Skagerlund, Kenny; Östergren, Rickard
2018-03-01
A modified pathways to mathematics model was used to examine the cognitive mechanisms underlying arithmetic skills in third graders. A total of 269 children were assessed on tasks tapping the four pathways and arithmetic skills. A path analysis showed that symbolic number processing was directly supported by the linguistic and approximate quantitative pathways. The direct contribution of the four pathways to arithmetic proficiency varied; the linguistic pathway supported single-digit arithmetic and word problem solving, whereas the approximate quantitative pathway supported only multi-digit calculation. The spatial processing and verbal working memory pathways supported only arithmetic word problem solving. The notion of hierarchical levels of arithmetic was supported by the results, and the different levels were supported by different constellations of pathways. However, the strongest support for the hierarchical levels of arithmetic was provided by the proximal arithmetic skills. Copyright © 2017 Elsevier Inc. All rights reserved.
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures for the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
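As a reminder of the underlying primitive (the AAC codebooks and hardware architectures themselves are beyond a short example), a generic Huffman code can be built in a few lines. The Python sketch below is a textbook construction, not the paper's FPGA design.

    import heapq
    from collections import Counter

    def huffman_code(data):
        """Return {symbol: bitstring} for a Huffman code built from `data`."""
        freq = Counter(data)
        # Each heap entry: [weight, tie-breaker, {symbol: code-so-far}]
        heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                     # degenerate single-symbol case
            return {s: "0" for s in heap[0][2]}
        while len(heap) > 1:
            w1, _, t1 = heapq.heappop(heap)
            w2, i2, t2 = heapq.heappop(heap)
            for s in t1: t1[s] = "0" + t1[s]   # left branch
            for s in t2: t2[s] = "1" + t2[s]   # right branch
            t1.update(t2)
            heapq.heappush(heap, [w1 + w2, i2, t1])
        return heap[0][2]

    table = huffman_code("abracadabra")
    bits = "".join(table[s] for s in "abracadabra")
    print(table, len(bits), "bits")   # frequent symbols get shorter codes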
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output.
New version program summary
Program title: CADNA_C
Catalogue identifier: AEGQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 60 075
No. of bytes in distributed program, including test data, etc.: 710 781
Distribution format: tar.gz
Programming language: C++
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933
Does the new version supersede the previous version?: No
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs.
Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments.
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
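The random-rounding idea behind Discrete Stochastic Arithmetic can be illustrated in a few lines. The toy Python sketch below is not CADNA, which instruments every operation through overloaded types; it merely mimics the approach, assuming a helper r() that perturbs each intermediate result by one unit in the last place, and estimates the number of exact significant digits from the spread of three runs.

    import math, random, statistics

    def r(x: float) -> float:
        """Toy random rounding: perturb x by one ulp, up or down."""
        return x + math.ulp(x) * random.choice((-1.0, 1.0))

    def sample():
        # A computation with catastrophic cancellation: (a + b) - a, b << a.
        a, b = 1.0e16, 3.14
        return r(r(a + b) - a)

    runs = [sample() for _ in range(3)]      # CADNA-style: several random-rounded runs
    mean = statistics.fmean(runs)
    spread = statistics.stdev(runs) or math.ulp(mean)
    digits = max(0.0, math.log10(abs(mean) / spread)) if mean else 0.0
    # Typically the runs disagree wildly, signalling ~0 exact significant digits.
    print(runs, f"~{digits:.0f} exact significant digits")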
The Base 32 Method: An Improved Method for Coding Sibling Constellations.
ERIC Educational Resources Information Center
Perfetti, Lawrence J. Carpenter
1990-01-01
Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)
Dynamic fisheye grids for binary black hole simulations
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Noble, Scott C.
2014-03-01
We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system, used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system—in which the equations of motion are solved—to the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system’s implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. Like we have done with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.
Emission-line diagnostics of nearby H II regions including interacting binary populations
NASA Astrophysics Data System (ADS)
Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.
2018-06-01
We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the BPASS (Binary Population and Spectral Synthesis) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary-star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/H β ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies, but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of H β emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways, which produce more hot Wolf-Rayet and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.
Extracting the information of coastline shape and its multiple representations
NASA Astrophysics Data System (ADS)
Liu, Ying; Li, Shujun; Tian, Zhen; Chen, Huirong
2007-06-01
According to studying the coastline, a new way of multiple representations is put forward in the paper. That is stimulating human thinking way when they generalized, building the appropriate math model and describing the coastline with graphics, extracting all kinds of the coastline shape information. The coastline automatic generalization will be finished based on the knowledge rules and arithmetic operators. Showing the information of coastline shape by building the curve Douglas binary tree, it can reveal the shape character of coastline not only microcosmically but also macroscopically. Extracting the information of coastline concludes the local characteristic point and its orientation, the curve structure and the topology trait. The curve structure can be divided the single curve and the curve cluster. By confirming the knowledge rules of the coastline generalization, the generalized scale and its shape parameter, the coastline automatic generalization model is established finally. The method of the multiple scale representation of coastline in this paper has some strong points. It is human's thinking mode and can keep the nature character of the curve prototype. The binary tree structure can control the coastline comparability, avoid the self-intersect phenomenon and hold the unanimous topology relationship.
High resolution time interval counter
Condreva, Kenneth J.
1994-01-01
A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.
High resolution time interval counter
Condreva, K.J.
1994-07-26
A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
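The arithmetic implied by these figures is simple to check. The sketch below is a hypothetical reconstruction, not taken from the patent: it assumes the stretched start and stop fractions combine with the coarse count in the stated way, and merely shows why an 8 MHz clock (125 ns period) with a 64x pulse stretcher yields the quoted ~2 ns resolution.

    T_CLK_NS = 1e9 / 8e6          # 8 MHz clock -> 125 ns per tick
    STRETCH = 64                  # pulse-stretcher gain from the abstract

    def interval_ns(main_count, start_count, stop_count):
        """Hypothetical combination of coarse and stretched fine counts.

        main_count  : whole clock periods between the two pulses
        start_count : ticks measuring the stretched start fraction
        stop_count  : ticks measuring the stretched stop fraction
        """
        fine = (start_count - stop_count) * T_CLK_NS / STRETCH
        return main_count * T_CLK_NS + fine

    print(T_CLK_NS / STRETCH)        # ~1.95 ns: the quoted 2 ns resolution
    print(interval_ns(10, 40, 12))   # e.g. 10 coarse ticks + 54.7 ns of fine time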
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
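Whatever the network that produces the continuous embeddings, the retrieval stage of such hashing methods reduces to sign binarization plus Hamming ranking. The Python sketch below shows that generic final stage only; it is not the HRNH architecture.

    import numpy as np

    def binarize(embeddings: np.ndarray) -> np.ndarray:
        """Sign-threshold continuous embeddings into {0,1} hash codes."""
        return (embeddings > 0).astype(np.uint8)

    def hamming_rank(query: np.ndarray, database: np.ndarray) -> np.ndarray:
        """Return database indices sorted by Hamming distance to the query."""
        dist = np.count_nonzero(database != query, axis=1)   # XOR + popcount
        return np.argsort(dist)

    rng = np.random.default_rng(0)
    db = binarize(rng.standard_normal((1000, 48)))   # 1000 48-bit codes
    q = db[123]                                      # query = a known item
    print(hamming_rank(q, db)[:3])                   # item 123 ranks first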
Memristive effects in oxygenated amorphous carbon nanodevices
NASA Astrophysics Data System (ADS)
Bachmann, T. A.; Koelmans, W. W.; Jonnalagadda, V. P.; Le Gallo, M.; Santini, C. A.; Sebastian, A.; Eleftheriou, E.; Craciun, M. F.; Wright, C. D.
2018-01-01
Computing with resistive-switching (memristive) memory devices has shown much recent progress and offers an attractive route to circumvent the von-Neumann bottleneck, i.e. the separation of processing and memory, which limits the performance of conventional computer architectures. Due to their good scalability and nanosecond switching speeds, carbon-based resistive-switching memory devices could play an important role in this respect. However, devices based on elemental carbon, such as tetrahedral amorphous carbon or ta-C, typically suffer from a low cycling endurance. A material that has proven to be capable of combining the advantages of elemental carbon-based memories with simple fabrication methods and good endurance performance for binary memory applications is oxygenated amorphous carbon, or a-CO x . Here, we examine the memristive capabilities of nanoscale a-CO x devices, in particular their ability to provide the multilevel and accumulation properties that underpin computing type applications. We show the successful operation of nanoscale a-CO x memory cells for both the storage of multilevel states (here 3-level) and for the provision of an arithmetic accumulator. We implement a base-16, or hexadecimal, accumulator and show how such a device can carry out hexadecimal arithmetic and simultaneously store the computed result in the self-same a-CO x cell, all using fast (sub-10 ns) and low-energy (sub-pJ) input pulses.
Implicit Learning of Arithmetic Regularities Is Facilitated by Proximal Contrast
Prather, Richard W.
2012-01-01
Natural number arithmetic is a simple, powerful and important symbolic system. Despite intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learn arithmetic principles via an implicit learning paradigm. Participants learn not by solving arithmetic equations, but through viewing and evaluating example equations, similar to the implicit learning of artificial grammars. We expand this to the symbolic arithmetic system. Specifically, we find that exposure to principle-inconsistent examples facilitates the acquisition of arithmetic principle knowledge if the equations are presented to the learner in a temporally proximate fashion. The results expand on research on the implicit learning of regularities and suggest that contrasting cases, shown to facilitate explicit arithmetic learning, are also relevant to implicit learning of arithmetic. PMID:23119101
Arithmetic Circuit Verification Based on Symbolic Computer Algebra
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Homma, Naofumi; Aoki, Takafumi; Higuchi, Tatsuo
This paper presents a formal approach to verifying arithmetic circuits using symbolic computer algebra. Our method describes arithmetic circuits directly with high-level mathematical objects based on weighted number systems and arithmetic formulae. Such circuit descriptions can be effectively verified by polynomial reduction techniques using Gröbner bases. In this paper, we describe how symbolic computer algebra can be used to describe and verify arithmetic circuits. The advantageous effects of the proposed approach are demonstrated through experimental verification of some arithmetic circuits such as a multiply-accumulator and an FIR filter. The results show that the proposed approach has a definite possibility of verifying practical arithmetic circuits.
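A minimal worked instance of the Gröbner-basis idea (not the authors' tool, which targets weighted number systems and large circuits): verify a half adder against its word-level specification by checking ideal membership with sympy.

    from sympy import symbols, groebner

    a, b, s, c = symbols('a b s c')

    # Gate-level polynomials:
    #   sum   s = a XOR b  ->  a + b - 2ab
    #   carry c = a AND b  ->  ab
    # Boolean-ness of the inputs: x^2 - x = 0.
    circuit = [s - (a + b - 2*a*b),
               c - a*b,
               a**2 - a,
               b**2 - b]

    G = groebner(circuit, a, b, s, c, order='lex')

    # Word-level specification of a half adder: 2*c + s = a + b.
    spec = 2*c + s - (a + b)
    print(G.contains(spec))   # True -> the spec lies in the circuit ideal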
The neural circuits for arithmetic principles.
Liu, Jie; Zhang, Han; Chen, Chuansheng; Chen, Hui; Cui, Jiaxin; Zhou, Xinlin
2017-02-15
Arithmetic principles are the regularities underlying arithmetic computation. Little is known about how the brain supports the processing of arithmetic principles. The current fMRI study examined neural activation and functional connectivity during the processing of verbalized arithmetic principles, as compared to numerical computation and general language processing. As expected, arithmetic principles elicited stronger activation in bilateral horizontal intraparietal sulcus and right supramarginal gyrus than did language processing, and stronger activation in left middle temporal lobe and left orbital part of inferior frontal gyrus than did computation. In contrast, computation elicited greater activation in bilateral horizontal intraparietal sulcus (extending to posterior superior parietal lobule) than did either arithmetic principles or language processing. Functional connectivity analysis with the psychophysiological interaction approach (PPI) showed that left temporal-parietal (MTG-HIPS) connectivity was stronger during the processing of arithmetic principle and language than during computation, whereas parietal-occipital connectivities were stronger during computation than during the processing of arithmetic principles and language. Additionally, the left fronto-parietal (orbital IFG-HIPS) connectivity was stronger during the processing of arithmetic principles than during computation. The results suggest that verbalized arithmetic principles engage a neural network that overlaps but is distinct from the networks for computation and language processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Specificity and Overlap in Skills Underpinning Reading and Arithmetical Fluency
ERIC Educational Resources Information Center
van Daal, Victor; van der Leij, Aryan; Ader, Herman
2013-01-01
The aim of this study was to examine unique and common causes of problems in reading and arithmetic fluency. 13- to 14-year-old students were placed into one of five groups: reading disabled (RD, n = 16), arithmetic disabled (AD, n = 34), reading and arithmetic disabled (RAD, n = 17), reading, arithmetic, and listening comprehension disabled…
2011-03-01
Karystinos and D. A. Pados, "New bounds on the total squared correlation and optimum design of DS-CDMA binary signature sets," IEEE Trans. Commun., vol. 51, pp. 48-51, Jan. 2003. [99] C. Ding, M. Golin, and T. Kløve, "Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary signature sets," Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [100] V. P. Ipatov, "On the Karystinos-Pados bounds and optimal binary DS-CDMA
2015-07-09
49, pp. 873-885, Apr. 2003. [23] G. N. Karystinos and D. A. Pados, "New bounds on the total squared correlation and optimum design of DS-CDMA binary...bounds on DS-CDMA binary signature sets," Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [25] V. P. Ipatov, "On the Karystinos-Pados...bounds and optimal binary DS-CDMA signature ensembles," IEEE Commun. Letters, vol. 8, pp. 81-83, Feb. 2004. [26] G. N. Karystinos and D. A. Pados
NASA Astrophysics Data System (ADS)
Pape, Dennis R.
1990-09-01
The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.
The Coding of Biological Information: From Nucleotide Sequence to Protein Recognition
NASA Astrophysics Data System (ADS)
Štambuk, Nikola
The paper reviews the classic results of Swanson, Dayhoff, Grantham, Blalock and Root-Bernstein, which link genetic code nucleotide patterns to the protein structure, evolution and molecular recognition. Symbolic representation of the binary addresses defining particular nucleotide and amino acid properties is discussed, with consideration of: structure and metric of the code, direct correspondence between amino acid and nucleotide information, and molecular recognition of the interacting protein motifs coded by the complementary DNA and RNA strands.
PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns
ERIC Educational Resources Information Center
Paterson, J. H.; Cheng, K. F.; Haddow, J.
2009-01-01
PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
ERIC Educational Resources Information Center
Zhang, Xiao; Räsänen, Pekka; Koponen, Tuire; Aunola, Kaisa; Lerkkanen, Marja-Kristiina; Nurmi, Jari-Erik
2017-01-01
The longitudinal relations of domain-general and numerical skills at ages 6-7 years to 3 cognitive domains of arithmetic learning, namely knowing (written computation), applying (arithmetic word problems), and reasoning (arithmetic reasoning) at age 11, were examined for a representative sample of 378 Finnish children. The results showed that…
Foley, Alana E; Vasilyeva, Marina; Laski, Elida V
2017-06-01
This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM.
Statement of contribution
What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition.
What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy.
© 2016 The British Psychological Society.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
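The quantum construction itself is beyond a short example, but its classical ingredient is easy to show. The Python sketch below encodes bits with a standard rate-1/2 binary convolutional code (generators (7,5) in octal, a common textbook choice, not necessarily the codes used in the paper); two such classical codes are what the CSS construction lifts into an entanglement-assisted quantum convolutional code.

    def conv_encode(bits, g1=0b111, g2=0b101, K=3):
        """Rate-1/2 binary convolutional encoder, generators (7,5) in octal."""
        state = 0
        out = []
        for b in bits + [0] * (K - 1):          # flush with a zero tail
            state = ((state << 1) | b) & ((1 << K) - 1)
            out.append(bin(state & g1).count('1') % 2)   # parity over taps g1
            out.append(bin(state & g2).count('1') % 2)   # parity over taps g2
        return out

    print(conv_encode([1, 0, 1, 1]))   # 2 output bits per input bit, plus tail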
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.
Huber, Maria; Kipman, Ulrike; Pletzer, Belinda
2014-07-01
The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower than or comparable to that of their normal hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular, we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills and hearing variables. 23 children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal hearing peers matched by age, gender, and social background participated in this case control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks, compared to normal hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.
Novel 3D Compression Methods for Geometry, Connectivity and Texture
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2016-06-01
A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, the (x, y, z) coordinates of each vertex are encoded into a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
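The abstract names two ingredients that are easy to illustrate generically: packing each vertex's three coordinates into a single value, and delta-encoding connectivity so that an arithmetic coder sees small, clustered symbols. The Python sketch below is a hypothetical reconstruction under those assumptions; the quantization scale, radix and function names are illustrative, not the published GM-Algorithm.

    import numpy as np

    def pack_vertex(x, y, z, scale=1000, base=2**20):
        """Hypothetical GM-style packing: quantize each axis, then combine
        the three integers into one value with per-axis radix `base`.
        (Negative coordinates would need an offset; omitted here.)"""
        qx, qy, qz = (int(round(v * scale)) for v in (x, y, z))
        return (qx * base + qy) * base + qz

    def unpack_vertex(key, scale=1000, base=2**20):
        qz = key % base; key //= base
        qy = key % base; qx = key // base
        return qx / scale, qy / scale, qz / scale

    v = (1.25, 0.5, 3.75)
    assert unpack_vertex(pack_vertex(*v)) == v

    # Connectivity: differences of adjacent vertex indices cluster near
    # zero, which suits an arithmetic coder.
    faces = np.array([0, 1, 2, 1, 2, 3, 2, 3, 4])
    deltas = np.diff(faces, prepend=faces[0])
    print(deltas)   # [ 0  1  1 -1  1  1 -1  1  1]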
Improved magnetic encoding device and method for making the same. [Patent application
Fox, R.J.
A magnetic encoding device and method for making the same are provided for use as magnetic storage media in identification control applications that give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications. 7 figures.
Method for making an improved magnetic encoding device
Fox, Richard J.
1981-01-01
A magnetic encoding device and method for making the same are provided for use as magnetic storage mediums in identification control applications which give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method for a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high resolution calculations for initial data sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. We then display features of the conformal factor for different masses, spins and linear momenta.
Castro-Chavez, Fernando
2012-01-01
Background Three binary representations of the genetic code according to the ancient I Ching of Fu-Xi will be presented, depending on their defragging capabilities by pairing based on three biochemical properties of the nucleic acids: H-bonds, Purine/Pyrimidine rings, and the Keto-enol/Amino-imino tautomerism, yielding the last pair a 32/32 single-strand self-annealed genetic code and I Ching tables. Methods Our working tool is the ancient binary I Ching's resulting genetic code chromosomes defragged by vertical and by horizontal pairing, reverse engineered into non-binaries of 2D rotating 4×4×4 circles and 8×8 squares and into one 3D 100% symmetrical 16×4 tetrahedron coupled to a functional tetrahedron with apical signaling and central hydrophobicity (codon formula: 4[1(1)+1(3)+1(4)+4(2)]; 5:5, 6:6 in man) forming a stella octangula, and compared to Nirenberg's 16×4 codon table (1965) pairing the first two nucleotides of the 64 codons in axis y. Results One horizontal and one vertical defragging had the start Met at the center. Two, both horizontal and vertical pairings produced two pairs of 2×8×4 genetic code chromosomes naturally arranged (M and I), rearranged by semi-introversion of central purines or pyrimidines (M' and I') and by clustering hydrophobic amino acids; their quasi-identity was disrupted by amino acids with odd codons (Met and Tyr pairing to Ile and TGA Stop); in all instances, the 64-grid 90° rotational ability was restored. Conclusions We defragged three I Ching representations of the genetic code while emphasizing Nirenberg's historical finding. The synthetic genetic code chromosomes obtained reflect the protective strategy of enzymes with a similar function, having both humans and mammals a biased G-C dominance of three H-bonds in the third nucleotide of their most used codons per amino acid, as seen in one chromosome of the i, M and M' genetic codes, while a two H-bond A-T dominance was found in their complementary chromosome, as seen in invertebrates and plants. The reverse engineering of chromosome I' into 2D rotating circles and squares was undertaken, yielding a 100% symmetrical 3D geometry which was coupled to a previously obtained genetic code tetrahedron in order to differentiate the start methionine from the methionine that is acting as a codifying non-start codon. PMID:23431415
Berg, Derek H
2008-04-01
The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing speed, short-term memory, working memory, and reading to arithmetic calculation in children. Results suggested four important findings. First, processing speed emerged as a significant contributor of arithmetic calculation only in relation to age-related differences in the general sample. Second, processing speed and short-term memory did not eliminate the contribution of working memory to arithmetic calculation. Third, individual working memory components--verbal working memory and visual-spatial working memory--each contributed unique variance to arithmetic calculation in the presence of all other variables. Fourth, a full model indicated that chronological age remained a significant contributor to arithmetic calculation in the presence of significant contributions from all other variables. Results are discussed in terms of directions for future research on working memory in arithmetic calculation.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Holographic implementation of a binary associative memory for improved recognition
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.
1998-03-01
Neural network associative memory has found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The question of imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in a real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible by using digital optical matrix-vector multiplication, where full use of the parallelism and connectivity of optics is made. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or the interconnection weight matrix required during the recall process, is configured by retrieving the necessary information from the holographic LTM.
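Setting the optics aside, a binary-weighted, sparsely coded associative memory of this general kind behaves like the classic Willshaw model. The Python sketch below implements that generic model, not the authors' multistep-correlation variant: weights are clipped to {0,1} and recall thresholds the matrix-vector product at the cue's activity.

    import numpy as np

    rng = np.random.default_rng(1)
    N, K, M = 256, 8, 20                 # neurons, active bits per pattern, patterns

    def sparse_pattern():
        v = np.zeros(N, dtype=np.uint8)
        v[rng.choice(N, K, replace=False)] = 1
        return v

    patterns = [sparse_pattern() for _ in range(M)]

    # Binary weight matrix: Hebbian OR of outer products (weights stay 0/1).
    W = np.zeros((N, N), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p)

    def recall(cue):
        """One-step recall: threshold W @ cue at the cue's own activity."""
        s = W.astype(np.int32) @ cue
        return (s >= cue.sum()).astype(np.uint8)

    cue = patterns[0].copy()
    cue[np.flatnonzero(cue)[:2]] = 0     # degrade the cue: drop 2 of 8 active bits
    print(np.array_equal(recall(cue), patterns[0]))   # usually True at this load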
Documentation for the machine-readable character coded version of the SKYMAP catalogue
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1981-01-01
The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to fully describe the formatted tape so that users can process the data without problems and guesswork; it should be distributed with any character version of the catalogue.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
NASA Astrophysics Data System (ADS)
Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe
2018-06-01
The predicted and observed space densities of cataclysmic variables (CVs) have long been discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has recently been measured to be ρ ≲ 2 × 10^-5 pc^-3. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings into agreement the predicted and observed space densities of CVs and period bouncers. To progress with our understanding of CV evolution it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also provided in detail, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.
Digital Controller For Emergency Beacon
NASA Technical Reports Server (NTRS)
Ivancic, William D.
1990-01-01
Prototype digital controller intended for use in 406-MHz emergency beacon. Undergoing development according to international specifications, 406-MHz emergency-beacon system includes satellites providing worldwide monitoring of beacons, with Doppler tracking to locate each beacon within 5 km. Controller turns beacon on and off and generates binary codes identifying source (e.g., ship, aircraft, person, or vehicle on land). Codes transmitted by phase modulation. Knowing code, monitor attempts to communicate with user and uses code information to dispatch rescue team appropriate to type and location of carrier.
Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter
2017-05-15
The past decade has seen the introduction of new technologies that have progressively lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh . tom.paridaens@ugent.be. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
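For readers unfamiliar with the engine at the heart of CABAC, the Python sketch below shows a classic integer arithmetic coder with a single adaptive binary context (Witten-Neal-Cleary style). It is deliberately simplified: CABAC proper is multiplication-free, table-driven, and keeps many contexts per syntax element, so this is an illustration of the principle rather than AFRESh's coder.

    class AdaptiveBinaryArithmeticEncoder:
        """Integer arithmetic coder with one adaptive binary context."""

        BITS = 32
        FULL = (1 << BITS) - 1
        HALF = 1 << (BITS - 1)
        QUARTER = HALF >> 1

        def __init__(self):
            self.low, self.high = 0, self.FULL
            self.pending = 0
            self.out = []
            self.c0 = self.c1 = 1          # adaptive symbol counts (one context)

        def _emit(self, bit):
            self.out.append(bit)
            self.out.extend([1 - bit] * self.pending)
            self.pending = 0

        def encode(self, bit):
            span = self.high - self.low + 1
            split = self.low + span * self.c0 // (self.c0 + self.c1) - 1
            if bit == 0:
                self.high, self.c0 = split, self.c0 + 1
            else:
                self.low, self.c1 = split + 1, self.c1 + 1
            while True:                    # renormalize the interval
                if self.high < self.HALF:
                    self._emit(0)
                elif self.low >= self.HALF:
                    self._emit(1)
                    self.low -= self.HALF; self.high -= self.HALF
                elif self.low >= self.QUARTER and self.high < 3 * self.QUARTER:
                    self.pending += 1      # underflow: defer the bit
                    self.low -= self.QUARTER; self.high -= self.QUARTER
                else:
                    break
                self.low = 2 * self.low
                self.high = 2 * self.high + 1

        def finish(self):
            self.pending += 1
            self._emit(0 if self.low < self.QUARTER else 1)
            return self.out

    enc = AdaptiveBinaryArithmeticEncoder()
    for b in [0] * 90 + [1] * 10:          # skewed source: output well under 100 bits
        enc.encode(b)
    print(len(enc.finish()), "coded bits for 100 input bits")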
Evidence for a planetary mass third body orbiting the binary star KIC 5095269
NASA Astrophysics Data System (ADS)
Getley, A. K.; Carter, B.; King, R.; O'Toole, S.
2017-07-01
In this paper, we report evidence for a planetary mass body orbiting the close binary star KIC 5095269. This detection arose from a search for eclipse timing variations amongst the more than 2000 eclipsing binaries observed by Kepler. Light curve and periodic eclipse time variations have been analysed using systemic and a custom Binary Eclipse Timings code based on the Transit Analysis Package, which indicates a 7.70 ± 0.08 MJup object orbiting every 237.7 ± 0.1 d around a 1.2 M⊙ primary and a 0.51 M⊙ secondary in an 18.6 d orbit. A dynamical integration over 10^7 yr suggests a stable orbital configuration. Radial velocity observations are recommended to confirm the properties of the binary star components and the planetary mass of the companion.
Kundeti, Vamsi; Rajasekaran, Sanguthevar
2012-06-01
Efficient tile sets for self-assembling rectilinear shapes are of critical importance in algorithmic self-assembly. A lower bound on the tile complexity of any deterministic self-assembly system for an n × n square is [Formula: see text] (inferred from the Kolmogorov complexity). Deterministic self-assembly systems with an optimal tile complexity have been designed for squares and related shapes in the past. However, designing [Formula: see text] unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand, copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self-assembly on tile concentration programming models. We present two major results in this paper on the concentration programming model. First we show how to self-assemble rectangles with a fixed aspect ratio (α:β), with high probability, using Θ(α + β) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta, pp 85-94, 2009), which can only self-assemble squares and rely on tiles which perform binary arithmetic. On the other hand, our result is based on a technique called staircase sampling. This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling on the equimolar concentration programming model (The tile complexity of linear assemblies. In: proceedings of the 36th international colloquium automata, languages and programming: Part I on ICALP '09, Springer-Verlag, pp 235-253, 2009), to self-assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log(n)) and is optimal on the probabilistic tile assembly model (PTAM), n being an upper bound on the dimensions of a rectangle.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MMK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
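The decoding issue the authors raise can be made concrete: with a ternary matrix, a naive Hamming distance counts the "do not care" zeros as disagreements and favours classes with many zeros. Below is a minimal sketch (hypothetical coding matrix and classifier outputs, not the paper's proposed measures) of a decoding that ignores zero positions and normalises by the number of active classifiers per class.

```python
import numpy as np

# Hypothetical ternary ECOC coding matrix: rows = classes, cols = binary
# classifiers; entries in {-1, 0, +1}, 0 = "do not care" for that class.
M = np.array([[+1, +1,  0],
              [-1,  0, +1],
              [ 0, -1, -1]])

def decode(pred, M):
    """Hamming-style decoding adapted to the ternary framework: positions
    where the code word is 0 are ignored, and the mismatch count is
    normalised by the number of active (non-zero) positions."""
    active = M != 0
    mismatches = (pred[None, :] != M) & active
    return int(np.argmin(mismatches.sum(1) / active.sum(1)))

pred = np.array([-1, +1, +1])       # outputs of the 3 binary classifiers
print("predicted class:", decode(pred, M))   # -> 1 for this toy input
```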
Planet formation: is it good or bad to have a stellar companion?
NASA Astrophysics Data System (ADS)
Marzari, F.; Thebault, P.; Scholl, H.
2010-04-01
Planet formation in binary star systems is a complex issue due to the gravitational perturbations of the companion star. One of the crucial steps of the core-accretion model is planetesimal accretion into large protoplanets which finally coalesce into planets. In a planetesimal swarm surrounding the primary star, the average mutual impact velocity determines if larger bodies form or if the population is ground down to dust, halting the planet formation process. This velocity is strongly influenced by the companion's gravitational pull and by gas drag. The combined effect of these two forces may act in favour of or against planet formation, making the existence of extrasolar planets around binary stars either less probable than, or as probable as, around single stars. Planetesimal accretion in binaries has been studied so far with two different approaches. N-body codes based on the assumption that the disk is axisymmetric are very cost-effective since they allow the study of the mutual relative velocity with limited CPU usage. A large number of planetesimal trajectories can be computed, making it possible to outline the regions around the star where planet formation is possible. The main limitation of the N-body codes is the axisymmetric assumption. The companion perturbations affect not only the planetesimal orbits, but also the gaseous disk, by forcing spiral density waves. In addition, the overall shape of the disk changes from circular to elliptic. Hybrid codes have been recently developed which solve the equations for the disk with a hydrodynamical grid code and use the computed gas density and velocity vector to calculate an accurate value of the gas drag force on the planetesimals. These codes are more complex and may compute the trajectories of only a limited number of planetesimals.
Accuracy of inference on the physics of binary evolution from gravitational-wave observations
NASA Astrophysics Data System (ADS)
Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya
2018-04-01
The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ∼1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
Synthetic Survey of the Kepler Field
NASA Astrophysics Data System (ADS)
Wells, Mark; Prša, Andrej
2018-01-01
In the era of large scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys that correspond to any survey cadences. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary catalog.
Common Envelope Light Curves. I. Grid-code Module Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.
The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.
Inconsistencies in Numerical Simulations of Dynamical Systems Using Interval Arithmetic
NASA Astrophysics Data System (ADS)
Nepomuceno, Erivelton G.; Peixoto, Márcia L. C.; Martins, Samir A. M.; Rodrigues, Heitor M.; Perc, Matjaž
Over the past few decades, interval arithmetic has been attracting widespread interest from the scientific community. With the expansion of computing power, scientific computing is encountering a noteworthy shift from floating-point arithmetic toward increased use of interval arithmetic. Notwithstanding the significant reliability of interval arithmetic, this paper presents a theoretical inconsistency in a simulation of dynamical systems using a well-known implementation of interval arithmetic. We have observed that two natural interval extensions present an empty intersection during a finite time range, which is contrary to the fundamental theorem of interval analysis. We have proposed a procedure to at least partially overcome this problem, based on the union of the two generated pseudo-orbits. This paper also shows a successful case of interval arithmetic application in the reduction of interval width size in the simulation of a discrete map. The implications of our findings for the reliability of scientific computing using interval arithmetic have been properly addressed using two numerical examples.
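The dependency problem behind such inconsistencies is easy to reproduce: two algebraically equivalent expressions of the same map have different natural interval extensions. Below is a minimal sketch (toy intervals as pairs of floats, without the directed rounding a real interval library performs, and not the implementation studied in the paper) iterating two forms of a logistic-type map and showing the enclosures drift apart.

```python
# Two algebraically equivalent forms of the logistic map give different
# natural interval extensions; iterating both shows diverging enclosures.

def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

r = (3.8, 3.8)
f = lambda x: imul(imul(r, x), isub((1.0, 1.0), x))   # r*x*(1-x)
g = lambda x: isub(imul(r, x), imul(r, imul(x, x)))   # r*x - r*x^2

x = y = (0.4, 0.4000001)   # tight enclosure of the same initial condition
for n in range(10):
    x, y = f(x), g(y)
    print(n, "width f:", x[1] - x[0], "width g:", y[1] - y[0])
```

Even over a few iterations the two widths differ, which is the seed of the pseudo-orbit disagreement the paper analyses in a rounding-aware setting.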
Deep classification hashing for person re-identification
NASA Astrophysics Data System (ADS)
Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang
2018-04-01
With the development of public surveillance, person re-identification becomes more and more important. Large-scale databases call for efficient computation and storage, and hashing is one of the most important techniques. In this paper, we propose a new deep classification hashing network by introducing a new binary appropriation layer into traditional ImageNet pre-trained CNN models. It outputs binary-appropriate features, which can be easily quantized into binary hash codes for Hamming similarity comparison. Experiments show that our deep hashing method can outperform the state-of-the-art methods on the public CUHK03 and Market1501 datasets.
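The retrieval step this relies on is straightforward to sketch. Below is a hypothetical illustration (not the authors' network): real-valued features are thresholded at zero into bits, packed into bytes, and compared with XOR plus popcount for Hamming similarity.

```python
import numpy as np

def to_hash(features):
    """Quantize real-valued features (e.g. the output of a binary
    appropriation layer) into packed binary hash codes. Thresholding
    at zero is the usual sign-based choice."""
    bits = (np.asarray(features) > 0).astype(np.uint8)
    return np.packbits(bits, axis=-1)

def hamming(a, b):
    # XOR the packed bytes, then popcount.
    return int(np.unpackbits(a ^ b).sum())

gallery = to_hash(np.random.randn(1000, 128))   # 1000 stored identities
query = to_hash(np.random.randn(128))
dists = [hamming(query, g) for g in gallery]
print("best match:", int(np.argmin(dists)), "at distance", min(dists))
```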
Getting Started in Classroom Computing.
ERIC Educational Resources Information Center
Ahl, David H.
Written for secondary students, this booklet provides an introduction to several computer-related concepts through a set of six classroom games, most of which can be played with little more than a sheet of paper and a pencil. The games are: 1) SECRET CODES--introduction to binary coding, punched cards, and paper tape; 2) GUESS--efficient methods…
2009-09-01
[Survey codebook variable-list residue; entries partially truncated in the source:] 184 LEGALRESR: recode, state of legal voting residence; 72 LITHO: litho code; 473 NOFVAPA: [42a] did not use FVAP telephone, did not know; ... 471 INRECNO: master SCS ID number; 472 LITHO: litho code; 473 QCOMPF: binary variable indicating if case complete; 474 QCOMPN: [QCOMPN] questions ...
Factors Affecting Code Status in a University Hospital Intensive Care Unit
ERIC Educational Resources Information Center
Van Scoy, Lauren Jodi; Sherman, Michael
2013-01-01
The authors collected data on diagnosis, hospital course, and end-of-life preparedness in patients who died in the intensive care unit (ICU) with "full code" status (defined as receiving cardiopulmonary resuscitation), compared with those who didn't. Differences were analyzed using binary and stepwise logistic regression. They found no…
Checking Equivalence of SPMD Programs Using Non-Interference
2010-01-29
with it hopes to go beyond the limits of Moore's law, but also worries that programming will become harder [5]. One of the reasons why parallel...array name in G or L, and e is an arithmetic expression of integer type. In the CUDA code shown in Section 3, b and t are represented by coreId and...b + t. A second, optimized version of the program (using function "reverse2", see Section 3) can be modeled as a tuple P2 = (G, L2, F2), with G the same
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rates. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits) or 101.5 MFLOPS.
Alternation between the X-ray emission and pulsar states in interacting binary systems
NASA Astrophysics Data System (ADS)
De Vito, M. A.; Benvenuto, O. G.; Horvath, J. E.
2015-08-01
Redbacks belong to the family of binary systems in which one of the components is a pulsar. Recent observations show redbacks that have switched their state from pulsar with a low-mass companion (where the accretion of material onto the pulsar has ceased) to low-mass X-ray binary system (where emission is produced by the mass accretion onto the pulsar), or inversely. The irradiation effect included in our models leads to cyclic mass transfer episodes, which allow close binary systems to switch from one state to the other. We apply our results to the case of PSR J1723-2837, and discuss the need to include new ingredients in our code of binary evolution to describe the observed state transitions.
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes - the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song
2017-01-01
Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191
Lonnemann, Jan; Linkersdörfer, Janosch; Hasselhorn, Marcus; Lindberg, Sven
2016-01-01
Symbolic numerical magnitude processing skills are assumed to be fundamental to arithmetic learning. It is, however, still an open question whether better arithmetic skills are reflected in symbolic numerical magnitude processing skills. To address this issue, Chinese and German third graders were compared regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children performed better in the arithmetic tasks and were faster in deciding which one of two Arabic numbers was numerically larger. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks. We assume that a higher degree of familiarity with arithmetic in Chinese compared to German children leads to a higher speed of retrieving symbolic numerical magnitude knowledge. PMID:27630606
Bartelet, Dimona; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel
2014-01-01
Relations between children's mathematics achievement and their basic number processing skills have been reported in both cross-sectional and longitudinal studies. Yet, some key questions are currently unresolved, including which kindergarten skills uniquely predict children's arithmetic fluency during the first year of formal schooling and the degree to which predictors are contingent on children's level of arithmetic proficiency. The current study assessed kindergarteners' non-symbolic and symbolic number processing efficiency. In addition, the contribution of children's underlying magnitude representations to differences in arithmetic achievement was assessed. Subsequently, in January of Grade 1, their arithmetic proficiency was assessed. Hierarchical regression analysis revealed that children's efficiency to compare digits, count, and estimate numerosities uniquely predicted arithmetic differences above and beyond the non-numerical factors included. Moreover, quantile regression analysis indicated that symbolic number processing efficiency was consistently a significant predictor of arithmetic achievement scores regardless of children's level of arithmetic proficiency, whereas their non-symbolic number processing efficiency was not. Finally, none of the task-specific effects indexing children's representational precision was significantly associated with arithmetic fluency. The implications of the results are 2-fold. First, the findings indicate that children's efficiency to process symbols is important for the development of their arithmetic fluency in Grade 1 above and beyond the influence of non-numerical factors. Second, the impact of children's non-symbolic number processing skills does not depend on their arithmetic achievement level given that they are selected from a nonclinical population. Copyright © 2013 Elsevier Inc. All rights reserved.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Multifunction audio digitizer. [producing direct delta and pulse code modulation
NASA Technical Reports Server (NTRS)
Monford, L. G., Jr. (Inventor)
1974-01-01
An illustrative embodiment of the invention includes apparatus which simultaneously produces both direct delta modulation and pulse code modulation. An input signal, after amplification, is supplied to a window comparator which supplies a polarity control signal to gate the output of a clock to the appropriate input of a binary up-down counter. The control signals provide direct delta modulation while the up-down counter output provides pulse code modulation.
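The mechanism lends itself to a few lines of simulation. Below is a hedged Python sketch (hypothetical parameter names, not the patented circuit): a comparator decides the count direction, an up/down counter integrates the decisions, the decision bits form the delta-modulation stream, and the counter states form the PCM words.

```python
import math

def digitize(signal, step=1, bits=8):
    """Toy simulation of a dual-output digitizer: a comparator drives an
    up/down counter; the up/down decisions are the delta-modulation
    stream and the counter states are the PCM words."""
    counter, delta_bits, pcm_words = 0, [], []
    top = 2**bits - 1
    for sample in signal:
        up = sample >= counter             # comparator sets count polarity
        counter = min(top, counter + step) if up else max(0, counter - step)
        delta_bits.append(1 if up else 0)  # direct delta modulation
        pcm_words.append(counter)          # pulse code modulation
    return delta_bits, pcm_words

sine = [int(120 + 100 * math.sin(2 * math.pi * n / 64)) for n in range(128)]
d, p = digitize(sine, step=4)
print(d[:16], p[:16])
```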
ERIC Educational Resources Information Center
Rhodes, Katherine T.; Branum-Martin, Lee; Washington, Julie A.; Fuchs, Lynn S.
2017-01-01
Using multitrait, multimethod data, and confirmatory factor analysis, the current study examined the effects of arithmetic item formatting and the possibility that across formats, abilities other than arithmetic may contribute to children's answers. Measurement hypotheses were guided by several leading theories of arithmetic cognition. With a…
Personal Experience and Arithmetic Meaning in Semantic Dementia
ERIC Educational Resources Information Center
Julien, Camille L.; Neary, David; Snowden, Julie S.
2010-01-01
Arithmetic skills are generally claimed to be preserved in semantic dementia (SD), suggesting functional independence of arithmetic knowledge from other aspects of semantic memory. However, in a recent case series analysis we showed that arithmetic performance in SD is not entirely normal. The finding of a direct association between severity of…
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
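HB15 and the Wisdom-Holman maps are considerably more elaborate than anything reproducible here; as a minimal sketch of the property that motivates them, the kick-drift-kick leapfrog below (our own illustration) integrates the Kepler problem and exhibits the bounded, oscillatory energy error characteristic of symplectic methods.

```python
import numpy as np

def leapfrog(r, v, dt, steps, mu=1.0):
    """Kick-drift-kick leapfrog for the two-body (Kepler) problem:
    a minimal symplectic integrator, not the HB15 algorithm itself."""
    def acc(r):
        return -mu * r / np.linalg.norm(r)**3
    energies = []
    for _ in range(steps):
        v = v + 0.5 * dt * acc(r)   # half kick
        r = r + dt * v              # drift
        v = v + 0.5 * dt * acc(r)   # half kick
        energies.append(0.5 * v @ v - mu / np.linalg.norm(r))
    return r, v, energies

r0 = np.array([1.0, 0.0]); v0 = np.array([0.0, 1.1])  # mildly eccentric orbit
_, _, E = leapfrog(r0, v0, dt=0.01, steps=20000)
print("relative energy drift:", (max(E) - min(E)) / abs(E[0]))
```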
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So
2003-11-20
We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
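TCE's cost model is not reproduced here, but the core idea, choosing the cheapest order of binary contractions for a multiple tensor contraction, can be illustrated with NumPy's einsum_path (toy tensor shapes of our own choosing; TCE additionally exploits spin, point-group, and index-permutation symmetry and handles parallel execution).

```python
import numpy as np

# Given a multiple tensor contraction, pick the order of pairwise
# (binary) contractions that minimizes floating-point operations.
T2 = np.random.rand(8, 8, 30, 30)     # toy "amplitude" tensor
V = np.random.rand(30, 30, 30, 30)    # toy "integral" tensor
path, report = np.einsum_path('ijab,abcd,ijcd->', T2, V, T2,
                              optimize='optimal')
print(report)   # shows the chosen binary contraction order and FLOP count
```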
Modeling the binary circumstellar medium of Type IIb/L/n supernova progenitors
NASA Astrophysics Data System (ADS)
Kolb, Christopher; Blondin, John; Borkowski, Kazik; Reynolds, Stephen
2018-01-01
Circumstellar interaction in close binary systems can produce a highly asymmetric environment, particularly for systems with a mass outflow velocity comparable to the binary orbital speed. This asymmetric circumstellar medium (CSM) becomes visible after a supernova explosion, when SN radiation illuminates the gas and when SN ejecta collide with the CSM. We aim to better understand the development of this asymmetric CSM, particularly for binary systems containing a red supergiant progenitor, and to study its impact on supernova morphology. To achieve this, we model the asymmetric wind and subsequent supernova explosion in full 3D hydrodynamics using the shock-capturing hydro code VH-1 on a spherical yin-yang grid. Wind interaction is computed in a frame co-rotating with the binary system, and gas is accelerated using a radiation pressure-driven wind model where optical depth of the radiative force is dependent on azimuthally-averaged gas density. We present characterization of our asymmetric wind density distribution model by fitting a polar-to-equatorial density contrast function to free parameters such as binary separation distance, primary mass loss rate, and binary mass ratio.
Black Hole Accretion Discs on a Moving Mesh
NASA Astrophysics Data System (ADS)
Ryan, Geoffrey
2017-01-01
We present multi-dimensional numerical simulations of black hole accretion disks relevant for the production of electromagnetic counterparts to gravitational wave sources. We perform these simulations with a new general relativistic version of the moving-mesh magnetohydrodynamics code DISCO, which we will present. This open-source code, GR-DISCO, uses an orbiting and shearing mesh which moves with the dominant flow velocity, greatly improving the numerical accuracy of the thermodynamic variables in supersonic flows while also reducing numerical viscosity and greatly increasing computational efficiency by allowing for a larger time step. We have used GR-DISCO to study black hole accretion discs subject to gravitational torques from a binary companion, relevant for both current and future supermassive binary black hole searches and also as a possible electromagnetic precursor mechanism for LIGO events. Binary torques in these discs excite spiral shockwaves which effectively transport angular momentum in the disc and propagate through the innermost stable orbit, leading to stress corresponding to an alpha-viscosity of 10^-2. We also present three-dimensional GRMHD simulations of neutrino dominated accretion flows (NDAFs) occurring after a binary neutron star merger in order to elucidate the conditions for electromagnetic transient production accompanying these gravitational wave sources expected to be detected by LIGO in the near future.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with construction of multilevel concatenated block modulation codes using a multi-level concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation code, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
Early but not late blindness leads to enhanced arithmetic and working memory abilities.
Dormal, Valérie; Crollen, Virginie; Baumans, Christine; Lepore, Franco; Collignon, Olivier
2016-10-01
Behavioural and neurophysiological evidence suggest that vision plays an important role in the emergence and development of arithmetic abilities. However, how visual deprivation impacts on the development of arithmetic processing remains poorly understood. We compared the performances of early (EB), late blind (LB) and sighted control (SC) individuals during various arithmetic tasks involving addition, subtraction and multiplication of various complexities. We also assessed working memory (WM) performances to determine if they relate to a blind person's arithmetic capacities. Results showed that EB participants performed better than LB and SC in arithmetic tasks, especially in conditions in which verbal routines and WM abilities are needed. Moreover, EB participants also showed higher WM abilities. Together, our findings demonstrate that the absence of developmental vision does not prevent the development of refined arithmetic skills and can even trigger the refinement of these abilities in specific tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Long, Imogen; Malone, Stephanie A; Tolan, Anne; Burgoyne, Kelly; Heron-Delaney, Michelle; Witteveen, Kate; Hulme, Charles
2016-12-01
Following on from ideas developed by Gerstmann, a body of work has suggested that impairments in finger gnosis may be causally related to children's difficulties in learning arithmetic. We report a study with a large sample of typically developing children (N=197) in which we assessed finger gnosis and arithmetic along with a range of other relevant cognitive predictors of arithmetic skills (vocabulary, counting, and symbolic and nonsymbolic magnitude judgments). Contrary to some earlier claims, we found no meaningful association between finger gnosis and arithmetic skills. Counting and symbolic magnitude comparison were, however, powerful predictors of arithmetic skills, replicating a number of earlier findings. Our findings seriously question theories that posit either a simple association or a causal connection between finger gnosis and the development of arithmetic skills. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
[Acquisition of arithmetic knowledge].
Fayol, Michel
2008-01-01
The focus of this paper is on contemporary research on the counting and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, the acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3).
The Development of Arithmetic Principle Knowledge: How Do We Know What Learners Know?
ERIC Educational Resources Information Center
Prather, Richard W.; Alibali, Martha W.
2009-01-01
This paper reviews research on learners' knowledge of three arithmetic principles: "Commutativity", "Relation to Operands", and "Inversion." Studies of arithmetic principle knowledge vary along several dimensions, including the age of the participants, the context in which the arithmetic is presented, and most importantly, the type of knowledge…
Close encounters of the third-body kind. [intruding bodies in binary star systems
NASA Technical Reports Server (NTRS)
Davies, M. B.; Benz, W.; Hills, J. G.
1994-01-01
We simulated encounters involving binaries of two eccentricities: e = 0 (i.e., circular binaries) and e = 0.5. In both cases the binary contained a point mass of 1.4 solar masses (i.e., a neutron star) and a 0.8 solar mass main-sequence star modeled as a polytrope. The semimajor axes of both binaries were set to 60 solar radii (0.28 AU). We considered intruders of three masses: 1.4 solar masses (a neutron star), 0.8 solar masses (a main-sequence star or a higher mass white dwarf), and 0.64 solar masses (a more typical mass white dwarf). Our strategy was to perform a large number (40,000) of encounters using a three-body code, then to rerun a small number of cases with a three-dimensional smoothed particle hydrodynamics (SPH) code to determine the importance of hydrodynamical effects. Using the results of the three-body runs, we computed the exchange cross sections, σ_ex. From the results of the SPH runs, we computed the cross sections for clean exchange, denoted by σ_cx; the formation of a triple system, denoted by σ_trp; and the formation of a merged binary with an object formed from the merger of two of the stars left in orbit around the third star, denoted by σ_mb. For encounters between either binary and a 1.4 solar mass neutron star, σ_cx ≈ 0.7 σ_ex and σ_mb + σ_trp ≈ 0.3 σ_ex. For encounters between either binary and the 0.8 solar mass main-sequence star, σ_cx ≈ 0.50 σ_ex and σ_mb + σ_trp ≈ 1.0 σ_ex. If the main-sequence star is replaced by a white dwarf of the same mass, we have σ_cx ≈ 0.5 σ_ex and σ_mb + σ_trp ≈ 1.6 σ_ex. Although the exchange cross section is a sensitive function of intruder mass, we see that the cross section to produce merged binaries is roughly independent of intruder mass. The merged binaries produced have semi-major axes much larger than either those of the original binaries or those of binaries produced in clean exchanges. Coupled with the lower kick velocities received from the encounters, their larger size will enhance their cross section, shortening the waiting time to a subsequent encounter with another single star.
How to interpret cognitive training studies: A reply to Lindskog & Winman
Park, Joonkoo; Brannon, Elizabeth M.
2017-01-01
In our previous studies, we demonstrated that repeated training on an approximate arithmetic task selectively improves symbolic arithmetic performance (Park & Brannon, 2013, 2014). We proposed that mental manipulation of quantity is the common cognitive component between approximate arithmetic and symbolic arithmetic, driving the causal relationship between the two. In a commentary to our work, Lindskog and Winman argue that there is no evidence of performance improvement during approximate arithmetic training and that this challenges the proposed causal relationship between approximate arithmetic and symbolic arithmetic. Here, we argue that causality in cognitive training experiments is interpreted from the selectivity of transfer effects and does not hinge upon improved performance in the training task. This is because changes in the unobservable cognitive elements underlying the transfer effect may not be observable from performance measures in the training task. We also question the validity of Lindskog and Winman’s simulation approach for testing for a training effect, given that simulations require a valid and sufficient model of a decision process, which is often difficult to achieve. Finally we provide an empirical approach to testing the training effects in adaptive training. Our analysis reveals new evidence that approximate arithmetic performance improved over the course of training in Park and Brannon (2014). We maintain that our data supports the conclusion that approximate arithmetic training leads to improvement in symbolic arithmetic driven by the common cognitive component of mental quantity manipulation. PMID:26972469
The neural correlates of mental arithmetic in adolescents: a longitudinal fNIRS study.
Artemenko, Christina; Soltanlou, Mojtaba; Ehlis, Ann-Christine; Nuerk, Hans-Christoph; Dresler, Thomas
2018-03-10
Arithmetic processing in adults is known to rely on a frontal-parietal network. However, neurocognitive research focusing on the neural and behavioral correlates of arithmetic development has been scarce, even though the acquisition of arithmetic skills is accompanied by changes within the fronto-parietal network of the developing brain. Furthermore, experimental procedures are typically adjusted to constraints of functional magnetic resonance imaging, which may not reflect natural settings in which children and adolescents actually perform arithmetic. Therefore, we investigated the longitudinal neurocognitive development of processes involved in performing the four basic arithmetic operations in 19 adolescents. By using functional near-infrared spectroscopy, we were able to use an ecologically valid task, i.e., a written production paradigm. A common pattern of activation in the bilateral fronto-parietal network for arithmetic processing was found for all basic arithmetic operations. Moreover, evidence was obtained for decreasing activation during subtraction over the course of 1 year in middle and inferior frontal gyri, and increased activation during addition and multiplication in angular and middle temporal gyri. In the self-paced block design, parietal activation in multiplication and left angular and temporal activation in addition were observed to be higher for simple than for complex blocks, reflecting an inverse effect of arithmetic complexity. In general, the findings suggest that the brain network for arithmetic processing is already established in 12-14 year-old adolescents, but still undergoes developmental changes.
Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers
Szkudlarek, Emily; Brannon, Elizabeth M.
2018-01-01
Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children (n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that approximate arithmetic training improves early informal, but not formal, math skills. PMID:29867624
Bondi-Hoyle-Lyttleton Accretion onto Binaries
NASA Astrophysics Data System (ADS)
Antoni, Andrea; MacLeod, Morgan; Ramírez-Ruiz, Enrico
2018-01-01
Binary stars are not rare. While only close binary stars will eventually interact with one another, even the widest binary systems interact with their gaseous surroundings. The rates of accretion and the gaseous drag forces arising in these interactions are the key to understanding how these systems evolve. This poster examines accretion flows around a binary system moving supersonically through a background gas. We perform three-dimensional hydrodynamic simulations of Bondi-Hoyle-Lyttleton accretion using the adaptive mesh refinement code FLASH. We simulate a range of values of semi-major axis of the orbit relative to the gravitational focusing impact parameter of the pair. On large scales, gas is gravitationally focused by the center-of-mass of the binary, leading to dynamical friction drag and to the accretion of mass and momentum. On smaller scales, the orbital motion imprints itself on the gas. Notably, the magnitude and direction of the forces acting on the binary inherit this orbital dependence. The long-term evolution of the binary is determined by the timescales for accretion, slow down of the center-of-mass, and decay of the orbit. We use our simulations to measure these timescales and to establish a hierarchy between them. In general, our simulations indicate that binaries moving through gaseous media will slow down before the orbit decays.
COSMIC probes into compact binary formation and evolution
NASA Astrophysics Data System (ADS)
Breivik, Katelyn
2018-01-01
The population of compact binaries in the galaxy represents the final state of all binaries that have lived up to the present epoch. Compact binaries present a unique opportunity to probe binary evolution since many of the interactions binaries experience can be imprinted on the compact binary population. By combining binary evolution simulations with catalogs of observable compact binary systems, we can distill the dominant physical processes that govern binary star evolution, as well as predict the abundance and variety of their end products. The next decades herald a previously unseen opportunity to study compact binaries. Multi-messenger observations from telescopes across all wavelengths and gravitational-wave observatories spanning several decades of frequency will give an unprecedented view into the structure of these systems and the composition of their components. Observations will not always be coincident and in some cases may be separated by several years, providing an avenue for simulations to better constrain binary evolution models in preparation for future observations. I will present the results of three population synthesis studies of compact binary populations carried out with the Compact Object Synthesis and Monte Carlo Investigation Code (COSMIC). I will first show how binary-black-hole formation channels can be understood with LISA observations. I will then show how the population of double white dwarfs observed with LISA and Gaia could provide a detailed view of mass transfer and accretion. Finally, I will show that Gaia could discover thousands of black holes in the Milky Way through astrometric observations, yielding a view into black-hole astrophysics that is complementary to and independent from both X-ray and gravitational-wave astronomy.
Probing the Milky Way electron density using multi-messenger astronomy
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane
2015-04-01
Multi-messenger observations of ultra-compact binaries in both gravitational waves and electromagnetic radiation supply highly complementary information, providing new ways of characterizing the internal dynamics of these systems, as well as new probes of the galaxy itself. Electron density models, used in pulsar distance measurements via the electron dispersion measure, are currently not well constrained. Simultaneous radio and gravitational wave observations of pulsars in binaries provide a method of measuring the average electron density along the line of sight to the pulsar, thus giving a new method for constraining current electron density models. We present this method and assess its viability with simulations of the compact binary component of the Milky Way using the public domain binary evolution code, BSE. This work is supported by NASA Award NNX13AM10G.
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26: Convolutional Encoder Parameters. Figure 27: Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires...to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially...higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
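For reference, the rate-3/4 mode of the 802.11a/g OFDM PHY is obtained by puncturing the standard rate-1/2, K = 7 convolutional code with generator polynomials 133 and 171 (octal). The sketch below is an illustrative reconstruction, not the thesis's simulation code; register bit-ordering conventions vary between implementations.

```python
G0, G1, K = 0o133, 0o171, 7

def conv_encode(bits):
    """Rate-1/2, K=7 convolutional encoder with the 133/171 (octal)
    generators used by the 802.11a/g OFDM PHY (sketch)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & G0).count("1") % 2)  # parity over g0 taps
        out.append(bin(state & G1).count("1") % 2)  # parity over g1 taps
    return out

def puncture_3_4(coded):
    """Puncture rate 1/2 down to 3/4: of every 6 coded bits keep 4,
    using the common pattern [1 1 0; 1 0 1] (keep a1 b1 a2 b3)."""
    keep = [1, 1, 1, 0, 0, 1]   # a1 b1 a2 b2 a3 b3
    return [c for i, c in enumerate(coded) if keep[i % 6]]

data = [1, 0, 1, 1, 0, 0, 1, 0, 1]           # 9 input bits
punctured = puncture_3_4(conv_encode(data))  # -> 12 coded bits, rate 3/4
print(len(data), "->", len(punctured), punctured)
```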
ERIC Educational Resources Information Center
Hitt, Fernando; Saboya, Mireille; Cortés Zavala, Carlos
2016-01-01
This paper presents an experiment that attempts to mobilise an arithmetic-algebraic way of thinking in order to articulate between arithmetic thinking and the early algebraic thinking, which is considered a prelude to algebraic thinking. In the process of building this latter way of thinking, researchers analysed pupils' spontaneous production…
Non-symbolic arithmetic in adults and young children.
Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth
2006-01-01
Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.
Amalian, Jean-Arthur; Trinh, Thanh Tam; Lutz, Jean-François; Charles, Laurence
2016-04-05
Tandem mass spectrometry was evaluated as a reliable sequencing methodology to read codes encrypted in monodisperse sequence-coded oligo(triazole amide)s. The studied oligomers were composed of monomers containing a triazole ring, a short ethylene oxide segment, and an amide group as well as a short alkyl chain (propyl or isobutyl) which defined the 0/1 molecular binary code. Using electrospray ionization, oligo(triazole amide)s were best ionized as protonated molecules and were observed to adopt a single charge state, suggesting that adducted protons were located on every other monomer unit. Upon collisional activation, cleavages of the amide bond and of one ether bond were observed to proceed in each monomer, yielding two sets of complementary product ions. Distribution of protons over the precursor structure was found to remain unchanged upon activation, allowing charge state to be anticipated for product ions in the four series and hence facilitating their assignment for a straightforward characterization of any encoded oligo(triazole amide)s.
I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.
Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S
2017-12-01
The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on the culture, medicine and science of ancient China and several other countries. From the modern standpoint, I-Ching declares the importance of dyadic groups of binary numbers for Nature. The system of I-Ching is represented by tables of dyadic groups of 4 bigrams, 8 trigrams and 64 hexagrams, which were declared as fundamental archetypes of Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids, but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using the logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching book.
δ Scuti-type pulsation in the hot component of the Algol-type binary system BG Peg
NASA Astrophysics Data System (ADS)
Şenyüz, T.; Soydugan, E.
2014-02-01
In this study, 23 Algol-type binary systems, selected as candidate binaries with pulsating components, were observed at the Çanakkale Onsekiz Mart University Observatory. One of these systems was BG Peg, whose hotter component shows δ Scuti-type light variations. Physical parameters of BG Peg were derived by modelling the V light curve using the Wilson-Devinney code. The frequency analysis shows that the pulsating component of the BG Peg system oscillates in two modes, with periods of 0.039 and 0.047 d. Mode identification indicates that both modes are most likely non-radial l = 2 modes.
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
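The abstract's binary-to-image step is a common technique in malware research; a minimal sketch (with an assumed fixed image width and zero padding, not the authors' exact pipeline) might look like:

```python
# Minimal sketch of the "binary file to grayscale image" transformation:
# read the raw bytes, pad to a rectangle, and reshape (one byte per pixel).
import numpy as np

def binary_to_image(path, width=256):
    data = np.fromfile(path, dtype=np.uint8)       # raw bytes, values 0..255
    height = int(np.ceil(data.size / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:data.size] = data                      # zero-pad the last row
    return padded.reshape(height, width)           # H x W grayscale image
```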
NASA Astrophysics Data System (ADS)
Dogan, Suzan
2016-07-01
Accretion discs are common in binary systems, and they are often found to be misaligned with respect to the binary orbit. The gravitational torque from a companion induces nodal precession in misaligned disc orbits. In this study, we first calculate whether this precession is strong enough to overcome the internal disc torques communicating angular momentum. We compare the disc precession torque with the disc viscous torque to determine whether the disc should warp or break. For typical parameters precession wins: the disc breaks into distinct planes that precess effectively independently. To check our analytical findings, we perform 3D hydrodynamical numerical simulations using the PHANTOM smoothed particle hydrodynamics code, and confirm that disc breaking is widespread and enhances accretion on to the central object. For some inclinations, the disc goes through strong Kozai cycles. Disc breaking promotes markedly enhanced and variable accretion and potentially produces high-energy particles or radiation through shocks. This would have significant implications for all binary systems: e.g. accretion outbursts in X-ray binaries and fuelling supermassive black hole (SMBH) binaries. The behaviour we have discussed in this work is relevant to a variety of astrophysical systems, for example X-ray binaries, where the disc plane may be tilted by radiation warping, SMBH binaries, where accretion of misaligned gas can create effectively random inclinations and protostellar binaries, where a disc may be misaligned by a variety of effects such as binary capture/exchange, accretion after binary formation.
Fehr, Thorsten; Code, Chris; Herrmann, Manfred
2007-10-03
The issue of how and where arithmetic operations are represented in the brain has been addressed in numerous studies. Lesion studies suggest that a network of different brain areas is involved in mental calculation. Neuroimaging studies have reported inferior parietal and lateral frontal activations during mental arithmetic using tasks of different complexities and different operators (addition, subtraction, etc.). Indeed, it has been difficult to compare brain activation across studies because of the variety of operators and presentation modalities used. The present experiment examined fMRI-BOLD activity in participants during calculation tasks entailing different arithmetic operations -- addition, subtraction, multiplication and division -- of different complexities. Functional imaging data revealed a common activation pattern comprising right precuneus, left and right middle and superior frontal regions during all arithmetic operations. All other regional activations were operation specific and distributed in prominently frontal, parietal and central regions when contrasting complex and simple calculation tasks. The present results largely confirm former studies suggesting that activation patterns due to mental arithmetic appear to reflect a basic anatomical substrate of working memory, numerical knowledge, and processing based on finger counting, derived from a network originally related to finger movement. We emphasize that in mental arithmetic research different arithmetic operations should always be examined and discussed independently of each other in order to avoid invalid generalizations about arithmetic and the brain areas involved.
Cui, Jiaxin; Georgiou, George K; Zhang, Yiyun; Li, Yixun; Shu, Hua; Zhou, Xinlin
2017-02-01
Rapid automatized naming (RAN) has been found to predict mathematics. However, the nature of their relationship remains unclear. Thus, the purpose of this study was twofold: (a) to examine how RAN (numeric and non-numeric) predicts a subdomain of mathematics (arithmetic fluency) and (b) to examine what processing skills may account for the RAN-arithmetic fluency relationship. A total of 160 third-year kindergarten Chinese children (83 boys and 77 girls, mean age = 5.11 years) were assessed on RAN (colors, objects, digits, and dice), nonverbal IQ, visual-verbal paired associate learning, phonological awareness, short-term memory, speed of processing, approximate number system acuity, and arithmetic fluency (addition and subtraction). The results indicated first that RAN was a significant correlate of arithmetic fluency and the correlations did not vary as a function of type of RAN or arithmetic fluency tasks. In addition, RAN continued to predict addition and subtraction fluency even after controlling for all other processing skills. Taken together, these findings challenge the existing theoretical accounts of the RAN-arithmetic fluency relationship and suggest that, similar to reading fluency, multiple processes underlie the RAN-arithmetic fluency relationship.
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that enables scientists and engineers to solve very large dense linear systems in parallel. The API exploits parallelism to solve the systems efficiently on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors, and uses out-of-core strategies that leverage secondary memory to solve huge linear systems (on the order of 100 000 equations). The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension, POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes, and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOPS with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
New version program summary
Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes, includes MPI primitives
RAM: Tested for up to 190 GB
Classification: 6.5
External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
Catalogue identifier of previous version: AEHU_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
Does the new version supersede the previous version?: Yes
Nature of problem: Huge dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary-storage algorithms when the available main memory is insufficient.
Reasons for new version: In many applications a high accuracy is needed in the solution of very large linear systems, which can be guaranteed by using double-precision arithmetic.
Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
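As a toy illustration of the problem class HDSS targets (dense Ax = B with many right-hand sides), the following Python/SciPy sketch factors the matrix once and reuses the factorization; HDSS itself is a Fortran API and works out of core at far larger scales:

```python
# Toy illustration of the problem class, not the HDSS API: LU-factorize a
# dense matrix once, then solve cheaply for many right-hand sides.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, nrhs = 2000, 100                        # HDSS handles n ~ 200 000 out of core
A = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, hence nonsingular
B = np.random.rand(n, nrhs)

lu, piv = lu_factor(A)                     # O(n^3) factorization, done once
X = lu_solve((lu, piv), B)                 # inexpensive solves for all columns of B
assert np.allclose(A @ X, B)
```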
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
Soft decoding a self-dual (48, 24; 12) code
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding procedure using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary strings, is coupled with scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of EM waves, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
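The following schematic Python sketch illustrates the kind of binary-coded genetic algorithm step the abstract describes; the fitness function here is a placeholder, whereas the paper evaluates the actual scattering pattern (accelerated by IFFT):

```python
# Schematic binary-coded genetic algorithm over candidate coding matrices.
# The fitness below is a stand-in; the paper scores the scattering pattern.
import numpy as np
rng = np.random.default_rng(0)

def fitness(coding_matrix):
    # Placeholder objective (the real one comes from pattern analysis/IFFT).
    return -abs(coding_matrix.mean() - 0.5)

pop = rng.integers(0, 2, size=(20, 16, 16))          # 20 candidate 16x16 codes
for generation in range(50):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the better half
    children = parents[rng.integers(0, 10, size=10)].copy()
    flips = rng.random(children.shape) < 0.02        # bit-flip mutation
    children[flips] ^= 1
    pop = np.concatenate([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]     # best coding matrix found
```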
Elder, D
1984-06-07
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.
2006-06-01
called packet binary convolutional code (PBCC), was included as an option for performance at rates of either 5.5 or 11 Mbps. The second offshoot...and the code rate is r = k/n. A general convolutional encoder can be implemented with k shift registers and n modulo-2 adders. Higher rates can be derived from lower-rate codes by employing "puncturing." Puncturing is a procedure for omitting some of the encoded bits in the transmitter (thus
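Puncturing, mentioned at the end of this fragment, is easy to illustrate; the sketch below raises a rate-1/2 stream to rate 2/3 with an illustrative pattern (not the actual 802.11b PBCC pattern):

```python
# Hedged sketch of puncturing: omit encoded bits according to a repeating
# pattern so the effective code rate rises (pattern values illustrative).
def puncture(coded_bits, pattern=(1, 1, 1, 0)):
    """Keep coded_bits[i] where pattern[i % len(pattern)] == 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# 8 bits in -> 6 bits out: effective rate rises from r = 1/2 to (8/6)*(1/2) = 2/3.
print(puncture([1, 0, 1, 1, 0, 0, 1, 0]))  # [1, 0, 1, 0, 0, 1]
```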
The design of the CMOS wireless bar code scanner applying optical system based on ZigBee
NASA Astrophysics Data System (ADS)
Chen, Yuelin; Peng, Jian
2008-03-01
The range of a traditional bar code scanner is limited by the length of its data line, while typical wireless bar code scanners on the market reach transmission distances of only 30 m to 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee was designed to meet market demands. The scanning system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, inaccurate or incorrect decoding can result from image smudging, interference, poor imaging conditions, signal noise, or unstable system voltage; we address these problems with matrix evaluation and the Reed-Solomon algorithm. To construct the complete wireless optical bar code system and ensure its ability to transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the wireless transmitting/receiving circuit is built around the CC2430 chip. By porting the embedded Linux operating system to the MCU, a practical multi-task wireless CMOS optical bar code scanner is constructed. Finally, communication performance is tested with the evaluation software SmartRF. In open space, each ZigBee node achieves reliable transmission over 50 m, and adding more ZigBee nodes extends the transmission distance to several thousand meters.
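The Reed-Solomon error-correction step the abstract relies on can be sketched with the third-party Python package reedsolo (an assumption made for illustration; the scanner firmware would use its own implementation):

```python
# Sketch of Reed-Solomon error correction for damaged 2D bar code data,
# using the third-party `reedsolo` package (assumed here for illustration).
from reedsolo import RSCodec

rs = RSCodec(10)                       # 10 parity symbols: corrects up to 5 errors
payload = rs.encode(b"2D barcode data")
corrupted = bytearray(payload)
corrupted[0] ^= 0xFF                   # simulate a damaged module
# reedsolo >= 1.5 returns (message, message+ecc, errata positions).
decoded, _, _ = rs.decode(bytes(corrupted))
assert bytes(decoded) == b"2D barcode data"
```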
The fourfold way of the genetic code.
Jiménez-Montaño, Miguel Angel
2009-11-01
We describe a compact representation of the genetic code that factorizes the table into quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column vector of bases and the corresponding row vector V^T = (C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the K×K group of base transformations acting on the doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
Design of permanent magnet synchronous motor speed control system based on SVPWM
NASA Astrophysics Data System (ADS)
Wu, Haibo
2017-04-01
A speed control system for a permanent magnet synchronous motor (PMSM) based on the TMS320F28335 was designed and applied to an all-electric injection molding machine. The control method uses SVPWM: by sampling the motor current and the rotor position information from a rotary transformer (resolver), it realizes double closed-loop control of speed and current. The hardware floating-point core of the TMS320F28335 lets the PMSM control algorithms run in floating-point arithmetic, replacing the fixed-point algorithms used previously and improving code efficiency.
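Two core ingredients of such an SVPWM controller, the Clarke transform and the sector lookup, can be sketched as follows (illustrative Python, not the TMS320F28335 firmware):

```python
# Illustrative fragment of an SVPWM modulator: project three-phase
# quantities onto the stationary alpha-beta frame, then find the sector
# (1..6) of the reference voltage vector.
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform of three-phase currents."""
    alpha = ia - 0.5 * (ib + ic)
    beta = (math.sqrt(3) / 2.0) * (ib - ic)
    return (2.0 / 3.0) * alpha, (2.0 / 3.0) * beta

def svpwm_sector(v_alpha, v_beta):
    """Each sector spans 60 degrees of the voltage-vector angle."""
    angle = math.atan2(v_beta, v_alpha) % (2 * math.pi)
    return int(angle // (math.pi / 3)) + 1
```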
The CSM testbed matrix processors internal logic and dataflow descriptions
NASA Technical Reports Server (NTRS)
Regelbrugge, Marc E.; Wright, Mary A.
1988-01-01
This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.
Import Manipulate Plot RELAP5/MOD3 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, K. R.
1999-10-05
XMGR5 was derived from an XY plotting tool called ACE/gr, which is copyrighted by Paul J. Turner and in the public domain. The interactive version of ACE/gr is xmgr, which includes a graphical interface to the X-windows system. Enhancements to xmgr have been developed which import, manipulate, and plot data from the RELAP5/MOD3, MELCOR, FRAPCON, and SINDA codes, and from NRC databank files. Capabilities include two-phase property table lookup functions, an equation interpreter, arithmetic library functions, and units conversion. Plot titles, labels, legends, and narrative can be displayed using Latin or Cyrillic alphabets.
SPAR improved structural-fluid dynamic analysis capability
NASA Technical Reports Server (NTRS)
Pearson, M. L.
1985-01-01
The results of a study whose objective was to improve the operation of the SPAR computer code by improving efficiency, user features, and documentation are presented. Additional capability was added to the SPAR arithmetic utility system, including trigonometric functions, numerical integration, interpolation, and matrix combinations. Improvements were made in the EIG processor. A processor was created to compute and store principal stresses in table-format data sets. An additional capability was developed and incorporated into the plot processor which permits plotting directly from table-format data sets. Documentation of all these features is provided in the form of updates to the SPAR users manual.
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
Li, Yongxin; Hu, Yuzheng; Wang, Yunqi; Weng, Jian; Chen, Feiyan
2013-01-01
Arithmetic skill is of critical importance for academic achievement, professional success and everyday life, and childhood is the key period to acquire this skill. Neuroimaging studies have identified that left parietal regions are a key neural substrate for representing arithmetic skill. Although the relationship between functional brain activity in left parietal regions and arithmetic skill has been studied in detail, the relationship between arithmetic achievement and structural properties of the left inferior parietal area in schoolchildren remains unclear. The current study employed a combination of voxel-based morphometry (VBM) for high-resolution T1-weighted images and fiber tracking on diffusion tensor imaging (DTI) to examine the relationship between structural properties in the inferior parietal area and arithmetic achievement in 10-year-old schoolchildren. VBM of the T1-weighted images revealed that individual differences in arithmetic scores were significantly and positively correlated with the gray matter (GM) volume in the left intraparietal sulcus (IPS). Fiber tracking analysis revealed that the forceps major, left superior longitudinal fasciculus (SLF), bilateral inferior longitudinal fasciculus (ILF) and inferior fronto-occipital fasciculus (IFOF) were the primary pathways connecting the left IPS with other brain areas. Furthermore, the regression analysis of the probabilistic pathways revealed a significant and positive correlation between the fractional anisotropy (FA) values in the left SLF, ILF and bilateral IFOF and arithmetic scores. The brain structure-behavior correlation analyses indicated that the GM volumes in the left IPS and the FA values in the tract pathways connecting left IPS were both related to children's arithmetic achievement. The present findings provide evidence that individual structural differences in the left IPS are associated with arithmetic scores in schoolchildren.
NASA Astrophysics Data System (ADS)
Choudhary, Kuldeep; Kumar, Santosh
2017-05-01
The application of the electro-optic effect in a lithium-niobate-based Mach-Zehnder interferometer to design a 3-bit optical pseudorandom binary sequence (PRBS) generator is proposed; the design is characterized by its simplicity of generation and stability. The proposed device is optoelectronic in nature. The PRBS generator is widely applicable for pattern generation, encryption, and coding applications in optical networks. The study is carried out by simulating the proposed device with the beam propagation method.
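For comparison, the electronic analogue of a 3-bit PRBS generator is a maximal-length linear feedback shift register; the sketch below (illustrative, not the paper's optical design) cycles through all seven non-zero states:

```python
# A 3-bit maximal-length LFSR (feedback polynomial x^3 + x^2 + 1): the
# classic digital analogue of a 3-bit PRBS generator, with period 7.
def prbs3(seed=0b101, length=14):
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)                          # output the LSB
        feedback = ((state >> 0) ^ (state >> 1)) & 1   # taps at bits 0 and 1
        state = (state >> 1) | (feedback << 2)         # shift in the feedback
    return out

print(prbs3())  # the period-7 pseudorandom binary sequence, repeated twice
```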
The progenitors of supernovae Type Ia
NASA Astrophysics Data System (ADS)
Toonen, Silvia
2014-09-01
Despite the significance of Type Ia supernovae (SNeIa) in many fields of astrophysics, SNeIa still lack a complete theoretical explanation. SNeIa are generally thought to be thermonuclear explosions of carbon/oxygen (CO) white dwarfs (WDs). The canonical scenarios involve white dwarfs reaching the Chandrasekhar mass, either by accretion from a non-degenerate companion (single-degenerate channel, SD) or by a merger of two CO WDs (double-degenerate channel, DD). The study of SNeIa progenitors is a very active field of research for binary population synthesis (BPS) studies. The strength of the BPS approach is to study the effect of uncertainties in binary evolution on the macroscopic properties of a binary population, in order to constrain binary evolutionary processes. I will discuss the expected SNeIa rate from the BPS approach and the uncertainties in their progenitor evolution, and compare with current observations. I will also discuss the results of the POPCORN project, in which four BPS codes were compared to better understand the differences in the predicted SNeIa rate of the SD channel. The goal of this project is to investigate whether differences in the simulated populations are due to numerical effects or whether they can be explained by differences in the input physics. I will show which assumptions in BPS codes affect the results most and hence should be studied in more detail.
Period variation studies of six contact binaries in M4
NASA Astrophysics Data System (ADS)
Rukmini, Jagirdar; Shanti Priya, Devarapalli
2018-04-01
We present the first period study of six contact binaries in the closest globular cluster, M4, based on data collected from June 1995 to June 2009 and from October 2012 to September 2013. New times of minima are determined for all six variables, and eclipse timing (O-C) diagrams along with quadratic fits are presented. For all the variables, the study of (O-C) variations reveals changes in the periods. In addition, the fundamental parameters for four of the contact binaries, obtained using the Wilson-Devinney code (v2003), are presented. Planned observations of these binaries using the 3.6-m Devasthal Optical Telescope (DOT) and the 4-m International Liquid Mirror Telescope (ILMT) operated by the Aryabhatta Research Institute of Observational Sciences (ARIES; Nainital) can throw light on their evolutionary status from long-term period variation studies.
The fidelity of Kepler eclipsing binary parameters inferred by the neural network
NASA Astrophysics Data System (ADS)
Holanda, N.; da Silva, J. R. P.
2018-07-01
This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves using neural network models. We selected a random sample with 78 systems, from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light-curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cosω and e sinω, but orbital inclination is clearly underestimated in neural network tests.
Hinault, T; Lemaire, P
2016-01-01
In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when performing arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes.
Reconfigurable data path processor
NASA Technical Reports Server (NTRS)
Donohoe, Gregory (Inventor)
2005-01-01
A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of through-putting an input as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of the first processable value, and a second set may be generated from a second processing operation. The selection between them is performed by an arithmetic-status-bit multiplexer, which selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating the arithmetic status bits.
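Stripped of hardware detail, the patent's conditional-multiplexer selection logic reduces to masking status bits and steering one of two values; a minimal sketch follows (all names hypothetical):

```python
# Plain-Python sketch of the conditional multiplexer: a logical mask is
# applied to the arithmetic status bits, and the result steers which of
# two potential outputs is passed through.
def conditional_mux(out_a, out_b, status_bits, mask):
    """Select out_a if the masked status bits are non-zero, else out_b."""
    return out_a if (status_bits & mask) else out_b

# Example: route out_a only when the (hypothetical) flag bit 0b0100
# is set in the status word.
print(conditional_mux("a", "b", status_bits=0b0110, mask=0b0100))  # -> "a"
```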
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code
Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...
2015-06-03
Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. Here, we abstract from a forest of concrete dependency trees which contain absolute memory addresses to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and finally by solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75% performance improvement, four kernels from IrfanView, leading to 4.97× performance, and one stencil from the miniGMG multigrid benchmark netting a 4.25× improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving a 1.12× speedup without affecting the user experience.
ERIC Educational Resources Information Center
Berg, Derek H.; Hutchinson, Nancy L.
2010-01-01
This study investigated whether processing speed, short-term memory, and working memory accounted for the differential mental addition fluency between children typically achieving in arithmetic (TA) and children at-risk for failure in arithmetic (AR). Further, we drew attention to fluency differences in simple (e.g., 5 + 3) and complex (e.g., 16 +…
An effective method on pornographic images realtime recognition
NASA Astrophysics Data System (ADS)
Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui
2013-03-01
In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library; these features are used to train a decision tree classifier that distinguishes unknown images. In experiments based on more than twenty thousand images, the precision rate reached 76.21% when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s, showing good generality. Among the steps mentioned above, we propose a new skin detection model, called the irregular polygon region skin detection model, based on the YCbCr color space; this model lowers the false detection rate in skin detection. A new method, called sequence region labeling, computes features on binary connected areas; it is faster and needs less memory than other, recursive methods.
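A rough sketch of a YCbCr skin mask is shown below; the threshold values are common literature defaults, not the paper's irregular-polygon model:

```python
# Rough YCbCr skin-detection sketch (rectangular Cb/Cr thresholds are
# common literature defaults; the paper's model uses irregular polygons).
import numpy as np

def skin_mask(ycbcr_image):
    """ycbcr_image: H x W x 3 uint8 array in (Y, Cb, Cr) channel order."""
    cb = ycbcr_image[..., 1].astype(int)
    cr = ycbcr_image[..., 2].astype(int)
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```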
Exploring hurdles to transfer : student experiences of applying knowledge across disciplines
NASA Astrophysics Data System (ADS)
Lappalainen, Jouni; Rosqvist, Juho
2015-04-01
This paper explores the ways students perceive the transfer of learned knowledge to new situations - often a surprisingly difficult prospect. The novel aspect compared to the traditional transfer studies is that the learning phase is not a part of the experiment itself. The intention was only to activate acquired knowledge relevant to the transfer target using a short primer immediately prior to the situation where the knowledge was to be applied. Eight volunteer students from either mathematics or computer science curricula were given a task of designing an adder circuit using logic gates: a new context in which to apply knowledge of binary arithmetic and Boolean algebra. The results of a phenomenographic classification of the views presented by the students in their post-experiment interviews are reported. The degree to which the students were conscious of the acquired knowledge they employed and how they applied it in a new context emerged as the differentiating factors.
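The students' task can itself be sketched in a few lines; the full adder below applies exactly the binary arithmetic and Boolean algebra the study's primer activated (an illustration, not part of the study):

```python
# A 1-bit full adder built from XOR, AND, and OR gates, rippled to add
# multi-bit binary numbers -- the circuit the students were asked to design.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry logic
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Add two little-endian bit lists of equal length."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

print(ripple_add([1, 0, 1], [1, 1, 0]))  # 5 + 3 = 8 -> [0, 0, 0, 1]
```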
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds that is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, in many cases outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
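The hierarchical sub-band transform can be illustrated in one dimension by an unnormalized Haar-style average/difference step (a sketch of the idea, not the codec's region-adaptive transform):

```python
# One level of an unnormalized Haar-style transform on attribute values,
# a 1D analogue of the hierarchical sub-band transform applied to colors.
def haar_step(values):
    """Split into low-pass (averages) and high-pass (differences)."""
    lows = [(a + b) / 2 for a, b in zip(values[0::2], values[1::2])]
    highs = [(a - b) / 2 for a, b in zip(values[0::2], values[1::2])]
    return lows, highs

lows, highs = haar_step([10, 12, 80, 84])   # -> [11.0, 82.0], [-1.0, -2.0]
# High-pass coefficients cluster near zero and are well modeled by Laplace
# distributions, which is exactly what the arithmetic coder exploits.
```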
NASA Astrophysics Data System (ADS)
Wang, Li-Qun; Saito, Masao
We used 1.5 T functional magnetic resonance imaging (fMRI) to explore which brain areas contribute uniquely to numeric computation. The BOLD activation pattern of a mental arithmetic task (successive subtraction: an actual calculation task) was compared with the response to a multiplication table repetition task (a rote verbal arithmetic memory task). The activation found in the right parietal lobule during the mental arithmetic task suggests that quantitative cognition or numeric computation may require the assistance of sensory conversion, such as spatial imagination. In addition, this mechanism may act as an 'analog algorithm' in simple mental arithmetic processing.
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) How well do models capture the binary's late inspiral, where they lack accurate a priori information from PN or NR? (ii) How accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State
NASA Astrophysics Data System (ADS)
Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan
2017-01-01
I present the results of numerical simulations of mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu's self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition, and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes: Flow-ER, with a fixed cylindrical grid, and Octo-tiger, with an AMR-capable Cartesian grid. A detailed comparison of the simulations suggests agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.
Moll, Kristina; Snowling, Margaret J.; Göbel, Silke M.; Hulme, Charles
2015-01-01
Two important foundations for learning are language and executive skills. Data from a longitudinal study tracking the development of 93 children at family-risk of dyslexia and 76 controls was used to investigate the influence of these skills on the development of arithmetic. A two-group longitudinal path model assessed the relationships between language and executive skills at 3–4 years, verbal number skills (counting and number knowledge) and phonological processing skills at 4–5 years, and written arithmetic in primary school. The same cognitive processes accounted for variability in arithmetic skills in both groups. Early language and executive skills predicted variations in preschool verbal number skills, which in turn, predicted arithmetic skills in school. In contrast, phonological awareness was not a predictor of later arithmetic skills. These results suggest that verbal and executive processes provide the foundation for verbal number skills, which in turn influence the development of formal arithmetic skills. Problems in early language development may explain the comorbidity between reading and mathematics disorder.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft- decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Visualising interacting binaries in 3D
NASA Astrophysics Data System (ADS)
Hynes, R. I.
2002-01-01
I have developed a code which allows images to be produced of a variety of interacting binaries for any system parameters. The resulting images are not only helpful in visualising the geometry of a given system but also in talks and educational work. I would like to acknowledge financial support from the Leverhulme Trust, and to thank Dan Rolfe for many discussions on how to represent interacting binaries and the users of BinSim who have provided valuable testing and feedback. BinSim would not have been possible without the efforts of Brian Paul and others responsible for the Mesa 3-D graphics library.
Monte Carlo study of four dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2012-01-01
A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
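For readers unfamiliar with hard-hypersphere Monte Carlo, the following minimal sketch shows the basic trial-move/overlap-rejection step in four dimensions (periodic boundaries, equilibration, and the pair-correlation measurement are omitted; all names are illustrative, not from the authors' code):

```python
# Toy Monte Carlo move for a binary hard-hypersphere mixture in 4D:
# a trial displacement is accepted only if it creates no overlaps.
import numpy as np
rng = np.random.default_rng(1)

def overlaps(pos, i, trial, diameters):
    """True if particle i placed at `trial` overlaps any other hypersphere."""
    others = np.delete(np.arange(len(pos)), i)
    dist = np.linalg.norm(pos[others] - trial, axis=1)
    min_sep = 0.5 * (diameters[others] + diameters[i])
    return np.any(dist < min_sep)

pos = rng.random((50, 4))                                # 50 particles in 4D
diameters = np.where(rng.random(50) < 0.5, 0.04, 0.08)   # diameter ratio 0.5
i = rng.integers(50)
trial = pos[i] + 0.01 * (rng.random(4) - 0.5)            # small random move
if not overlaps(pos, i, trial, diameters):
    pos[i] = trial                                       # accept the move
```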
Teachers' Beliefs and Practices Regarding the Role of Executive Functions in Reading and Arithmetic.
Rapoport, Shirley; Rubinsten, Orly; Katzir, Tami
2016-01-01
The current study investigated early elementary school teachers' beliefs and practices regarding the role of Executive Functions (EFs) in reading and arithmetic. A new research questionnaire was developed and judged by professionals in academia and the field. Responses were obtained from 144 teachers from Israel. Factor analysis divided the questionnaire into three valid and reliable subscales, reflecting (1) beliefs regarding the contribution of EFs to reading and arithmetic, (2) pedagogical practices, and (3) a connection between the cognitive mechanisms of reading and arithmetic. Findings indicate that teachers believe EFs affect students' performance in reading and arithmetic. These beliefs were also correlated with pedagogical practices. Additionally, special education teachers scored higher on the different subscales compared to general education teachers. These findings shed light on the way teachers perceive the cognitive foundations of reading and arithmetic and indicate to what extent these perceptions guide their teaching practices.
EXODUS II: A finite element data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoof, L.A.; Yarberry, V.R.
1994-09-01
EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition) and postprocessing (results visualization), as well as for code-to-code data transfer. An EXODUS II data file is a random access, machine-independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).
NASA Technical Reports Server (NTRS)
1972-01-01
Here, the 7400 line of transistor-transistor logic (TTL) devices is emphasized almost exclusively where hardware is concerned. However, it should be pointed out that the logic theory contained herein applies to all hardware. Binary numbers, simplification of logic circuits, code conversion circuits, basic flip-flop theory, details about series 54/7400, and asynchronous circuits are discussed.
Glynn, P.D.
1991-01-01
The computer code MBSSAS uses two-parameter Margules-type excess-free-energy of mixing equations to calculate thermodynamic equilibrium, pure-phase saturation, and stoichiometric saturation states in binary solid-solution aqueous-solution (SSAS) systems. Lippmann phase diagrams, Roozeboom diagrams, and distribution-coefficient diagrams can be constructed from the output data files, and also can be displayed by MBSSAS (on IBM-PC compatible computers). MBSSAS also will calculate accessory information, such as the location of miscibility gaps, spinodal gaps, critical-mixing points, alyotropic extrema, Henry's law solid-phase activity coefficients, and limiting distribution coefficients. Alternatively, MBSSAS can use such information (instead of the Margules, Guggenheim, or Thompson and Waldbaum excess-free-energy parameters) to calculate the appropriate excess-free-energy of mixing equation for any given SSAS system.
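For illustration, one common textbook two-parameter Margules form for the solid-phase activity coefficients in a binary solid solution is sketched below; this is a generic expression, not necessarily the exact parameterization implemented in MBSSAS:

```python
# Generic two-parameter Margules activity coefficients for a binary
# solid solution (textbook form, not necessarily the MBSSAS internals):
#   ln g1 = x2^2 [A12 + 2(A21 - A12) x1],  ln g2 symmetric in 1 <-> 2.
import math

def margules_gammas(x1, a12, a21):
    """Return (gamma1, gamma2) for mole fractions x1 and x2 = 1 - x1."""
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1 ** 2 * (a21 + 2.0 * (a12 - a21) * x2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Henry's-law (infinite-dilution) limit: gamma1 -> exp(a12) as x1 -> 0,
# which is the kind of limiting coefficient MBSSAS reports.
print(margules_gammas(1e-9, a12=1.2, a21=0.8))
```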
Extra Solar Planet Science With a Non Redundant Mask
NASA Astrophysics Data System (ADS)
Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi
2017-01-01
To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high contrast imaging. I simulated NIRISS data of stars with and without planets, and ran these through the code that measures interferometric image properties to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.
Analysis of the possibility of using G.729 codec for steganographic transmission
NASA Astrophysics Data System (ADS)
Piotrowski, Zbigniew; Ciołek, Michał; Dołowski, Jerzy; Wojtuń, Jarosław
2017-04-01
Network steganography is particularly suited to communication services in which no bridges or nodes carry out unintentional attacks on the steganographic sequence. In order to set up a hidden communication channel, a method of data encoding and decoding was implemented using the codebooks of the G.729 codec. The G.729 codec is built around the CS-ACELP (Conjugate Structure Algebraic Code Excited Linear Prediction) linear-prediction vocoder, and modifying the binary content of the codebook directly changes the binary output stream. The article describes the results of research on selecting those bits of the G.729 codebook whose negation has the least influence on the quality and fidelity of the output signal. The study was performed with the use of subjective and objective listening tests.
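The principle, though not the authors' actual method, can be sketched as overwriting a pre-selected set of low-impact bit positions in a codec frame (the frame size and positions below are hypothetical):

    def embed_bits(frame: bytearray, positions, payload_bits):
        # Overwrite perceptually low-impact bit positions with payload bits.
        for (byte_idx, bit_idx), bit in zip(positions, payload_bits):
            mask = 1 << bit_idx
            if bit:
                frame[byte_idx] |= mask
            else:
                frame[byte_idx] &= ~mask
        return frame

    def extract_bits(frame, positions):
        return [(frame[b] >> i) & 1 for (b, i) in positions]

    frame = bytearray(10)            # stand-in for a 10-byte codec frame
    positions = [(3, 0), (7, 1)]     # hypothetical low-impact bit positions
    embed_bits(frame, positions, [1, 0])
    assert extract_bits(frame, positions) == [1, 0]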
NASA Technical Reports Server (NTRS)
Talcott, N. A., Jr.
1977-01-01
Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8) and pressure-enthalpy diagrams are presented for CBrF3.
A Chemical Alphabet for Macromolecular Communications.
Giannoukos, Stamatios; McGuiness, Daniel Tunç; Marshall, Alan; Smith, Jeremy; Taylor, Stephen
2018-06-08
Molecular communications in macroscale environments is an emerging field of study driven by the intriguing prospect of sending coded information over olfactory networks. For the first time, this article reports two signal modulation techniques, on-off keying (OOK) and concentration shift keying (CSK), which have been used to encode and transmit digital information using odors over distances of 1-4 m. Molecular transmission of digital data was experimentally investigated for the letter "r" with a binary value of 01110010 (ASCII) for a gas stream network channel (up to 4 m) using mass spectrometry (MS) as the main detection-decoding system. The generation and modulation of the chemical signals were achieved using an automated odor emitter (OE) based on the controlled evaporation of a chemical analyte and its diffusion into a carrier gas stream. The chemical signals produced propagate within a confined channel to reach the demodulator (MS). Experiments were undertaken for a range of volatile organic compounds (VOCs) with different diffusion coefficient values in air at ambient conditions. Representative compounds investigated include acetone, cyclopentane, and n-hexane. For the first time, the binary code ASCII (American Standard Code for Information Interchange) is combined with chemical signaling to generate a molecular representation of the English alphabet. Transmission experiments of fixed-width molecular signals corresponding to letters of the alphabet over varying distances are shown. A binary message corresponding to the word "ion" was synthesized using chemical signals and transmitted within a physical channel over a distance of 2 m.
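To make the OOK encoding concrete (a sketch, not the authors' apparatus; the pulse width is an arbitrary placeholder), each character maps to 8 ASCII bits and each bit to an emit/idle interval:

    def text_to_ook_schedule(message: str, pulse_s: float = 1.0):
        # Map each character to 8 ASCII bits, then each bit to an emit/idle interval.
        schedule = []
        for ch in message:
            bits = format(ord(ch), "08b")   # e.g. 'r' -> '01110010'
            schedule.extend(("EMIT" if b == "1" else "IDLE", pulse_s) for b in bits)
        return schedule

    print(text_to_ook_schedule("ion")[:8])   # first letter 'i' = 01101001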
The modelling of heat, mass and solute transport in solidification systems
NASA Technical Reports Server (NTRS)
Voller, V. R.; Brent, A. D.; Prakash, C.
1989-01-01
The aim of this paper is to explore the range of possible one-phase models of binary alloy solidification. Starting from a general two-phase description, based on the two-fluid model, three limiting cases are identified which result in one-phase models of binary systems. Each of these models can be readily implemented in standard single phase flow numerical codes. Differences between predictions from these models are examined. In particular, the effects of the models on the predicted macro-segregation patterns are evaluated.
Binary CFG Rebuilt of Self-Modifying Codes
2016-10-03
Final report (dated 04-10-2016) covering 12 May 2014 to 11 May 2016. ...anti-virus software based on binary signatures. A popular method in industry to analyze malware is dynamic analysis in a sandbox. Alternatively, we apply a hybrid method combining concolic testing (dynamic symbolic...
Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
It is shown that (n_0, k_0) convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate k_0/n_0 and the same number 2^(Mk_0) of encoder states, where M is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with k_0 and n_0 relatively prime, for all Mk_0 less than or equal to 6 and for R = 1/2, 1/3, 1/4, 2/3. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems.
Coding and decoding in a point-to-point communication using the polarization of the light beam.
Kavehvash, Z; Massoumian, F
2008-05-10
A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by detecting the polarization of the received beam. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication link where coding and decoding are done through polarization will be discussed.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in a relatively small number of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
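A rough sketch of the DPCM idea (illustrative Python; zlib's DEFLATE stands in for the LZW coder discussed above, since LZW itself is not in the standard library): differencing turns smooth rows into small residuals that compress better:

    import zlib
    import numpy as np

    def dpcm(image: np.ndarray) -> np.ndarray:
        # Left-neighbour predictor; residuals stored modulo 256 so the pass is invertible.
        d = image.astype(np.int16)
        d[:, 1:] -= image[:, :-1]
        return (d % 256).astype(np.uint8)

    img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # smooth synthetic "image"
    raw = zlib.compress(img.tobytes())
    res = zlib.compress(dpcm(img).tobytes())
    print(len(raw), len(res))   # the DPCM residuals typically compress far better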
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
Acceleration of linear stationary iterative processes in multiprocessor computers. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romm, Ya.E.
1982-05-01
For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982), and Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n×n matrix and b is a real vector with the common Euclidean norm. Existence and uniqueness of the solution are assumed, i.e. det(E − A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = F x^(k), k = 0, 1, 2, ..., where the operator F maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of this number are investigated; it is assumed in addition that the processors perform elementary binary arithmetic operations of addition and multiplication, and the time estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k + 1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k + 1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps in a time comparable to the operation time of logical elements. 6 references.
Price, Gavin R; Yeo, Darren J; Wilkey, Eric D; Cutting, Laurie E
2018-04-01
The present study investigates the relation between resting-state functional connectivity (rsFC) of cytoarchitectonically defined subdivisions of the parietal cortex at the end of 1st grade and arithmetic performance at the end of 2nd grade. Results revealed a dissociable pattern of relations between rsFC and arithmetic competence among subdivisions of the intraparietal sulcus (IPS) and angular gyrus (AG). rsFC between right-hemisphere IPS subdivisions and contralateral IPS subdivisions positively correlated with arithmetic competence. In contrast, rsFC between the left hIP1 and the right medial temporal lobe, and rsFC between the left AG and left superior frontal gyrus, were negatively correlated with arithmetic competence. These results suggest that strong inter-hemispheric IPS connectivity is important for math development, reflecting either neurocognitive mechanisms specific to arithmetic processing, domain-general mechanisms that are particularly relevant to arithmetic competence, or structural 'cortical maturity'. Stronger connectivity between IPS and AG subdivisions and frontal and temporal cortices, however, appears to be negatively associated with math development, possibly reflecting the ability to disengage suboptimal problem-solving strategies during mathematical processing, or to flexibly reorient task-based networks. Importantly, the reported results hold even when controlling for reading, spatial attention, and working memory, suggesting that the observed rsFC-behavior relations are specific to arithmetic competence. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
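A toy version of the underlying conversion step (a sketch only; real methodologies such as the one above optimise word-lengths and scaling positions per variable): quantising a floating-point coefficient to two's-complement Q-format:

    def to_fixed(x: float, frac_bits: int, word_bits: int = 16) -> int:
        # Round to the nearest representable Q-format value, saturating on overflow.
        scaled = round(x * (1 << frac_bits))
        lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
        return max(lo, min(hi, scaled))

    def to_float(q: int, frac_bits: int) -> float:
        return q / (1 << frac_bits)

    coeff = 0.607252935                            # e.g. a filter coefficient
    q = to_fixed(coeff, frac_bits=15)              # Q1.15
    print(q, abs(coeff - to_float(q, 15)))         # quantisation error on the order of 2^-16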
P-Code-Enhanced Encryption-Mode Processing of GPS Signals
NASA Technical Reports Server (NTRS)
Young, Lawrence; Meehan, Thomas; Thomas, Jess B.
2003-01-01
A method of processing signals in a Global Positioning System (GPS) receiver has been invented to enable the receiver to recover some of the information that is otherwise lost when GPS signals are encrypted at the transmitters. The need for this method arises because, at the option of the military, precision GPS code (P-code) is sometimes encrypted by a secret binary code, denoted the A-code. Authorized users can recover the full signal with knowledge of the A-code. However, even in the absence of knowledge of the A-code, one can track the encrypted signal by use of an estimate of the A-code. The present invention is a method of making and using such an estimate. In comparison with prior such methods, this method makes it possible to recover more of the lost information and obtain greater accuracy.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate ε < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
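A quick numerical illustration of why cascading helps (not taken from the paper; the inner Golay and outer Reed-Solomon parameters are just familiar examples): a t-error-correcting block code fails only when more than t of its n symbols are corrupted:

    from math import comb

    def block_error_prob(n, t, eps):
        # P(more than t of n symbols corrupted), each independently with probability eps.
        return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1, n + 1))

    eps = 0.05                                     # raw channel bit error rate
    p_inner = block_error_prob(23, 3, eps)         # (23,12) Golay inner code corrects 3 errors
    p_outer = block_error_prob(255, 16, p_inner)   # (255,223) RS outer code corrects 16 symbol errors
    print(p_inner, p_outer)                        # reliability improves dramatically at each stage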
Rajasekaran, Sanguthevar
2013-01-01
Efficient tile sets for self-assembling rectilinear shapes are of critical importance in algorithmic self-assembly. A lower bound on the tile complexity of any deterministic self-assembly system for an n × n square is Ω(log(n)/log(log(n))) (inferred from Kolmogorov complexity). Deterministic self-assembly systems with an optimal tile complexity have been designed for squares and related shapes in the past. However, designing Θ(log(n)/log(log(n))) unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand, copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self-assembly on tile concentration programming models. We present two major results in this paper on the concentration programming model. First we show how to self-assemble rectangles with a fixed aspect ratio (α:β), with high probability, using Θ(α + β) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta. pp 85-94, 2009), which can only self-assemble squares and rely on tiles which perform binary arithmetic. On the other hand, our result is based on a technique called staircase sampling. This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling on the equimolar concentration programming model (The tile complexity of linear assemblies. In: proceedings of the 36th international colloquium automata, languages and programming: Part I on ICALP '09, Springer-Verlag, pp 235-253, 2009), to self-assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log(n)) and is optimal on the probabilistic tile assembly model (PTAM), with n being an upper bound on the dimensions of a rectangle. PMID:24311993
Asymmetric distances for binary embeddings.
Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana
2014-01-01
In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
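The core contrast is easy to sketch (a generic illustration with random data, not the paper's exact estimators): binarize only the database side and keep the query real-valued:

    import numpy as np

    rng = np.random.default_rng(0)
    db = rng.standard_normal((1000, 64))
    query = rng.standard_normal(64)

    codes = np.sign(db)                 # database side: binarized to {-1, +1}

    def symmetric_hamming(q, codes):
        # Both sides quantized: count sign disagreements.
        return (np.sign(q) != codes).sum(axis=1)

    def asymmetric_sqdist(q, codes):
        # Query stays real-valued: squared Euclidean distance to the binary codes.
        return ((q - codes) ** 2).sum(axis=1)

    # The asymmetric ranking exploits the un-quantized query and usually orders
    # neighbours more faithfully than the symmetric Hamming ranking.
    print(np.argsort(symmetric_hamming(query, codes))[:5])
    print(np.argsort(asymmetric_sqdist(query, codes))[:5])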
NASA Astrophysics Data System (ADS)
Hedlund, Anne; Sandquist, Eric L.; Arentoft, Torben; Brogaard, Karsten; Grundahl, Frank; Stello, Dennis; Bedin, Luigi R.; Libralato, Mattia; Malavolta, Luca; Nardiello, Domenico; Molenda-Zakowicz, Joanna; Vanderburg, Andrew
2018-06-01
V1178 Tau is a double-lined spectroscopic eclipsing binary in NGC1817, one of the more massive clusters observed in the K2 mission. We have determined the orbital period (P = 2.20 d) for the first time, and we model radial velocity measurements from the HARPS and ALFOSC spectrographs, light curves collected by Kepler, and ground based light curves using the Eclipsing Light Curve code (ELC, Orosz & Hauschildt 2000). We present masses and radii for the stars in the binary, allowing for a reddening-independent means of determining the cluster age. V1178 Tau is particularly useful for calculating the age of the cluster because the stars are close to the cluster turnoff, providing a more precise age determination. Furthermore, because one of the stars in the binary is a delta Scuti variable, the analysis provides improved insight into their pulsations.
Topology of black hole binary-single interactions
NASA Astrophysics Data System (ADS)
Samsing, Johan; Ilan, Teva
2018-05-01
We present a study on how the outcomes of binary-single interactions involving three black holes (BHs) distribute as a function of the initial conditions; a distribution we refer to as the topology. Using an N-body code that includes BH finite sizes and gravitational wave (GW) emission in the equation of motion (EOM), we perform more than a million binary-single interactions to explore the topology of both the Newtonian limit and the limit at which general relativistic (GR) effects start to become important. From these interactions, we are able to describe exactly under which conditions BH collisions and eccentric GW capture mergers form, as well as how GR in general modifies the Newtonian topology. This study is performed on both large- and microtopological scales. We further describe how the inclusion of GW emission in the EOM naturally leads to scenarios where the binary-single system undergoes two successive GW mergers.
Ffuzz: Towards full system high coverage fuzz testing on binary executables.
Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing
2018-01-01
Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution, and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug-finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full-system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck both in fuzz testing and in symbolic execution. We also propose two key optimizations to improve the efficiency of full-system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full-system software stack effectively and efficiently.
The Effects of Single and Close Binary Evolution on the Stellar Mass Function
NASA Astrophysics Data System (ADS)
Schneider, R. N. F.; Izzard, G. R.; de Mink, S.; Langer, N.; Stolte, A.; de Koter, A.; Gvaramadze, V. V.; Hussmann, B.; Liermann, A.; Sana, H.
2013-06-01
Massive stars are almost exclusively born in star clusters, where stars in a cluster are expected to be born quasi-simultaneously and with the same chemical composition. The distribution of their birth masses favors lower over higher stellar masses, such that the most massive stars are rare, and the existence of a stellar upper mass limit is still debated. The majority of massive stars are born as members of close binary systems and most of them will exchange mass with a close companion during their lifetime. We explore the influence of single and binary star evolution on the high-mass end of the stellar mass function using a rapid binary evolution code. We apply our results to two massive Galactic star clusters and show how the shape of their mass functions can be used to determine cluster ages, and comment on the stellar upper mass limit in view of our new findings.
Morsanyi, Kinga; O'Mahony, Eileen; McCormack, Teresa
2017-12-01
Recent evidence has highlighted the important role that number-ordering skills play in arithmetic abilities, both in children and adults. In the current study, we demonstrated that number comparison and ordering skills were both significantly related to arithmetic performance in adults, and the effect size was greater in the case of ordering skills. Additionally, we found that the effect of number comparison skills on arithmetic performance was mediated by number-ordering skills. Moreover, performance on comparison and ordering tasks involving the months of the year was also strongly correlated with arithmetic skills, and participants displayed similar (canonical or reverse) distance effects on the comparison and ordering tasks involving months as when the tasks included numbers. This suggests that the processes responsible for the link between comparison and ordering skills and arithmetic performance are not specific to the domain of numbers. Finally, a factor analysis indicated that performance on comparison and ordering tasks loaded on a factor that included performance on a number line task and self-reported spatial thinking styles. These results substantially extend previous research on the role of order processing abilities in mental arithmetic.
Cognitive precursors of arithmetic development in primary school children with cerebral palsy.
Van Rooijen, M; Verhoeven, L; Smits, D W; Dallmeijer, A J; Becher, J G; Steenbergen, B
2014-04-01
The aim of this study was to examine the development of arithmetic performance and its cognitive precursors in children with CP from 7 to 9 years of age. Previous research has shown that children with CP are generally delayed in arithmetic performance compared to their typically developing peers. In children with CP, the developmental trajectory of the ability to solve addition and subtraction tasks has, however, rarely been studied, nor have the cognitive factors affecting this trajectory. Sixty children (M=7.2 years, SD=.23 months at study entry) with CP participated in this study. Standardized tests were administered to assess arithmetic performance, word decoding skills, non-verbal intelligence, and working memory. The results showed that the ability to solve addition and subtraction tasks increased over a two-year period. Word decoding skills were positively related to the initial status of arithmetic performance. In addition, non-verbal intelligence and working memory were associated with the initial status and growth rate of arithmetic performance from 7 to 9 years of age. The current study highlights the importance of non-verbal intelligence and working memory to the development of arithmetic performance of children with CP. Copyright © 2014 Elsevier Ltd. All rights reserved.
Separating stages of arithmetic verification: An ERP study with a novel paradigm.
Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes
2015-08-01
In studies of arithmetic verification, participants typically encounter two operands and they carry out an operation on these (e.g. adding them). Operands are followed by a proposed answer and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. In order to overcome this problem here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus properties mismatch and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail. Copyright © 2015 Elsevier Ltd. All rights reserved.
Simulated Assessment of Interference Effects in Direct Sequence Spread Spectrum (DSSS) QPSK Receiver
2014-03-27
Acronym list (fragment): BER, bit error rate; BPSK, binary phase shift keying; CDMA, code division multiple access; CSI, comb spectrum interference; CW, continuous wave; DPSK, differential phase shift keying. The spreading code used in CDMA and GPS systems is a Gold code. This code is generated by a modulo-2 operation between two different preferred m-sequences. [Figure 3.26: Comparison of input SNR_Sim and output SNR_Out for the band-pass RF filter (SNR_RF) and the despread signal (SNR_DS).]
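That Gold-code construction is easy to sketch (the degree-5 tap pair below is a commonly cited textbook preferred pair, not the GPS polynomials): XOR two maximal-length LFSR sequences bit by bit:

    def m_sequence(taps, length=31, seed=0b00001):
        # Fibonacci LFSR over GF(2); taps list the feedback stages (polynomial exponents).
        n = max(taps)
        reg = [(seed >> i) & 1 for i in range(n)]   # any non-zero initial fill works
        out = []
        for _ in range(length):
            out.append(reg[-1])
            fb = 0
            for t in taps:
                fb ^= reg[t - 1]
            reg = [fb] + reg[:-1]
        return out

    u = m_sequence([5, 2])        # x^5 + x^2 + 1
    v = m_sequence([5, 4, 3, 2])  # x^5 + x^4 + x^3 + x^2 + 1
    gold = [a ^ b for a, b in zip(u, v)]   # modulo-2 sum: one code of the Gold family
    print(gold)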
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states was needed for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
NASA Astrophysics Data System (ADS)
Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.
2017-11-01
We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ∼0.1 M⊙. Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope+HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data ("orbital parallax"), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from 10 to seven dimensions, including parallax, in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
Fundamental parameters of massive stars in multiple systems: The cases of HD 17505A and HD 206267A
NASA Astrophysics Data System (ADS)
Raucq, F.; Rauw, G.; Mahy, L.; Simón-Díaz, S.
2018-06-01
Context. Many massive stars are part of binary or higher multiplicity systems. The present work focusses on two higher multiplicity systems: HD 17505A and HD 206267A. Aims: Determining the fundamental parameters of the components of the inner binary of these systems is mandatory to quantify the impact of binary or triple interactions on their evolution. Methods: We analysed high-resolution optical spectra to determine new orbital solutions of the inner binary systems. After subtracting the spectrum of the tertiary component, a spectral disentangling code was applied to reconstruct the individual spectra of the primary and secondary. We then analysed these spectra with the non-LTE model atmosphere code CMFGEN to establish the stellar parameters and the CNO abundances of these stars. Results: The inner binaries of these systems have eccentric orbits with e ≈ 0.13 despite their relatively short orbital periods of 8.6 and 3.7 days for HD 17505Aa and HD 206267Aa, respectively. Slight modifications of the CNO abundances are found in both components of each system. The components of HD 17505Aa are both well inside their Roche lobe, whilst the primary of HD 206267Aa nearly fills its Roche lobe around periastron passage. Whilst the rotation of the primary of HD 206267Aa is in pseudo-synchronization with the orbital motion, the secondary displays a higher rotation rate. Conclusions: The CNO abundances and properties of HD 17505Aa can be explained by single-star evolutionary models accounting for the effects of rotation, suggesting that this system has not yet experienced binary interaction. The properties of HD 206267Aa suggest that some intermittent binary interaction might have taken place during periastron passages, but is apparently not operating anymore. Based on observations collected with the TIGRE telescope (La Luz, Mexico), the 1.93 m telescope at Observatoire de Haute Provence (France), the Nordic Optical Telescope at the Observatorio del Roque de los Muchachos (La Palma, Spain), and the Canada-France-Hawaii telescope (Mauna Kea, Hawaii).
Medical image processing using neural networks based on multivalued and universal binary neurons
NASA Astrophysics Data System (ADS)
Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.
1998-06-01
Cellular Neural Networks (CNNs) have become a very effective means of solving different kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and CNNs based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVNs and UBNs are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output (MVN) and an arbitrary, not only threshold, Boolean function (UBN). A great advantage of CNNs is the possibility of implementing any linear and many non-linear filters in the spatial domain. Alongside noise removal, CNNs can implement filters that amplify high and medium frequencies. Such filters are well suited to image enhancement and to extracting details against a complex background. CNNs thus make it possible to organize the entire processing chain, from filtering to the extraction of important details. The organization of this process for medical image processing is considered in the paper. Particular attention is given to the processing of X-ray and ultrasound images corresponding to different oncology (or close to oncology) pathologies. Additionally, we consider a new neural network structure for the differential diagnosis of breast cancer.
Do Children Understand Fraction Addition?
ERIC Educational Resources Information Center
Braithwaite, David W.; Tian, Jing; Siegler, Robert S.
2017-01-01
Many children fail to master fraction arithmetic even after years of instruction. A recent theory of fraction arithmetic (Braithwaite, Pyke, & Siegler, in press) hypothesized that this poor learning of fraction arithmetic procedures reflects poor conceptual understanding of them. To test this hypothesis, we performed three experiments…
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
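The first of those techniques is simple to illustrate generically (a Python stand-in, not the ITS FORTRAN; the energy grid is hypothetical): replace a linear scan for a bracketing interval with bisection:

    import bisect

    energy_grid = [0.01, 0.1, 1.0, 10.0, 100.0]   # hypothetical ascending grid

    def find_bin_linear(grid, x):
        # O(n): scan until the bracketing interval is found.
        for i in range(len(grid) - 1):
            if grid[i] <= x < grid[i + 1]:
                return i
        raise ValueError("out of range")

    def find_bin_binary(grid, x):
        # O(log n): same answer via bisection.
        i = bisect.bisect_right(grid, x) - 1
        if 0 <= i < len(grid) - 1:
            return i
        raise ValueError("out of range")

    assert find_bin_linear(energy_grid, 5.0) == find_bin_binary(energy_grid, 5.0) == 2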
Methodology for fast detection of false sharing in threaded scientific codes
Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang
2014-11-25
A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes
2014-12-01
Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rates and under bad channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
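The demapper step can be sketched generically (a max-log illustration with an assumed Gray-labelled QPSK constellation; sign conventions and labels vary between implementations, and this is not the authors' code):

    import numpy as np

    def maxlog_llrs(y, symbols, labels, noise_var):
        # Max-log LLR per bit: min distance over bit=1 symbols minus min over bit=0.
        d2 = np.abs(y - symbols) ** 2
        nbits = len(labels[0])
        llrs = []
        for b in range(nbits):
            bit = np.array([int(lab[b]) for lab in labels])
            llr = (d2[bit == 1].min() - d2[bit == 0].min()) / noise_var
            llrs.append(llr)   # positive -> bit 0 more likely (one common convention)
        return llrs

    # Example: Gray-labelled QPSK (the labelling is an assumption for illustration).
    symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
    labels = ["00", "01", "11", "10"]
    print(maxlog_llrs(0.9 + 0.8j, symbols, labels, noise_var=0.5))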
Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV
NASA Astrophysics Data System (ADS)
Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.
2005-06-01
The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10^-5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model, combined with the pre-equilibrium and evaporation models, were used for the calculations. The number of defects produced by recoil atoms in materials was calculated by the Norgett-Robinson-Torrens model and by an approach combining calculations using the binary collision approximation model with the results of molecular dynamics simulation. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code, and the IOTA code.
Aztec arithmetic revisited: land-area algorithms and Acolhua congruence arithmetic.
Williams, Barbara J; Jorge y Jorge, María del Carmen
2008-04-04
Acolhua-Aztec land records depicting areas and side dimensions of agricultural fields provide insight into Aztec arithmetic. Hypothesizing that recorded areas resulted from indigenous calculation, in a study of sample quadrilateral fields we found that 60% of the area values could be reproduced exactly by computation. In remaining cases, discrepancies between computed and recorded areas were consistently small, suggesting use of an unknown indigenous arithmetic. In revisiting the research, we discovered evidence for the use of congruence principles, based on proportions between the standard linear Acolhua measure and their units of shorter length. This procedure substitutes for computation with fractions and is labeled "Acolhua congruence arithmetic." The findings also clarify variance between Acolhua and Tenochca linear units, long an issue in understanding Aztec metrology.
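One rule of this kind, the "surveyor's rule" that averages opposite sides, is easy to test against a recorded area (the rule choice and the field data below are illustrative, not a claim about the paper's exact algorithm set):

    def surveyors_rule_area(a, b, c, d):
        # Approximate quadrilateral area: product of the means of opposite sides.
        return ((a + c) / 2) * ((b + d) / 2)

    # Hypothetical field with sides in Acolhua linear units and a recorded area:
    sides = (20, 30, 22, 28)
    recorded = 609
    computed = surveyors_rule_area(*sides)
    print(computed, abs(computed - recorded))   # 609.0 -> this record matches the rule exactly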
Reconfigurable pipelined processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saccardi, R.J.
1989-09-19
This patent describes a reconfigurable pipelined processor for processing data. It comprises: a plurality of memory devices for storing bits of data; a plurality of arithmetic units for performing arithmetic functions with the data; cross bar means for connecting the memory devices with the arithmetic units for transferring data therebetween; at least one counter connected with the cross bar means for providing a source of addresses to the memory devices; at least one variable tick delay device connected with each of the memory devices and arithmetic units; and means for providing control bits to the variable tick delay device for variably controlling the input and output operations thereof to selectively delay the memory devices and arithmetic units to align the data for processing in a selected sequence.
Single-digit arithmetic processing—anatomical evidence from statistical voxel-based lesion analysis
Mihulowicz, Urszula; Willmes, Klaus; Karnath, Hans-Otto; Klein, Elise
2014-01-01
Different specific mechanisms have been suggested for solving single-digit arithmetic operations. However, the neural correlates underlying basic arithmetic (multiplication, addition, subtraction) are still under debate. In the present study, we systematically assessed single-digit arithmetic in a group of acute stroke patients (n = 45) with circumscribed left- or right-hemispheric brain lesions. Lesion sites significantly related to impaired performance were found only in the left-hemisphere damaged (LHD) group. Deficits in multiplication and addition were related to subcortical/white matter brain regions differing from those for subtraction tasks, corroborating the notion of distinct processing pathways for different arithmetic tasks. Additionally, our results further point to the importance of investigating fiber pathways in numerical cognition. PMID:24847238
Deaño, Manuel Deaño; Alfonso, Sonia; Das, Jagannath Prasad
2015-03-01
This study reports the cognitive and arithmetic improvement produced by a mathematical training model based on the PASS Remedial Program (PREP), which aims to improve specific cognitive processes underlying academic skills such as arithmetic. For this purpose, a group of 20 students from the last four grades of Primary Education was divided into two groups. One group (n=10) received training in the program and the other served as control. Students were assessed pre- and post-intervention in the PASS cognitive processes (planning, attention, simultaneous and successive processing), general level of intelligence, and arithmetic performance in calculation and problem solving. Performance of children from the experimental group was significantly higher than that of the control group in the cognitive processes and arithmetic. This joint enhancement of cognitive and arithmetic processes was a result of the operationalization of training that promotes the encoding task, attention and planning, and learning by induction, mediation and verbalization. The implications of this are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.
2012-01-01
The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n=279; mean age=7.59 yrs) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems at start of 2nd grade and on calculations, word problems, and pre-algebraic knowledge at end of 3rd grade. Multilevel path analysis, controlling for instructional effects associated with the sequence of classrooms in which students were nested across grades 2–3, indicated arithmetic calculations and word problems are foundational to pre-algebraic knowledge. Also, results revealed direct contributions of nonverbal reasoning and oral language to pre-algebraic knowledge, beyond indirect effects that are mediated via arithmetic calculations and word problems. By contrast, attentive behavior, phonological processing, and processing speed contributed to pre-algebraic knowledge only indirectly via arithmetic calculations and word problems. PMID:22409764
A natural history of mathematics: George Peacock and the making of English algebra.
Lambert, Kevin
2013-06-01
In a series of papers read to the Cambridge Philosophical Society through the 1820s, the Cambridge mathematician George Peacock laid the foundation for a natural history of arithmetic that would tell a story of human progress from counting to modern arithmetic. The trajectory of that history, Peacock argued, established algebraic analysis as a form of universal reasoning that used empirically warranted operations of mind to think with symbols on paper. The science of counting would suggest arithmetic, arithmetic would suggest arithmetical algebra, and, finally, arithmetical algebra would suggest symbolic algebra. This philosophy of suggestion provided the foundation for Peacock's "principle of equivalent forms," which justified the practice of nineteenth-century English symbolic algebra. Peacock's philosophy of suggestion owed a considerable debt to the early Cambridge Philosophical Society culture of natural history. The aim of this essay is to show how that culture of natural history was constitutively significant to the practice of nineteenth-century English algebra.
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.
2014-01-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051
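One generic way to realise such a correlation structure (a sketch under the assumption of a Gaussian latent-variable model, which is not necessarily the authors' exact formulation; correlations here are specified on the latent scale) is to threshold correlated Gaussians:

    import numpy as np
    from statistics import NormalDist

    def simulate_binary_agreement(p, n_readers, n_cases, rho_reader, rho_case, rng):
        # Latent Gaussian = case effect + reader effect + noise, thresholded at Phi^{-1}(p),
        # so each reader-by-case cell agrees with truth with marginal probability p.
        z_case = rng.standard_normal((1, n_cases))
        z_reader = rng.standard_normal((n_readers, 1))
        z_noise = rng.standard_normal((n_readers, n_cases))
        var_noise = 1.0 - rho_case - rho_reader   # assumes rho_case + rho_reader <= 1
        latent = (np.sqrt(rho_case) * z_case + np.sqrt(rho_reader) * z_reader
                  + np.sqrt(var_noise) * z_noise)
        return (latent < NormalDist().inv_cdf(p)).astype(int)   # 1 = agree with truth

    rng = np.random.default_rng(1)
    scores = simulate_binary_agreement(p=0.8, n_readers=5, n_cases=100,
                                       rho_reader=0.1, rho_case=0.3, rng=rng)
    print(scores.mean())   # close to 0.8 on average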
Photometric Solutions of Three Eclipsing Binary Stars Observed from Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Liu, N.; Fu, J. N.; Zong, W.; Wang, L. Z.; Uddin, S. A.; Zhang, X. B.; Zhang, Y. P.; Cang, T. Q.; Li, G.; Yang, Y.; Yang, G. C.; Mould, J.; Morrell, N.
2018-04-01
Based on spectroscopic observations of the eclipsing binaries CSTAR 036162 and CSTAR 055495 with the WiFeS/2.3 m telescope at SSO, and of CSTAR 057775 with MagE/Magellan I at LCO in 2017, stellar parameters are derived. More than 100 nights of almost-continuous light curves, reduced from the time-series photometric observations by CSTAR at Dome A, Antarctica (in i in 2008 and in g and r in 2009), are used to find photometric solutions for the three binaries with the Wilson–Devinney code. The results show that CSTAR 036162 is a detached configuration with the mass ratio q = 0.354 ± 0.0009, while CSTAR 055495 is a semi-detached binary system with the unusually high q = 0.946 ± 0.0006, which indicates that CSTAR 055495 may be a rare binary system with mass ratio close to one and the secondary component filling its Roche lobe. This implies that a mass-ratio reversal has just occurred and CSTAR 055495 is in a rapid mass-transfer stage. Finally, CSTAR 057775 is believed to be an A-type W UMa binary with q = 0.301 ± 0.0008 and a fill-out factor of f = 0.742(8).
Jenks, Kathleen M; de Moor, Jan; van Lieshout, Ernest C D M
2009-07-01
Although it is believed that children with cerebral palsy are at high risk for learning difficulties and arithmetic difficulties in particular, few studies have investigated this issue. Arithmetic ability was longitudinally assessed in children with cerebral palsy in special (n = 41) and mainstream education (n = 16) and controls in mainstream education (n = 16). Second grade executive function and working memory scores were used to predict third grade arithmetic accuracy and response time. Children with cerebral palsy in special education were less accurate and slower than their peers on all arithmetic tests, even after controlling for IQ, whereas children with cerebral palsy in mainstream education performed as well as controls. Although the performance gap became smaller over time, it did not disappear. Children with cerebral palsy in special education showed evidence of executive function and working memory deficits in shifting, updating, visuospatial sketchpad and phonological loop (for digits, not words) whereas children with cerebral palsy in mainstream education only had a deficit in visuospatial sketchpad. Hierarchical regression revealed that, after controlling for intelligence, components of executive function and working memory explained large proportions of unique variance in arithmetic accuracy and response time and these variables were sufficient to explain group differences in simple, but not complex, arithmetic. Children with cerebral palsy are at risk for specific executive function and working memory deficits that, when present, increase the risk for arithmetic difficulties in these children.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28,488
No. of bytes in distributed program, including test data, etc.: 463,778
Distribution format: tar.gz
Programming language: Fortran (a C++ version is available in the Library as AEGQ_v1_0)
Computer: PC running Linux with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: Linux, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors.
Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that knowledge of the accuracy of the stochastic argument of a mathematical function is never lost.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used; this assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA-specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables, and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
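For intuition, here is a toy Python emulation of the random-rounding idea behind Discrete Stochastic Arithmetic. CADNA itself is Fortran; the sample count and the digit estimate below are only illustrative:

```python
import math, random, statistics

def rr(x):
    # Emulate random rounding: nudge the result of each operation
    # by one ulp, up or down at random (requires Python >= 3.9).
    return math.nextafter(x, math.inf if random.random() < 0.5 else -math.inf)

def harmonic_sum(n):
    s = 0.0
    for i in range(1, n + 1):
        s = rr(s + rr(1.0 / i))      # random rounding after every operation
    return s

samples = [harmonic_sum(100_000) for _ in range(3)]
mean, sd = statistics.mean(samples), statistics.stdev(samples)
# Rough CESTAC-style estimate of the number of exact significant digits.
digits = math.log10(abs(mean) / sd) if sd else 15
print(f"result ~ {mean:.15g}, about {digits:.1f} significant digits")
```

The spread across the randomly rounded runs estimates how many digits of the ordinarily rounded result can be trusted, which is the essence of the method the library implements.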
Conceptual Knowledge of Fraction Arithmetic
ERIC Educational Resources Information Center
Siegler, Robert S.; Lortie-Forgues, Hugues
2015-01-01
Understanding an arithmetic operation implies, at minimum, knowing the direction of effects that the operation produces. However, many children and adults, even those who execute arithmetic procedures correctly, may lack this knowledge on some operations and types of numbers. To test this hypothesis, we presented preservice teachers (Study 1),…
ERIC Educational Resources Information Center
Rourke, Byron P.; Conway, James A.
1997-01-01
Reviews current research on brain-behavior relationships in disabilities of arithmetic and mathematical reasoning from both a neurological and a neuropsychological perspective. Defines developmental dyscalculia and the developmental importance of right versus left hemisphere integrity for the mediation of arithmetic learning and explores…
The Design and Implementation of a Translator for Arithmetic and Boolean Expressions.
1980-01-01
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit-error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Various example schemes with inner codes ranging from high rates to very low rates, combined with Reed-Solomon outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit-error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
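As a sanity check on statements like these, the block-error probability of a t-error-correcting inner code of length n on a binary symmetric channel with crossover probability eps is a one-liner; the parameters below are made up for illustration:

```python
from math import comb

def block_error_prob(n: int, t: int, eps: float) -> float:
    """P(more than t of n bits flip) on a binary symmetric channel."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(t + 1, n + 1))

# e.g. a hypothetical length-127, triple-error-correcting inner code at eps = 0.01
print(block_error_prob(127, 3, 0.01))   # ~0.04: too high alone, hence the outer code
```

The residual inner-code failures appear to the outer Reed-Solomon code as symbol errors and erasures, which is what makes the cascade so reliable at high channel error rates.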
Computer simulation of radiation damage in gallium arsenide
NASA Technical Reports Server (NTRS)
Stith, John J.; Davenport, James C.; Copeland, Randolph L.
1989-01-01
A version of the binary-collision simulation code MARLOWE was used to study the spatial characteristics of radiation damage in proton- and electron-irradiated gallium arsenide. Comparisons with experimental results proved encouraging.
APPLICATION OF GAS DYNAMICAL FRICTION FOR PLANETESIMALS. II. EVOLUTION OF BINARY PLANETESIMALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grishin, Evgeni; Perets, Hagai B.
2016-04-01
One of the first stages of planet formation is the growth of small planetesimals and their accumulation into large planetesimals and planetary embryos. This early stage occurs long before the dispersal of most of the gas from the protoplanetary disk. At this stage gas–planetesimal interactions play a key role in the dynamical evolution of single intermediate-mass planetesimals (m_p ~ 10^21–10^25 g) through gas dynamical friction (GDF). A significant fraction of all solar system planetesimals (asteroids and Kuiper-belt objects) are known to be binary planetesimals (BPs). Here, we explore the effects of GDF on the evolution of BPs embedded in a gaseous disk using an N-body code with a fiducial external force accounting for GDF. We find that GDF can induce binary mergers on timescales shorter than the disk lifetime for masses above m_p ≳ 10^22 g at 1 au, independent of the binary's initial separation and eccentricity. Such mergers can affect the structure of merger-formed planetesimals, and the GDF-induced binary inspiral can play a role in the evolution of the planetesimal disk. In addition, binaries on eccentric orbits around the star may evolve in the supersonic regime, where the torque reverses and the binary expands, which would enhance the cross section for planetesimal encounters with the binary. Highly inclined binaries with small mass ratios evolve under the combined effects of Kozai–Lidov (KL) cycles and GDF, which lead to chaotic evolution. Prograde binaries go through semi-regular KL evolution, while retrograde binaries frequently flip their inclination, and ~50% of them are destroyed.
Wolf-Rayet stars, black holes and the first detected gravitational wave source
NASA Astrophysics Data System (ADS)
Bogomazov, A. I.; Cherepashchuk, A. M.; Lipunov, V. M.; Tutukov, A. V.
2018-01-01
The recently discovered gravitational-wave burst GW150914 provides a new opportunity to verify the current view of the evolution of close binary stars. Modern population synthesis codes help to study this evolution from two main-sequence stars up to the formation of the two final remnants: degenerate dwarfs, neutron stars, or black holes (Masevich and Tutukov, 1988). To study the evolution of the GW150914 progenitor we use the "Scenario Machine" code presented by Lipunov et al. (1996). The scenario modeling conducted in this study describes the evolution of systems whose final stage is a massive BH+BH merger. We find that the initial mass of the primary component can be 100-140 M⊙ and the initial separation of the components can be 50-350 R⊙. Our calculations show the plausibility of modern evolutionary scenarios for binary stars and of the population synthesis modeling based on them.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
The code analysis of fringe images plays a vital role in data acquisition for structured light systems, affecting the precision, computational speed, and reliability of the measurement processing. Exploiting the self-normalizing characteristic, a fringe image processing method based on structured light is proposed in which a series of projected patterns is used to detect the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white-light projector and a digital camera; the former projects sinusoidal fringe patterns onto the object, and the latter acquires the fringe patterns deformed by the object's shape. Binary images with distinct white and black strips can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years Malaysia has faced drug addiction issues. The most serious is the relapse phenomenon among treated drug addicts (addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: relapse (Yes, coded as 1) or not (No, coded as 0). The predictors are age, age at first drug use, family history, education level, family crisis, community support, and self-motivation. The total sample size is 200; the data were provided by AADK (the National Antidrug Agency). The findings of the study revealed that age and self-motivation are statistically significant predictors of relapse.
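A hedged sketch of the kind of model fit described, using statsmodels; the column names and the CSV file are hypothetical stand-ins, since the study's actual data are not public:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names mirroring the predictors in the abstract.
df = pd.read_csv("relapse.csv")          # 200 records, 1 = relapse, 0 = no relapse
X = sm.add_constant(df[["age", "age_first_use", "family_history",
                        "education", "family_crisis",
                        "community_support", "self_motivation"]])
y = df["relapse"]

model = sm.Logit(y, X).fit()             # maximum-likelihood logistic regression
print(model.summary())                   # Wald z-tests flag significant predictors
```

The per-coefficient p-values in the summary are what support conclusions like "age and self-motivation are statistically significant."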
Mapping the Milky Way Galaxy with LISA
NASA Technical Reports Server (NTRS)
McKinnon, Jose A.; Littenberg, Tyson
2012-01-01
Gravitational wave detectors in the mHz band (such as the Laser Interferometer Space Antenna, or LISA) will observe thousands of compact binaries in the galaxy, which can be used to better understand the structure of the Milky Way. To test the effectiveness of LISA in measuring the distribution of the galaxy, we simulated the Close White Dwarf Binary (CWDB) gravitational wave sky using different models for the Milky Way. To do so, we developed a galaxy density distribution modeling code based on the Markov chain Monte Carlo method. The code uses different distributions to construct realizations of the galaxy. We then use the Fisher information matrix to estimate the variance and covariance of the recovered parameters for each detected CWDB. This is the first step toward characterizing the capabilities of space-based gravitational wave detectors to constrain models of galactic structure, such as the size and orientation of the bar in the center of the Milky Way.
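A generic numerical Fisher-matrix sketch of the parameter-uncertainty step; a toy sinusoid stands in for the LISA CWDB waveform, and every name and number here is illustrative:

```python
import numpy as np

def fisher_matrix(model, theta, sigma, eps=1e-6):
    """Numerical Fisher matrix for a signal model in white noise of std sigma."""
    theta = np.asarray(theta, float)
    grads = []
    for i in range(theta.size):
        dt = np.zeros_like(theta); dt[i] = eps
        grads.append((model(theta + dt) - model(theta - dt)) / (2 * eps))
    return np.array([[np.dot(gi, gj) for gj in grads] for gi in grads]) / sigma**2

# Toy signal: amplitude and frequency as the recovered parameters.
t = np.linspace(0, 1, 1000)
model = lambda p: p[0] * np.sin(2 * np.pi * p[1] * t)
cov = np.linalg.inv(fisher_matrix(model, [1.0, 10.0], sigma=0.1))
print(np.sqrt(np.diag(cov)))            # 1-sigma parameter uncertainties
```

Inverting the Fisher matrix gives the covariance of the recovered parameters, which is exactly the quantity the abstract says is estimated per detected binary.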
Schroedinger’s code: Source code availability and transparency in astrophysics
NASA Astrophysics Data System (ADS)
Ryan, PW; Allen, Alice; Teuben, Peter
2018-01-01
Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and, if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal's 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October 2017.
Children Learn Spurious Associations in Their Math Textbooks: Examples from Fraction Arithmetic
ERIC Educational Resources Information Center
Braithwaite, David W.; Siegler, Robert S.
2018-01-01
Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge--rather than understanding of mathematical concepts and procedures--to…
A Computational Model of Fraction Arithmetic
ERIC Educational Resources Information Center
Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.
2017-01-01
Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…
Arithmetic 400. A Computer Educational Program.
ERIC Educational Resources Information Center
Firestein, Laurie
"ARITHMETIC 400" is the first of the next generation of educational programs designed to encourage thinking about arithmetic problems. Presented in video game format, performance is a measure of correctness, speed, accuracy, and fortune as well. Play presents a challenge to individuals at various skill levels. The program, run on an Apple…
Simulating Network Retrieval of Arithmetic Facts.
ERIC Educational Resources Information Center
Ashcraft, Mark H.
This report describes a simulation of adults' retrieval of arithmetic facts from a network-based memory representation. The goals of the simulation project are to: demonstrate in specific form the nature of a spreading activation model of mental arithmetic; account for three important reaction time effects observed in laboratory investigations;…
Individual Differences in Children's Understanding of Inversion and Arithmetical Skill
ERIC Educational Resources Information Center
Gilmore, Camilla K.; Bryant, Peter
2006-01-01
Background and aims: In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between…
The Practice of Arithmetic in Liberian Schools.
ERIC Educational Resources Information Center
Brenner, Mary E.
1985-01-01
Describes a study of Liberian schools in which students of the Vai tribe are instructed in Western mathematical practices which differ from those of the students' home culture. Reports that the Vai children employed syncretic arithmetic practices, combining two distinct systems of arithmetic in a classroom environment that tacitly facilitated the…
From Arithmetic Sequences to Linear Equations
ERIC Educational Resources Information Center
Matsuura, Ryota; Harless, Patrick
2012-01-01
The first part of the article focuses on deriving the essential properties of arithmetic sequences by appealing to students' sense making and reasoning. The second part describes how to guide students to translate their knowledge of arithmetic sequences into an understanding of linear equations. Ryota Matsuura originally wrote these lessons for…
Baby Arithmetic: One Object Plus One Tone
ERIC Educational Resources Information Center
Kobayashi, Tessei; Hiraki, Kazuo; Mugitani, Ryoko; Hasegawa, Toshikazu
2004-01-01
Recent studies using a violation-of-expectation task suggest that preverbal infants are capable of recognizing basic arithmetical operations involving visual objects. There is still debate, however, over whether their performance is based on any expectation of the arithmetical operations, or on a general perceptual tendency to prefer visually…
Conceptual Knowledge of Decimal Arithmetic
ERIC Educational Resources Information Center
Lortie-Forgues, Hugues; Siegler, Robert S.
2016-01-01
In two studies (N's = 55 and 54), we examined a basic form of conceptual understanding of rational number arithmetic, the direction of effect of decimal arithmetic operations, at a level of detail useful for informing instruction. Middle school students were presented tasks examining knowledge of the direction of effects (e.g., "True or…
IBM system/360 assembly language interval arithmetic software
NASA Technical Reports Server (NTRS)
Phillips, E. J.
1972-01-01
Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, and error bounds in numerical solutions of ordinary differential equations, integral equations, and boundary value problems. The described software enables users to implement algorithms of this type efficiently on the IBM System/360.
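A minimal Python sketch of the operations described; a faithful interval package would round lower endpoints down and upper endpoints up, which this sketch omits for brevity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(ps), max(ps))
    def __or__(self, o):   # union hull, one of the set operations mentioned
        return Interval(min(self.lo, o.lo), max(self.hi, o.hi))

x, y = Interval(1.0, 2.0), Interval(-0.5, 0.5)
print(x + y, x - y, x * y, x | y)
```

Running a computation on intervals that enclose the true inputs yields an interval guaranteed (given directed rounding) to enclose the true result, which is how the error bounds cited in the abstract are obtained.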
Egeland, Jens; Bosnes, Ole; Johansen, Hans
2009-09-01
Confirmatory Factor Analyses (CFA) of the Wechsler Adult Intelligence Scale-III (WAIS-III) lend partial support to the four-factor model proposed in the test manual. However, the Arithmetic subtest has been especially difficult to allocate to one factor. Using the new Norwegian WAIS-III version, we tested factor models differing in the number of factors and in the placement of the Arithmetic subtest in a mixed clinical sample (n = 272). Only the four-factor solutions had adequate goodness-of-fit values. Allowing Arithmetic to load on both the Verbal Comprehension and Working Memory factors provided a more parsimonious solution compared to considering the subtest only as a measure of Working Memory. Effects of education were particularly high for both the Verbal Comprehension tests and Arithmetic.
If Gravity is Geometry, is Dark Energy just Arithmetic?
NASA Astrophysics Data System (ADS)
Czachor, Marek
2017-04-01
Arithmetic operations (addition, subtraction, multiplication, division), as well as the calculus they imply, are non-unique. The examples of the four-dimensional spaces R_+^4 and (-L/2, L/2)^4 are considered, in which different types of arithmetic and calculus coexist simultaneously. In all the examples there exists a non-Diophantine arithmetic that makes the space globally Minkowskian, and thus the laws of physics are formulated in terms of the corresponding calculus. However, when one switches to the 'natural' Diophantine arithmetic and calculus, the Minkowskian character of the space is lost and what one effectively obtains is a Lorentzian manifold. I discuss in more detail the problem of electromagnetic fields produced by a pointlike charge. The solution has the standard form when expressed in terms of the non-Diophantine formalism. When the 'natural' formalism is used, the same solution looks as if the fields were created by a charge located in an expanding universe, with nontrivially accelerating expansion. The effect is clearly visible also in solutions of the Friedmann equation with vanishing cosmological constant. All of this suggests that phenomena attributed to dark energy may be a manifestation of a mismatch between the arithmetic employed in mathematical modeling and the one occurring at the level of natural laws. Arithmetic is as physical as geometry.
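For readers unfamiliar with the construction: non-Diophantine arithmetics of this kind are typically generated by a bijection f, roughly as follows (my paraphrase of that literature, not a formula from the abstract):

```latex
x \oplus y = f^{-1}\bigl(f(x)+f(y)\bigr), \qquad
x \ominus y = f^{-1}\bigl(f(x)-f(y)\bigr), \qquad
x \otimes y = f^{-1}\bigl(f(x)\,f(y)\bigr)
```

Ordinary (Diophantine) arithmetic is recovered when f is the identity, and derivatives and integrals are rebuilt from the same f, which is what allows the "same" physics to look different under the two calculi.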
Children learn spurious associations in their math textbooks: Examples from fraction arithmetic.
Braithwaite, David W; Siegler, Robert S
2018-04-26
Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge-rather than understanding of mathematical concepts and procedures-to guide choices of solution strategies. They further proposed that this associative knowledge reflects distributional characteristics of the fraction arithmetic problems children encounter. To test these hypotheses, we examined textbooks and middle school children in the United States (Experiments 1 and 2) and China (Experiment 3). We asked the children to predict which arithmetic operation would accompany a specified pair of operands, to generate operands to accompany a specified arithmetic operation, and to match operands and operations. In both countries, children's responses indicated that they associated operand pairs having equal denominators with addition and subtraction, and operand pairs having a whole number and a fraction with multiplication and division. The children's associations paralleled the textbook input in both countries, which was consistent with the hypothesis that children learned the associations from the practice problems. Differences in the effects of such associative knowledge on U.S. and Chinese children's fraction arithmetic performance are discussed, as are implications of these differences for educational practice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Heller, René
2018-03-01
The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
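A rough Python sketch of the first step the abstract describes: reading a plain PBM image and chunking its bits into page rows. The helper name and input file are hypothetical; the 757 x 359 page layout is taken from the abstract:

```python
def read_pbm_bits(path: str) -> str:
    """Read a plain (P1) PBM file and return its pixels as a '0'/'1' string."""
    with open(path) as fh:
        tokens = [t for line in fh
                  for t in line.split("#")[0].split()]   # strip comments
    assert tokens[0] == "P1", "plain PBM expected"
    width, height = int(tokens[1]), int(tokens[2])
    return "".join(tokens[3:])[: width * height]

PAGE_ROWS, ROW_BITS = 757, 359                 # page layout from the abstract
bits = read_pbm_bits("image.pbm")              # hypothetical input file
rows = [bits[i : i + ROW_BITS]
        for i in range(0, len(bits), ROW_BITS)]
```

Each page's header (the little-endian tempo-spatial yardstick mentioned in the abstract) would then be prepended to these rows before the message is written out.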
Flexible digital modulation and coding synthesis for satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John
1991-01-01
An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis-coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q squared PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.
Binary power multiplier for electromagnetic energy
Farkas, Zoltan D.
1988-01-01
A technique for converting electromagnetic pulses to higher power amplitude and shorter duration, in binary multiples, splits an input pulse into two channels, and subjects the pulses in the two channels to a number of binary pulse compression operations. Each pulse compression operation entails combining the pulses in both input channels and selectively steering the combined power to one output channel during the leading half of the pulses and to the other output channel during the trailing half of the pulses, and then delaying the pulse in the first output channel by an amount equal to half the initial pulse duration. Apparatus for carrying out each of the binary multiplication operation preferably includes a four-port coupler (such as a 3 dB hybrid), which operates on power inputs at a pair of input ports by directing the combined power to either of a pair of output ports, depending on the relative phase of the inputs. Therefore, by appropriately phase coding the pulses prior to any of the pulse compression stages, the entire pulse compression (with associated binary power multiplication) can be carried out solely with passive elements.
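An idealized, lossless back-of-envelope relation implied by the description (my reading, not a formula from the patent abstract): after k binary compression stages,

```latex
P_k = 2^{k} P_0, \qquad \tau_k = \tau_0 / 2^{k}, \qquad P_k \tau_k = P_0 \tau_0 ,
```

i.e., each stage doubles the peak power and halves the duration while conserving pulse energy, which is why the multiplication comes in binary multiples.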
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
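A minimal sketch of clustering binary code profiles as described, using hierarchical clustering; the data are random placeholders, and the Jaccard distance and average linkage are assumptions rather than the authors' exact settings:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 20)).astype(bool)   # 50 participants x 20 codes

d = pdist(X, metric="jaccard")                       # distance between code profiles
labels = fcluster(linkage(d, method="average"), t=3, criterion="maxclust")
print(labels)                                        # cluster assignment per participant
```

The 50 x 20 shape mirrors the paper's point that such methods remain usable with samples as small as 50 participants.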
Simulated single molecule microscopy with SMeagol.
Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan
2016-08-01
SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation, and optimization of advanced analysis methods for live-cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled C binaries for reaction-diffusion simulations. Documentation, source code, and binaries for Mac OS, Windows, and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
On the Existence of t-Identifying Codes in Undirected De Bruijn Networks
2015-08-04
The remaining cases of the existence question remain open. Additionally, we show that the eccentricity of the undirected non-binary de Bruijn graph is n; in particular, the eccentricity of every node in the graph B(d, n) is n for d ≥ 3.
Minimum Total-Squared-Correlation Quaternary Signature Sets: New Bounds and Optimal Designs
2009-12-01
Individual differences in children's understanding of inversion and arithmetical skill.
Gilmore, Camilla K; Bryant, Peter
2006-06-01
Background and aims. In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study, drawn from two age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems in a variety of presentation formats and in canonical and non-canonical forms was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than control problems, which indicates that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of the problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding that was in line with their arithmetical skill, whilst a third group of children had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The discovery of the existence of the three groups has important consequences for education. It demonstrates the importance of considering the pattern of individual children's conceptual understanding and problem-solving skills.
17 CFR 232.11 - Definition of terms used in part 232.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., PDF, and static graphic files. Such code may be in binary (machine language) or in script form... Act means the Trust Indenture Act of 1939. Unofficial PDF copy. The term unofficial PDF copy means an...
NASA Astrophysics Data System (ADS)
Xia, Bing
Ultrafast optical signal processing, which shares the fundamental principles of electrical signal processing, can realize numerous important functionalities required in both academic research and industry. Owing to the extremely fast processing speed, all-optical signal processing and pulse shaping have been widely used in ultrafast telecommunication networks, photonically assisted RF/microwave waveform generation, microscopy, biophotonics, and studies of transient and nonlinear properties of atoms and molecules. In this thesis, we investigate two types of optical spectrally-periodic (SP) filters that can be fabricated on planar lightwave circuits (PLC) to perform pulse repetition rate multiplication (PRRM) and arbitrary optical waveform generation (AOWG). First, we present a direct temporal-domain approach for PRRM using SP filters. We show that the repetition rate of an input pulse train can be multiplied by a factor N using an optical filter whose free spectral range need not be constrained to an integer multiple of N. Furthermore, the amplitude of each individual output pulse can be manipulated separately to form an arbitrary envelope at the output by optimizing the impulse response of the filter. Next, we use lattice-form Mach-Zehnder interferometers (LF-MZIs) to implement the temporal-domain approach for PRRM. The simulation results show that PRRM with uniform profiles, binary-code profiles, and triangular profiles can be achieved. Three silica-based LF-MZIs, incorporating multi-mode interference (MMI) couplers and phase shifters, are designed and fabricated. The experimental results show that 40 GHz pulse trains with a uniform envelope pattern, a binary-code pattern "1011", and a binary-code pattern "1101" are generated from a 10 GHz input pulse train. Finally, we investigate 2D ring resonator arrays (RRAs) for ultrafast optical signal processing. We design 2D RRAs to generate a pair of pulse trains with different binary-code patterns simultaneously from a single pulse train at a low repetition rate. We also design 2D RRAs for AOWG using the modified direct temporal-domain approach. To demonstrate the approach, we provide numerical examples illustrating the generation of two very different waveforms (square and triangular) from the same hyperbolic secant input pulse train. This powerful technique based on SP filters can be very useful for ultrafast optical signal processing and pulse shaping.
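A toy numerical illustration of the temporal-domain PRRM idea: an N-tap transversal filter with taps spaced T/N multiplies the repetition rate by N, and the tap weights set the output envelope. All numbers are made up; a real SP filter would be specified by its optical transfer function, not delta taps:

```python
import numpy as np

fs, T, N = 1000, 1.0, 4          # sample rate, input pulse period, multiplier
t = np.arange(0, 8 * T, 1 / fs)
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 0.01) ** 2)
x = sum(pulse(k * T) for k in range(8))          # 1/T-rate input pulse train

# Transversal filter: N taps spaced T/N; tap weights shape the output envelope.
weights = [1.0, 0.0, 1.0, 1.0]                   # the "1011" binary-code pattern
h = np.zeros(int(T * fs))
for k, w in enumerate(weights):
    h[int(k * fs * T / N)] = w

y = np.convolve(x, h)[: t.size]                  # 4x-rate train with coded envelope
```

Convolving the input train with this impulse response reproduces, in discrete time, the "1011"-patterned 4x-rate output described in the abstract.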
A Substituting Meaning for the Equals Sign in Arithmetic Notating Tasks
ERIC Educational Resources Information Center
Jones, Ian; Pratt, Dave
2012-01-01
Three studies explore arithmetic tasks that support both substitutive and basic relational meanings for the equals sign. The duality of meanings enabled children to engage meaningfully and purposefully with the structural properties of arithmetic statements in novel ways. Some, but not all, children were successful at the adapted task and were…
Children's Acquisition of Arithmetic Principles: The Role of Experience
ERIC Educational Resources Information Center
Prather, Richard; Alibali, Martha W.
2011-01-01
The current study investigated how young learners' experiences with arithmetic equations can lead to learning of an arithmetic principle. The focus was elementary school children's acquisition of the Relation to Operands principle for subtraction (i.e., for natural numbers, the difference must be less than the minuend). In Experiment 1, children…
ERIC Educational Resources Information Center
Koontz, Kristine L.; Berch, Daniel B.
1996-01-01
Children with arithmetic learning disabilities (n=16) and normally achieving controls (n=16) in grades 3-5 were administered a battery of computerized tasks. Memory spans for both letters and digits were found to be smaller among the arithmetic learning disabled children. Implications for teaching are discussed. (Author/CMS)
Arithmetic Abilities in Children with Developmental Dyslexia: Performance on French ZAREKI-R Test
ERIC Educational Resources Information Center
De Clercq-Quaegebeur, Maryse; Casalis, Séverine; Vilette, Bruno; Lemaitre, Marie-Pierre; Vallée, Louis
2018-01-01
A high comorbidity between reading and arithmetic disabilities has already been reported. The present study aims at identifying more precisely patterns of arithmetic performance in children with developmental dyslexia, defined with severe and specific criteria. By means of a standardized test of achievement in mathematics ("Calculation and…
How Is Phonological Processing Related to Individual Differences in Children's Arithmetic Skills?
ERIC Educational Resources Information Center
De Smedt, Bert; Taylor, Jessica; Archibald, Lisa; Ansari, Daniel
2010-01-01
While there is evidence for an association between the development of reading and arithmetic, the precise locus of this relationship remains to be determined. Findings from cognitive neuroscience research that point to shared neural correlates for phonological processing and arithmetic as well as recent behavioral evidence led to the present…
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
Arithmetic Performance of Children with Cerebral Palsy: The Influence of Cognitive and Motor Factors
ERIC Educational Resources Information Center
van Rooijen, Maaike; Verhoeven, Ludo; Smits, Dirk-Wouter; Ketelaar, Marjolijn; Becher, Jules G.; Steenbergen, Bert
2012-01-01
Children diagnosed with cerebral palsy (CP) often show difficulties in arithmetic compared to their typically developing peers. The present study explores whether cognitive and motor variables are related to arithmetic performance of a large group of primary school children with CP. More specifically, the relative influence of non-verbal…
Cognitive Arithmetic: Evidence for the Development of Automaticity.
ERIC Educational Resources Information Center
LeFevre, Jo-Anne; Bisanz, Jeffrey
To determine whether children's knowledge of arithmetic facts becomes increasingly "automatic" with age, 7-year-olds, 11-year-olds, and adults were given a number-matching task for which mental arithmetic should have been irrelevant. Specifically, students were required to verify the presence of a probe number in a previously presented pair (e.g.,…
ERIC Educational Resources Information Center
McNeil, Nicole M.; Rittle-Johnson, Bethany; Hattikudur, Shanta; Petersen, Lori A.
2010-01-01
This study examined if solving arithmetic problems hinders undergraduates' accuracy on algebra problems. The hypothesis was that solving arithmetic problems would hinder accuracy because it activates an operational view of equations, even in educated adults who have years of experience with algebra. In three experiments, undergraduates (N = 184)…
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the use of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Frontoparietal white matter diffusion properties predict mental arithmetic skills in children
Tsang, Jessica M.; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.; Ben-Shachar, Michal
2009-01-01
Functional MRI studies of mental arithmetic consistently report blood oxygen level–dependent signals in the parietal and frontal regions. We tested whether white matter pathways connecting these regions are related to mental arithmetic ability by using diffusion tensor imaging (DTI) to measure these pathways in 28 children (age 10–15 years, 14 girls) and assessing their mental arithmetic skills. For each child, we identified anatomically the anterior portion of the superior longitudinal fasciculus (aSLF), a pathway connecting parietal and frontal cortex. We measured fractional anisotropy in a core region centered along the length of the aSLF. Fractional anisotropy in the left aSLF positively correlates with arithmetic approximation skill, as measured by a mental addition task with approximate answer choices. The correlation is stable in adjacent core aSLF regions but lower toward the pathway endpoints. The correlation is not explained by shared variance with other cognitive abilities and did not pass significance in the right aSLF. These measurements used DTI, a structural method, to test a specific functional model of mental arithmetic. PMID:19948963
Chaya, Mayasandra S; Nagendra, Hongasandra; Selvam, Sumithra; Kurpad, Anura; Srinivasan, Krishnamachari
2012-12-01
The objective of this study was to assess the effect of yoga, compared to physical activity on the cognitive performance in 7-9 year-old schoolchildren from a socioeconomic disadvantaged background. Two hundred (200) schoolchildren from Bangalore, India, after baseline assessment of cognitive functioning were randomly allocated to either a yoga or a physical-activity group. Cognitive functions (attention and concentration, visuo-spatial abilities, verbal ability, and abstract thinking) were assessed using an Indian adaptation of the Wechsler Intelligence Scale for Children at baseline, after 3 months of intervention, and later at a 3-month follow-up. Of the 200 subjects, 193 were assessed at 3 months after the study, and then 180 were assessed at the 3-month follow-up. There were no significant differences in cognitive performance between the two study groups (yoga versus physical activity) at postintervention, after controlling for grade levels. Improvement in the mean scores of cognitive tests following intervention varied from 0.5 (Arithmetic) to 1.4 (Coding) for the yoga group and 0.7 (Arithmetic) to 1.6 (Vocabulary) in the physical-activity group. Yoga was as effective as physical activity in improving cognitive performance in 7-9 year old schoolchildren. Further studies are needed to examine the dose-response relationship between yoga and cognitive performance.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved to degree m_1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
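For comparison purposes, here is a compact hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 code with generators (7, 5) octal, the textbook baseline the paper measures against; this is not the paper's minimal-bit-error-probability algorithm:

```python
G = (0b111, 0b101)                       # rate-1/2, K=3 generators (7, 5) octal

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state           # newest bit enters the register
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                 # next state = two most recent bits
    return out

def viterbi(rx):
    INF = float("inf")
    metric, paths = [0, INF, INF, INF], [[], [], [], []]
    for i in range(0, len(rx), 2):
        r, nm, np_ = rx[i:i + 2], [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):             # extend each survivor by one input bit
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + (exp[0] != r[0]) + (exp[1] != r[1])
                ns = reg >> 1
                if m < nm[ns]:
                    nm[ns], np_[ns] = m, paths[s] + [b]
        metric, paths = nm, np_
    return paths[min(range(4), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0]                 # two trailing zeros flush the register
rx = encode(msg)
rx[3] ^= 1                               # inject a single channel error
assert viterbi(rx) == msg                # corrected (free distance 5)
```

Viterbi minimizes the probability of choosing the wrong *sequence*; the algorithm in this paper instead minimizes the error probability of each individual *bit* decision under a fixed decoding delay, which is why it can outperform Viterbi with soft decisions.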
Real coded genetic algorithm for fuzzy time series prediction
NASA Astrophysics Data System (ADS)
Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.
2017-10-01
Genetic algorithms (GAs) form a subset of evolutionary computing, a rapidly growing area of artificial intelligence (AI). Variants of the GA include the binary GA, real-coded GA, messy GA, micro GA, sawtooth GA, and differential evolution. This research article presents a real-coded GA for predicting enrollments at the University of Alabama, whose enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and the genetic algorithm optimizes the fuzzy intervals. Results are compared with other published work and found satisfactory, indicating that real-coded GAs are fast and accurate.
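A bare-bones real-coded GA in Python for readers unfamiliar with the technique; a toy sphere objective stands in for the paper's forecasting-error objective over fuzzy interval boundaries, and all hyperparameters are arbitrary:

```python
import random

def fitness(x):                          # toy objective: sphere function
    return sum(v * v for v in x)

def real_coded_ga(dim=3, pop=30, gens=100, lo=-5.0, hi=5.0):
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 2]            # keep the better half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()          # arithmetic (blend) crossover on reals
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if random.random() < 0.1:    # Gaussian mutation on one gene
                i = random.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

print(real_coded_ga())                   # best chromosome found, near the origin
```

The defining feature of the real-coded variant is that chromosomes are vectors of real numbers with arithmetic crossover and Gaussian mutation, rather than bit strings, which suits continuous decision variables like interval boundaries.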
Formal specification and verification of Ada software
NASA Technical Reports Server (NTRS)
Hird, Geoffrey R.
1991-01-01
The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is also described. The Penelope user inputs mathematical definitions, Larch-style specifications, and Ada code, and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques assisting the reuse of a verification effort on modified code.
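For flavor, the sort of contract one proves for a binary search, written as an illustrative Hoare triple; this is generic notation, not actual Larch or Penelope syntax:

```latex
\{\,\mathit{sorted}(A)\,\}\quad i := \mathrm{search}(A, x) \quad
\{\,(i \ge 0 \Rightarrow A[i] = x)\ \wedge\ (i < 0 \Rightarrow \forall j.\, A[j] \ne x)\,\}
```

The machine-assisted proof obligation is to show that every path through the code preserves a loop invariant strong enough to imply this postcondition on exit.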
NASA Astrophysics Data System (ADS)
Qian, S.-B.; Liu, L.; Zhu, L.-Y.; He, J.-J.; Yang, Y.-G.; Bernasconi, L.
2011-05-01
The newly discovered short-period close binary star XY LMi has been monitored photometrically since 2006. Its light curves are typical EW-type light curves and show complete eclipses with durations of about 80 minutes. Photometric solutions were determined through an analysis of the complete B, V, R, and I light curves using the 2003 version of the Wilson-Devinney code. XY LMi is a high fill-out, extreme mass ratio overcontact binary system with a mass ratio of q = 0.148 and a fill-out factor of f = 74.1%, suggesting that it is in the late evolutionary stage of late-type tidally locked binary stars. As observed in other overcontact binary stars, evidence for the presence of two dark spots, one on each component, is given. Based on our 19 epochs of eclipse times, we found that the orbital period of the overcontact binary is decreasing continuously at a rate of dP/dt = -1.67 × 10^-7 days yr^-1, which may be caused by mass transfer from the primary to the secondary and/or angular momentum loss via magnetic stellar wind. The decrease of the orbital period may result in an increase of the fill-out, and the system will eventually evolve into a single rapidly rotating star when the fluid surface reaches the outer critical Roche lobe.
Fundamental finite key limits for one-way information reconciliation in quantum key distribution
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Martinez-Mateo, Jesus; Pacher, Christoph; Elkouss, David
2017-11-01
The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that one-way information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during information reconciliation is not generally valid. We propose an improved approximation that takes into account finite key effects and numerically test it against codes for two probability distributions, that we call binary-binary and binary-Gaussian, that typically appear in quantum key distribution protocols.
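The often used approximation the abstract refers to is, in its common form (h is the binary entropy function, Q the quantum bit error rate, n the block length, and f_EC ≈ 1.05-1.2 a code-inefficiency factor; the paper's point is that this is not generally valid at finite n):

```latex
\mathrm{leak}_{\mathrm{EC}} \approx f_{\mathrm{EC}}\, n\, h(Q), \qquad
h(Q) = -Q \log_2 Q - (1 - Q)\log_2 (1 - Q).
```

At finite key lengths the leakage has corrections beyond this linear-in-n term, which is what the improved approximation tested against the binary-binary and binary-Gaussian code families accounts for.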
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes, with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology, applied to different schemes, results in operational rate-distortion performance that closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.
Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B
2017-10-15
We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16-quadrature-amplitude-modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
Optical LDPC decoders for beyond 100 Gbits/s optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2009-05-01
We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)×BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.