Sample records for binary linear codes

  1. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much longer codes. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
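
    To make the sidelobe criterion concrete, the following minimal NumPy sketch (an illustration, not the paper's search method) computes the aperiodic autocorrelation of the length-13 Barker code and its peak sidelobe level:

      import numpy as np

      # Length-13 Barker code; its aperiodic autocorrelation has peak sidelobe magnitude 1.
      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

      acf = np.correlate(barker13, barker13, mode="full")   # aperiodic autocorrelation
      mainlobe = acf[len(barker13) - 1]                      # zero-lag value (= 13)
      sidelobes = np.delete(acf, len(barker13) - 1)
      psl_db = 20 * np.log10(np.abs(sidelobes).max() / mainlobe)

      print("peak sidelobe:", np.abs(sidelobes).max())       # 1
      print("PSL:", round(psl_db, 1), "dB")                  # about -22.3 dB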

  2. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary code learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network that seeks multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes under the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
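
    As a rough illustration of the three top-layer constraints (quantization loss, bit balance, and bit independence), the hypothetical NumPy sketch below evaluates simple penalty terms for a batch of real-valued codes; the exact loss forms and weights used in the paper may differ.

      import numpy as np

      def hashing_penalties(H):
          """H: (n_samples, n_bits) real-valued codes from the top layer (illustrative only)."""
          B = np.sign(H)                                 # learned binary vectors in {-1, +1}
          quant_loss = np.sum((H - B) ** 2)              # 1) real-valued vs. binary code loss
          balance = np.sum(H.sum(axis=0) ** 2)           # 2) each bit should be -1/+1 equally often
          n = H.shape[0]
          indep = np.linalg.norm(H.T @ H / n - np.eye(H.shape[1]), "fro") ** 2  # 3) decorrelated bits
          return quant_loss, balance, indep

      print(hashing_penalties(np.random.randn(100, 16)))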

  3. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  3. An m-ary linear feedback shift register with binary logic

    NASA Technical Reports Server (NTRS)

    Perlman, M. (Inventor)

    1973-01-01

    A family of m-ary linear feedback shift registers with binary logic is disclosed. Each m-ary linear feedback shift register with binary logic generates a binary representation of a nonbinary recurring sequence producible with an m-ary linear feedback shift register without binary logic, in which m is greater than 2. The state table of an m-ary linear feedback shift register without binary logic, utilizing sum modulo m feedback, is first tabulated for a given initial state. The entries in the state table are coded in binary, and the binary entries are used to set the initial states of the stages of a plurality of binary shift registers. A single feedback logic unit is employed which provides a separate feedback binary digit to each binary register as a function of the states of corresponding stages of the binary registers.
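
    A minimal sketch of the underlying m-ary register (sum-modulo-m feedback) is given below; the modulus, taps, and initial state are arbitrary illustrative choices, and the binary re-coding of each state described in the patent is not reproduced here.

      # Illustrative m-ary linear feedback shift register with sum-modulo-m feedback (m = 3).
      def mary_lfsr(state, taps, m, steps):
          out = []
          state = list(state)
          for _ in range(steps):
              out.append(state[-1])                       # output symbol
              feedback = sum(state[t] for t in taps) % m  # sum-modulo-m feedback
              state = [feedback] + state[:-1]             # shift and insert feedback
          return out

      print(mary_lfsr(state=[1, 0, 2], taps=[0, 2], m=3, steps=12))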

  5. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not used in this way, as doing so was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
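
    The key observation can be sketched with a pigeonhole argument: if a query and a database code differ in fewer than m bits and both are split into m disjoint substrings, at least one substring matches exactly. A simplified Python sketch of this idea (not the authors' full radius-probing implementation) follows.

      from collections import defaultdict

      def split(code, m):
          step = -(-len(code) // m)                      # ceiling division
          return [code[i:i + step] for i in range(0, len(code), step)]

      def build_tables(codes, m):
          tables = [defaultdict(list) for _ in range(m)]
          for idx, c in enumerate(codes):
              for table, sub in zip(tables, split(c, m)):
                  table[sub].append(idx)
          return tables

      def query(q, codes, tables, m, radius):
          # Pigeonhole: any code within Hamming distance < m of q shares an exact substring with it.
          candidates = set()
          for table, sub in zip(tables, split(q, m)):
              candidates.update(table.get(sub, []))
          return [i for i in candidates
                  if sum(a != b for a, b in zip(q, codes[i])) <= radius]

      codes = ["01101100", "01111100", "11110000"]
      tables = build_tables(codes, m=4)
      print(query("01101101", codes, tables, m=4, radius=1))   # -> [0]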

  6. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density parity-check (LDPC)-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  7. A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2013-02-01

    Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization, where a code is used for labeling quantization intervals, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine the separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of the binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff through an increase in code length. Extensive experimental results vindicate the superiority of our schemes over existing encoding schemes in discretization performance. This opens up the possibility of achieving much greater classification performance with high output entropy.

  8. Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8

    NASA Astrophysics Data System (ADS)

    Sison, Virgilio; Remillion, Monica

    2017-10-01

    Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
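
    For reference, the classical Gray map on ℤ4 that underlies such constructions sends 0 to 00, 1 to 01, 2 to 11, and 3 to 10, and is an isometry from (ℤ4, Lee weight) to (F_2^2, Hamming weight); the paper composes maps of this kind to reach F_2^4 and F_2^8. A quick Python check of this building block (the extension to ℤ4 + uℤ4 is left to the paper):

      # Classical Gray map Z4 -> F2^2; the Lee distance on Z4 equals the Hamming distance of the images.
      GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

      for a in range(4):
          for b in range(4):
              d_lee = min((a - b) % 4, (b - a) % 4)
              d_ham = sum(x != y for x, y in zip(GRAY[a], GRAY[b]))
              assert d_lee == d_ham
      print("Gray map is a (Z4, Lee) -> (F2^2, Hamming) isometry")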

  9. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the locally linear embedding in the original feature space, and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time and memory consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.

  10. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.

  11. Improvements to the construction of binary black hole initial data

    NASA Astrophysics Data System (ADS)

    Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald P.; Boyle, Michael; Szilágyi, Béla

    2015-12-01

    Construction of binary black hole initial data is a prerequisite for numerical evolutions of binary black holes. This paper reports improvements to the binary black hole initial data solver in the Spectral Einstein Code, to allow robust construction of initial data for mass ratios above 10:1 and for dimensionless black hole spins above 0.9, while improving efficiency for lower mass ratios and spins. We implement a more flexible domain decomposition, adaptive mesh refinement, and an updated method for choosing free parameters. We also introduce a new method to control and eliminate residual linear momentum in initial data for precessing systems, and demonstrate that it eliminates gravitational mode mixing during the evolution. Finally, the new code is applied to construct initial data for hyperbolic scattering and for binaries with very small separation.

  12. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training samples. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.

  13. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing, or binary code learning, has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision, and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth, nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The resulting problem is then efficiently solved by an iterative procedure in which each iteration admits an analytical discrete solution, and the procedure is shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, instantiated in this work by both supervised and unsupervised hashing losses, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets, and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.

  14. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently, with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques, including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretical properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.

  15. Polar codes for achieving the classical capacity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Wilde, Mark

    2012-02-01

    We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by "channel combining" and "channel splitting," such that a fraction of the synthesized channels is perfect for data transmission while the remaining fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, with encoding complexity O(N log N), where N is the blocklength of the code. Finally, we demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.

  16. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    NASA Astrophysics Data System (ADS)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

    The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code C, we show that the parameters of the EAQECC EA-stabilized by the dual of C can be determined by a zero-radical quaternary code induced from C, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except for four codes, our [[n,k,d_{ea};c

  17. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
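
    A toy simulation of the setting (hypothetical matrix and noise parameters, not the paper's model) helps fix ideas: sources are switched on according to the rows of a binary coding matrix, measurements pick up constant plus signal-proportional noise, and the per-source values are recovered by solving the linear system.

      import numpy as np

      rng = np.random.default_rng(0)

      S = rng.integers(0, 2, size=(7, 7)).astype(float)      # hypothetical binary coding matrix
      while np.linalg.matrix_rank(S) < 7:                    # keep it invertible
          S = rng.integers(0, 2, size=(7, 7)).astype(float)

      x = rng.uniform(1.0, 5.0, size=7)                      # true per-source signal
      y = S @ x                                              # multiplexed measurements
      noise = 0.05 * rng.standard_normal(7) + 0.02 * np.sqrt(y) * rng.standard_normal(7)
      x_hat = np.linalg.solve(S, y + noise)                  # decoded per-source values

      print("decoding error:", np.round(np.abs(x_hat - x), 3))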

  18. Analysis of the possibility of using G.729 codec for steganographic transmission

    NASA Astrophysics Data System (ADS)

    Piotrowski, Zbigniew; Ciołek, Michał; Dołowski, Jerzy; Wojtuń, Jarosław

    2017-04-01

    Network steganography is dedicated in particular to those communication services for which there are no bridges or nodes carrying out unintentional attacks on the steganographic sequence. In order to set up a hidden communication channel, a method of data encoding and decoding was implemented using the codebooks of the G.729 codec. The G.729 codec is built around the CS-ACELP (Conjugate Structure Algebraic Code Excited Linear Prediction) vocoder, and by modifying the binary content of the codebook it is easy to change the binary output stream. The article describes the results of research on selecting those bits of the G.729 codebook whose negation has the least influence on the quality and fidelity of the output signal. The study was performed with the use of subjective and objective listening tests.

  19. Surface relief structures for multiple beam LO generation

    NASA Technical Reports Server (NTRS)

    Veldkamp, W. B.

    1980-01-01

    Linear and binary holograms for use in heterodyne detection with 10.6 micron imaging arrays are described. The devices match the amplitude and phase of the local oscillator to the received signal and thus maximize the system signal-to-noise ratio and resolution and minimize heat generation on the focal plane. In both the linear and binary approaches, the holographic surface-relief pattern is coded to generate a set of local oscillator beams when the relief pattern is illuminated by a single plane wave. Each beam of this set has the same amplitude shape distribution as, and is collinear with, each single-element wavefront illuminating the array.

  20. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m_1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.

  1. Galerkin-collocation domain decomposition method for arbitrary binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-05-01

    We present a new computational framework for the Galerkin-collocation method on a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations for initial data sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition, and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. We also display features of the conformal factor for different masses, spins, and linear momenta.

  2. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data directly in MATLAB, binary output at the beginning of the height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read the vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, and observations.

  3. Low sidelobe level and high time resolution for metallic ultrasonic testing with linear-chirp-Golay coded excitation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen

    2018-04-01

    Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve time resolution and suppress sidelobes in ultrasonic testing. The LCG-coded excitation is a binary complementary Golay pair with a linear-chirp signal applied to every sub-pulse. Compared with conventional excitation, a common ultrasonic testing method using a brief narrow pulse as the exciting signal, the performance of LCG-coded excitation, in terms of time resolution improvement and sidelobe suppression, is studied via numerical and experimental investigations. The numerical simulations are implemented using the MATLAB k-Wave toolbox. The simulation results show that the time resolution of LCG excitation is 35.5% higher and the peak sidelobe level (PSL) is 57.6 dB lower than those of linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, the time resolution of LCG excitation is higher and the PSL is lower than those of conventional brief pulse excitation and chirp excitation. In terms of time resolution, the LCG-coded signal performs better than the chirp signal. Moreover, the impact of chirp bandwidth on the LCG-coded signal is less than that on the chirp signal. In addition, the sidelobe of the LCG-coded signal is lower than that of the chirp signal with pulse compression.
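
    The sidelobe cancellation that motivates Golay-coded excitation can be checked directly: the aperiodic autocorrelations of a complementary pair sum to an impulse. A minimal NumPy check with a length-4 pair (the paper additionally modulates each sub-pulse with a linear chirp):

      import numpy as np

      a = np.array([1, 1, 1, -1])      # a Golay complementary pair of length 4
      b = np.array([1, 1, -1, 1])

      ra = np.correlate(a, a, mode="full")
      rb = np.correlate(b, b, mode="full")
      print(ra + rb)                   # [0 0 0 8 0 0 0]: sidelobes cancel, mainlobe = 2N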

  4. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  5. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding that uses these properties and approximates maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  6. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  7. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  8. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.

  9. Black-hole kicks from numerical-relativity surrogate models

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Hébert, François; Stein, Leo C.

    2018-05-01

    Binary black holes radiate linear momentum in gravitational waves as they merge. Recoils imparted to the black-hole remnant can reach thousands of km/s, thus ejecting black holes from their host galaxies. We exploit recent advances in gravitational waveform modeling to quickly and reliably extract recoils imparted to generic, precessing, black-hole binaries. Our procedure uses a numerical-relativity surrogate model to obtain the gravitational waveform given a set of binary parameters; then, from this waveform we directly integrate the gravitational-wave linear momentum flux. This entirely bypasses the need for fitting formulas which are typically used to model black-hole recoils in astrophysical contexts. We provide a thorough exploration of the black-hole kick phenomenology in the parameter space, summarizing and extending previous numerical results on the topic. Our extraction procedure is made publicly available as a module for the Python programming language named surrkick. Kick evaluations take ~0.1 s on a standard off-the-shelf machine, thus making our code ideal to be ported to large-scale astrophysical studies.

  10. Performance Analysis of New Binary User Codes for DS-CDMA Communication

    NASA Astrophysics Data System (ADS)

    Usha, Kamle; Jaya Sankar, Kottareddygari

    2016-03-01

    This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over an additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes: an n-bit Gray code is appended with its n-bit inverse Gray code to construct a 2n-length binary user code. Like Walsh codes, these binary user codes are available in sizes that are powers of two; additionally, code sets of length 6 and its even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work, the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over an AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
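
    The zero-lag correlation check used in such comparisons is easy to reproduce for the Walsh-code baseline (the proposed Gray/inverse-Gray construction itself follows the appending rule given in the paper and is not reproduced here):

      import numpy as np

      # Walsh codes of length 8 via the Sylvester construction of Hadamard matrices.
      def hadamard(n):
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      W = hadamard(8)
      print(W @ W.T)    # 8 * identity: zero cross-correlation under perfect synchronization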

  11. Simulations of linear and Hamming codes using SageMath

    NASA Astrophysics Data System (ADS)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where this noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work, where the encoding algorithms are parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that we can simulate these processes using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message will then be encoded into a vector of size n using the given algorithms. A noisy channel with a particular value of error probability is then created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
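
    The same encode-transmit-decode pipeline can be sketched without SageMath; below is a plain NumPy Hamming(7,4) round trip with one injected bit error (the generator and parity-check matrices are one standard systematic choice, not necessarily those used in the paper).

      import numpy as np

      # Systematic Hamming(7,4): G = [I | P], H = [P^T | I]
      P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
      G = np.hstack([np.eye(4, dtype=int), P])
      H = np.hstack([P.T, np.eye(3, dtype=int)])

      msg = np.array([1, 0, 1, 1])
      codeword = msg @ G % 2

      received = codeword.copy()
      received[2] ^= 1                            # the noisy channel flips one bit

      syndrome = received @ H.T % 2               # syndrome decoding
      if syndrome.any():
          err_pos = int(np.argmax((H.T == syndrome).all(axis=1)))
          received[err_pos] ^= 1                  # correct the flipped position

      assert (received[:4] == msg).all()          # original message recovered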

  12. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.

  13. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.

    PubMed

    Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong

    2017-05-01

    Hashing-based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that capture the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions that preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise by solving a relaxed problem with quantization to obtain an approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash functions and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.

  14. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).

  15. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which requires strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes distribute evenly at each learned bin, so that the redundant information in the PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method that reduces the modality gap of heterogeneous faces at the feature level, making our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.

  16. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR generates a one-bit binary code for each dimension and simultaneously ranks the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.

  17. Binary weight distributions of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1992-01-01

    The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered, and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-decoding algorithms presently under development.
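
    The identity in question is the binary MacWilliams identity, which relates a code's weight enumerator W_C(x, y) = sum_i A_i x^(n-i) y^i to that of its dual (in LaTeX):

      W_{C^{\perp}}(x, y) = \frac{1}{|C|}\, W_C(x + y,\ x - y)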

  18. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords, one at a time, using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and searching continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.

  19. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and contrast its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with the minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
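
    Syndrome coding is easiest to see in its simplest instance, matrix embedding with the [7,4] Hamming parity-check matrix: three message bits are embedded into seven cover bits by changing at most one of them. The sketch below illustrates that special case only; the paper's trellis-coded construction generalizes it to arbitrary single-letter distortions.

      import numpy as np

      # Parity-check matrix of the [7,4] Hamming code; column j is the binary expansion of j+1.
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

      def embed(cover, message):
          """Flip at most one cover bit so that H @ stego = message (mod 2)."""
          stego = cover.copy()
          diff = (H @ cover + message) % 2                 # required syndrome change
          if diff.any():
              col = int("".join(map(str, diff)), 2) - 1    # column whose pattern equals diff
              stego[col] ^= 1
          return stego

      cover = np.array([1, 0, 1, 1, 0, 0, 1])
      message = np.array([1, 0, 1])
      stego = embed(cover, message)
      assert (H @ stego % 2 == message).all()              # receiver extracts the message
      print("cover bits changed:", int((stego != cover).sum()))   # at most 1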

  20. Binary image encryption in a joint transform correlator scheme by aid of run-length encoding and QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong

    2018-07-01

    We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and then the compressed binary image is further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into a QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to our best knowledge, encodes a binary image into a QR code of identical size, and therefore may open up a new way for extending the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling, and the QR code translation, append an additional security level to the JTC. We present digital results that confirm our approach.

  1. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset, a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3,1) CC.

  2. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  3. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
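
    The paper supplies R code; an analogous sketch with scikit-learn's decision-tree implementation is shown below on a hypothetical toy data set, purely to illustrate the fit-then-inspect workflow.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))                       # three predictor variables
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # binary response

      tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(export_text(tree, feature_names=["x1", "x2", "x3"]))   # the fitted decision tree
      print("training accuracy:", tree.score(X, y))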

  4. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, and a corresponding decoding scheme, are proposed. The RA-MLC scheme combines multilevel coded modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves the accuracy of decoding by passing soft information through the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  5. Binary to Octal and Octal to Binary Code Converter Using Mach-Zehnder Interferometer for High Speed Communication

    NASA Astrophysics Data System (ADS)

    Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep

    2017-05-01

    A binary-to-octal and octal-to-binary code converter is a device that routes digital information from many inputs to many outputs. Any application of a combinational logic circuit can be implemented by using external gates. In this paper, a binary-to-octal and octal-to-binary code converter is proposed using the electro-optic effect inside lithium-niobate-based Mach-Zehnder interferometers (MZIs). The MZI structures have a powerful capability for switching an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device and its simulation using MATLAB. The study is verified using the beam propagation method (BPM).

  6. Parallel optical image addition and subtraction in a dynamic photorefractive memory by phase-code multiplexing

    NASA Astrophysics Data System (ADS)

    Denz, Cornelia; Dellwig, Thilo; Lembcke, Jan; Tschudi, Theo

    1996-02-01

    We propose and demonstrate experimentally a method for utilizing a dynamic phase-encoded photorefractive memory to realize parallel optical addition, subtraction, and inversion operations on stored images. The phase-encoded holographic memory is realized in photorefractive BaTiO3, storing eight images using Walsh-Hadamard binary phase codes and an incremental recording procedure. By subsampling the set of reference beams during the recall operation, the selectivity of the phase address is decreased, allowing one to combine images in such a way that different linear combinations of the images can be realized at the output of the memory.

  7. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset, a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964

  8. An adaptable binary entropy coder

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.

  9. Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code

    DOE PAGES

    Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...

    2015-06-03

    Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. Here, we abstract from a forest of concrete dependency trees which contain absolute memory addresses to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and finally by solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75% performance improvement, four kernels from IrfanView, leading to 4.97x performance, and one stencil from the miniGMG multigrid benchmark netting a 4.25x improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving a 1.12x speedup without affecting the user experience.

  10. Mal-Xtract: Hidden Code Extraction using Memory Analysis

    NASA Astrophysics Data System (ADS)

    Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah

    2017-01-01

    Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and extract the original code from a packed binary executable using memory analysis in a software-emulated environment. Our experimental results show that at least 97% of the original code could be extracted from various binary executables packed with different software packers. The proposed method has also successfully extracted hidden code from recent malware family samples.

  11. Study of a co-designed decision feedback equalizer, deinterleaver, and decoder

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.; Welch, Loyd

    1990-01-01

    A technique that promises better quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have driven considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by co-designing the equalizer and decoder. Although systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.

  12. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  13. On complexity of trellis structure of linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1990-01-01

    The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Upper and lower bounds are then derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of state complexity among LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of the codewords of an LBC is also considered. This representation helps in the study of the trellis structure of the code. The Boolean polynomial representation of a code is applied to construct its minimal TD. Particular emphasis is placed on the construction of minimal trellises for Reed-Muller codes and for the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have a relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.

  14. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data in large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve the semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. It then exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved in the hash codes so that items in the same class produce similar binary codes. Hence, the proposed MDBE can preserve both discriminability and similarity in the hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval tasks.

  15. Spherical hashing: binary code embedding with hyperspheres.

    PubMed

    Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui

    2015-11-01

    Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, the spherical Hamming distance, tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW, and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best method among the tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.

  16. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
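
    As a purely illustrative sketch of the classroom idea (the helper names are hypothetical, not from the article), reading each base as purine (1) or pyrimidine (0) turns a codon into a 3-bit pattern:

      PURINES = {"A", "G"}        # double-ring bases, read as 1
      PYRIMIDINES = {"C", "T"}    # single-ring bases, read as 0

      def codon_to_binary(codon: str) -> str:
          # Classify each base as purine/pyrimidine to obtain a 3-bit pattern.
          return "".join("1" if base in PURINES else "0" for base in codon.upper())

      print(codon_to_binary("GCT"))   # -> "100"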

  17. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale-invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.

  18. Probability Quantization for Multiplication-Free Binary Arithmetic Coding

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
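
    For orientation only, a toy floating-point version of the high/low interval tracking that binary arithmetic coding performs (ignoring the renormalization and probability-quantization issues the report addresses) might look like:

      def arith_encode(bits, p0):
          # Narrow the interval [low, high) once per input bit; p0 is P(bit = 0).
          low, high = 0.0, 1.0
          for b in bits:
              split = low + (high - low) * p0
              if b == 0:
                  high = split
              else:
                  low = split
          return (low + high) / 2   # any value inside the final interval identifies the message

      def arith_decode(x, p0, n):
          # Replay the same interval narrowing to recover n bits from the code value x.
          low, high, out = 0.0, 1.0, []
          for _ in range(n):
              split = low + (high - low) * p0
              if x < split:
                  out.append(0)
                  high = split
              else:
                  out.append(1)
                  low = split
          return out

      msg = [0, 1, 1, 0, 0, 0, 1]
      assert arith_decode(arith_encode(msg, p0=0.6), p0=0.6, n=len(msg)) == msg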

  19. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.

    PubMed

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.

  20. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder

    NASA Astrophysics Data System (ADS)

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.

  1. u-Constacyclic codes over F_p+u{F}_p and their applications of constructing new non-binary quantum codes

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Wang, Yongkang

    2018-01-01

    Structural properties of u-constacyclic codes over the ring F_p+u{F}_p are given, where p is an odd prime and u^2=1. Under a special Gray map from F_p+u{F}_p to F_p^2, some new non-binary quantum codes are obtained by this class of constacyclic codes.

  2. Simulation of continuously logical base cells (CL BC) with advanced functions for analog-to-digital converters and image processors

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-10-01

    The paper presents results of the design and modeling of continuously logical base cells (CL BC) based on current mirrors (CM) with functions for preliminary analogue and subsequent analogue-digital processing, for creating sensor multichannel analog-to-digital converters (SMC ADCs) and image processors (IP). Such IPs and SMC ADCs with vector or matrix parallel inputs-outputs require active basic photosensitive cells with an extended electronic circuit, which are considered in the paper. Such basic cells and the ADCs based on them have a number of advantages: high speed and reliability, simplicity, small power consumption, and a high integration level for linear and matrix structures. We show the design of the CL BC and the ADC of photocurrents, their various possible implementations, and their simulations. We consider the CL BC for methods of selection and rank preprocessing, and a linear array of ADCs with conversion to binary codes and Gray codes. In contrast to our previous works, here we dwell more on analogue preprocessing schemes for the signals of neighboring cells, and show how the introduction of simple nodes based on current mirrors extends the range of functions performed by the image processor. Each channel of the structure consists of several digital-analog cells (DC) built on 15-35 CMOS transistors. The number of DCs does not exceed the number of digits of the formed code, and for the iteration type only one DC, complemented by a selection-and-holding device (SHD), is required. One channel of the ADC with iteration is based on one DC-(G) and an SHD, and it has only 35 CMOS transistors. In such ADCs a parallel output code, and also a serial-parallel output code, can easily be realized. The circuits and the simulation results of their design with OrCAD are shown. The supply voltage of the DC is 1.8÷3.3 V, the range of the input photocurrent is 0.1÷24 μA, and the conversion time is 20÷30 ns for 6-8 bit binary or Gray codes. The overall power consumption of the ADC with iteration is only 50÷100 μW if the maximum input current is 4 μA. Such a simple structure of a linear array of ADCs combines low power consumption and a 3.3 V supply voltage with good dynamic characteristics (the digitization frequency is 40÷50 MHz even for 1.5 μm CMOS technology, and can be increased up to 10 times) and good accuracy characteristics. The SMC ADCs based on CL BC and CM open new prospects for the realization of linear and matrix IPs and photo-electronic structures with matrix operands, which are necessary for neural networks, digital optoelectronic processors, and neural-fuzzy controllers.

  3. Spectral Cauchy Characteristic Extraction: Gravitational Waves and Gauge Free News

    NASA Astrophysics Data System (ADS)

    Handmer, Casey; Szilagyi, Bela; Winicour, Jeff

    2015-04-01

    We present a fast, accurate spectral algorithm for the characteristic evolution of the full non-linear vacuum Einstein field equations in the Bondi framework. Developed within the Spectral Einstein Code (SpEC), we demonstrate how spectral Cauchy characteristic extraction produces gravitational News without confounding gauge effects. We explain several numerical innovations and demonstrate speed, stability, accuracy, exponential convergence, and consistency with existing methods. We highlight its capability to deliver physical insights in the study of black hole binaries.

  4. Structured codebook design in CELP

    NASA Technical Reports Server (NTRS)

    Leblanc, W. P.; Mahmoud, S. A.

    1990-01-01

    Code Excited Linear Prediction (CELP) is a popular analysis-by-synthesis technique for quantizing speech at bit rates from 4 to 6 kbps. Codebook design techniques to date have been largely based either on random (often Gaussian) codebooks or on known binary or ternary codes which efficiently map the space of (assumed white) excitation codevectors. It has been shown that by introducing symmetries into the codebook, a good complexity reduction can be realized with only a marginal decrease in performance. Codebook design algorithms are considered for a wide range of structured codebooks.

  5. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by an error diffusion method or a bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to the state-of-the-art work reported.

  6. INTRODUCING CAFein, A NEW COMPUTATIONAL TOOL FOR STELLAR PULSATIONS AND DYNAMIC TIDES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valsecchi, F.; Farr, W. M.; Willems, B.

    2013-08-10

    Here we present CAFein, a new computational tool for investigating radiative dissipation of dynamic tides in close binaries and of non-adiabatic, non-radial stellar oscillations in isolated stars in the linear regime. For the latter, CAFein computes the non-adiabatic eigenfrequencies and eigenfunctions of detailed stellar models. The code is based on the so-called Riccati method, a numerical algorithm that has been successfully applied to a variety of stellar pulsators, and which does not suffer from the major drawbacks of commonly used shooting and relaxation schemes. Here we present an extension of the Riccati method to investigate dynamic tides in close binaries. We demonstrate CAFein's capabilities as a stellar pulsation code both in the adiabatic and non-adiabatic regimes, by reproducing previously published eigenfrequencies of a polytrope, and by successfully identifying the unstable modes of a stellar model in the β Cephei/SPB region of the Hertzsprung-Russell diagram. Finally, we verify CAFein's behavior in the dynamic tides regime by investigating the effects of dynamic tides on the eigenfunctions and orbital and spin evolution of massive main sequence stars in eccentric binaries, and of hot Jupiter host stars. The plethora of asteroseismic data provided by NASA's Kepler satellite, some of which include the direct detection of tidally excited stellar oscillations, make CAFein quite timely. Furthermore, the increasing number of observed short-period detached double white dwarfs (WDs) and the observed orbital decay in the tightest of such binaries open up a new possibility of investigating WD interiors through the effects of tides on their orbital evolution.

  7. Multi-level bandwidth efficient block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1989-01-01

    The multilevel technique for combining block coding and modulation is investigated. There are four parts. In the first part, a formulation is presented for the signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than that of C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.

  8. Interactive Exploration for Continuously Expanding Neuron Databases.

    PubMed

    Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting

    2017-02-15

    This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, continuously expanding neuron databases provide a rich source of information to associate neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when adding new neurons. To solve this problem, we extend binary coding with online updating schemes, which only consider the newly added neurons and update the model on the fly, without accessing the whole neuron database. At the fine-grained level, we introduce domain experts/users into the framework, who can give relevance feedback on the binary-coding-based retrieval results. This interactive strategy can improve retrieval performance by re-ranking the coarse results, where we design a new similarity measure that takes the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons.

  9. Advanced complex trait analysis.

    PubMed

    Gray, A; Stewart, I; Tenesa, A

    2012-12-01

    The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as the datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries; for more information see http://www.epcc.ed.ac.uk/software-products/acta. Contact: a.gray@ed.ac.uk. Supplementary data are available at Bioinformatics online.

  10. Digital plus analog output encoder

    NASA Technical Reports Server (NTRS)

    Hafle, R. S. (Inventor)

    1976-01-01

    The disclosed encoder is adapted to produce both digital and analog output signals corresponding to the angular position of a rotary shaft, or the position of any other movable member. The digital signals comprise a series of binary signals constituting a multidigit code word which defines the angular position of the shaft with a degree of resolution which depends upon the number of digits in the code word. The basic binary signals are produced by photocells actuated by a series of binary tracks on a code disc or member. The analog signals are in the form of a series of ramp signals which are related in length to the least significant bit of the digital code word. The analog signals are derived from sine and cosine tracks on the code disc.

  11. PopCORN: Hunting down the differences between binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.

    2014-02-01

    Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org

  12. Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1999-01-01

    An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.

  13. On the existence of binary simplex codes. [using combinatorial construction

    NASA Technical Reports Server (NTRS)

    Taylor, H.

    1977-01-01

    Using a simple combinatorial construction, the existence of a binary simplex code with m codewords is proved for all m greater than or equal to 1. The problem of the shortest possible length is left open.
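
    For context (and not as the paper's construction), the linear binary simplex code often discussed alongside this problem is equidistant: with a generator matrix whose columns are all non-zero m-tuples, every pair of distinct codewords lies at Hamming distance 2^(m-1). A quick numerical check of that property:

      from itertools import product

      def simplex_generator(m):
          # Columns are all non-zero binary m-tuples: an m x (2^m - 1) generator matrix.
          cols = [v for v in product([0, 1], repeat=m) if any(v)]
          return [[c[i] for c in cols] for i in range(m)]

      def codewords(G):
          m, n = len(G), len(G[0])
          for msg in product([0, 1], repeat=m):
              yield tuple(sum(msg[i] * G[i][j] for i in range(m)) % 2 for j in range(n))

      G = simplex_generator(3)
      words = list(codewords(G))
      # All pairs of distinct codewords are at Hamming distance 2^(m-1) = 4.
      dists = {sum(a != b for a, b in zip(u, v)) for u in words for v in words if u != v}
      assert dists == {4}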

  14. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS) connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire routing (interconnections within the IC). The effect of five parameters, namely: (1) effective computational complexity; (2) complexity of the ACS circuit; (3) traceback complexity; (4) ACS connectivity; and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  15. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  16. Progress in understanding heavy-ion stopping

    NASA Astrophysics Data System (ADS)

    Sigmund, P.; Schinner, A.

    2016-09-01

    We report some highlights of our work with heavy-ion stopping in the energy range where Bethe stopping theory breaks down. Main tools are our binary stopping theory (PASS code), the reciprocity principle, and Paul's data base. Comparisons are made between PASS and three alternative theoretical schemes (CasP, HISTOP and SLPA). In addition to equilibrium stopping we discuss frozen-charge stopping, deviations from linear velocity dependence below the Bragg peak, application of the reciprocity principle in low-velocity stopping, modeling of equilibrium charges, and the significance of the so-called effective charge.

  17. Huffman scanning: using language models within fixed-grid keyboard emulation

    PubMed Central

    Roark, Brian; Beckley, Russell; Gibbons, Chris; Fried-Oken, Melanie

    2012-01-01

    Individuals with severe motor impairments commonly enter text using a single binary switch and symbol scanning methods. We present a new scanning method – Huffman scanning – which uses Huffman coding to select the symbols to highlight during scanning, thus minimizing the expected bits per symbol. With our method, the user can select the intended symbol even after switch activation errors. We describe two varieties of Huffman scanning – synchronous and asynchronous – and present experimental results, demonstrating speedups over row/column and linear scanning. PMID:24244070
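
    For readers unfamiliar with the coding step, a minimal sketch of ordinary Huffman code construction (the scanning-specific highlighting and error handling of the paper are not shown; the function names are illustrative):

      import heapq
      from itertools import count

      def huffman_code(freqs):
          # Build a binary Huffman code (symbol -> bit string) from symbol probabilities.
          tiebreak = count()
          heap = [(p, next(tiebreak), sym) for sym, p in freqs.items()]
          heapq.heapify(heap)
          while len(heap) > 1:
              p1, _, a = heapq.heappop(heap)
              p2, _, b = heapq.heappop(heap)
              heapq.heappush(heap, (p1 + p2, next(tiebreak), (a, b)))
          _, _, tree = heap[0]

          codes = {}
          def walk(node, prefix=""):
              if isinstance(node, tuple):      # internal node: descend left (0) and right (1)
                  walk(node[0], prefix + "0")
                  walk(node[1], prefix + "1")
              else:                            # leaf: record the symbol's codeword
                  codes[node] = prefix or "0"
          walk(tree)
          return codes

      probs = {"e": 0.35, "t": 0.25, "a": 0.20, "_": 0.20}
      codes = huffman_code(probs)
      expected_bits = sum(p * len(codes[s]) for s, p in probs.items())   # minimized by Huffman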

  18. POPCORN: A comparison of binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that when equalizing the assumptions the results are similar. The main differences arise from deviating physical input.

  19. A novel approach of an absolute coding pattern based on Hamiltonian graph

    NASA Astrophysics Data System (ADS)

    Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang

    2017-02-01

    In this paper, a novel approach to an optical absolute rotary encoder coding pattern is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern by using variations of the Hamiltonian graph principle. Twelve encoding bits are used in the single ring, read by a linear-array CCD, to achieve a 1080-position cyclic encoding. Besides, a 2-by-2 matrix is used as a unit in the 2-track disk to achieve a 16-bit encoding pattern using an area-array CCD sensor (as a sample). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional Gray or binary code pattern (for a 2^n resolution), this new pattern has a higher resolution (2^n·n) with fewer coding tracks, which means the new pattern can lead to a smaller encoder, which is essential in industrial production.
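
    The Hamiltonian-graph construction itself is not given in the record, but the property it must deliver is concrete: every fixed-length window read from the code track must be unique around the circle. A small check of that property on a de Bruijn-style example (illustrative only, not the authors' pattern):

      def is_absolute_code(track, window):
          # Every cyclic window of `window` bits must be unique so that a single
          # read identifies the shaft position unambiguously.
          n = len(track)
          seen = set()
          for i in range(n):
              w = tuple(track[(i + k) % n] for k in range(window))
              if w in seen:
                  return False
              seen.add(w)
          return True

      # All eight 3-bit cyclic windows of this length-8 track are distinct.
      print(is_absolute_code([0, 0, 0, 1, 0, 1, 1, 1], 3))   # -> True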

  20. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  1. Design of an optical 4-bit binary to BCD converter using electro-optic effect of lithium niobate based Mach-Zehnder interferometers

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2017-07-01

    Binary to binary-coded-decimal (BCD) conversion is a basic building block for BCD processing. The last few decades have witnessed an exponential rise in applications of binary-coded data processing in the field of optical computing, and thus an eventual increase in demand for an acceptable hardware platform for the same. With this approach in mind, a novel design exploiting the preeminent features of the Mach-Zehnder interferometer (MZI) is presented in this paper. Here, an optical 4-bit binary to binary-coded-decimal (BCD) converter utilizing the electro-optic effect in lithium-niobate-based MZIs has been demonstrated. The MZI exhibits the property of switching the optical signal from one port to the other when a certain appropriate voltage is applied to its electrodes. The projected scheme is implemented using combinations of cascaded electro-optic (EO) switches. A theoretical description along with a mathematical formulation of the device is provided, and the operation is analyzed through the finite-difference beam propagation method (FD-BPM). The fabrication techniques to develop the device are also discussed.
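
    Functionally (setting aside the electro-optic realization), a 4-bit binary to BCD conversion is a small arithmetic step; a sketch with illustrative helper names:

      def binary4_to_bcd(bits: str):
          # Map a 4-bit binary word (0-15) to its two BCD digits (tens, units).
          value = int(bits, 2)
          tens, units = divmod(value, 10)
          return format(tens, "04b"), format(units, "04b")

      assert binary4_to_bcd("1101") == ("0001", "0011")   # 13 -> BCD digits 1 and 3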

  2. Rotation invariant deep binary hashing for fast image retrieval

    NASA Astrophysics Data System (ADS)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors that describe a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.

  3. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
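
    The two figures of merit used in the search are straightforward to compute; a minimal sketch (with an illustrative length-7 code whose periodic sidelobes are all -1, not one of the reported length 28-64 results):

      def periodic_autocorrelation(code):
          # Periodic (cyclic) autocorrelation of a +/-1 sequence at all non-zero shifts.
          n = len(code)
          return [sum(code[i] * code[(i + k) % n] for i in range(n)) for k in range(1, n)]

      def sidelobe_metrics(code):
          side = periodic_autocorrelation(code)
          return max(abs(s) for s in side), sum(s * s for s in side)

      code7 = [1, 1, 1, -1, 1, -1, -1]
      print(sidelobe_metrics(code7))   # -> (1, 6): peak sidelobe 1, sidelobe energy 6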

  4. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
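
    A minimal sketch of the idea, using the [7,4] Hamming code's parity-check matrix as the error-correcting code (the choice of code and the brute-force decoder below are illustrative, not the paper's construction): a 7-bit source block is compressed to its 3-bit syndrome, and decompression returns the minimum-weight pattern with that syndrome, which is exact whenever the block has weight at most one.

      import numpy as np

      # Parity-check matrix of the [7,4] Hamming code; column j is the binary expansion of j+1.
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

      def compress(block):
          # The syndrome H.x (mod 2): 3 compressed bits per 7-bit source block.
          return H.dot(block) % 2

      def decompress(syndrome):
          # Return the minimum-weight 7-bit pattern whose syndrome matches (brute force).
          best = None
          for pattern in range(2 ** 7):
              e = np.array([(pattern >> i) & 1 for i in range(7)])
              if np.array_equal(H.dot(e) % 2, syndrome) and (best is None or e.sum() < best.sum()):
                  best = e
          return best

      block = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse source block
      assert np.array_equal(decompress(compress(block)), block)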

  5. INSPECTION MEANS FOR INDUCTION MOTORS

    DOEpatents

    Williams, A.W.

    1959-03-10

    An apparatus is described for inspecting electric motors, and more especially for detecting faulty end rings in squirrel-cage induction motors while the motor is running. An electronic circuit for conversion of excess-3 binary coded serial decimal numbers to straight binary coded serial decimal numbers is also reported. The converter of the invention in its basic form receives coded pulse words of a type having an x algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance, preceding a y algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance. A switching matrix is coupled to the input circuit and is internally connected to produce serial straight binary coded pulse groups indicative of the excess-3 coded input. A stepping circuit is coupled to the switching matrix and to a synchronous counter having a plurality of x decimal digit and a plurality of y decimal digit indicator terminals. The stepping circuit steps the counter in synchronism with the serial binary pulse group output from the switching matrix to successively produce pulses at corresponding ones of the x and y decimal digit indicator terminals. The combinations of straight binary coded pulse groups and corresponding decimal digit indicator signals so produced comprise a basic output suitable for application to a variety of output apparatus.
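
    The excess-3 to straight binary conversion described in the second part of the record amounts to subtracting 3 from each digit code and accumulating decimal digits; a software sketch of that arithmetic (the names and the serial-word framing details are illustrative):

      def excess3_digits_to_int(digits):
          # digits: list of 4-bit excess-3 codes, most significant decimal digit first.
          value = 0
          for d in digits:
              decimal = int(d, 2) - 3           # excess-3 stores digit + 3
              if not 0 <= decimal <= 9:
                  raise ValueError(f"invalid excess-3 digit: {d}")
              value = value * 10 + decimal
          return value

      # Decimal 47: digit 4 -> 0111, digit 7 -> 1010 in excess-3; straight binary is 101111.
      assert excess3_digits_to_int(["0111", "1010"]) == 47
      assert format(47, "b") == "101111"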

  6. FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2015-06-01

    In this paper, we propose a soft-decision-based FEC scheme that is the concatenation of a non-binary LDPC code and a hard-decision FEC code. The proposed NB-LDPC + RS scheme with an overhead of 27.06% provides a superior NCG of 11.9 dB at a post-FEC BER of 10^-15. As a result, the proposed NB-LDPC codes represent a strong soft-decision FEC candidate for beyond-100 Gb/s optical transmission systems.

  7. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
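
    A compact numpy sketch of the alternating minimization the abstract describes (PCA projection is assumed to have been done beforehand; the initialization and iteration count are illustrative choices, not the authors' exact settings):

      import numpy as np

      def itq(V, n_iter=50, seed=0):
          # V: zero-centered, PCA-projected data of shape (n, c). Alternately fix the
          # rotation R and the binary codes B to reduce the quantization error ||B - V R||^2.
          rng = np.random.default_rng(seed)
          c = V.shape[1]
          R, _ = np.linalg.qr(rng.normal(size=(c, c)))     # random orthogonal start
          for _ in range(n_iter):
              B = np.sign(V @ R)                           # fix R: quantize to hypercube vertices
              B[B == 0] = 1
              U, _, Wt = np.linalg.svd(V.T @ B)            # fix B: orthogonal Procrustes solution
              R = U @ Wt
          return (np.sign(V @ R) > 0).astype(int), R       # 0/1 binary codes and final rotation

      X = np.random.default_rng(1).normal(size=(200, 8))   # stand-in for PCA-projected features
      X -= X.mean(axis=0)
      codes, R = itq(X)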

  8. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current stare-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence, however these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, or a specific TCM decoder which applies these considerations to achieve high-speed decoding.

  9. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
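
    The two weighting schemes recommended in the abstract reduce to short formulas; a sketch with made-up numbers for two hypothetical studies:

      import math

      def sample_size_weighted_z(z_scores, n_eff):
          # Fixed-effects combination of per-study Z-scores weighted by sqrt(effective N).
          w = [math.sqrt(n) for n in n_eff]
          return sum(wi * zi for wi, zi in zip(w, z_scores)) / math.sqrt(sum(wi * wi for wi in w))

      def inverse_variance_meta(betas, ses):
          # Fixed-effects inverse-variance meta-analysis of effect sizes (log-odds scale).
          weights = [1.0 / (s * s) for s in ses]
          beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
          se = 1.0 / math.sqrt(sum(weights))
          return beta, se

      print(sample_size_weighted_z([2.1, 1.4], n_eff=[8000, 3000]))
      print(inverse_variance_meta([0.12, 0.08], ses=[0.05, 0.09]))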

  10. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first class.
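
    Iterative decoding over a binary erasure channel reduces to the standard peeling procedure: repeatedly solve any parity check with exactly one erased bit. A minimal sketch (using a small [7,4] Hamming parity-check matrix as a stand-in for a protograph LDPC matrix):

      def peel_decode(H, received):
          # received: list of 0, 1, or None (erased). Solve single-unknown checks until stuck.
          bits = list(received)
          progress = True
          while progress:
              progress = False
              for row in H:
                  unknown = [j for j, h in enumerate(row) if h and bits[j] is None]
                  if len(unknown) == 1:
                      parity = sum(bits[j] for j, h in enumerate(row) if h and bits[j] is not None) % 2
                      bits[unknown[0]] = parity          # even overall parity fixes the erased bit
                      progress = True
          return bits

      H = [[1, 1, 0, 1, 1, 0, 0],
           [1, 0, 1, 1, 0, 1, 0],
           [0, 1, 1, 1, 0, 0, 1]]
      received = [1, 0, None, 1, 0, None, 0]             # codeword 1011010 with two erasures
      print(peel_decode(H, received))                    # -> [1, 0, 1, 1, 0, 1, 0]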

  11. Binary translation using peephole translation rules

    DOEpatents

    Bansal, Sorav; Aiken, Alex

    2010-05-04

    An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.

  12. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  13. Cognitive Code-Division Channelization

    DTIC Science & Technology

    2011-04-01

    22] G. N. Karystinos and D. A. Pados, "New bounds on the total squared correlation and optimum design of DS-CDMA binary signature sets," IEEE Trans...Commun., vol. 51, pp. 48-51, Jan. 2003. [23] C. Ding, M. Golin, and T. Kløve, "Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary...receiver pair coexisting with a primary code-division multiple-access (CDMA) system. Our objective is to find the optimum transmitting power and code

  14. The gravitational wave strain in the characteristic formalism of numerical relativity

    NASA Astrophysics Data System (ADS)

    Bishop, Nigel T.; Reisswig, Christian

    2014-01-01

    The extraction of the gravitational wave signal, within the context of a characteristic numerical evolution is revisited. A formula for the gravitational wave strain is developed and tested, and is made publicly available as part of the PITT code within the Einstein Toolkit. Using the new strain formula, we show that artificial non-linear drifts inherent in time integrated waveforms can be reduced for the case of a binary black hole merger configuration. For the test case of a rapidly spinning stellar core collapse model, however, we find that the drift must have different roots.

  15. BHDD: Primordial black hole binaries code

    NASA Astrophysics Data System (ADS)

    Kavanagh, Bradley J.; Gaggero, Daniele; Bertone, Gianfranco

    2018-06-01

    BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.

  16. Binary Arithmetic From Hariot (CA, 1600 A.D.) to the Computer Age.

    ERIC Educational Resources Information Center

    Glaser, Anton

    This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Bezout, Barlow,…

  17. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2015-01-01

    Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.
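
    As a hedged sketch of the synthesis idea described above (with an invented three-filter bank and assumed array shapes, not the paper's optical model), a per-pixel spectral transmission cube can be built by gating a filter bank with binary DMD patterns:

```python
import numpy as np

n_x, n_y, n_lambda = 8, 8, 16
wavelengths = np.linspace(450e-9, 650e-9, n_lambda)

# Hypothetical filter bank: low-pass, high-pass, and one band-pass response.
low  = (wavelengths < 550e-9).astype(float)
high = (wavelengths >= 550e-9).astype(float)
band = ((wavelengths > 500e-9) & (wavelengths < 600e-9)).astype(float)
filters = np.stack([low, high, band])                  # (3, n_lambda)

rng = np.random.default_rng(0)
# One binary DMD pattern per filter; each pixel selects which filters are active.
patterns = rng.integers(0, 2, size=(3, n_x, n_y))      # (3, n_x, n_y)

# Per-pixel spectral code T(x, y, lambda) = sum_k pattern_k(x, y) * filter_k(lambda)
T = np.einsum('kxy,kl->xyl', patterns, filters)
T = np.clip(T, 0, 1)                                   # keep transmissions physical
print(T.shape)                                          # (8, 8, 16)
```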

  18. Learning Rotation-Invariant Local Binary Descriptor.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.

  19. Constructing binary black hole initial data with high mass ratios and spins

    NASA Astrophysics Data System (ADS)

    Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald; Szilagyi, Bela; Simulating Extreme Spacetimes Collaboration

    2015-04-01

    Binary black hole systems have now been successfully modelled in full numerical relativity by many groups. In order to explore high-mass-ratio (larger than 1:10), high-spin systems (above 0.9 of the maximal BH spin), we revisit the initial-data problem for binary black holes. The initial-data solver in the Spectral Einstein Code (SpEC) was not able to solve for such initial data reliably and robustly. I will present recent improvements to this solver, among them adaptive mesh refinement and control of motion of the center of mass of the binary, and will discuss the much larger region of parameter space this code can now address.

  20. Golay sequences coded coherent optical OFDM for long-haul transmission

    NASA Astrophysics Data System (ADS)

    Qin, Cui; Ma, Xiangrong; Hua, Tao; Zhao, Jing; Yu, Huilong; Zhang, Jian

    2017-09-01

    We propose to use binary Golay sequences in coherent optical orthogonal frequency division multiplexing (CO-OFDM) to improve the long-haul transmission performance. The Golay sequences are generated by binary Reed-Muller codes, which have low peak-to-average power ratio and certain error correction capability. A low-complexity decoding algorithm for the Golay sequences is then proposed to recover the signal. Under the same spectral efficiency, the QPSK modulated OFDM with binary Golay sequence coding, with and without discrete Fourier transform (DFT) spreading (DFTS-QPSK-GOFDM and QPSK-GOFDM), are compared with the normal BPSK modulated OFDM with and without DFT spreading (DFTS-BPSK-OFDM and BPSK-OFDM) after long-haul transmission. At a 7% forward error correction code threshold (Q2 factor of 8.5 dB), it is shown that DFTS-QPSK-GOFDM outperforms DFTS-BPSK-OFDM by extending the transmission distance by 29% and 18%, in non-dispersion managed and dispersion managed links, respectively.
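
    A minimal sketch of why Golay sequences keep the peak-to-average power ratio low (using the standard recursive complementary-pair construction as an illustration, rather than the paper's Reed-Muller encoder): the aperiodic autocorrelation sidelobes of a complementary pair cancel exactly.

```python
import numpy as np

def golay_pair(m):
    """Standard recursive construction of a +/-1 Golay complementary pair."""
    a, b = np.array([1]), np.array([1])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acf(x):
    """Aperiodic autocorrelation of a real sequence."""
    n = len(x)
    return np.array([np.sum(x[:n - k] * x[k:]) for k in range(n)])

a, b = golay_pair(4)          # length-16 sequences
print(acf(a) + acf(b))        # [32, 0, 0, ..., 0] -- all sidelobes cancel
```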

  1. A Bayesian Framework for Generalized Linear Mixed Modeling Identifies New Candidate Loci for Late-Onset Alzheimer’s Disease

    PubMed Central

    Wang, Xulong; Philip, Vivek M.; Ananda, Guruprasad; White, Charles C.; Malhotra, Ankit; Michalski, Paul J.; Karuturi, Krishna R. Murthy; Chintalapudi, Sumana R.; Acklin, Casey; Sasner, Michael; Bennett, David A.; De Jager, Philip L.; Howell, Gareth R.; Carter, Gregory W.

    2018-01-01

    Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost, whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized LMM (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary, and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo sampling and maximal likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer’s Disease Sequencing Project. This study contains 570 individuals from 111 families, each with Alzheimer’s disease diagnosed at one of four confidence levels. Using Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer’s disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with Alzheimer’s disease-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. PMID:29507048

  2. Composite hot subdwarf binaries - I. The spectroscopically confirmed sdB sample

    NASA Astrophysics Data System (ADS)

    Vos, Joris; Németh, Péter; Vučković, Maja; Østensen, Roy; Parsons, Steven

    2018-01-01

    Hot subdwarf-B (sdB) stars in long-period binaries are found to be on eccentric orbits, even though current binary-evolution theory predicts that these objects are circularized before the onset of Roche lobe overflow (RLOF). To increase our understanding of binary interaction processes during the RLOF phase, we started a long-term observing campaign to study wide sdB binaries. In this paper, we present a sample of composite binary sdBs, and the results of the spectral analysis of nine such systems. The grid search in stellar parameters (GSSP) code is used to derive atmospheric parameters for the cool companions. To cross-check our results and also to characterize the hot subdwarfs, we used the independent XTGRID code, which employs TLUSTY non-local thermodynamic equilibrium models to derive atmospheric parameters for the sdB component and PHOENIX synthetic spectra for the cool companions. The independent GSSP and XTGRID codes are found to show good agreement for three test systems that have atmospheric parameters available in the literature. Based on the rotational velocity of the companions, we make an estimate for the mass accreted during the RLOF phase and the minimum duration of that phase. We find that the mass transfer to the companion is minimal during the subdwarf formation.

  3. Constacyclic codes over the ring F_q + vF_q + v^2F_q and their applications of constructing new non-binary quantum codes

    NASA Astrophysics Data System (ADS)

    Ma, Fanghui; Gao, Jian; Fu, Fang-Wei

    2018-06-01

    Let R = F_q + vF_q + v^2F_q be a finite non-chain ring, where q is an odd prime power and v^3 = v. In this paper, we propose two methods of constructing quantum codes from (α + βv + γv^2)-constacyclic codes over R. The first one is obtained via the Gray map and the Calderbank-Shor-Steane construction from Euclidean dual-containing (α + βv + γv^2)-constacyclic codes over R. The second one is obtained via the Gray map and the Hermitian construction from Hermitian dual-containing (α + βv + γv^2)-constacyclic codes over R. As an application, some new non-binary quantum codes are obtained.

  4. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    DOE PAGES

    Motl, Patrick M.; Frank, Juhan; Staff, Jan; ...

    2017-03-29

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume "grid" code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. Here, we also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.

  5. Post-Newtonian Dynamical Modeling of Supermassive Black Holes in Galactic-scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rantala, Antti; Pihajoki, Pauli; Johansson, Peter H.

    We present KETJU, a new extension of the widely used smoothed particle hydrodynamics simulation code GADGET-3. The key feature of the code is the inclusion of algorithmically regularized regions around every supermassive black hole (SMBH). This allows for simultaneously following global galactic-scale dynamical and astrophysical processes, while solving the dynamics of SMBHs, SMBH binaries, and surrounding stellar systems at subparsec scales. The KETJU code includes post-Newtonian terms in the equations of motion of the SMBHs, which enables a new SMBH merger criterion based on the gravitational wave coalescence timescale, pushing the merger separation of SMBHs down to ∼0.005 pc. We test the performance of our code by comparison to NBODY7 and rVINE. We set up dynamically stable multicomponent merger progenitor galaxies to study the SMBH binary evolution during galaxy mergers. In our simulation sample the SMBH binaries do not suffer from the final-parsec problem, which we attribute to the nonspherical shape of the merger remnants. For bulge-only models, the hardening rate decreases with increasing resolution, whereas for models that in addition include massive dark matter halos, the SMBH binary hardening rate becomes practically independent of the mass resolution of the stellar bulge. The SMBHs coalesce on average 200 Myr after the formation of the SMBH binary. However, small differences in the initial SMBH binary eccentricities can result in large differences in the SMBH coalescence times. Finally, we discuss the future prospects of KETJU, which allows for a straightforward inclusion of gas physics in the simulations.

  6. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
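
    As a back-of-the-envelope illustration (not the paper's computation), the bit-wise hard-decision AIR can be approximated by treating each of the m bit levels of a 2^m-ary constellation as a binary symmetric channel with crossover probability p:

```python
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def air_bitwise_hd(m, p):
    """Rough AIR estimate: m parallel BSCs, each contributing 1 - H_b(p) bits."""
    return m * (1.0 - binary_entropy(p))

print(air_bitwise_hd(m=4, p=2e-2))   # 16-QAM, pre-FEC BER 2e-2 -> ~3.43 bits/symbol
```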

  7. ALPS - A LINEAR PROGRAM SOLVER

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
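
    For illustration only (ALPS's own algorithms are not reproduced), a pure binary (0-1 integer) program of the kind mentioned above can be stated and solved by brute force for a tiny instance:

```python
from itertools import product

# maximize  5*x1 + 4*x2 + 3*x3
# subject to 2*x1 + 3*x2 + x3 <= 5,  4*x1 + x2 + 2*x3 <= 11,  x in {0,1}^3
c = [5, 4, 3]
A = [[2, 3, 1], [4, 1, 2]]
b = [5, 11]

best_x, best_val = None, float("-inf")
for x in product([0, 1], repeat=3):            # enumerate all 2^3 binary vectors
    feasible = all(sum(a * xj for a, xj in zip(row, x)) <= bi
                   for row, bi in zip(A, b))
    if feasible:
        val = sum(cj * xj for cj, xj in zip(c, x))
        if val > best_val:
            best_x, best_val = x, val

print(best_x, best_val)   # -> (1, 1, 0) with objective value 9
```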

  8. Multivariate meta-analysis using individual participant data

    PubMed Central

    Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.

    2016-01-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
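
    A minimal sketch of the bootstrap idea for mixed outcomes, using simulated data and assumed effect definitions (the paper's example software code is not reproduced): resample participants, re-estimate both effects, and correlate the bootstrap replicates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, n)                       # 0 = control, 1 = treated
bp    = 150 - 8 * treat + rng.normal(0, 12, n)      # continuous outcome
event = rng.binomial(1, 0.30 - 0.10 * treat)        # binary outcome

def effects(idx):
    t, y, e = treat[idx], bp[idx], event[idx]
    diff = y[t == 1].mean() - y[t == 0].mean()      # mean difference
    p1, p0 = e[t == 1].mean(), e[t == 0].mean()
    p1, p0 = np.clip([p1, p0], 1e-6, 1 - 1e-6)
    log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))  # log odds ratio
    return diff, log_or

# Bootstrap the pair of effect estimates and take their correlation.
boot = np.array([effects(rng.integers(0, n, n)) for _ in range(500)])
print(np.corrcoef(boot[:, 0], boot[:, 1])[0, 1])    # within-study correlation estimate
```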

  9. Binary Neutron Stars with Arbitrary Spins in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Harald; Tacik, Nick; Foucart, Francois; Haas, Roland; Kaplan, Jeffrey; Muhlberger, Curran; Duez, Matt; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    We present a code to construct initial data for binary neutron stars where the stars are rotating. Our code, based on the formalism developed by Tichy, allows for arbitrary rotation axes of the neutron stars and is able to achieve rotation rates near rotational breakup. We demonstrate that orbital eccentricity of the binary neutron stars can be controlled to ~0.1%. Preliminary evolutions show that spin and orbit precession of the neutron stars is well described by the post-Newtonian approximation. The neutron stars show quasi-normal mode oscillations at an amplitude which increases with the rotation rate of the stars.

  10. Binary neutron stars with arbitrary spins in numerical relativity

    NASA Astrophysics Data System (ADS)

    Tacik, Nick; Foucart, Francois; Pfeiffer, Harald P.; Haas, Roland; Ossokine, Serguei; Kaplan, Jeff; Muhlberger, Curran; Duez, Matt D.; Kidder, Lawrence E.; Scheel, Mark A.; Szilágyi, Béla

    2015-12-01

    We present a code to construct initial data for binary neutron star systems in which the stars are rotating. Our code, based on a formalism developed by Tichy, allows for arbitrary rotation axes of the neutron stars and is able to achieve rotation rates near rotational breakup. We compute the neutron star angular momentum through quasilocal angular momentum integrals. When constructing irrotational binary neutron stars, we find a very small residual dimensionless spin of ~2×10^-4. Evolutions of rotating neutron star binaries show that the magnitude of the stars' angular momentum is conserved, and that the spin and orbit precession of the stars is well described by post-Newtonian approximation. We demonstrate that orbital eccentricity of the binary neutron stars can be controlled to ~0.1%. The neutron stars show quasinormal mode oscillations at an amplitude which increases with the rotation rate of the stars.

  11. Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.

    ERIC Educational Resources Information Center

    Pradels, Jean Louis

    Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
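
    For reference, the quantity being bounded is easy to state concretely; the sketch below computes the weighted path length of a small binary tree with weights on the leaves (an assumed simplification of the report's weighted-node setting):

```python
def weighted_path_length(tree, depth=0):
    """A tree is either a numeric leaf weight or a (left, right) pair of subtrees.
    The weighted path length is sum over leaves of weight * depth."""
    if not isinstance(tree, tuple):
        return tree * depth
    left, right = tree
    return weighted_path_length(left, depth + 1) + weighted_path_length(right, depth + 1)

# Weights 4, 2, 1, 1 placed at depths 1, 2, 3, 3.
tree = (4, (2, (1, 1)))
print(weighted_path_length(tree))   # 4*1 + 2*2 + 1*3 + 1*3 = 14
```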

  12. Developing an Array Binary Code Assessment Rubric for Multiple- Choice Questions Using Item Arrays and Binary-Coded Responses

    ERIC Educational Resources Information Center

    Haro, Elizabeth K.; Haro, Luis S.

    2014-01-01

    The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…

  13. Cesium sorption reversibility and kinetics on illite, montmorillonite, and kaolinite

    DOE PAGES

    Durrant, Chad B.; Begg, James D.; Kersting, Annie B.; ...

    2017-08-17

    Understanding sorption and desorption processes is essential to predicting the mobility of radionuclides in the environment. In this study, we investigate adsorption/desorption of cesium in both binary (Cs + one mineral) and ternary (Cs + two minerals) experiments to study component additivity and sorption reversibility over long time periods (500 days). Binary Cs sorption experiments were performed with illite, montmorillonite, and kaolinite in a 5 mM NaCl/0.7 mM NaHCO3 solution (pH 8) and a Cs concentration range of 10^-3 to 10^-11 M. The binary sorption experiments were followed by batch desorption experiments. The sorption behavior was modeled with the FIT4FD code and the results used to predict desorption behavior. Sorption to montmorillonite and kaolinite was linear over the entire concentration range but sorption to illite was non-linear, indicating the presence of multiple sorption sites. Based on the 14 day batch desorption data, cesium sorption appeared irreversible at high surface loadings in the case of illite but reversible at all concentrations for montmorillonite and kaolinite. Additionally, a novel experimental approach, using a dialysis membrane, was adopted in the ternary experiments, allowing investigation of the effect of a second mineral on Cs desorption from the original mineral. Cs was first sorbed to illite, montmorillonite or kaolinite, then a 3.5–5 kDalton Float-A-Lyzer® dialysis bag with 0.3 g of illite was introduced to each experiment inducing desorption. Nearly complete Cs desorption from kaolinite and montmorillonite was observed over the experiment, consistent with our equilibrium model, indicating complete Cs desorption from these minerals. Results from the long-term ternary experiments show significantly greater Cs desorption compared to the binary desorption experiments. Approximately 45% of Cs desorbed from illite. However, our equilibrium model predicted ~65% desorption. Importantly, the data imply that in some cases, slow desorption kinetics rather than permanent fixation may play an important role in apparent irreversible Cs sorption.

  14. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2), while full hierarchy searches can be far more expensive, making analysis highly inefficient for multiple data sets with large N. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
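
    The basic two-body boundedness test that any such search must perform can be sketched as follows (illustrative only, not the companion code itself): two particles are mutually bound if their relative kinetic energy plus mutual potential energy is negative.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def is_bound(m1, x1, v1, m2, x2, v2):
    mu = m1 * m2 / (m1 + m2)                                 # reduced mass
    v_rel = np.linalg.norm(np.asarray(v1) - np.asarray(v2))
    r = np.linalg.norm(np.asarray(x1) - np.asarray(x2))
    energy = 0.5 * mu * v_rel**2 - G * m1 * m2 / r           # total two-body energy
    return energy < 0

# Two ~1e12 kg asteroid-like bodies, 100 km apart, 1 cm/s relative speed.
print(is_bound(1e12, [0, 0, 0], [0, 0, 0], 1e12, [1e5, 0, 0], [0, 0.01, 0]))  # True
```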

  15. Characteristic Evolution and Matching

    NASA Astrophysics Data System (ADS)

    Winicour, Jeffrey

    2012-01-01

    I review the development of numerical evolution codes for general relativity based upon the characteristic initial-value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D-axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black-hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black-hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.

  16. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motl, Patrick M.; Frank, Juhan; Clayton, Geoffrey C.

    2017-04-01

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume “grid” code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.

  17. A flexible surface wetness sensor using a RFID technique.

    PubMed

    Yang, Cheng-Hao; Chien, Jui-Hung; Wang, Bo-Yan; Chen, Ping-Hei; Lee, Da-Sheng

    2008-02-01

    This paper presents a flexible wetness sensor whose detection signal, converted to a binary code, is transmitted through radio-frequency (RF) waves from a radio-frequency identification integrated circuit (RFID IC) to a remote reader. The flexible sensor, with a fixed operating frequency of 13.56 MHz, contains an RFID IC and a sensor circuit that is fabricated on a flexible printed circuit board (FPCB) using a Micro-Electro-Mechanical-System (MEMS) process. The sensor circuit contains a comb-shaped sensing area surrounded by an octagonal antenna with a width of 2.7 cm. The binary code transmitted from the RFID IC to the reader changes if the surface condition of the detector changes from dry to wet. This variation in the binary code can be observed on a digital oscilloscope connected to the reader.

  18. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions such as the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher.
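
    A concrete, invented example (not one of the paper's learned algorithms) of a binary dichotomic question with the stated 32/32 property: asking whether the second base of a codon is a purine splits the 64 codons into two disjoint classes of size 32.

```python
from itertools import product

codons = [''.join(c) for c in product('ACGU', repeat=3)]
class_1 = [c for c in codons if c[1] in 'AG']        # second base is a purine
class_0 = [c for c in codons if c[1] not in 'AG']    # second base is a pyrimidine
print(len(codons), len(class_1), len(class_0))       # 64 32 32
```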

  19. ATLAS offline software performance monitoring and optimization

    NASA Astrophysics Data System (ADS)

    Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration

    2014-06-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine-grained routine and instruction level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been set up, allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms and an overview of the results is presented on a web page. These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and allowing improvements to be quantified in detail.

  20. The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Kochoska, Angela; Prša, Andrej; Horvat, Martin

    2018-01-01

    Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.

  1. Fingerprinting Communication and Computation on HPC Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean

    2010-06-02

    How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.

  2. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
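
    For readers unfamiliar with the encoder structure, the sketch below shows a rate-1/2 binary convolutional encoder with a short constraint length (K = 3, generators 7 and 5 octal), chosen for brevity rather than any of the KE = 24 codes compared in the report:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: each input bit yields two output bits
    computed from fixed generator taps over a short shift register."""
    state = [0] * (len(g1) - 1)
    out = []
    for b in bits:
        reg = [b] + state
        out.append(sum(x & g for x, g in zip(reg, g1)) % 2)   # first parity stream
        out.append(sum(x & g for x, g in zip(reg, g2)) % 2)   # second parity stream
        state = reg[:-1]
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```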

  3. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity check code (NB-LDPC) scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.

  4. Optimal aggregation of binary classifiers for multiclass cancer diagnosis using gene expression profiles.

    PubMed

    Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin

    2009-01-01

    Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (11), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
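
    A minimal sketch of weighted aggregation of one-versus-the-rest binary classifiers (the weights here are fixed by hand purely for illustration; the paper tunes them from the observed data):

```python
import numpy as np

def aggregate_1r(scores, weights):
    """scores[k]  : score of binary classifier 'class k vs rest' for one sample
       weights[k] : confidence weight given to classifier k
       Returns the predicted class index."""
    weighted = np.asarray(weights) * np.asarray(scores)
    return int(np.argmax(weighted))

scores  = [0.2, 0.7, 0.6]              # raw one-vs-rest scores for 3 classes
weights = [1.0, 0.5, 1.2]              # hypothetical per-classifier weights
print(aggregate_1r(scores, weights))   # -> 2 (0.72 beats 0.35 and 0.2)
```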

  5. Optimizing ATLAS code with different profilers

    NASA Astrophysics Data System (ADS)

    Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.

    2014-06-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts, and can interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google, which is based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used to improve the performance of the new magnetic field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.

  6. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471

  7. Finger Vein Recognition Based on Local Directional Code

    PubMed Central

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-01-01

    Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194

  8. Finger vein recognition based on local directional code.

    PubMed

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-11-05

    Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.
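
    As a rough illustration of coding local gradient orientation into a small digit alphabet (the exact LDC operator from the paper is not reproduced here), one can quantize the per-pixel orientation into eight bins:

```python
import numpy as np

def directional_code(patch):
    """Quantize each pixel's gradient orientation into an octal digit 0..7."""
    gy, gx = np.gradient(patch.astype(float))
    angle = np.arctan2(gy, gx)                                    # orientation in (-pi, pi]
    return ((angle + np.pi) / (2 * np.pi) * 8).astype(int) % 8    # 8 orientation bins

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
print(directional_code(patch))   # a vertical edge: every pixel gets the same code
```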

  9. High-precision broad-band linear polarimetry of early-type binaries. I. Discovery of variable, phase-locked polarization in HD 48099

    NASA Astrophysics Data System (ADS)

    Berdyugin, A.; Piirola, V.; Sadegi, S.; Tsygankov, S.; Sakanoi, T.; Kagitani, M.; Yoneda, M.; Okano, S.; Poutanen, J.

    2016-06-01

    Aims: We investigate the structure of the O-type binary system HD 48099 by measuring linear polarization that arises due to the light-scattering process. High-precision polarimetry provides independent estimates of the orbital parameters and gives important information on the properties of the system. Methods: Linear polarization measurements of HD 48099 in the B, V and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained with the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and T60 (Haleakala, Hawaii, USA) remotely controlled telescopes during 31 observing nights. Polarimetry in the optical wavelengths has been complemented by observations in the X-rays with the Swift space observatory. Results: Optical polarimetry revealed small intrinsic polarization in HD 48099 with ~0.1% peak-to-peak variation over the orbital period of 3.08 d. The variability pattern is typical for binary systems, showing a strong second harmonic of the orbital period. We apply our model code for the electron scattering in the circumstellar matter to put constraints on the system geometry. A good model fit is obtained for scattering of light on a cloud produced by the colliding stellar winds. The geometry of the cloud, with a broad distribution of scattering particles away from the orbital plane, helps in constraining the (low) orbital inclination. We derive from the polarization data the inclination i = 17° ± 2° and the longitude of the ascending node Ω = 82° ± 1° of the binary orbit. The available X-ray data provide additional evidence for the existence of the colliding stellar winds in the system. Another possible source of the polarized light could be scattering from the stellar photospheres. The models with circumstellar envelopes, or matter confined to the orbital plane, do not provide good constraints on the low inclination, better than i ≤ 27°, as is already suggested by the absence of eclipses. The polarization data for HD 48099 are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A92

  10. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
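
    The classic enumerative form of combinatorial coding can be sketched in a few lines (this is the textbook scheme, not necessarily the C4 implementation): a block of n bits with k ones is described by k plus the rank of the pattern among all C(n, k) possibilities, which costs about log2 C(n, k) bits.

```python
from math import comb

def enum_encode(bits):
    """Lexicographic rank of a binary block among all blocks with the same weight."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            rank += comb(n - i - 1, ones_left)   # patterns with a 0 here rank lower
            ones_left -= 1
    return n, k, rank

def enum_decode(n, k, rank):
    bits, ones_left = [], k
    for i in range(n):
        zero_block = comb(n - i - 1, ones_left)
        if rank >= zero_block:
            bits.append(1)
            rank -= zero_block
            ones_left -= 1
        else:
            bits.append(0)
    return bits

n, k, rank = enum_encode([0, 1, 1, 0, 1, 0, 0, 1])
print(n, k, rank, enum_decode(n, k, rank))   # round-trips back to the input block
```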

  11. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
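
    As a hedged illustration of the assignment-problem flavor of the bounds mentioned above (not the paper's binary linear program), a vertex-label assignment cost between two tiny attributed graphs can be computed with a standard linear-sum-assignment solver; edge edit costs are ignored here for brevity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

labels_g1 = ['C', 'C', 'O']     # vertex attributes of graph 1
labels_g2 = ['C', 'O', 'N']     # vertex attributes of graph 2
SUB_COST = 1.0                  # hypothetical cost of relabeling a vertex

# Pairwise substitution costs between vertices of the two graphs.
cost = np.array([[0.0 if a == b else SUB_COST for b in labels_g2] for a in labels_g1])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows.tolist(), cols.tolist())), cost[rows, cols].sum())
# optimal vertex mapping and its total label-substitution cost
```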

  12. Identification and Classification of Orthogonal Frequency Division Multiple Access (OFDMA) Signals Used in Next Generation Wireless Systems

    DTIC Science & Technology

    2012-03-01

    advanced antenna systems AMC adaptive modulation and coding AWGN additive white Gaussian noise BPSK binary phase shift keying BS base station BTC ...QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding ( BTC ), zero-terminating

  13. Superluminal Labview Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheat, Robert; Marksteiner, Quinn; Quenzer, Jonathan

    2012-03-26

    This LabVIEW code is used to set the phases and amplitudes on the 72 antennas of the superluminal machine, and to map out the radiation pattern from the superluminal antenna. Each antenna radiates a modulated signal consisting of two separate frequencies, in the range of 2 GHz to 2.8 GHz. The phases and amplitudes from each antenna are controlled by a pair of AD8349 vector modulators (VMs). These VMs set the phase and amplitude of a high frequency signal using a set of four DC inputs, which are controlled by Linear Technologies LTC1990 digital to analog converters (DACs). The LabVIEW code controls these DACs through an 8051 microcontroller. This code also monitors the phases and amplitudes of the 72 channels. Near each antenna, there is a coupler that channels a portion of the power into a binary network. Through a LabVIEW-controlled switching array, any of the 72 coupled signals can be channeled into the Tektronix TDS 7404 digital oscilloscope. Then the LabVIEW code takes an FFT of the signal and compares it to the FFT of a reference signal in the oscilloscope to determine the magnitude and phase of each sideband of the signal. The code compensates for phase and amplitude errors introduced by differences in cable lengths. The LabVIEW code sets each of the 72 elements to a user-determined phase and amplitude. For each element, the code runs an iterative procedure, where it adjusts the DACs until the correct phases and amplitudes have been reached.

  14. Compact binary hashing for music retrieval

    NASA Astrophysics Data System (ADS)

    Seo, Jin S.

    2014-03-01

    With the huge volume of music clips available for protection, browsing, and indexing, there is increased attention to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing increases, the storage cost in retrieval systems becomes a serious problem. This paper deals with the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding to generate a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely-used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves the retrieval performance.
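
    A minimal sketch of one such similarity-preserving binary embedding, sign random projections, applied to generic feature vectors standing in for the supervectors (assumed dimensions and synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 128, 32
W = rng.normal(size=(n_bits, dim))             # fixed random projection matrix

def hash_code(x):
    return (W @ x > 0).astype(np.uint8)        # n_bits-bit binary code

def hamming(a, b):
    return int(np.count_nonzero(a != b))

song_a = rng.normal(size=dim)
song_b = song_a + 0.1 * rng.normal(size=dim)   # a near-duplicate clip
song_c = rng.normal(size=dim)                  # an unrelated clip
print(hamming(hash_code(song_a), hash_code(song_b)),   # small distance
      hamming(hash_code(song_a), hash_code(song_c)))   # larger distance
```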

  15. A new technique for calculations of binary stellar evolution, with application to magnetic braking

    NASA Technical Reports Server (NTRS)

    Rappaport, S.; Joss, P. C.; Verbunt, F.

    1983-01-01

    The development of appropriate computer programs has made it possible to conduct studies of stellar evolution which are more detailed and accurate than the investigations previously feasible. However, the use of such programs can also entail some serious drawbacks related to the time and expense required for the work. One approach to overcoming these drawbacks involves the use of simplified stellar evolution codes which incorporate the essential physics of the problem of interest without attempting either great generality or maximal accuracy. Rappaport et al. (1982) developed a simplified code to study the evolution of close binary stellar systems composed of a collapsed object and a low-mass secondary. The present investigation is concerned with a more general, but still simplified, technique for calculating the evolution of close binary systems with collapsed objects and mass-losing secondaries.

  16. Be discs in coplanar circular binaries: Phase-locked variations of emission lines

    NASA Astrophysics Data System (ADS)

    Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo

    2018-01-01

    In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.

  17. Gene-specific cell labeling using MiMIC transposons

    PubMed Central

    Gnerer, Joshua P.; Venken, Koen J. T.; Dierick, Herman A.

    2015-01-01

    Binary expression systems such as GAL4/UAS, LexA/LexAop and QF/QUAS have greatly enhanced the power of Drosophila as a model organism by allowing spatio-temporal manipulation of gene function as well as cell and neural circuit function. Tissue-specific expression of these heterologous transcription factors relies on random transposon integration near enhancers or promoters that drive the binary transcription factor embedded in the transposon. Alternatively, gene-specific promoter elements are directly fused to the binary factor within the transposon followed by random or site-specific integration. However, such insertions do not consistently recapitulate endogenous expression. We used Minos-Mediated Integration Cassette (MiMIC) transposons to convert host loci into reliable gene-specific binary effectors. MiMIC transposons allow recombinase-mediated cassette exchange to modify the transposon content. We developed novel exchange cassettes to convert coding intronic MiMIC insertions into gene-specific binary factor protein-traps. In addition, we expanded the set of binary factor exchange cassettes available for non-coding intronic MiMIC insertions. We show that binary factor conversions of different insertions in the same locus have indistinguishable expression patterns, suggesting that they reliably reflect endogenous gene expression. We show the efficacy and broad applicability of these new tools by dissecting the cellular expression patterns of the Drosophila serotonin receptor gene family. PMID:25712101

  18. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.

    PubMed

    Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting

    2018-02-12

    Recently released large-scale neuron morphological data has greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data: the 3D neurons are first projected into binary images, and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which can help users explore neuron morphologies in an interactive and immersive manner.

  19. Resettable binary latch mechanism for use with paraffin linear motors

    NASA Technical Reports Server (NTRS)

    Maus, Daryl; Tibbitts, Scott

    1991-01-01

    A new resettable Binary Latch Mechanism was developed utilizing a paraffin actuator as the motor. This linear actuator alternately latches between extended and retracted positions, maintaining either position with zero power consumption. The design evolution and kinematics of the latch mechanism are presented, as well as the development problems and lessons that were learned.

  20. Multivariate meta-analysis using individual participant data.

    PubMed

    Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R

    2015-06-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.

  1. Construction of FuzzyFind Dictionary using Golay Coding Transformation for Searching Applications

    NASA Astrophysics Data System (ADS)

    Kowsari, Kamram

    2015-03-01

    Searching through a large volume of data is critical for companies, scientists, and search-engine applications because of its time and memory complexity. In this paper, a new technique for generating a FuzzyFind Dictionary for text mining is introduced. Words are mapped into 23-bit binary vectors that reflect the presence or absence of particular letters of the English alphabet (longer inputs can be handled with additional 23-bit segments and additional dictionaries). This representation preserves the closeness of word distortions in terms of the closeness of the resulting binary vectors, within a Hamming distance of 2. The paper describes the Golay Coding Transformation Hash Table and how it can be applied to a FuzzyFind Dictionary for searching through big data. The method offers linear time complexity for generating the dictionary, constant time complexity for accessing the data, and update time that is linear in the number of new data points. The technique operates on 23-bit segments derived from English letters and can also work with more than one segment as a reference table.
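
    The sketch below is our own illustration of the fuzzy-dictionary idea (not the paper's implementation, and without the Golay coding transformation that makes lookups constant-time): each word is mapped to a 23-bit letter-presence signature, and a lookup tolerates small distortions by probing every signature within Hamming distance 2. All names are hypothetical.

    ```python
    from collections import defaultdict
    from itertools import combinations

    ALPHABET = "abcdefghijklmnopqrstuvw"          # 23 letters -> one bit each

    def signature(word):
        """23-bit presence/absence vector for the letters of a word."""
        bits = 0
        for ch in word.lower():
            if ch in ALPHABET:
                bits |= 1 << ALPHABET.index(ch)
        return bits

    def build_dictionary(words):
        index = defaultdict(set)
        for w in words:
            index[signature(w)].add(w)
        return index

    def fuzzy_lookup(index, word, radius=2):
        """Candidate words whose signatures lie within the given Hamming radius."""
        sig = signature(word)
        hits = set()
        for r in range(radius + 1):
            for flip in combinations(range(23), r):
                mask = 0
                for pos in flip:
                    mask |= 1 << pos
                hits |= index.get(sig ^ mask, set())
        return hits

    index = build_dictionary(["binary", "linear", "code", "golay"])
    print(fuzzy_lookup(index, "binery"))          # tolerates a small distortion
    ```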

  2. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

    This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. Then, a binary look-up table is built to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding. The look-up table is encoded by Huffman coding. The results show that the LWT gives the best results among the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the normalised case, the TC are divided by N (where N is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded. After that, the procedure is the same as for the coefficients without normalisation. The results show that the compression ratio (CR) for the LWT with normalisation is improved compared with that without normalisation.
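
    A minimal sketch of the bisection thresholding step (our illustration, with a placeholder signal and an identity stand-in for the SLT/LWT; not the authors' code): the threshold is adjusted until the reconstruction meets a target percentage root mean square difference.

    ```python
    import numpy as np

    def prd(original, reconstructed):
        """Percentage root-mean-square difference between two signals."""
        return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) /
                               np.sum(original ** 2))

    def threshold_for_target_prd(coeffs, signal, inverse_transform,
                                 target_prd, tol=0.05, max_iter=50):
        """Bisection search for the threshold whose reconstruction meets the target PRD."""
        lo, hi = 0.0, np.max(np.abs(coeffs))
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            kept = np.where(np.abs(coeffs) >= mid, coeffs, 0.0)
            error = prd(signal, inverse_transform(kept))
            if abs(error - target_prd) <= tol:
                break
            if error > target_prd:     # too much distortion: keep more coefficients
                hi = mid
            else:                      # distortion budget unused: discard more
                lo = mid
        return mid

    # toy example: placeholder signal, identity stand-in for the transform
    rng = np.random.default_rng(2)
    signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.normal(size=1024)
    coeffs = signal.copy()                       # stand-in for SLT/LWT coefficients
    thr = threshold_for_target_prd(coeffs, signal, lambda c: c, target_prd=2.0)
    print(f"threshold = {thr:.4f}")
    ```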

  3. Development of a Physiologically Based Pharmacokinetic and Pharmacodynamic Model to Determine Dosimetry and Cholinesterase Inhibition for a Binary Mixture of Chlorpyrifos and Diazinon in the Rat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timchalk, Chuck; Poet, Torka S.

    2008-05-01

    Physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models have been developed and validated for the organophosphorus (OP) insecticides chlorpyrifos (CPF) and diazinon (DZN). Based on similar pharmacokinetic and mode of action properties it is anticipated that these OPs could interact at a number of important metabolic steps including: CYP450 mediated activation/detoxification, and blood/tissue cholinesterase (ChE) binding/inhibition. We developed a binary PBPK/PD model for CPF, DZN and their metabolites based on previously published models for the individual insecticides. The metabolic interactions (CYP450) between CPF and DZN were evaluated in vitro and suggest that CPF is more substantially metabolized to its oxon metabolite than is DZN. These data are consistent with their observed in vivo relative potency (CPF>DZN). Each insecticide inhibited the other's in vitro metabolism in a concentration-dependent manner. The PBPK model code used to describe the metabolism of CPF and DZN was modified to reflect the type of inhibition kinetics (i.e. competitive vs. non-competitive). The binary model was then evaluated against previously published rodent dosimetry and ChE inhibition data for the mixture. The PBPK/PD model simulations of acute oral exposure to single- (15 mg/kg) vs. binary-mixture (15+15 mg/kg) doses of CPF and DZN at this lower dose resulted in no differences in the predicted pharmacokinetics of either the parent OPs or their respective metabolites, whereas a binary oral dose of CPF+DZN at 60+60 mg/kg did result in observable changes in the DZN pharmacokinetics; Cmax was more reasonably fit by modifying the absorption parameters. It is anticipated that at low, environmentally relevant binary doses, most likely to be encountered in occupational or environmental exposures, the pharmacokinetics are expected to be linear, and ChE inhibition dose-additive.

  4. Chopper-stabilized phase detector

    NASA Technical Reports Server (NTRS)

    Hopkins, P. M.

    1978-01-01

    Phase-detector circuit for binary-tracking loops and other binary-data acquisition systems minimizes effects of drift, gain imbalance, and voltage offset in detector circuitry. Input signal passes simultaneously through two channels where it is mixed with early and late codes that are alternately switched between channels. Code switching is synchronized with polarity switching of detector output of each channel so that each channel uses each detector for half the time. Net result is that dc offset errors are canceled, and effect of gain imbalance is simply a change in sensitivity.

  5. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models.

    PubMed

    Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong

    2016-04-07

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  6. Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data

    NASA Astrophysics Data System (ADS)

    Palumbo, Francesco; D'Enza, Alfonso Iodice

    Attention to binary data coding has increased steadily over the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.

  7. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

    ...punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit... likely to be isolated and be correctable by the convolutional decoder. ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data...

  8. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. Purpose of this type of coding is to achieve data compression in sense that coded data represents original data perfectly (noiselessly) while taking fewer bits to do so. Routines universal because they apply to virtually any "real-world" data source.
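
    The subroutines are based on Rice coding; the sketch below (a Python illustration of the standard Golomb-Rice construction, not the FORTRAN package itself) encodes each non-negative integer as a unary quotient plus a k-bit remainder and decodes it back.

    ```python
    def rice_encode(values, k):
        """Encode non-negative integers with a Golomb-Rice code of parameter k."""
        bits = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            bits.extend([1] * q + [0])                               # unary quotient, 0-terminated
            bits.extend((r >> i) & 1 for i in reversed(range(k)))    # k-bit remainder
        return bits

    def rice_decode(bits, k, count):
        """Decode `count` integers from a Golomb-Rice bit list."""
        values, pos = [], 0
        for _ in range(count):
            q = 0
            while bits[pos] == 1:                  # read unary quotient
                q, pos = q + 1, pos + 1
            pos += 1                               # skip terminating 0
            r = 0
            for _ in range(k):                     # read k-bit remainder
                r = (r << 1) | bits[pos]
                pos += 1
            values.append((q << k) | r)
        return values

    data = [3, 0, 7, 12, 1]
    encoded = rice_encode(data, k=2)
    assert rice_decode(encoded, k=2, count=len(data)) == data
    ```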

  9. GPU accelerated manifold correction method for spinning compact binaries

    NASA Astrophysics Data System (ADS)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm based on the compute unified device architecture (CUDA) technology is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same code executed only on the central processing unit (CPU). The acceleration achieved on the GPU can be increased enormously through the use of shared-memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the code executed on the CPU for a phase-space scan comprising 314 × 314 orbits. In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.

  10. Performance of Serially Concatenated Convolutional Codes with Binary Modulation in AWGN and Noise Jamming over Rayleigh Fading Channels

    DTIC Science & Technology

    2001-09-01

    "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with...

  11. Breakdown and Limit of Continuum Diffusion Velocity for Binary Gas Mixtures from Direct Simulation

    NASA Astrophysics Data System (ADS)

    Martin, Robert Scott; Najmabadi, Farrokh

    2011-05-01

    This work investigates the breakdown of the continuum relations for diffusion velocity in inert binary gas mixtures. Values of the relative diffusion velocities for components of a gas mixture may be calculated using Chapman-Enskog theory and arise not only from concentration gradients, but also from pressure and temperature gradients in the flow, as described by Hirschfelder. Because Chapman-Enskog theory employs a linear perturbation around equilibrium, it is expected to break down when the velocity distribution deviates significantly from equilibrium. This breakdown of the overall flow has long been an area of interest in rarefied gas dynamics. By comparing the continuum values to results from Bird's DS2V Monte Carlo code, we propose a new limit on the continuum approach specific to binary gases. To remove the confounding influence of an inconsistent molecular model, we also present the application of the variable soft sphere (VSS) model used in DS2V to the continuum diffusion velocity calculation. By fitting sample asymptotic curves to the breakdown, we propose a limit, Vmax, that is a fraction of an analytically derived limit set by the kinetic temperature of the mixture. With an expected deviation of only 2% between the physical values and continuum calculations within ±Vmax/4, we suggest this as a conservative estimate of the range of applicability of the continuum theory.

  12. Coding efficiency of AVS 2.0 for CBAC and CABAC engines

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik

    2015-12-01

    In this paper we compare the coding efficiency of the Context-based Binary Arithmetic Coding (CBAC)[2] engine in AVS 2.0[1] and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] engine in HEVC[4]. For a fair comparison, the CABAC is embedded in the AVS 2.0 reference code RD10.1, just as the CBAC was embedded in the HEVC in our previous work[5]. In the RD code, the rate estimation table is employed only for RDOQ. To reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we simplified the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC, suggesting that the CBAC is slightly more efficient than the CABAC in AVS 2.0.

  13. Synergism and Combinatorial Coding for Binary Odor Mixture Perception in Drosophila

    PubMed Central

    Chakraborty, Tuhin Subhra; Siddiqi, Obaid

    2016-01-01

    Most odors in the natural environment are mixtures of several compounds. Olfactory receptors housed in the olfactory sensory neurons detect these odors and transmit the information to the brain, leading to decision-making. But whether the olfactory system detects the ingredients of a mixture separately or treats mixtures as different entities is not well understood. Using Drosophila melanogaster as a model system, we have demonstrated that fruit flies perceive binary odor mixtures in a manner that is heavily dependent on both the proportion and the degree of dilution of the components, suggesting a combinatorial coding at the peripheral level. This coding strategy appears to be receptor specific and is independent of interneuronal interactions. PMID:27588303

  14. Context-Aware Local Binary Feature Learning for Face Recognition.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2018-05-01

    In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.

  15. Gene-specific cell labeling using MiMIC transposons.

    PubMed

    Gnerer, Joshua P; Venken, Koen J T; Dierick, Herman A

    2015-04-30

    Binary expression systems such as GAL4/UAS, LexA/LexAop and QF/QUAS have greatly enhanced the power of Drosophila as a model organism by allowing spatio-temporal manipulation of gene function as well as cell and neural circuit function. Tissue-specific expression of these heterologous transcription factors relies on random transposon integration near enhancers or promoters that drive the binary transcription factor embedded in the transposon. Alternatively, gene-specific promoter elements are directly fused to the binary factor within the transposon followed by random or site-specific integration. However, such insertions do not consistently recapitulate endogenous expression. We used Minos-Mediated Integration Cassette (MiMIC) transposons to convert host loci into reliable gene-specific binary effectors. MiMIC transposons allow recombinase-mediated cassette exchange to modify the transposon content. We developed novel exchange cassettes to convert coding intronic MiMIC insertions into gene-specific binary factor protein-traps. In addition, we expanded the set of binary factor exchange cassettes available for non-coding intronic MiMIC insertions. We show that binary factor conversions of different insertions in the same locus have indistinguishable expression patterns, suggesting that they reliably reflect endogenous gene expression. We show the efficacy and broad applicability of these new tools by dissecting the cellular expression patterns of the Drosophila serotonin receptor gene family. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. VizieR Online Data Catalog: ASAS, NSVS, and LINEAR detached eclipsing binaries (Lee, 2015)

    NASA Astrophysics Data System (ADS)

    Lee, C.-H.

    2016-04-01

    We follow the approach of Devor et al. (2008AJ....135..850D, Cat. J/AJ/135/850) to analyse the light curves from ASAS (Pojmanski et al., Cat. II/264), NSVS (Wozniak et al., 2004AJ....127.2436W), and LINEAR (Palaversa et al., Cat. J/AJ/146/101) and extract the physical properties of the eclipsing binaries. (3 data files).

  17. Binary Multidimensional Scaling for Hashing.

    PubMed

    Huang, Yameng; Lin, Zhouchen

    2017-10-04

    Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted this task, the results are not satisfactory, mainly due to oversimplified models. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online modes. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of the hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between the current query example and those we have already learned, rather than only focusing on the current data chunk. It is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.

  18. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
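
    A minimal sketch of the significance-map idea (our illustration, with made-up threshold and coefficients, not the paper's implementation): thresholded coefficients yield a binary map, the map is run-length encoded, and the significant values are kept separately.

    ```python
    import numpy as np

    def build_significance_map(coeffs, threshold):
        """Binary map: 1 where a coefficient is significant, 0 otherwise."""
        return (np.abs(coeffs) >= threshold).astype(np.uint8)

    def run_length_encode(bits):
        """Run-length encode a binary array as (value, run_length) pairs."""
        runs, count = [], 1
        for prev, cur in zip(bits[:-1], bits[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((int(prev), count))
                count = 1
        runs.append((int(bits[-1]), count))
        return runs

    # toy "wavelet coefficients" (placeholder values)
    coeffs = np.array([0.9, 0.02, 0.01, 0.7, 0.03, 0.0, 0.0, 0.65])
    sig_map = build_significance_map(coeffs, threshold=0.1)
    significant_values = coeffs[sig_map == 1]      # stored with direct binary representation
    print(run_length_encode(sig_map), significant_values)
    ```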

  19. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  20. Coding/modulation trade-offs for Shuttle wideband data links

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.; Trumpis, B. D.

    1974-01-01

    This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.

  1. Direct-Sequence Spread Spectrum System

    DTIC Science & Technology

    1990-06-01

    ...by directly modulating a conventional narrowband frequency-modulated (FM) carrier by a high-rate digital code. The direct modulation is binary phase... specification of the DSSS system will not be developed. The results of the evaluation phase of this research will be compared against theoretical... spread spectrum is called binary phase-shift keying (BPSK). BPSK is a modulation in which a binary "1" represents a 0-degree relative phase...

  2. Numerical Simulations of Dynamical Mass Transfer in Binaries

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Frank, J.; Tohline, J. E.

    1999-05-01

    We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.

  3. Fault-Tolerant Coding for State Machines

    NASA Technical Reports Server (NTRS)

    Naegle, Stephanie Taft; Burke, Gary; Newell, Michael

    2008-01-01

    Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
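
    A small sketch (our illustration, not from the report) of why the two common encodings behave differently under a single-event upset: with binary encoding every bit pattern is a valid state, so an upset silently lands in another defined state, whereas with one-hot encoding an upset always produces an invalid pattern that decode logic can detect and recover from.

    ```python
    # Illustration: compare binary and one-hot state encodings for an 8-state machine
    # under a single-bit flip (SEU). All names here are ours, not the report's.

    N_STATES = 8

    def binary_states():
        return {format(i, "03b") for i in range(N_STATES)}        # 000 ... 111

    def one_hot_states():
        return {format(1 << i, "08b") for i in range(N_STATES)}   # 00000001 ... 10000000

    def flip_bit(word, pos):
        flipped = "1" if word[pos] == "0" else "0"
        return word[:pos] + flipped + word[pos + 1:]

    # Binary encoding: every 3-bit pattern is a defined state, so an SEU silently
    # lands in another *valid* state (no hang, but the error is undetectable).
    assert all(flip_bit(s, p) in binary_states()
               for s in binary_states() for p in range(3))

    # One-hot encoding: an SEU yields a word with zero or two '1' bits, which is
    # never a valid state, so decode logic can detect it and force a known state.
    assert all(flip_bit(s, p) not in one_hot_states()
               for s in one_hot_states() for p in range(8))
    print("binary SEUs stay in valid states; one-hot SEUs are detectable")
    ```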

  4. All-optical 4-bit binary to binary coded decimal converter with the help of semiconductor optical amplifier-assisted Sagnac switch

    NASA Astrophysics Data System (ADS)

    Bhattachryya, Arunava; Kumar Gayen, Dilip; Chattopadhyay, Tanay

    2013-04-01

    An all-optical 4-bit binary to binary-coded decimal (BCD) converter using semiconductor optical amplifier (SOA)-assisted Sagnac interferometric switches is proposed and described in this manuscript. The paper describes an all-optical conversion scheme using a set of all-optical switches. BCD is common in computer systems that display numeric values, especially in those consisting solely of digital logic with no microprocessor. In many personal computers, the basic input/output system (BIOS) keeps the date and time in BCD format. The operation of the circuit is studied theoretically and analyzed through numerical simulations. The model accounts for the SOA small signal gain, line-width enhancement factor and carrier lifetime, the switching pulse energy and width, and the Sagnac loop asymmetry. By undertaking a detailed numerical simulation, the influence of these key parameters on the metrics that determine the quality of switching is thoroughly investigated.
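
    For reference, the conversion realised by the optical circuit is the ordinary 4-bit binary to two-digit BCD mapping; the sketch below (an algorithmic analogue of the truth table, not the optical design) tabulates it.

    ```python
    def binary4_to_bcd(value):
        """Convert a 4-bit binary value (0-15) to two BCD digits (tens, units)."""
        if not 0 <= value <= 15:
            raise ValueError("input must fit in 4 bits")
        tens, units = divmod(value, 10)
        return format(tens, "04b"), format(units, "04b")

    # truth table equivalent to what the optical switch network implements
    for v in range(16):
        tens, units = binary4_to_bcd(v)
        print(f"{v:2d}: {v:04b} -> {tens} {units}")
    ```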

  5. Thermal Timescale Mass Transfer In Binary Population Synthesis

    NASA Astrophysics Data System (ADS)

    Justham, S.; Kolb, U.

    2004-07-01

    Studies of binary evolution have, until recently, neglected thermal timescale mass transfer (TTMT). Recent work has suggested that this previously poorly studied area is crucial in the understanding of systems across the compact binary spectrum. We use the state-of-the-art binary population synthesis code BiSEPS (Willems and Kolb, 2002, MNRAS 337 1004-1016). However, the present treatment of TTMT is incomplete due to the nonlinear behaviour of stars in their departure from gravothermal `equilibrium'. Here we show work that should update the ultrafast stellar evolution algorithms within BiSEPS to make it the first pseudo-analytic code that can follow TTMT properly. We have generated fits to a set of over 300 Case B TTMT sequences with a range of intermediate-mass donors. These fits produce very good first approximations to both HR diagrams and mass-transfer rates (see figures 1 and 2), which we later hope to improve and extend. They are already a significant improvement over the previous fits.

  6. A Loader for Executing Multi-Binary Applications on the Thinking Machines CM-5: It's Not Just for SPMD Anymore

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.

    1995-01-01

    The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.

  7. How can we make stable linear monoatomic chains? Gold-cesium binary subnanowires as an example of a charge-transfer-driven approach to alloying.

    PubMed

    Choi, Young Cheol; Lee, Han Myoung; Kim, Woo Youn; Kwon, S K; Nautiyal, Tashi; Cheng, Da-Yong; Vishwanathan, K; Kim, Kwang S

    2007-02-16

    On the basis of first-principles calculations of clusters and one dimensional infinitely long subnanowires of the binary systems, we find that alkali-noble metal alloy wires show better linearity and stability than either pure alkali metal or noble metal wires. The enhanced alternating charge buildup on atoms by charge transfer helps the atoms line up straight. The cesium doped gold wires showing significant charge transfer from cesium to gold can be stabilized as linear or circular monoatomic chains.

  8. Can Binary Population Synthesis Models Be Tested With Hot Subdwarfs ?

    NASA Astrophysics Data System (ADS)

    Kopparapu, Ravi Kumar; Wade, R. A.; O'Shaughnessy, R.

    2007-12-01

    Models of binary star interactions have been successful in explaining the origin of field hot subdwarf (sdB) stars in short period systems. The hydrogen envelopes around these core He-burning stars are removed in a "common envelope" evolutionary phase. Reasonably clean samples of short-period sdB+WD or sdB+dM systems exist, that allow the common envelope ejection efficiency to be estimated for wider use in binary population synthesis (BPS) codes. About one-third of known sdB stars, however, are found in longer-period systems with a cool G or K star companion. These systems may have formed through Roche-lobe overflow (RLOF) mass transfer from the present sdB to its companion. They have received less attention, because the existing catalogues are believed to have severe selection biases against these systems, and because their long, slow orbits are difficult to measure. Are these known sdB+cool systems worth intense observational effort? That is, can they be used to make a valid and useful test of the RLOF process in BPS codes? We use the Binary Stellar Evolution (BSE) code of Hurley et al. (2002), mapping sets of initial binaries into present-day binaries that include sdBs, and distinguishing "observable" sdBs from "hidden" ones. We aim to find out whether (1) the existing catalogues of sdBs are sufficiently fair samples of the kinds of sdB binaries that theory predicts, to allow testing or refinement of RLOF models; or instead whether (2) large predicted hidden populations mandate the construction of new catalogues, perhaps using wide-field imaging surveys such as 2MASS, SDSS, and Galex. This work has been partially supported by NASA grant NNG05GE11G and NSF grants PHY 03-26281, PHY 06-00953 and PHY 06-53462. This work is also supported by the Center for Gravitational Wave Physics, which is supported by the National Science Foundation under cooperative agreement PHY 01-14375.

  9. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    * End-to-end MATLAB packet simulation platform. * Low-density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility... incorporate advanced LDPC (low-density parity check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also... Channel coding: binary convolutional code or LDPC; packet length: 0 to 2^16-1 bytes; coding rate: 1/2, 2/3, 3/4, 5/6; MIMO channel training length: 0 to 4 symbols.

  10. Formation of the first three gravitational-wave observations through isolated binary evolution

    PubMed Central

    Stevenson, Simon; Vigna-Gómez, Alejandro; Mandel, Ilya; Barrett, Jim W.; Neijssel, Coenraad J.; Perkins, David; de Mink, Selma E.

    2017-01-01

    During its first four months of taking data, Advanced LIGO has detected gravitational waves from two binary black hole mergers, GW150914 and GW151226, along with the statistically less significant binary black hole merger candidate LVT151012. Here we use the rapid binary population synthesis code COMPAS to show that all three events can be explained by a single evolutionary channel—classical isolated binary evolution via mass transfer including a common envelope phase. We show all three events could have formed in low-metallicity environments (Z=0.001) from progenitor binaries with typical total masses ≳160M⊙, ≳60M⊙ and ≳90M⊙, for GW150914, GW151226 and LVT151012, respectively. PMID:28378739

  11. Linear analysis of the evolution of nearly polar low-mass circumbinary discs

    NASA Astrophysics Data System (ADS)

    Lubow, Stephen H.; Martin, Rebecca G.

    2018-01-01

    In a recent paper Martin & Lubow showed through simulations that an initially tilted disc around an eccentric binary can evolve to polar alignment in which the disc lies perpendicular to the binary orbital plane. We apply linear theory to show both analytically and numerically that a nearly polar aligned low-mass circumbinary disc evolves to polar alignment and determine the alignment time-scale. Significant disc evolution towards the polar state around moderately eccentric binaries can occur for typical protostellar disc parameters in less than a typical disc lifetime for binaries with orbital periods of order 100 yr or less. Resonant torques are much less effective at truncating the inner parts of circumbinary polar discs than the inner parts of coplanar discs. For polar discs, they vanish for a binary eccentricity of unity. The results agree with the simulations in showing that discs can evolve to a polar state. Circumbinary planets may then form in such discs and reside on polar orbits.

  12. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows to efficiently compute the bounds for large parameter values of the linear codes.
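
    For readers unfamiliar with linear programming bounds, the sketch below sets up the classical Delsarte linear program for binary codes in the Hamming metric using scipy; it is our simplified analogue, not the paper's Lee-metric formulation, which replaces the Krawtchouk eigenvalues with sums of Lee-numbers and adds the extra equality constraints described above.

    ```python
    import math
    from scipy.optimize import linprog

    def krawtchouk(k, x, n):
        """Krawtchouk polynomial K_k(x): the eigenvalues of the Hamming scheme."""
        return sum((-1) ** j * math.comb(x, j) * math.comb(n - x, k - j)
                   for j in range(k + 1))

    def delsarte_lp_bound(n, d):
        """LP upper bound on the size of a binary code of length n, minimum distance d."""
        idx = list(range(d, n + 1))                # distance-distribution variables A_d..A_n
        c = [-1.0] * len(idx)                      # maximize sum A_i  <=>  minimize -sum A_i
        A_ub, b_ub = [], []
        for k in range(1, n + 1):                  # MacWilliams non-negativity constraints
            A_ub.append([-krawtchouk(k, i, n) for i in idx])
            b_ub.append(krawtchouk(k, 0, n))       # contribution of A_0 = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * len(idx), method="highs")
        return 1.0 - res.fun                       # |C| <= 1 + sum A_i

    # length 23, minimum distance 7 (the parameters of the perfect binary Golay code)
    print(delsarte_lp_bound(23, 7))
    ```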

  13. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high-performance block-based wavelet image coder which is designed to be of very low implementational complexity yet rich in features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has very good coding performance even when the block size is (16,16).

  14. Analysis and Defense of Vulnerabilities in Binary Code

    DTIC Science & Technology

    2008-09-29

    ...language. We demonstrate our techniques by automatically generating input filters from vulnerable binary programs. ... (table-of-contents excerpts: The Vine Intermediate Language; Normalized Memory; The Traditional Weakest Precondition Semantics; The Guarded Command Language)

  15. Distribution of compact object mergers around galaxies

    NASA Astrophysics Data System (ADS)

    Bulik, T.; Belczyński, K.; Zbijewski, W.

    1999-09-01

    Compact object mergers are one of the favoured models of gamma ray bursts (GRB). Using a binary population synthesis code we calculate properties of the population of compact object binaries; e.g. lifetimes and velocities. We then propagate them in galactic potentials and find their distribution in relation to the host.

  16. A Biosequence-based Approach to Software Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oehmen, Christopher S.; Peterson, Elena S.; Phillips, Aaron R.

    For many applications, it is desirable to have some process for recognizing when software binaries are closely related without relying on them to be identical or have identical segments. Some examples include monitoring utilization of high performance computing centers or service clouds, detecting freeware in licensed code, and enforcing application whitelists. But doing so in a dynamic environment is a nontrivial task because most approaches to software similarity require extensive and time-consuming analysis of a binary, or they fail to recognize executables that are similar but nonidentical. Presented herein is a novel biosequence-based method for quantifying similarity of executable binaries. Using this method, it is shown in an example application on large-scale multi-author codes that 1) the biosequence-based method has a statistical performance in recognizing and distinguishing between a collection of real-world high performance computing applications better than 90% of ideal; and 2) an example of using family tree analysis to tune identification for a code subfamily can achieve better than 99% of ideal performance.

  17. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  18. Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results

    NASA Astrophysics Data System (ADS)

    Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.

    2005-12-01

    We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel coded binary and M-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.

  19. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  20. Advances in Black-Hole Mergers: Spins and Unequal Masses

    NASA Technical Reports Server (NTRS)

    Kelly, Bernard

    2007-01-01

    The last two years have seen incredible development in numerical relativity: from fractions of an orbit, evolutions of an equal-mass binary have reached multiple orbits, and convergent gravitational waveforms have been produced by several research groups and numerical codes. We are now able to move our attention from pure numerics to astrophysics, and address scenarios relevant to current and future gravitational-wave detectors. Over the last 12 months at NASA Goddard, we have extended the accuracy of our Hahndol code, and used it to move toward these goals. We have achieved high-accuracy simulations of black-hole binaries of low initial eccentricity, with enough orbits of inspiral before merger to allow us to produce hybrid waveforms that accurately reflect the entire lifetime of the BH binary. We are extending this work, looking at the effects of unequal masses and spins.

  1. A new approach for modeling gravitational radiation from the inspiral of two neutron stars

    NASA Astrophysics Data System (ADS)

    Luke, Stephen A.

    In this dissertation, a new method of applying the ADM formalism of general relativity to model the gravitational radiation emitted from the realistic inspiral of a neutron star binary is described. A description of the conformally flat condition (CFC) is summarized, and the ADM equations are solved by use of the CFC approach for a neutron star binary. The advantages and limitations of this approach are discussed, and the need for a more accurate improvement to this approach is described. To address this need, a linearized perturbation of the CFC spatial three-metric is then introduced. The general relativistic hydrodynamic equations are then allowed to evolve against this basis under the assumption that the first-order corrections to the hydrodynamic variables are negligible compared to their CFC values. As a first approximation, the linear corrections to the conformal factor, lapse function, and shift vector are also assumed to be small compared to the extrinsic curvature and the three-metric. A boundary matching method is then introduced as a way of computing the gravitational radiation of this relativistic system without use of the multipole expansion as employed by earlier applications of the CFC approach. It is assumed that at a location far from the source, the three-metric is accurately described by a linear correction to Minkowski spacetime. The two polarizations of gravitational radiation can then be computed at that point in terms of the linearized correction to the metric. The evolution equations obtained from the linearized perturbative correction to the CFC approach and the method for recovery of the gravity wave signal are then tested by use of a three-dimensional numerical simulation. This code is used to compute the gravity wave signal emitted by a pair of equal-mass neutron stars in quasi-stable circular orbits at a point early in their inspiral phase. From this simple numerical analysis, the correct general trend of gravitational radiation is recovered. Comparisons with (5/2) post-Newtonian solutions show a similar gravitational waveform, although inaccuracies are still found to exist in this computation. Finally, several areas for improvement and potential future applications of this technique are discussed.

  2. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  3. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348

  4. DNA Barcoding through Quaternary LDPC Codes.

    PubMed

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
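    For illustration only, the snippet below shows the generic minimum-Hamming-distance demultiplexing step that any such barcoding scheme relies on; it is not the quaternary LDPC decoder of the paper, and the barcode strings and the max_dist threshold are made-up examples.

        import numpy as np

        def demultiplex(read_barcode, barcodes, max_dist=2):
            # Assign a read to the closest barcode in Hamming distance; report a read
            # loss (None) when the best match is ambiguous or too far away.
            dists = [sum(a != b for a, b in zip(read_barcode, bc)) for bc in barcodes]
            best = int(np.argmin(dists))
            sorted_d = sorted(dists)
            if sorted_d[0] > max_dist or (len(sorted_d) > 1 and sorted_d[0] == sorted_d[1]):
                return None
            return best

        barcodes = ["ACGTACGTACGTACGTACGTACGT", "TTGCAATGCCGTAGGCTTAAGCCA"]  # hypothetical 24-nt barcodes
        print(demultiplex("ACGTACGTACGTACGAACGTACGT", barcodes))  # one mismatch -> index 0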

  5. Double-lined eclipsing binary system KIC 2306740 with pulsating component discovered from Kepler space photometry

    NASA Astrophysics Data System (ADS)

    Yakut, Kadri

    2015-08-01

    We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component. Archive Kepler satellite data were combined with newly obtained spectroscopic data from the 4.2 m William Herschel Telescope (WHT). This allowed us to determine rather precise orbital and physical parameters of this long-period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals. We modelled the detached binary system assuming non-conservative evolution models with the Cambridge STARS (TWIN) code.

  6. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
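    As a small illustration of the quantities involved (not code from the report), the sketch below builds a binary Huffman code for an assumed source distribution and prints its redundancy, i.e. the gap between the average code-word length and the source entropy.

        import heapq
        from math import log2

        def huffman_lengths(probs):
            # Code-word lengths of a binary Huffman code for the given probabilities.
            heap = [(p, i, [i]) for i, p in enumerate(probs)]   # (prob, tiebreak, symbols in subtree)
            heapq.heapify(heap)
            lengths = [0] * len(probs)
            while len(heap) > 1:
                p1, _, s1 = heapq.heappop(heap)
                p2, _, s2 = heapq.heappop(heap)
                for s in s1 + s2:
                    lengths[s] += 1                              # every merge pushes these symbols one level deeper
                heapq.heappush(heap, (p1 + p2, min(s1 + s2), s1 + s2))
            return lengths

        probs = [0.4, 0.3, 0.2, 0.1]                             # assumed source distribution
        lengths = huffman_lengths(probs)
        avg_len = sum(p * l for p, l in zip(probs, lengths))
        entropy = -sum(p * log2(p) for p in probs if p > 0)
        print(avg_len - entropy)   # redundancy, about 0.05 bits here: much closer to 0 than to 1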

  7. Contamination of RR Lyrae stars from Binary Evolution Pulsators

    NASA Astrophysics Data System (ADS)

    Karczmarek, Paulina; Pietrzyński, Grzegorz; Belczyński, Krzysztof; Stępień, Kazimierz; Wiktorowicz, Grzegorz; Iłkiewicz, Krystian

    2016-06-01

    Binary Evolution Pulsator (BEP) is an extremely low-mass member of a binary system, which pulsates as a result of a former mass transfer to its companion. BEP mimics RR Lyrae-type pulsations but has different internal structure and evolution history. We present possible evolution channels to produce BEPs, and evaluate the contamination value, i.e. how many objects classified as RR Lyrae stars can be undetected BEPs. In this analysis we use population synthesis code StarTrack.

  8. Photometric investigation of a very short period W UMa-type binary - Does CE Leonis have a large superluminous area?

    NASA Technical Reports Server (NTRS)

    Samec, Ronald G.; Su, Wen; Terrell, Dirk; Hube, Douglas P.

    1993-01-01

    A complete photometric analysis of BVRI Johnson-Cousins photometry of the high northern latitude galactic variable CE Leo is presented. These observations were taken at Kitt Peak National Observatory on May 31-June 7, 1989. Three new precise epochs of minimum light were determined, and a linear and a quadratic ephemeris were computed from these and previous data covering 28 years of observation. The light curves reveal that the system undergoes a brief 20 min totality in the primary eclipse, indicating that CE Leo is a W UMa W-type binary. A systemic velocity of about -40 km/s was determined. Standard magnitudes were found, and a simultaneous solution of the B, V, R, I light curves was computed using the new Wilson-Devinney synthetic light curve code, which has the capability of automatically adjusting star spots. The solution indicates that the system consists of two early K-type dwarfs in marginal contact with a fill-out factor less than 3 percent. Evidence for the presence of a large (45 deg radius) superluminous area on the cooler component is given.

  9. Photometric Mapping of Two Kepler Eclipsing Binaries: KIC11560447 and KIC8868650

    NASA Astrophysics Data System (ADS)

    Senavci, Hakan Volkan; Özavci, I.; Isik, E.; Hussain, G. A. J.; O'Neal, D. O.; Yilmaz, M.; Selam, S. O.

    2018-04-01

    We present the surface maps of two eclipsing binary systems, KIC11560447 and KIC8868650, using the Kepler light curves covering approximately 4 years. We use the code DoTS, which is based on the maximum entropy method, to reconstruct the surface maps. We also perform numerical tests of DoTS to check the code's ability to track the phase migration of spot clusters. The resulting latitudinally averaged maps of KIC11560447 show that spots drift towards increasing orbital longitudes, while spots on KIC8868650 show an overall drift towards decreasing latitudes.

  10. Simulations of binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey

    2017-01-01

    Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.

  11. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  12. Error Correcting Codes and Related Designs

    DTIC Science & Technology

    1990-09-30

    Theory, IT-37 (1991), 1222-1224. 6. Codes and designs, existence and uniqueness, Discrete Math., to appear. 7. (with R. Brualdi and N. Cai), Orphan ... structure of the first order Reed-Muller codes, Discrete Math., to appear. 8. (with J. H. Conway and N.J.A. Sloane), The binary self-dual codes of length up ... 18, 1988. 4. "Codes and Designs," Mathematics Colloquium, Technion, Haifa, Israel, March 6, 1989. 5. "On the Covering Radius of Codes," Discrete Math. Group

  13. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
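    A minimal sketch of the idea, under the assumption of a sparse binary source and using the [7,4] Hamming code as the error-correcting code (the report itself is more general): each 7-bit source block is treated as an error pattern, its 3-bit syndrome is stored as the compressed data, and decompression returns the minimum-weight pattern with that syndrome, which recovers any block of weight at most 1 exactly.

        import numpy as np

        # Parity-check matrix of the [7,4] Hamming code (column j is the binary form of j+1).
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

        def syndrome(block):                        # compression: 7 source bits -> 3 bits
            return tuple((H @ block) % 2)

        # Coset-leader table: each syndrome maps to the minimum-weight pattern producing it.
        leaders = {syndrome(np.zeros(7, dtype=np.uint8)): np.zeros(7, dtype=np.uint8)}
        for i in range(7):
            e = np.zeros(7, dtype=np.uint8)
            e[i] = 1
            leaders[syndrome(e)] = e

        def decompress(syn):                        # decompression: return the coset leader
            return leaders[tuple(syn)]

        block = np.array([0, 0, 0, 0, 1, 0, 0], dtype=np.uint8)    # a sparse source block
        print(np.array_equal(decompress(syndrome(block)), block))  # True: weight <= 1 is recovered exactly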

  14. High-precision broad-band linear polarimetry of early-type binaries. II. Variable, phase-locked polarization in triple Algol-type system λ Tauri

    NASA Astrophysics Data System (ADS)

    Berdyugin, A.; Piirola, V.; Sakanoi, T.; Kagitani, M.; Yoneda, M.

    2018-03-01

    Aim. To study the binary geometry of the classic Algol-type triple system λ Tau, we have searched for polarization variations over the orbital cycle of the inner semi-detached binary, arising from light scattering in the circumstellar material formed from ongoing mass transfer. Phase-locked polarization curves provide an independent estimate for the inclination i, orientation Ω, and the direction of the rotation for the inner orbit. Methods: Linear polarization measurements of λ Tau in the B, V , and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained on the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and Tohoku 60 cm (Haleakala, Hawaii, USA) remotely controlled telescopes over 69 observing nights. Analytic and numerical modelling codes are used to interpret the data. Results: Optical polarimetry revealed small intrinsic polarization in λ Tau with 0.05% peak-to-peak variation over the orbital period of 3.95 d. The variability pattern is typical for binary systems showing strong second harmonic of the orbital period. We apply a standard analytical method and our own light scattering models to derive parameters of the inner binary orbit from the fit to the observed variability of the normalized Stokes parameters. From the analytical method, the average for three passband values of orbit inclination i = 76° + 1°/-2° and orientation Ω = 15°(195°) ± 2° are obtained. Scattering models give similar inclination values i = 72-76° and orbit orientation ranging from Ω = 16°(196°) to Ω = 19°(199°), depending on the geometry of the scattering cloud. The rotation of the inner system, as seen on the plane of the sky, is clockwise. We have found that with the scattering model the best fit is obtained for the scattering cloud located between the primary and the secondary, near the inner Lagrangian point or along the Roche lobe surface of the secondary facing the primary. The inclination i, inferred from polarimetry, agrees with the previously made conclusion on the semi-detached nature of the inner binary, whose secondary component is filling its Roche lobe. The non-periodic scatter, which is also present in the polarization data, can be interpreted as being due to sporadic changes in the mass transfer rate. The polarization data for λ Tauri are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A69

  15. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    PubMed

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    The aim was to analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were fitted to a sample of binary classification repeated measurement data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
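    For readers working outside SPSS, a roughly equivalent GEE fit for correlated binary responses can be sketched in Python with statsmodels; the data frame, variable names, and simulated effects below are purely illustrative assumptions.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical long-format data: one row per subject per repeated measurement.
        rng = np.random.default_rng(0)
        n_subj, n_visits = 100, 4
        df = pd.DataFrame({
            "id": np.repeat(np.arange(n_subj), n_visits),
            "time": np.tile(np.arange(n_visits), n_subj),
            "group": np.repeat(rng.integers(0, 2, n_subj), n_visits),
        })
        logit = -0.5 + 0.3 * df["time"] + 0.8 * df["group"]
        df["y"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        # Population-averaged logistic model via GEE with an exchangeable working correlation.
        model = sm.GEE.from_formula("y ~ time + group", groups="id", data=df,
                                    family=sm.families.Binomial(),
                                    cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())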

  16. A parameter estimation algorithm for LFM/BPSK hybrid modulated signal intercepted by Nyquist folding receiver

    NASA Astrophysics Data System (ADS)

    Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin

    2016-12-01

    The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture that achieves wideband reception with a small amount of hardware. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low-probability-of-intercept signal with wide bandwidth. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and the intercepted LFM/BPSK signal acquires the local oscillator modulation of the NYFR. A parameter estimation algorithm for the NYFR output signal is proposed. Using the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, a matching code and a subspace method are employed to estimate the phase change points and the code length. Compared with the existing methods, the proposed algorithm has better performance. It also needs no multi-channel structure, which keeps the computational complexity of the NZ index estimation small. The simulation results demonstrate the efficacy of the proposed algorithm.
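    To make the signal model concrete, the sketch below generates a baseband LFM/BPSK hybrid modulated pulse (chirp phase plus 0/pi code phase); it illustrates only the signal being intercepted, not the NYFR architecture or the proposed estimators, and the sample rate, pulse width, bandwidth, and code are arbitrary assumptions.

        import numpy as np

        fs = 100e6                                    # sample rate, Hz (assumed)
        T = 10e-6                                     # pulse width, s (assumed)
        B = 20e6                                      # LFM sweep bandwidth, Hz (assumed)
        code = np.array([1, 1, 0, 1, 0, 0, 1, 0])     # hypothetical binary phase code

        t = np.arange(0.0, T, 1.0 / fs)
        k = B / T                                     # chirp rate
        chip = np.minimum((t / T * len(code)).astype(int), len(code) - 1)
        phase = np.pi * k * t**2 + np.pi * code[chip] # LFM phase plus BPSK phase steps
        s = np.exp(1j * phase)                        # complex baseband LFM/BPSK pulse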

  17. MONTE CARLO POPULATION SYNTHESIS OF POST-COMMON-ENVELOPE WHITE DWARF BINARIES AND TYPE Ia SUPERNOVA RATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablimit, Iminhaji; Maeda, Keiichi; Li, Xiang-Dong

    Binary population synthesis (BPS) studies provide a comprehensive way to understand the evolution of binaries and their end products. Close white dwarf (WD) binaries have crucial characteristics for examining the influence of unresolved physical parameters on binary evolution. In this paper, we perform Monte Carlo BPS simulations, investigating the population of WD/main-sequence (WD/MS) binaries and double WD binaries using a publicly available binary star evolution code under 37 different assumptions for key physical processes and binary initial conditions. We considered different combinations of the binding energy parameter (λ_g: considering gravitational energy only; λ_b: considering both gravitational energy and internal energy; and λ_e: considering gravitational energy, internal energy, and entropy of the envelope, with values derived from the MESA code), CE efficiency, critical mass ratio, initial primary mass function, and metallicity. We find that a larger number of post-CE WD/MS binaries in tight orbits are formed when the binding energy parameters are set by λ_e than in those cases where other prescriptions are adopted. We also determine the effects of the other input parameters on the orbital periods and mass distributions of post-CE WD/MS binaries. As they contain at least one CO WD, double WD systems that evolved from WD/MS binaries may explode as type Ia supernovae (SNe Ia) via merging. In this work, we also investigate the frequency of double WD mergers and compare it to the SNe Ia rate. The calculated Galactic SNe Ia rate with λ = λ_e is comparable to the observed SNe Ia rate, ∼8.2 × 10⁻⁵ yr⁻¹ to ∼4 × 10⁻³ yr⁻¹ depending on the other BPS parameters, if a DD system does not require a mass ratio higher than ∼0.8 to become an SN Ia. On the other hand, a violent merger scenario, which requires the combined mass of two CO WDs ≥ 1.6 M⊙ and a mass ratio >0.8, results in a much lower SNe Ia rate than is observed.

  18. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (in the Hamming metric) is derived from the Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically smaller than Levenshtein's bound, and a fortiori smaller than Elias' bound. Appendices review the properties of the Krawtchouk polynomials and Q-polynomials used in the rigorous proofs.
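    The linear-programming machinery behind such bounds can be illustrated with a short script; this is the standard Delsarte LP bound on the size of a binary code of length n and minimum distance d (a generic sketch, not the refined asymptotic bound of this report), built from Krawtchouk polynomials and solved with scipy.

        from math import comb
        from scipy.optimize import linprog

        def krawtchouk(k, x, n):
            # K_k(x) = sum_j (-1)^j C(x, j) C(n - x, k - j)
            return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

        def delsarte_lp_bound(n, d):
            idx = range(d, n + 1)                    # distances allowed to occur in the code
            c = [-1.0] * len(idx)                    # maximize sum_i A_i  <=>  minimize -sum_i A_i
            A_ub, b_ub = [], []
            for k in range(1, n + 1):
                # Delsarte inequality: sum_i A_i K_k(i) >= -C(n, k)   (using A_0 = 1)
                A_ub.append([-krawtchouk(k, i, n) for i in idx])
                b_ub.append(comb(n, k))
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx), method="highs")
            return 1.0 - res.fun                     # |C| <= 1 + max sum_i A_i

        print(delsarte_lp_bound(7, 3))   # ~16.0, met by the [7,4] Hamming code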

  19. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman or Arabic. Typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is based on these compound structures. Furthermore, for reasons described in the paper, the text images of Ottoman scripts are gray-tone or color images. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to linear subband decomposition, we also use nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.

  20. PubMed

    Trinker, Horst

    2011-10-28

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.

  1. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.

  2. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    NASA Astrophysics Data System (ADS)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.

  3. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252

  4. Polar alignment of a protoplanetary disc around an eccentric binary II: Effect of binary and disc parameters

    NASA Astrophysics Data System (ADS)

    Martin, Rebecca G.; Lubow, Stephen H.

    2018-06-01

    In a recent paper Martin & Lubow showed that a circumbinary disc around an eccentric binary can undergo damped nodal oscillations that lead to the polar (perpendicular) alignment of the disc relative to the binary orbit. The disc angular momentum vector aligns to the eccentricity vector of the binary. We explore the robustness of this mechanism for a low mass disc (0.001 of the binary mass) and its dependence on system parameters by means of hydrodynamic disc simulations. We describe how the evolution depends upon the disc viscosity, temperature, size, binary mass ratio, orbital eccentricity and inclination. We compare results with predictions of linear theory. We show that polar alignment of a low mass disc may occur over a wide range of binary-disc parameters. We discuss the application of our results to the formation of planetary systems around eccentric binary stars.

  5. The NASA Neutron Star Grand Challenge: The coalescences of Neutron Star Binary System

    NASA Astrophysics Data System (ADS)

    Suen, Wai-Mo

    1998-04-01

    NASA funded a Grand Challenge Project (9/1996-1999) for the development of a multi-purpose numerical treatment for relativistic astrophysics and gravitational wave astronomy. The coalescence of binary neutron stars was chosen as the model problem for the code development. The institutions involved are Argonne National Lab, Livermore Lab, the Max Planck Institute at Potsdam, Stony Brook, the U. of Illinois, and Washington U. We have recently succeeded in constructing a highly optimized parallel code which is capable of solving the full Einstein equations coupled with relativistic hydrodynamics, running at over 50 GFLOPS on a T3E (the second milestone point of the project). We are presently working on the head-on collision of two neutron stars and the inclusion of realistic equations of state into the code. The code will be released to the relativity and astrophysics community in April of 1998. With the full dynamics of the spacetime, relativistic hydrodynamics, and microphysics all combined into a unified 3D code for the first time, many interesting large-scale calculations in general relativistic astrophysics can now be carried out on massively parallel computers.

  6. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

    A method has been developed of computing bounds on the performance of a code composed of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n − k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (n_i, k_i). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
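    A much-simplified special case, assuming ideal interleaving so that inner-decoder errors reaching the outer decoder are independent (the method described above additionally accounts for burst statistics and finite interleaving depth), is the binomial bound on the outer-code word-error rate; the numbers in the example are illustrative assumptions.

        from math import comb

        def outer_word_error_bound(n, t, p):
            # Probability that more than t of the n outer-code symbols are in error,
            # assuming independent symbol errors of probability p after de-interleaving.
            return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(t + 1, n + 1))

        # Example: a (63,56) outer code correcting t = 1 error, fed by an inner
        # decoder whose post-decoding error rate is assumed to be 1e-3.
        print(outer_word_error_bound(63, 1, 1e-3))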

  7. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…

  8. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations, and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.

  9. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  10. The Base 32 Method: An Improved Method for Coding Sibling Constellations.

    ERIC Educational Resources Information Center

    Perfetti, Lawrence J. Carpenter

    1990-01-01

    Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)

  11. Dynamic fisheye grids for binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Noble, Scott C.

    2014-03-01

    We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system, used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system—in which the equations of motion are solved—to the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system’s implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. Like we have done with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.

  12. Emission-line diagnostics of nearby H II regions including interacting binary populations

    NASA Astrophysics Data System (ADS)

    Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.

    2018-06-01

    We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the BPASS (Binary Population and Spectral Synthesis) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary-star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/H β ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies, but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of H β emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways, which produce more hot Wolf-Rayet and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.

  13. Assessment of combined antiandrogenic effects of binary parabens mixtures in a yeast-based reporter assay.

    PubMed

    Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui

    2014-05-01

    To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDC mixtures in the environment is common. Antiandrogens represent a group of EDCs which draw increasing attention due to the resulting demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of the mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of the three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of the binary mixture and the concentration of nPrP were fitted to a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were some synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the 80% effective concentration. In addition, the interactions changed from synergistic to antagonistic as the effective concentrations were increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, there was a synergistic interaction in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. A synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations using the nonlinear isoboles method. The mixture activities of binary antiandrogens had a tendency towards antagonism at high concentrations and synergism at low concentrations.

  14. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    PubMed

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.

  15. Cognitive Airborne Networking: Self-Aware Communications via Sensing, Adaptation, and Cross-Layer Optimization

    DTIC Science & Technology

    2011-03-01

    Karystinos and D. A. Pados, “New bounds on the total squared correlation and optimum design of DS-CDMA binary signature sets,” IEEE Trans. Commun., vol. 51, pp. 48-51, Jan. 2003. [99] C. Ding, M. Golin, and T. Kløve, “Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary signature sets,” Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [100] V. P. Ipatov, “On the Karystinos-Pados bounds and optimal binary DS-CDMA

  16. Insertion, Detection, and Extraction of Messages Hidden by Optimal Multi-Signature Spread Spectrum Means

    DTIC Science & Technology

    2015-07-09

    49, pp. 873-885, Apr. 2003. [23] G. N. Karystinos and D. A. Pados, “New bounds on the total squared correlation and optimum design of DS-CDMA binary ... bounds on DS-CDMA binary signature sets,” Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [25] V. P. Ipatov, “On the Karystinos-Pados ... bounds and optimal binary DS-CDMA signature ensembles,” IEEE Commun. Letters, vol. 8, pp. 81-83, Feb. 2004. [26] G. N. Karystinos and D. A. Pados

  17. A Case Study Of Organic Dirac Materials -

    NASA Astrophysics Data System (ADS)

    Commeau, Benjamin; Geilhufe, Matthias; Fernando, Gayanath; Balatsky, Alexander

    Dirac materials are characterized by linear band crossings within the electronic band structure. Most research on Dirac materials has been dedicated to inorganic materials, e.g., binary chalcogenides as topological insulators, the Weyl semimetal TaAs, or graphene. The purpose of this study is to investigate the formation of Dirac points in organic materials under pressure and mechanical strain. We study multiple structural phases of the organic charge-transfer salt (BEDT-TTF)2I3. We numerically calculate the relaxed band structure near the Fermi level along different k-space directions. Once the relaxed ionic structure is obtained, we shrink selected cell parameters and investigate the changes in the band structure. We discuss band structure degeneracies protected by crystalline and other symmetries, if any. The Quantum Espresso and VASP codes were used to calculate and validate our results.

  18. Fast and low-cost structured light pattern sequence projection.

    PubMed

    Wissmann, Patrick; Forster, Frank; Schmitt, Robert

    2011-11-21

    We present a high-speed and low-cost approach for structured light pattern sequence projection. Using a fast rotating binary spatial light modulator, our method is potentially capable of projection frequencies in the kHz domain, while enabling pattern rasterization as low as 2 μm pixel size and inherently linear grayscale reproduction quantized at 12 bits/pixel or better. Due to the circular arrangement of the projected fringe patterns, we extend the widely used ray-plane triangulation method to ray-cone triangulation and provide a detailed description of the optical calibration procedure. Using the proposed projection concept in conjunction with the recently published coded phase shift (CPS) pattern sequence, we demonstrate high accuracy 3-D measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate. © 2011 Optical Society of America

  19. The Coding of Biological Information: From Nucleotide Sequence to Protein Recognition

    NASA Astrophysics Data System (ADS)

    Štambuk, Nikola

    The paper reviews the classic results of Swanson, Dayhoff, Grantham, Blalock and Root-Bernstein, which link genetic code nucleotide patterns to the protein structure, evolution and molecular recognition. Symbolic representation of the binary addresses defining particular nucleotide and amino acid properties is discussed, with consideration of: structure and metric of the code, direct correspondence between amino acid and nucleotide information, and molecular recognition of the interacting protein motifs coded by the complementary DNA and RNA strands.

  20. PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns

    ERIC Educational Resources Information Center

    Paterson, J. H.; Cheng, K. F.; Haddow, J.

    2009-01-01

    PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…

  1. Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.

    PubMed

    Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang

    2018-09-01

    Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
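    Once such binary codes are learned, retrieval itself reduces to Hamming-distance ranking; the following generic sketch (random stand-in codes, not the output of the proposed network) shows that step with numpy.

        import numpy as np

        rng = np.random.default_rng(0)
        n_bits, n_db = 64, 10_000
        db_codes = rng.integers(0, 2, size=(n_db, n_bits), dtype=np.uint8)   # database hash codes
        query = rng.integers(0, 2, size=n_bits, dtype=np.uint8)              # query hash code

        # Hamming distance from the query to every database code, then rank the database.
        dist = np.count_nonzero(db_codes != query, axis=1)
        top10 = np.argsort(dist, kind="stable")[:10]
        print(top10, dist[top10])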

  2. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  4. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10(-5), the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).

  5. Improved magnetic encoding device and method for making the same. [Patent application

    DOEpatents

    Fox, R.J.

    A magnetic encoding device and method for making the same are provided for use as magnetic storage media in identification control applications that give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications. 7 figures.

  6. Method for making an improved magnetic encoding device

    DOEpatents

    Fox, Richard J.

    1981-01-01

    A magnetic encoding device and method for making the same are provided for use as magnetic storage mediums in identification control applications which give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications.

  7. A Structural Molar Volume Model for Oxide Melts Part I: Li2O-Na2O-K2O-MgO-CaO-MnO-PbO-Al2O3-SiO2 Melts—Binary Systems

    NASA Astrophysics Data System (ADS)

    Thibodeau, Eric; Gheribi, Aimen E.; Jung, In-Ho

    2016-04-01

    A structural molar volume model was developed to accurately reproduce the molar volume of molten oxides. As the non-linearity of molar volume is related to the change in structure of molten oxides, the silicate tetrahedral Q-species, calculated from the modified quasichemical model with an optimized thermodynamic database, were used as basic structural units in the present model. Experimental molar volume data for unary and binary melts in the Li2O-Na2O-K2O-MgO-CaO-MnO-PbO-Al2O3-SiO2 system were critically evaluated. The molar volumes of unary oxide components and binary Q-species, which are model parameters of the present structural model, were determined to accurately reproduce the experimental data across the entire binary composition in a wide range of temperatures. The non-linear behavior of molar volume and thermal expansivity of binary melt depending on SiO2 content are well reproduced by the present model.

  8. Defragged Binary I Ching Genetic Code Chromosomes Compared to Nirenberg’s and Transformed into Rotating 2D Circles and Squares and into a 3D 100% Symmetrical Tetrahedron Coupled to a Functional One to Discern Start From Non-Start Methionines through a Stella Octangula

    PubMed Central

    Castro-Chavez, Fernando

    2012-01-01

    Background Three binary representations of the genetic code according to the ancient I Ching of Fu-Xi will be presented, depending on their defragging capabilities by pairing based on three biochemical properties of the nucleic acids: H-bonds, Purine/Pyrimidine rings, and the Keto-enol/Amino-imino tautomerism, yielding the last pair a 32/32 single-strand self-annealed genetic code and I Ching tables. Methods Our working tool is the ancient binary I Ching's resulting genetic code chromosomes defragged by vertical and by horizontal pairing, reverse engineered into non-binaries of 2D rotating 4×4×4 circles and 8×8 squares and into one 3D 100% symmetrical 16×4 tetrahedron coupled to a functional tetrahedron with apical signaling and central hydrophobicity (codon formula: 4[1(1)+1(3)+1(4)+4(2)]; 5:5, 6:6 in man) forming a stella octangula, and compared to Nirenberg's 16×4 codon table (1965) pairing the first two nucleotides of the 64 codons in axis y. Results One horizontal and one vertical defragging had the start Met at the center. Two, both horizontal and vertical pairings produced two pairs of 2×8×4 genetic code chromosomes naturally arranged (M and I), rearranged by semi-introversion of central purines or pyrimidines (M' and I') and by clustering hydrophobic amino acids; their quasi-identity was disrupted by amino acids with odd codons (Met and Tyr pairing to Ile and TGA Stop); in all instances, the 64-grid 90° rotational ability was restored. Conclusions We defragged three I Ching representations of the genetic code while emphasizing Nirenberg's historical finding. The synthetic genetic code chromosomes obtained reflect the protective strategy of enzymes with a similar function, having both humans and mammals a biased G-C dominance of three H-bonds in the third nucleotide of their most used codons per amino acid, as seen in one chromosome of the i, M and M' genetic codes, while a two H-bond A-T dominance was found in their complementary chromosome, as seen in invertebrates and plants. The reverse engineering of chromosome I' into 2D rotating circles and squares was undertaken, yielding a 100% symmetrical 3D geometry which was coupled to a previously obtained genetic code tetrahedron in order to differentiate the start methionine from the methionine that is acting as a codifying non-start codon. PMID:23431415

  9. HS 0705+6700: a New Eclipsing sdB Binary

    NASA Astrophysics Data System (ADS)

    Drechsel, H.; Heber, U.; Napiwotzki, R.; Ostensen, R.; Solheim, J.-E.; Deetjen, J.; Schuh, S.

    HS 0705+6700 is a newly discovered eclipsing sdB binary system consisting of an sdB primary and a cool secondary main sequence star. CCD photometry obtained in October and November 2000 with the 2.5m Nordic Optical Telescope (NOT, La Palma) in the B passband and with the 2.2m Calar Alto telescope (CAFOS, R filter) yielded eclipse light curves with complete orbital phase coverage at high time resolution. A periodogram analysis of 12 primary minimum times distributed over the time span from October 2000 to March 2001 allowed us to derive the following exact period and linear ephemeris: prim. min. = HJD 2451822.759782(22) + 0.09564665(39) · E. A total of 15 spectra taken with the 3.5m Calar Alto telescope (TWIN spectrograph) on March 11-12, 2001, were used to establish the radial velocity curve of the primary star (K1 = 85.8 km/s), and to determine its basic atmospheric parameters (Teff = 29300 K, log g = 5.47). The B and R light curves were solved using our Wilson-Devinney based light curve analysis code MORO (Drechsel et al. 1995, A&A 294, 723). The best fit solution yielded exact system parameters consistent with the spectroscopic results. Detailed results will be published elsewhere (Drechsel et al. 2001, A&A, in preparation).
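
    As a quick illustration of how such a linear ephemeris is used, the short Python sketch below predicts heliocentric Julian dates of future primary minima from the quoted epoch and period; the cycle numbers are arbitrary and the quoted uncertainties are ignored.

        # Minimal sketch: predicted primary-minimum times from the linear
        # ephemeris HJD_min(E) = T0 + P * E quoted above (uncertainties ignored).
        T0 = 2451822.759782   # HJD of the reference primary minimum
        P = 0.09564665        # orbital period in days

        def primary_minimum(E):
            """Heliocentric Julian Date of the E-th primary minimum."""
            return T0 + P * E

        # Example: the first three cycles after the reference epoch.
        for E in range(3):
            print(E, primary_minimum(E))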

  10. Minidisks in Binary Black Hole Accretion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Geoffrey; MacFadyen, Andrew, E-mail: gsr257@nyu.edu

    Newtonian simulations have demonstrated that accretion onto binary black holes produces accretion disks around each black hole (“minidisks”), fed by gas streams flowing through the circumbinary cavity from the surrounding circumbinary disk. We study the dynamics and radiation of an individual black hole minidisk using 2D hydrodynamical simulations performed with a new general relativistic version of the moving-mesh code Disco. We introduce a comoving energy variable that enables highly accurate integration of these high Mach number flows. Tidally induced spiral shock waves are excited in the disk and propagate through the innermost stable circular orbit, providing a Reynolds stress that causes efficient accretion by purely hydrodynamic means and producing a radiative signature brighter in hard X-rays than the Novikov–Thorne model. Disk cooling is provided by a local blackbody prescription that allows the disk to evolve self-consistently to a temperature profile where hydrodynamic heating is balanced by radiative cooling. We find that the spiral shock structure is in agreement with the relativistic dispersion relation for tightly wound linear waves. We measure the shock-induced dissipation and find outward angular momentum transport corresponding to an effective alpha parameter of order 0.01. We perform ray-tracing image calculations from the simulations to produce theoretical minidisk spectra and viewing-angle-dependent images for comparison with observations.

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
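
    For context, the sub-optimal algorithm above approximates the soft-decision maximum-likelihood (ML) rule, which for small codes can be stated by brute force: score every codeword by its correlation with the received values and keep the best. The sketch below does exactly that for a toy (4, 2) code; the generator matrix is a made-up example, not a code from the paper.

        import itertools
        import numpy as np

        # Toy (4, 2) binary linear code; the generator matrix is hypothetical.
        G = np.array([[1, 0, 1, 1],
                      [0, 1, 1, 0]])

        def ml_decode(r):
            """Brute-force soft-decision ML decoding: choose the codeword whose
            antipodal form (bit 0 -> +1, bit 1 -> -1) maximizes correlation with r."""
            best, best_metric = None, -np.inf
            for msg in itertools.product([0, 1], repeat=G.shape[0]):
                c = np.mod(np.array(msg) @ G, 2)
                metric = np.sum((1 - 2 * c) * r)
                if metric > best_metric:
                    best, best_metric = c, metric
            return best

        print(ml_decode(np.array([0.9, -1.1, -0.2, 0.8])))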

  12. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  13. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  14. Holographic implementation of a binary associative memory for improved recognition

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.

    1998-03-01

    Neural network associative memories have found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in a real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible by using digital optical matrix-vector multiplication, where full use of the parallelism and connectivity of optics is made. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or interconnection weight matrix, required during the recall process is configured by retrieving the necessary information from the holographic LTM.
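
    To make the storage/recall mechanics concrete, the sketch below implements a generic binary-weight (Willshaw-type) associative memory with matrix-vector recall; it only illustrates the idea of binary interconnection strengths and thresholded recall, and is not the authors' holographic optoelectronic implementation (the patterns are invented).

        import numpy as np

        def store(patterns):
            """Clipped Hebbian storage: binary weight matrix from binary patterns."""
            n = patterns.shape[1]
            W = np.zeros((n, n), dtype=int)
            for p in patterns:
                W |= np.outer(p, p)          # binary (OR-clipped) outer product
            return W

        def recall(W, cue, active_bits):
            """Matrix-vector 'optical' multiplication followed by thresholding."""
            s = W @ cue
            threshold = np.sort(s)[-active_bits]
            return (s >= threshold).astype(int)

        patterns = np.array([[1, 0, 1, 0, 0, 1],
                             [0, 1, 0, 1, 1, 0]])
        W = store(patterns)
        noisy_cue = np.array([1, 0, 1, 0, 0, 0])    # first pattern with one bit dropped
        print(recall(W, noisy_cue, active_bits=3))  # recovers [1 0 1 0 0 1]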

  15. Documentation for the machine-readable character coded version of the SKYMAP catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to describe the formatted tape fully, so that users can process the data without problems or guesswork; it should be distributed with any character version of the catalogue.

  16. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  17. No cataclysmic variables missing: higher merger rate brings into agreement observed and predicted space densities

    NASA Astrophysics Data System (ADS)

    Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe

    2018-06-01

    The predicted and observed space densities of cataclysmic variables (CVs) have long been discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has been recently measured to be ρ ≲ 2 × 10^-5 pc^-3. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings into agreement the predicted and observed space densities of CVs and period bouncers. To progress with our understanding of CV evolution it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also provided in detail, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.

  18. Digital Controller For Emergency Beacon

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    1990-01-01

    Prototype digital controller intended for use in 406-MHz emergency beacon. Undergoing development according to international specifications, 406-MHz emergency beacon system includes satellites providing worldwide monitoring of beacons, with Doppler tracking to locate each beacon within 5 km. Controller turns beacon on and off and generates binary codes identifying source (e.g., ship, aircraft, person, or vehicle on land). Codes transmitted by phase modulation. Knowing code, monitor attempts to communicate with user and uses code information to dispatch rescue team appropriate to type and location of carrier.

  19. Evidence for a planetary mass third body orbiting the binary star KIC 5095269

    NASA Astrophysics Data System (ADS)

    Getley, A. K.; Carter, B.; King, R.; O'Toole, S.

    2017-07-01

    In this paper, we report the evidence for a planetary mass body orbiting the close binary star KIC 5095269. This detection arose from a search for eclipse timing variations amongst the more than 2000 eclipsing binaries observed by Kepler. Light curve and periodic eclipse time variations have been analysed using systemic and a custom Binary Eclipse Timings code based on the Transit Analysis Package which indicates a 7.70 ± 0.08MJup object orbiting every 237.7 ± 0.1 d around a 1.2 M⊙ primary and a 0.51 M⊙ secondary in an 18.6 d orbit. A dynamical integration over 107 yr suggests a stable orbital configuration. Radial velocity observations are recommended to confirm the properties of the binary star components and the planetary mass of the companion.

  20. Comments Regarding the Binary Power Law for Heterogeneity of Disease Incidence

    USDA-ARS?s Scientific Manuscript database

    The binary power law (BPL) has been successfully used to characterize heterogeneity (overdispersion, or small-scale aggregation) of disease incidence for many plant pathosystems. With the BPL, the log of the observed variance is a linear function of the log of the theoretical variance for a binomial...
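
    For reference, the binary power law is usually written in the following standard form (quoted here for context, not taken from this manuscript), where s_obs^2 is the observed variance of incidence, p the mean incidence, n the number of plants per sampling unit, and A and b fitted parameters (A = b = 1 corresponds to a random, binomial pattern; b > 1 indicates aggregation):

        \log\left(s_{\mathrm{obs}}^{2}\right) = \log(A) + b\,\log\!\left(\frac{p\,(1-p)}{n}\right)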

  1. Predictor Combination in Binary Decision-Making Situations

    ERIC Educational Resources Information Center

    McGrath, Robert E.

    2008-01-01

    Professional psychologists are often confronted with the task of making binary decisions about individuals, such as predictions about future behavior or employee selection. Test users familiar with linear models and Bayes's theorem are likely to assume that the accuracy of decisions is consistently improved by combination of outcomes across valid…

  2. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  3. On the decoding process in ternary error-correcting output codes.

    PubMed

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies on a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
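
    As a minimal illustration of decoding with the ternary "do not care" symbol (a generic sketch, not one of the paper's proposed strategies), the snippet below scores each class by a Hamming-style distance that simply skips the zero entries of that class's code word.

        import numpy as np

        # Ternary ECOC matrix: rows are classes, columns are binary problems;
        # entries are +1, -1, or 0 ("do not care"). The values are made up.
        M = np.array([[ 1, -1,  0,  1],
                      [-1,  1,  1,  0],
                      [ 0,  1, -1, -1]])

        def decode(predictions, M):
            """Pick the class whose code word is closest to the classifier outputs,
            ignoring positions marked 0 in that class's code word."""
            distances = []
            for row in M:
                mask = row != 0
                distances.append(np.sum(predictions[mask] != row[mask]))
            return int(np.argmin(distances))

        print(decode(np.array([1, -1, -1, 1]), M))   # -> class 0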

  4. Planet formation: is it good or bad to have a stellar companion?

    NASA Astrophysics Data System (ADS)

    Marzari, F.; Thebault, P.; Scholl, H.

    2010-04-01

    Planet formation in binary star systems is a complex issue due to the gravitational perturbations of the companion star. One of the crucial steps of the core-accretion model is planetesimal accretion into large protoplanets which finally coalesce into planets. In a planetesimal swarm surrounding the primary star, the average mutual impact velocity determines whether larger bodies form or the population is ground down to dust, halting the planet formation process. This velocity is strongly influenced by the companion's gravitational pull and by gas drag. The combined effect of these two forces may act in favour of or against planet formation, making the existence of extrasolar planets around binary stars either less probable than, or as probable as, around single stars. Planetesimal accretion in binaries has been studied so far with two different approaches. N-body codes based on the assumption that the disk is axisymmetric are very cost-effective since they allow the study of the mutual relative velocity with limited CPU usage. A large number of planetesimal trajectories can be computed, making it possible to outline the regions around the star where planet formation is possible. The main limitation of the N-body codes is the axisymmetric assumption. The companion perturbations affect not only the planetesimal orbits, but also the gaseous disk, by forcing spiral density waves. In addition, the overall shape of the disk changes from circular to elliptic. Hybrid codes have been recently developed which solve the equations for the disk with a hydrodynamical grid code and use the computed gas density and velocity vector to calculate an accurate value of the gas drag force on the planetesimals. These codes are more complex and may compute the trajectories of only a limited number of planetesimals.

  5. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-04-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  6. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-07-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion and mass-loss rates during the luminous blue variable, and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  7. Synthetic Survey of the Kepler Field

    NASA Astrophysics Data System (ADS)

    Wells, Mark; Prša, Andrej

    2018-01-01

    In the era of large scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys that correspond to any survey cadences. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary catalog.

  8. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
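
    For context, the proportionality mentioned above is the standard result that, for a binary response model with success probability F(η), η = β0 + β1·x, the Fisher information matrix has the weighted simple-linear-regression form (stated here as background, not in the paper's notation):

        I(\beta) = \sum_i w(x_i)\begin{pmatrix}1 & x_i\\ x_i & x_i^{2}\end{pmatrix},
        \qquad
        w(x) = \frac{\bigl[F'(\eta)\bigr]^{2}}{F(\eta)\,\bigl(1-F(\eta)\bigr)}.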

  9. Common Envelope Light Curves. I. Grid-code Module Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.

    The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.

  10. Deep classification hashing for person re-identification

    NASA Astrophysics Data System (ADS)

    Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang

    2018-04-01

    With the development of surveillance in public spaces, person re-identification becomes more and more important. Large-scale databases call for efficient computation and storage, and hashing is one of the most important techniques. In this paper, we propose a new deep classification hashing network by introducing a new binary appropriation layer into traditional ImageNet pre-trained CNN models. It outputs binary-appropriate features, which can be easily quantized into binary hash codes for Hamming similarity comparison. Experiments show that our deep hashing method can outperform the state-of-the-art methods on the public CUHK03 and Market1501 datasets.
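
    As a generic illustration of the quantization step described above (not the paper's network), real-valued features that are pushed toward 0/1 can be thresholded into binary hash codes and compared with the Hamming distance; the feature vectors below are invented.

        import numpy as np

        def to_hash(features, threshold=0.5):
            """Quantize real-valued features into a binary hash code."""
            return (features > threshold).astype(np.uint8)

        # Hypothetical outputs of a binary-appropriate layer for two images.
        f1 = np.array([0.93, 0.08, 0.12, 0.88])
        f2 = np.array([0.81, 0.15, 0.71, 0.90])

        h1, h2 = to_hash(f1), to_hash(f2)
        hamming = int(np.count_nonzero(h1 != h2))   # smaller distance = more similar
        print(h1, h2, hamming)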

  11. Getting Started in Classroom Computing.

    ERIC Educational Resources Information Center

    Ahl, David H.

    Written for secondary students, this booklet provides an introduction to several computer-related concepts through a set of six classroom games, most of which can be played with little more than a sheet of paper and a pencil. The games are: 1) SECRET CODES--introduction to binary coding, punched cards, and paper tape; 2) GUESS--efficient methods…

  12. 2008 Post-Election Voting Survey of Federal Civilians Overseas: Administration, Datasets and Codebook

    DTIC Science & Technology

    2009-09-01

    [Codebook variable listing; garbled in extraction. Recoverable entries include LEGALRESR (recode: state of legal voting residence), LITHO (litho code), NOFVAPA (item 42a, did not use FVAP telephone: did not know), INRECNO (master SCS ID number), QCOMPF (binary variable indicating if case complete), and QCOMPN (questions).]

  13. Factors Affecting Code Status in a University Hospital Intensive Care Unit

    ERIC Educational Resources Information Center

    Van Scoy, Lauren Jodi; Sherman, Michael

    2013-01-01

    The authors collected data on diagnosis, hospital course, and end-of-life preparedness in patients who died in the intensive care unit (ICU) with "full code" status (defined as receiving cardiopulmonary resuscitation), compared with those who didn't. Differences were analyzed using binary and stepwise logistic regression. They found no…

  14. Alternation Between the X-Ray Emission State and Pulsar State in Interacting Binary Systems

    NASA Astrophysics Data System (ADS)

    De Vito, M. A.; Benvenuto, O. G.; Horvath, J. E.

    2015-08-01

    Redbacks belong to the family of binary systems in which one of the components is a pulsar. Recent observations show redbacks that have switched their state from pulsar with a low-mass companion (where the accretion of material onto the pulsar has ceased) to low-mass X-ray binary (where emission is produced by mass accretion onto the pulsar), or the reverse. The irradiation effect included in our models leads to cyclic mass transfer episodes, which allow close binary systems to switch from one state to the other. We apply our results to the case of PSR J1723-2837, and discuss the need to include new ingredients in our binary evolution code to describe the observed state transitions.

  15. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  16. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) the locality making faraway anchors have smaller influences on current data and 2) the flexibility balancing well between the reconstruction of current data and the locality. In this paper, we address the problem from the theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve the locality and the flexibility. We apply LPC into locally linear classifiers to solve diverse classification tasks. The comparable or exceeded performances of state-of-the-art methods demonstrate the effectiveness of the proposed method.

  17. Multifunction audio digitizer. [producing direct delta and pulse code modulation

    NASA Technical Reports Server (NTRS)

    Monford, L. G., Jr. (Inventor)

    1974-01-01

    An illustrative embodiment of the invention includes apparatus which simultaneously produces both direct delta modulation and pulse code modulation. An input signal, after amplification, is supplied to a window comparator which supplies a polarity control signal to gate the output of a clock to the appropriate input of a binary up-down counter. The control signals provide direct delta modulation while the up-down counter output provides pulse code modulation.
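
    A software analogue of the scheme described above (a rough sketch, not the patented circuit): at each clock tick the comparator decides whether the staircase approximation is below or above the input, the decision bit forms the delta-modulation stream, and the running up/down count read out at each step plays the role of the pulse-code-modulation word.

        import math

        def delta_and_pcm(samples, step=1.0):
            """Produce a delta-modulation bit stream and the up/down-counter
            values that serve as PCM words (illustrative only)."""
            counter = 0
            delta_bits, pcm_words = [], []
            for x in samples:
                up = x >= counter * step        # comparator: input above staircase?
                counter += 1 if up else -1      # gate the clock into up or down input
                delta_bits.append(1 if up else 0)
                pcm_words.append(counter)
            return delta_bits, pcm_words

        signal = [4 * math.sin(2 * math.pi * n / 16) for n in range(32)]
        bits, words = delta_and_pcm(signal)
        print(bits)
        print(words)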

  18. Fast and reliable symplectic integration for planetary system N-body problems

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.

    2016-06-01

    We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.

  19. Precision of proportion estimation with binary compressed Raman spectrum.

    PubMed

    Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric

    2018-01-01

    The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramer-Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer-Rao bound). Evolutions of the Cramer-Rao bound are analyzed when the measuring times are optimized or when the considered proportion for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this considered proportion are also analyzed for the binary filter synthesis.

  20. Negative base encoding in optical linear algebra processors

    NASA Technical Reports Server (NTRS)

    Perlee, C.; Casasent, D.

    1986-01-01

    In the digital multiplication by analog convolution algorithm, the bits of two encoded numbers are convolved to form the product of the two numbers in mixed binary representation; this output can be easily converted to binary. Attention is presently given to negative base encoding, treating base -2 initially, and then showing that the negative base system can be readily extended to any radix. In general, negative base encoding in optical linear algebra processors represents a more efficient technique than either sign magnitude or 2's complement encoding, when the additions of digitally encoded products are performed in parallel.
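
    For reference, conversion of an ordinary integer into base -2 (negabinary) needs no sign bit; the sketch below shows one common division-with-remainder-correction procedure (a generic algorithm, not the optical encoding discussed in the paper).

        def to_negabinary(n):
            """Return the base -2 digit string of an integer (both signs handled)."""
            if n == 0:
                return "0"
            digits = []
            while n != 0:
                n, r = divmod(n, -2)
                if r < 0:            # force the remainder to be 0 or 1
                    n, r = n + 1, r + 2
                digits.append(str(r))
            return "".join(reversed(digits))

        # 6 = 1*16 + 1*(-8) + 0*4 + 1*(-2) + 0*1  ->  "11010"
        print(to_negabinary(6), to_negabinary(-3))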

  1. Artificial equilibrium points in binary asteroid systems with continuous low-thrust

    NASA Astrophysics Data System (ADS)

    Bu, Shichao; Li, Shuang; Yang, Hongwei

    2017-08-01

    The positions and dynamical characteristics of artificial equilibrium points (AEPs) in the vicinity of a binary asteroid with continuous low-thrust are studied. The restricted ellipsoid-ellipsoid model is employed for the binary asteroid system. The positions of the AEPs are obtained from this model. It is found that the set of points L1 or L2 forms the shape of an ellipsoid, while the set of points L3 forms a shape like a "banana". The effect of the continuous low-thrust on the feasible region of motion is analyzed by zero-velocity curves. Because of the low-thrust, an otherwise unreachable region can become reachable. The linearized equations of motion are derived for stability analysis. Based on the characteristic equation of the linearized equations, the stability conditions are derived. The stable regions of the AEPs are investigated by a parametric analysis. The effect of the mass ratio and ellipsoid parameters on the stable regions is also discussed. The results show that the influence of the mass ratio on the stable regions is more significant than that of the ellipsoid parameters.

  2. Arithmetic operations in optical computations using a modified trinary number system.

    PubMed

    Datta, A K; Basuray, A; Mukhopadhyay, S

    1989-05-01

    A modified trinary number (MTN) system is proposed in which any binary number can be expressed with the help of the trinary digits (1, 0, -1). Arithmetic operations can be performed in parallel, without the need for carry and borrow steps, when binary digits are converted to the MTN system. An optical implementation of the proposed scheme that uses spatial light modulators and color-coded light signals is described.
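
    The MTN mapping itself is specific to the paper's optical scheme, but the general idea of a signed digit set {-1, 0, 1} can be illustrated with an ordinary balanced-ternary conversion; the sketch below is only an assumed stand-in for illustration, not the MTN conversion rule.

        def to_balanced_ternary(n):
            """Digits of n in balanced ternary (signed digit set {-1, 0, 1}),
            least significant digit first."""
            if n == 0:
                return [0]
            digits = []
            while n != 0:
                n, r = divmod(n, 3)
                if r == 2:           # write 2 as -1 and carry one into the next digit
                    r, n = -1, n + 1
                digits.append(r)
            return digits

        # 7 = 1*1 + (-1)*3 + 1*9  ->  [1, -1, 1] (least significant first)
        print(to_balanced_ternary(7))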

  3. Modeling the binary circumstellar medium of Type IIb/L/n supernova progenitors

    NASA Astrophysics Data System (ADS)

    Kolb, Christopher; Blondin, John; Borkowski, Kazik; Reynolds, Stephen

    2018-01-01

    Circumstellar interaction in close binary systems can produce a highly asymmetric environment, particularly for systems with a mass outflow velocity comparable to the binary orbital speed. This asymmetric circumstellar medium (CSM) becomes visible after a supernova explosion, when SN radiation illuminates the gas and when SN ejecta collide with the CSM. We aim to better understand the development of this asymmetric CSM, particularly for binary systems containing a red supergiant progenitor, and to study its impact on supernova morphology. To achieve this, we model the asymmetric wind and subsequent supernova explosion in full 3D hydrodynamics using the shock-capturing hydro code VH-1 on a spherical yin-yang grid. Wind interaction is computed in a frame co-rotating with the binary system, and gas is accelerated using a radiation pressure-driven wind model where optical depth of the radiative force is dependent on azimuthally-averaged gas density. We present characterization of our asymmetric wind density distribution model by fitting a polar-to-equatorial density contrast function to free parameters such as binary separation distance, primary mass loss rate, and binary mass ratio.

  4. Black Hole Accretion Discs on a Moving Mesh

    NASA Astrophysics Data System (ADS)

    Ryan, Geoffrey

    2017-01-01

    We present multi-dimensional numerical simulations of black hole accretion disks relevant for the production of electromagnetic counterparts to gravitational wave sources. We perform these simulations with a new general relativistic version of the moving-mesh magnetohydrodynamics code DISCO, which we will present. This open-source code, GR-DISCO, uses an orbiting and shearing mesh which moves with the dominant flow velocity, greatly improving the numerical accuracy of the thermodynamic variables in supersonic flows while also reducing numerical viscosity and greatly increasing computational efficiency by allowing for a larger time step. We have used GR-DISCO to study black hole accretion discs subject to gravitational torques from a binary companion, relevant for both current and future supermassive binary black hole searches and also as a possible electromagnetic precursor mechanism for LIGO events. Binary torques in these discs excite spiral shockwaves which effectively transport angular momentum in the disc and propagate through the innermost stable orbit, leading to stress corresponding to an alpha-viscosity of 10^-2. We also present three-dimensional GRMHD simulations of neutrino dominated accretion flows (NDAFs) occurring after a binary neutron star merger in order to elucidate the conditions for electromagnetic transient production accompanying these gravitational wave sources expected to be detected by LIGO in the near future.

  5. Multilevel Concatenated Block Modulation Codes for the Frequency Non-selective Rayleigh Fading Channel

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun

    1996-01-01

    This paper is concerned with construction of multilevel concatenated block modulation codes using a multi-level concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation code, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.

  6. A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.

    PubMed

    Ferrari, Alberto; Comelli, Mario

    2016-12-01

    In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and sample size is small. A number of more advanced methods are available, but they are often technically challenging and a comparative assessment of their performances in behavioral setups has not been performed. We studied the performances of some methods applicable to the analysis of proportions; namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; plus, we describe results from the application of these methods on data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
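
    To make the data structure concrete (a hypothetical simulation, not the study design compared in the paper), clustered binary outcomes with beta-binomial overdispersion can be generated by first drawing each subject's success probability from a Beta distribution and then drawing its trials:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_clustered_binomial(n_subjects, n_trials, mean_p, precision):
            """Per-subject success counts: p_i ~ Beta(mean_p*precision, (1-mean_p)*precision),
            successes_i ~ Binomial(n_trials, p_i). Smaller precision = more overdispersion."""
            a = mean_p * precision
            b = (1 - mean_p) * precision
            p = rng.beta(a, b, size=n_subjects)      # subject-level probabilities
            return rng.binomial(n_trials, p)         # clustered success counts

        counts = simulate_clustered_binomial(n_subjects=20, n_trials=10,
                                             mean_p=0.3, precision=5.0)
        # The sample variance typically exceeds the pure binomial value n*p*(1-p).
        print(counts, counts.var(), 10 * 0.3 * 0.7)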

  7. Close encounters of the third-body kind. [intruding bodies in binary star systems

    NASA Technical Reports Server (NTRS)

    Davies, M. B.; Benz, W.; Hills, J. G.

    1994-01-01

    We simulated encounters involving binaries of two eccentricities: e = 0 (i.e., circular binaries) and e = 0.5. In both cases the binary contained a point mass of 1.4 solar masses (i.e., a neutron star) and a 0.8 solar mass main-sequence star modeled as a polytrope. The semimajor axes of both binaries were set to 60 solar radii (0.28 AU). We considered intruders of three masses: 1.4 solar masses (a neutron star), 0.8 solar masses (a main-sequence star or a higher mass white dwarf), and 0.64 solar masses (a more typical mass white dwarf). Our strategy was to perform a large number (40,000) of encounters using a three-body code, then to rerun a small number of cases with a three-dimensional smoothed particle hydrodynamics (SPH) code to determine the importance of hydrodynamical effects. Using the results of the three-body runs, we computed the exchange cross sections, sigma_ex. From the results of the SPH runs, we computed the cross sections for clean exchange, denoted by sigma_cx; the formation of a triple system, denoted by sigma_trp; and the formation of a merged binary with an object formed from the merger of two of the stars left in orbit around the third star, denoted by sigma_mb. For encounters between either binary and a 1.4 solar mass neutron star, sigma_cx ≈ 0.7 sigma_ex and sigma_mb + sigma_trp ≈ 0.3 sigma_ex. For encounters between either binary and the 0.8 solar mass main-sequence star, sigma_cx ≈ 0.50 sigma_ex and sigma_mb + sigma_trp ≈ 1.0 sigma_ex. If the main-sequence star is replaced by a white dwarf of the same mass, we have sigma_cx ≈ 0.5 sigma_ex and sigma_mb + sigma_trp ≈ 1.6 sigma_ex. Although the exchange cross section is a sensitive function of intruder mass, we see that the cross section to produce merged binaries is roughly independent of intruder mass. The merged binaries produced have semimajor axes much larger than either those of the original binaries or those of binaries produced in clean exchanges. Coupled with the lower kick velocities they receive from the encounters, their larger size will enhance their cross section, shortening the waiting time until a subsequent encounter with another single star.

  8. A note about high blood pressure in childhood

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena; Simão, Carla

    2017-06-01

    In medical, behavioral, and social sciences it is usual to obtain a binary outcome. In the present work, information is collected in which some of the outcomes are binary variables (1 = 'yes' / 0 = 'no'). In [14] a preliminary study about the caregivers' perception of pediatric hypertension was introduced. An experimental questionnaire was designed to be answered by the caregivers of routine pediatric consultation attendees at Santa Maria's hospital (HSM). The collected data were statistically analyzed, with a descriptive analysis and a predictive model performed. Significant relations between some socio-demographic variables and the assessed knowledge were obtained. In [14] a statistical analysis using part of the questionnaire's information can be found. The present article completes the statistical approach by estimating a model for the relevant remaining questions of the questionnaire using Generalized Linear Models (GLMs). Exploring the binary outcome issue, we intend to extend this approach using Generalized Linear Mixed Models (GLMMs), but that process is still ongoing.
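
    As a minimal sketch of the GLM step mentioned above (simulated data with a logit link; none of this is the actual questionnaire data or model), a binary outcome can be regressed on socio-demographic covariates with statsmodels:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical data: y is a binary 'yes'/'no' answer, x1 and x2 stand in
        # for socio-demographic covariates.
        rng = np.random.default_rng(1)
        x1 = rng.normal(size=200)
        x2 = rng.integers(0, 2, size=200)
        logits = -0.5 + 1.2 * x1 + 0.8 * x2
        y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

        X = sm.add_constant(np.column_stack([x1, x2]))
        model = sm.GLM(y, X, family=sm.families.Binomial())   # logit link by default
        print(model.fit().summary())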

  9. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.

  10. CGRO Guest Investigator Program

    NASA Technical Reports Server (NTRS)

    Begelman, Mitchell C.

    1997-01-01

    The following are highlights from the research supported by this grant: (1) Theory of gamma-ray blazars: We studied the theory of gamma-ray blazars, being among the first investigators to propose that the GeV emission arises from Comptonization of diffuse radiation surrounding the jet, rather than from the synchrotron-self-Compton mechanism. In related work, we uncovered possible connections between the mechanisms of gamma-ray blazars and those of intraday radio variability, and have conducted a general study of the role of Compton radiation drag on the dynamics of relativistic jets. (2) A Nonlinear Monte Carlo code for gamma-ray spectrum formation: We developed, tested, and applied the first Nonlinear Monte Carlo (NLMC) code for simulating gamma-ray production and transfer under much more general (and realistic) conditions than are accessible with other techniques. The present version of the code is designed to simulate conditions thought to be present in active galactic nuclei and certain types of X-ray binaries, and includes the physics needed to model thermal and nonthermal electron-positron pair cascades. Unlike traditional Monte-Carlo techniques, our method can accurately handle highly non-linear systems in which the radiation and particle backgrounds must be determined self-consistently and in which the particle energies span many orders of magnitude. Unlike models based on kinetic equations, our code can handle arbitrary source geometries and relativistic kinematic effects. In its first important application following testing, we showed that popular semi-analytic accretion disk corona models for Seyfert spectra are seriously in error, and demonstrated how the spectra can be simulated if the disk is sparsely covered by localized 'flares'.

  11. Bondi-Hoyle-Lyttleton Accretion onto Binaries

    NASA Astrophysics Data System (ADS)

    Antoni, Andrea; MacLeod, Morgan; Ramírez-Ruiz, Enrico

    2018-01-01

    Binary stars are not rare. While only close binary stars will eventually interact with one another, even the widest binary systems interact with their gaseous surroundings. The rates of accretion and the gaseous drag forces arising in these interactions are the key to understanding how these systems evolve. This poster examines accretion flows around a binary system moving supersonically through a background gas. We perform three-dimensional hydrodynamic simulations of Bondi-Hoyle-Lyttleton accretion using the adaptive mesh refinement code FLASH. We simulate a range of values of semi-major axis of the orbit relative to the gravitational focusing impact parameter of the pair. On large scales, gas is gravitationally focused by the center-of-mass of the binary, leading to dynamical friction drag and to the accretion of mass and momentum. On smaller scales, the orbital motion imprints itself on the gas. Notably, the magnitude and direction of the forces acting on the binary inherit this orbital dependence. The long-term evolution of the binary is determined by the timescales for accretion, slow down of the center-of-mass, and decay of the orbit. We use our simulations to measure these timescales and to establish a hierarchy between them. In general, our simulations indicate that binaries moving through gaseous media will slow down before the orbit decays.
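
    For reference, the classical Bondi-Hoyle-Lyttleton rate for accretion onto a point mass M moving with speed v through gas of density ρ and sound speed c_s, and the gravitational focusing (accretion) radius against which the binary separation is compared above, are the standard expressions (quoted here only as background):

        \dot{M}_{\mathrm{BHL}} \simeq \frac{4\pi G^{2} M^{2}\rho}{\left(v^{2}+c_{s}^{2}\right)^{3/2}},
        \qquad
        R_{a} \simeq \frac{2GM}{v^{2}+c_{s}^{2}}.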

  12. COSMIC probes into compact binary formation and evolution

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn

    2018-01-01

    The population of compact binaries in the galaxy represents the final state of all binaries that have lived up to the present epoch. Compact binaries present a unique opportunity to probe binary evolution since many of the interactions binaries experience can be imprinted on the compact binary population. By combining binary evolution simulations with catalogs of observable compact binary systems, we can distill the dominant physical processes that govern binary star evolution, as well as predict the abundance and variety of their end products. The next decades herald a previously unseen opportunity to study compact binaries. Multi-messenger observations from telescopes across all wavelengths and gravitational-wave observatories spanning several decades of frequency will give an unprecedented view into the structure of these systems and the composition of their components. Observations will not always be coincident and in some cases may be separated by several years, providing an avenue for simulations to better constrain binary evolution models in preparation for future observations. I will present the results of three population synthesis studies of compact binary populations carried out with the Compact Object Synthesis and Monte Carlo Investigation Code (COSMIC). I will first show how binary-black-hole formation channels can be understood with LISA observations. I will then show how the population of double white dwarfs observed with LISA and Gaia could provide a detailed view of mass transfer and accretion. Finally, I will show that Gaia could discover thousands of black holes in the Milky Way through astrometric observations, yielding a view into black-hole astrophysics that is complementary to and independent from both X-ray and gravitational-wave astronomy.

  13. Headspace quantification of pure and aqueous solutions of binary mixtures of key volatile organic compounds in Swiss cheeses using selected ion flow tube mass spectrometry.

    PubMed

    Castada, Hardy Z; Wick, Cheryl; Harper, W James; Barringer, Sheryl

    2015-01-15

    Twelve volatile organic compounds (VOCs) have recently been identified as key compounds in Swiss cheese with split defects. It is important to know how these VOCs interact in binary mixtures and if their behavior changes with concentration in binary mixtures. Selected ion flow tube mass spectrometry (SIFT-MS) was used for the headspace analysis of VOCs commonly found in Swiss cheeses. Headspace (H/S) sampling and quantification checks using SIFT-MS and further linear regression analyses were carried out on twelve selected aqueous solutions of VOCs. Five binary mixtures of standard solutions of VOCs were also prepared and the H/S profile of each mixture was analyzed. A very good fit of linearity for the twelve VOCs (95% confidence level) confirms direct proportionality between the H/S and the aqueous concentration of the standard solutions. Henry's Law coefficients were calculated with a high degree of confidence. SIFT-MS analysis of five binary mixtures showed that the more polar compounds reduced the H/S concentration of the less polar compounds, while the addition of a less polar compound increased the H/S concentration of the more polar compound. In the binary experiment, it was shown that the behavior of a compound in the headspace can be significantly affected by the presence of another compound. Thus, the matrix effect plays a significant role in the behavior of molecules in a mixed solution. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Probing the Milky Way electron density using multi-messenger astronomy

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane

    2015-04-01

    Multi-messenger observations of ultra-compact binaries in both gravitational waves and electromagnetic radiation supply highly complementary information, providing new ways of characterizing the internal dynamics of these systems, as well as new probes of the galaxy itself. Electron density models, used in pulsar distance measurements via the electron dispersion measure, are currently not well constrained. Simultaneous radio and gravitational wave observations of pulsars in binaries provide a method of measuring the average electron density along the line of sight to the pulsar, thus giving a new method for constraining current electron density models. We present this method and assess its viability with simulations of the compact binary component of the Milky Way using the public domain binary evolution code, BSE. This work is supported by NASA Award NNX13AM10G.
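
    The idea can be illustrated with a short calculation (a hedged sketch; the numbers and helper below are illustrative, not taken from the poster): the radio dispersion measure gives the column density of free electrons along the line of sight, DM = ∫ n_e dl, while a gravitational-wave observation of the same binary yields its distance independently, so the sight-line-averaged electron density follows as <n_e> = DM / d.

      # Illustrative only: average electron density from a pulsar dispersion
      # measure (radio) and an independent gravitational-wave distance.
      # The numbers below are made up for demonstration.

      def mean_electron_density(dm_pc_cm3: float, distance_pc: float) -> float:
          """Return <n_e> in cm^-3 given DM in pc cm^-3 and distance in pc."""
          return dm_pc_cm3 / distance_pc

      dm = 45.0        # hypothetical dispersion measure [pc cm^-3]
      d_gw = 1800.0    # hypothetical GW-derived distance [pc]
      print(f"<n_e> = {mean_electron_density(dm, d_gw):.3f} cm^-3")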

  15. Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal

    DTIC Science & Technology

    2004-03-01

    Figure 26: Convolutional Encoder Parameters. Figure 27: Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires... to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially... higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half

  16. FBC: a flat binary code scheme for fast Manhattan hash retrieval

    NASA Astrophysics Data System (ADS)

    Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun

    2018-04-01

    Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the distance measure used, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefitting from better preservation of neighborhood structure, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing is a more time-consuming process, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. Experiments show that, with a reasonable growth in memory usage, our FBC speeds up retrieval by more than 80% on average without any loss of search accuracy, compared to state-of-the-art Manhattan hashing methods.
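
    To see why bit operations matter here, compare a Hamming-style lookup, which reduces to a single XOR plus a population count, with Manhattan hashing, which must first decode each group of q bits back into an integer before distances can be summed. The sketch below only illustrates that cost difference using plain natural-binary quantization; it is not the authors' FBC encoding.

      # Illustrative comparison (not the FBC scheme itself): Hamming distance
      # via XOR/popcount versus Manhattan distance on q-bit quantized codes.

      def hamming(a: int, b: int) -> int:
          """Bit-level distance: one XOR plus a population count."""
          return bin(a ^ b).count("1")

      def manhattan(code_a: int, code_b: int, q: int = 2) -> int:
          """Decimal distance: each q-bit group must be decoded to an integer
          before the |x - y| terms can be summed (the slow step avoided above)."""
          mask = (1 << q) - 1
          dims = max(code_a.bit_length(), code_b.bit_length()) // q + 1
          dist = 0
          for i in range(dims):
              xa = (code_a >> (i * q)) & mask
              xb = (code_b >> (i * q)) & mask
              dist += abs(xa - xb)
          return dist

      a, b = 0b10_01_11_00, 0b01_01_10_11   # two 8-bit codes, q = 2 bits per dimension
      print(hamming(a, b), manhattan(a, b))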

  17. MS/MS Digital Readout: Analysis of Binary Information Encoded in the Monomer Sequences of Poly(triazole amide)s.

    PubMed

    Amalian, Jean-Arthur; Trinh, Thanh Tam; Lutz, Jean-François; Charles, Laurence

    2016-04-05

    Tandem mass spectrometry was evaluated as a reliable sequencing methodology to read codes encrypted in monodisperse sequence-coded oligo(triazole amide)s. The studied oligomers were composed of monomers containing a triazole ring, a short ethylene oxide segment, and an amide group as well as a short alkyl chain (propyl or isobutyl) which defined the 0/1 molecular binary code. Using electrospray ionization, oligo(triazole amide)s were best ionized as protonated molecules and were observed to adopt a single charge state, suggesting that adducted protons were located on every other monomer unit. Upon collisional activation, cleavages of the amide bond and of one ether bond were observed to proceed in each monomer, yielding two sets of complementary product ions. Distribution of protons over the precursor structure was found to remain unchanged upon activation, allowing charge state to be anticipated for product ions in the four series and hence facilitating their assignment for a straightforward characterization of any encoded oligo(triazole amide)s.

  18. THE PHOTOMETRIC STUDY OF A NEGLECTED NEAR CONTACT BINARY: BS VULPECULAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, L.-Y.; Qian, S.-B.; Zejda, M.

    2012-08-15

    We present a detailed study of the close eclipsing binary BS Vulpeculae. Although it is relatively bright (V: 10.9-11.6 mag) and belongs to the short-period variable stars (P = 0.48 days), it is rather neglected. To perform a thorough period analysis, we collected all available photometric observations spanning the interval 1898-2010. The observations include archival photographic plate measurements and visually determined eclipse minima timings made in 1979-2003, which were later shown to be biased to accommodate the existing linear ephemeris. Applying our own direct period analysis, we found a well-defined shortening of the orbital period of dP/dt = -6.70(17) × 10^-11 = -2.11(6) ms yr^-1, which implies a continual mass flow from the primary to the secondary component. Using the 2003 version of the Wilson-Van Hamme code, our new complete BV(IR)_C light curves were analyzed and the physical parameters of the system were derived. We found that BS Vul is a near-contact binary system with the primary component filling its critical Roche lobe. The luminosity enhancement on the left shoulder of the secondary minimum seen in the light curves can be explained by a persistent hot spot on the secondary, produced by mass transfer from the primary heating the facing hemisphere of the secondary component, which is consistent with the result of our period analysis. With the period decrease, BS Vul will evolve toward the contact phase. It provides another good observational example of the behavior predicted by the theory of thermal relaxation oscillations.

  19. Investigation on the Capability of a Non Linear CFD Code to Simulate Wave Propagation

    DTIC Science & Technology

    2003-02-01

    Linear CFD Code to Simulate Wave Propagation. Pedro de la Calzada, Pablo Quintana, Manuel Antonio Burgos (ITP, S.A., Parque Empresarial, Fernando avenida...). ...mechanisms presented above, simulation of unsteady aerodynamics with linear and nonlinear CFD codes is an ongoing activity within the turbomachinery industry

  20. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced users in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).

  1. I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.

    PubMed

    Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S

    2017-12-01

    The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on the culture, medicine and science of ancient China and several other countries. From the modern standpoint, I-Ching declares the importance of dyadic groups of binary numbers for Nature. The system of I-Ching is represented by tables with dyadic groups of 4 bigrams, 8 trigrams and 64 hexagrams, which were declared to be fundamental archetypes of Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids, but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using the logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching book. Copyright © 2017 Elsevier Ltd. All rights reserved.
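
    The dyadic groups referred to above are simply the sets of all n-bit binary strings for n = 2, 3 and 6; a minimal sketch (illustrative only) generates the 4 bigrams, 8 trigrams and 64 hexagrams directly:

      # Generate the dyadic groups behind the I-Ching tables: all n-bit
      # Yin/Yang (0/1) sequences for bigrams, trigrams and hexagrams.
      from itertools import product

      def dyadic_group(n):
          """Return all 2**n binary strings of length n."""
          return ["".join(bits) for bits in product("01", repeat=n)]

      bigrams   = dyadic_group(2)   #  4 symbols
      trigrams  = dyadic_group(3)   #  8 symbols
      hexagrams = dyadic_group(6)   # 64 symbols
      print(len(bigrams), len(trigrams), len(hexagrams))  # 4 8 64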

  2. Orbital motion in pre-main sequence binaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, G. H.; Prato, L.; Simon, M.

    2014-06-01

    We present results from our ongoing program to map the visual orbits of pre-main sequence (PMS) binaries in the Taurus star forming region using adaptive optics imaging at the Keck Observatory. We combine our results with measurements reported in the literature to analyze the orbital motion for each binary. We present preliminary orbits for DF Tau, T Tau S, ZZ Tau, and the Pleiades binary HBC 351. Seven additional binaries show curvature in their relative motion. Currently, we can place lower limits on the orbital periods for these systems; full solutions will be possible with more orbital coverage. Five other binaries show motion that is indistinguishable from linear motion. We suspect that these systems are bound and might show curvature with additional measurements in the future. The observations reported herein lay critical groundwork toward the goal of measuring precise masses for low-mass PMS stars.

  3. δ Scuti-type pulsation in the hot component of the Algol-type binary system BG Peg

    NASA Astrophysics Data System (ADS)

    Şenyüz, T.; Soydugan, E.

    2014-02-01

    In this study, 23 Algol-type binary systems, which were selected as candidate binaries with pulsating components, were observed at the Çanakkale Onsekiz Mart University Observatory. One of these systems was BG Peg. Its hotter component shows δ Scuti-type light variations. Physical parameters of BG Peg were derived from modelling the V light curve using the Wilson-Devinney code. The frequency analysis shows that the pulsating component of the BG Peg system oscillates in two modes with periods of 0.039 and 0.047 d. Mode identification indicates that both modes are most likely non-radial l = 2 modes.

  4. Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, M. J.

    2017-07-28

    Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.

  5. How do accretion discs break?

    NASA Astrophysics Data System (ADS)

    Dogan, Suzan

    2016-07-01

    Accretion discs are common in binary systems, and they are often found to be misaligned with respect to the binary orbit. The gravitational torque from a companion induces nodal precession in misaligned disc orbits. In this study, we first calculate whether this precession is strong enough to overcome the internal disc torques communicating angular momentum. We compare the disc precession torque with the disc viscous torque to determine whether the disc should warp or break. For typical parameters precession wins: the disc breaks into distinct planes that precess effectively independently. To check our analytical findings, we perform 3D hydrodynamical numerical simulations using the PHANTOM smoothed particle hydrodynamics code, and confirm that disc breaking is widespread and enhances accretion on to the central object. For some inclinations, the disc goes through strong Kozai cycles. Disc breaking promotes markedly enhanced and variable accretion and potentially produces high-energy particles or radiation through shocks. This would have significant implications for all binary systems: e.g. accretion outbursts in X-ray binaries and fuelling supermassive black hole (SMBH) binaries. The behaviour we have discussed in this work is relevant to a variety of astrophysical systems, for example X-ray binaries, where the disc plane may be tilted by radiation warping, SMBH binaries, where accretion of misaligned gas can create effectively random inclinations and protostellar binaries, where a disc may be misaligned by a variety of effects such as binary capture/exchange, accretion after binary formation.

  6. Variable-energy drift-tube linear accelerator

    DOEpatents

    Swenson, Donald A.; Boyd, Jr., Thomas J.; Potter, James M.; Stovall, James E.

    1984-01-01

    A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant accumulative effect in the resulting field distribution is achieved yielding a variable-energy drift-tube linear accelerator.

  7. Variable-energy drift-tube linear accelerator

    DOEpatents

    Swenson, D.A.; Boyd, T.J. Jr.; Potter, J.M.; Stovall, J.E.

    A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant accumulative effect in the resulting field distribution is achieved yielding a variable-energy drift-tube linear accelerator.

  8. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    PubMed

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.

  9. Modeling and possible implementation of self-learning equivalence-convolutional neural structures for auto-encoding-decoding and clusterization of images

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-08-01

    Self-learning equivalent-convolutional neural structures (SLECNS) for auto-coding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), which use the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments according to their structural features is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix-patterns. The model is constructed on the basis of recursive processing algorithms and the k-average method. The experimental results confirm that larger images and 2D binary fragments with a large number of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment with a 256x256 reference image and fragments of 7x7 and 21x21 elements was carried out for clustering. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and is promising. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns is therefore very important; these mechanisms rely on non-linear processing of two-dimensional spatial image-comparison functions. The SI EMs can describe signal processing during all training and recognition stages and are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are based on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency of such models, and their implementation, depends on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the types of non-linear processing and the functions used for image comparison or for adaptive-equivalental weighting of input patterns. Real model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions allows the winning neurons to be determined and the weight matrix to be adjusted. Experimental results show that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed, and the parameters and performance of such architectures are estimated.

  10. A programmable metasurface with dynamic polarization, scattering and focusing control

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-01

    Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, an inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.

  11. A programmable metasurface with dynamic polarization, scattering and focusing control

    PubMed Central

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-01-01

    Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, an inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications. PMID:27774997

  12. A programmable metasurface with dynamic polarization, scattering and focusing control.

    PubMed

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-24

    Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, an inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.

  13. Theory of epigenetic coding.

    PubMed

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  14. Performance Analysis of IEEE 802.11g Waveform Transmitted Over a Fading Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2006-06-01

    called packet binary convolutional code (PBCC), was included as an option for performance at a rate of either 5.5 or 11 Mbps. The second offshoot... and the code rate is r = k/n. A general convolutional encoder can be implemented with k shift registers and n modulo-2 adders. Higher rates can be... derived from lower-rate codes by employing "puncturing." Puncturing is a procedure for omitting some of the encoded bits in the transmitter (thus
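
    Puncturing, as described in the excerpt, simply deletes encoder output bits according to a fixed repeating pattern so that a rate-1/2 mother code transmits fewer bits per information bit; the decoder re-inserts erasures at the punctured positions. The sketch below is a generic illustration with a hypothetical keep/delete pattern, not the exact pattern specified by the 802.11 standard.

      # Generic puncturing sketch: drop encoder output bits according to a
      # repeating pattern (1 = keep, 0 = delete). With a rate-1/2 mother code,
      # keeping 4 of every 6 output bits yields an effective rate of 3/4.

      def puncture(coded_bits, pattern):
          reps = pattern * (len(coded_bits) // len(pattern) + 1)
          return [b for b, keep in zip(coded_bits, reps) if keep]

      def depuncture(received, pattern, total_len):
          """Re-insert erasures (None) where bits were deleted, for the decoder."""
          out, it = [], iter(received)
          for i in range(total_len):
              out.append(next(it) if pattern[i % len(pattern)] else None)
          return out

      pattern = [1, 1, 0, 1, 1, 0]                    # keep 4 of 6 -> rate 3/4
      coded = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0]    # 12 bits from the rate-1/2 encoder
      tx = puncture(coded, pattern)                   # 8 bits actually transmitted
      rx = depuncture(tx, pattern, len(coded))        # erasures marked as None
      print(tx, rx)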

  15. Simultaneous CCD Photometry of Two Eclipsing Binary Stars in Pegasus - Part2: BX Peg

    NASA Astrophysics Data System (ADS)

    Alton, K. B.

    2013-05-01

    BX Peg is an overcontact W UMa binary system (P = 0.280416 d) which has been rather well studied, but not fully understood due to complex changes in eclipse timings and light curve variations attributed to star spots. Photometric data collected in three bandpasses (B, V, and Ic) produced nineteen new times of minimum for BX Peg. These were used to update the linear ephemeris and further analyze potential changes in orbital periodicity by examining long-term changes in eclipse timings. In addition, synthetic fitting of light curves by Roche modeling was accomplished with the assistance of three different programs, two of which employ the Wilson-Devinney code. Different spotted solutions were necessary to achieve the best Roche model fits for BX Peg light curves collected in 2008 and 2011. Overall, the long-term decrease (9.66 × 10^-3 s yr^-1) in orbital period defined by the parabolic fit of the eclipse timing data could arise from mass transfer or angular momentum loss. The remaining residuals from observed minus predicted eclipse timings for BX Peg exhibit complex but non-random behavior. These may be related to magnetic activity cycles and/or the presence of an unseen mass influencing the times of minimum; however, additional minima need to be collected over a much longer timescale to resolve the nature of these complex changes.
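
    The long-term rate quoted above comes from the quadratic (parabolic) term of the eclipse-timing fit: if minima occur at T(E) = T0 + P·E + c·E², then the period change per cycle is 2c and the fractional rate is dP/dt = 2c/P. A minimal sketch of that fit, using synthetic timings rather than the authors' data or reduction pipeline, might look as follows.

      # Sketch: estimate dP/dt from a parabolic fit to eclipse timings.
      # T(E) = T0 + P*E + c*E**2  (T in days, E = cycle number), so
      # dP/dE = 2c and dP/dt = 2c/P (dimensionless).  Timings are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      P0 = 0.280416                       # adopted orbital period [d]
      E = np.arange(0, 20000, 500)        # cycle numbers
      true_c = -2.0e-11                   # injected quadratic coefficient [d/cycle^2]
      T = 2450000.0 + P0 * E + true_c * E**2 + rng.normal(0, 2e-4, E.size)

      c2, c1, c0 = np.polyfit(E, T, 2)    # quadratic fit to the timings
      dPdt = 2.0 * c2 / c1                # days of period change per day of time
      print(f"dP/dt = {dPdt * 86400 * 365.25:.2e} s/yr")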

  16. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
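
    For the binary (0/1) problems mentioned above, implicit enumeration is a branch-and-bound walk over the 2^n possible assignments that prunes subtrees which cannot beat the best solution found so far. The toy solver below enumerates explicitly rather than implicitly, so it only conveys the flavour of the problem class ALPS handles; it is not the ALPS algorithm and the example numbers are invented.

      # Toy 0/1 linear program: maximize c.x subject to A.x <= b, x binary.
      # Explicit enumeration of all 2**n assignments; ALPS itself uses implicit
      # enumeration (branch-and-bound with pruning), which scales far better.
      from itertools import product

      def solve_binary_lp(c, A, b):
          best_x, best_val = None, float("-inf")
          for x in product((0, 1), repeat=len(c)):
              feasible = all(
                  sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
                  for row, b_i in zip(A, b))
              if feasible:
                  val = sum(c_j * x_j for c_j, x_j in zip(c, x))
                  if val > best_val:
                      best_x, best_val = x, val
          return best_x, best_val

      # maximize 3x1 + 2x2 + 4x3  s.t.  x1 + x2 + x3 <= 2,  2x1 + x3 <= 2
      print(solve_binary_lp([3, 2, 4], [[1, 1, 1], [2, 0, 1]], [2, 2]))
      # -> ((0, 1, 1), 6)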

  17. The fourfold way of the genetic code.

    PubMed

    Jiménez-Montaño, Miguel Angel

    2009-11-01

    We describe a compact representation of the genetic code that factorizes the table in quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column-vector of bases and the corresponding row-vector V(T)=(C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the KxK group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision-tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
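
    The "outer product" construction mentioned above can be reproduced in a few lines: taking the column vector of bases against the row vector (C G U A) gives the 4x4 table of the 16 codon doublets. The sketch below is illustrative only and does not reproduce the paper's coset decomposition or the W/S and R/Y relabelling.

      # Build the 4x4 table of the 16 codon doublets as the "outer product"
      # of the base vector (C G U A) with itself, as described in the abstract.
      bases = ["C", "G", "U", "A"]

      doublet_table = [[row + col for col in bases] for row in bases]
      for row in doublet_table:
          print(" ".join(row))
      # CC CG CU CA
      # GC GG GU GA
      # UC UG UU UA
      # AC AG AU AA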

  18. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  19. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  20. Design of pseudorandom binary sequence generator using lithium-niobate-based Mach-Zehnder interferometers

    NASA Astrophysics Data System (ADS)

    Choudhary, Kuldeep; Kumar, Santosh

    2017-05-01

    The application of the electro-optic effect in lithium-niobate-based Mach-Zehnder interferometers to design a 3-bit optical pseudorandom binary sequence (PRBS) generator has been proposed, which is characterized by its simplicity of generation and stability. The proposed device is optoelectronic in nature. The PRBS generator is widely applicable for pattern generation, encryption, and coding applications in optical networks. The study is carried out by simulating the proposed device with the beam propagation method.
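
    A 3-bit PRBS is conventionally produced by a maximal-length linear feedback shift register, which cycles through all 2^3 - 1 = 7 non-zero states before repeating. The device in the paper realizes this electro-optically with Mach-Zehnder interferometers; the software sketch below, with an assumed feedback polynomial x^3 + x^2 + 1, only illustrates the kind of sequence such a generator emits.

      # Software illustration of a 3-bit PRBS (PRBS3) using a maximal-length
      # LFSR.  Feedback from the two most-significant register bits is an
      # assumed, common choice (polynomial x^3 + x^2 + 1); period = 7 bits.

      def prbs3(seed=0b111, nbits=14):
          state, out = seed, []
          for _ in range(nbits):
              new_bit = ((state >> 2) ^ (state >> 1)) & 1   # feedback tap
              out.append(state & 1)                         # output bit
              state = ((state << 1) | new_bit) & 0b111      # shift register
          return out

      print(prbs3())   # the 7-bit pattern appears twice in 14 output bits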

  1. The progenitors of supernovae Type Ia

    NASA Astrophysics Data System (ADS)

    Toonen, Silvia

    2014-09-01

    Despite the significance of Type Ia supernovae (SNeIa) in many fields in astrophysics, SNeIa lack a theoretical explanation. SNeIa are generally thought to be thermonuclear explosions of carbon/oxygen (CO) white dwarfs (WDs). The canonical scenarios involve white dwarfs reaching the Chandrasekhar mass, either by accretion from a non-degenerate companion (single-degenerate channel, SD) or by a merger of two CO WDs (double-degenerate channel, DD). The study of SNeIa progenitors is a very active field of research for binary population synthesis (BPS) studies. The strength of the BPS approach is to study the effect of uncertainties in binary evolution on the macroscopic properties of a binary population, in order to constrain binary evolutionary processes. I will discuss the expected SNeIa rate from the BPS approach and the uncertainties in their progenitor evolution, and compare with current observations. I will also discuss the results of the POPCORN project in which four BPS codes were compared to better understand the differences in the predicted SNeIa rate of the SD channel. The goal of this project is to investigate whether differences in the simulated populations are due to numerical effects or whether they can be explained by differences in the input physics. I will show which assumptions in BPS codes affect the results most and hence should be studied in more detail.

  2. On hydrodynamic phase field models for binary fluid mixtures

    NASA Astrophysics Data System (ADS)

    Yang, Xiaogang; Gong, Yuezheng; Li, Jun; Zhao, Jia; Wang, Qi

    2018-05-01

    Two classes of thermodynamically consistent hydrodynamic phase field models have been developed for binary fluid mixtures of incompressible viscous fluids of possibly different densities and viscosities. One is quasi-incompressible, while the other is incompressible. For the same binary fluid mixture of two incompressible viscous fluid components, which one is more appropriate? To answer this question, we conduct a comparative study in this paper. First, we review their derivation, conservation and energy dissipation properties and show that the quasi-incompressible model conserves both mass and linear momentum, while the incompressible one does not. We then show, in a linear stability analysis, that the quasi-incompressible model is sensitive to the density deviation of the fluid components, while the incompressible model is not. Second, we conduct a numerical investigation of coarsening or coalescence dynamics of protuberances using the two models. We find that they can predict quite different transient dynamics depending on the initial conditions and the density difference, although they predict essentially the same quasi-steady results in some cases. This study thus casts doubt on the applicability of the incompressible model to describe dynamics of binary mixtures of two incompressible viscous fluids, especially when the two fluid components have a large density deviation.

  3. A digitally controlled AGC loop circuitry for GNSS receiver chip with a binary weighted accurate dB-linear PGA

    NASA Astrophysics Data System (ADS)

    Gang, Jin; Yiqi, Zhuang; Yue, Yin; Miao, Cui

    2015-03-01

    A novel digitally controlled automatic gain control (AGC) loop circuitry for the global navigation satellite system (GNSS) receiver chip is presented. The entire AGC loop contains a programmable gain amplifier (PGA), an AGC circuit and an analog-to-digital converter (ADC), which is implemented in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process and measured. A binary-weighted approach is proposed in the PGA to achieve wide dB-linear gain control with small gain error. With binary-weighted cascaded amplifiers for coarse gain control, and parallel binary-weighted trans-conductance amplifier array for fine gain control, the PGA can provide a 64 dB dynamic range from -4 to 60 dB in 1.14 dB gain steps with a less than 0.15 dB gain error. Based on the Gaussian noise statistic characteristic of the GNSS signal, a digital AGC circuit is also proposed with low area and fast settling. The feed-backward AGC loop occupies an area of 0.27 mm2 and settles within less than 165 μs while consuming an average current of 1.92 mA at 1.8 V.
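
    The binary-weighted idea is that each control bit contributes a fixed, power-of-two multiple of the minimum gain step in dB, so the digital code maps linearly onto gain expressed in dB (hence "dB-linear"). The sketch below shows that mapping for a 6-bit word with the 1.14 dB step and -4 dB lower limit quoted in the abstract; the 3-coarse/3-fine bit split is an illustrative assumption, not the paper's exact circuit partition.

      # dB-linear mapping of a binary-weighted gain-control word.
      # Step size and range follow the abstract (-4 dB to +60 dB, 1.14 dB steps);
      # the 3 coarse / 3 fine bit split is an assumption for illustration.
      STEP_DB   = 1.14
      OFFSET_DB = -4.0

      def pga_gain_db(code: int) -> float:
          """Map a 6-bit control code (0..56 used) to gain in dB."""
          coarse = (code >> 3) & 0b111    # cascaded coarse stages: 8-step weight
          fine   = code & 0b111           # parallel fine gm-array: 1-step weight
          return OFFSET_DB + STEP_DB * (8 * coarse + fine)

      for code in (0, 1, 8, 56):
          print(code, round(pga_gain_db(code), 2), "dB")
      # 0 -> -4.0 dB,  1 -> -2.86 dB,  8 -> 5.12 dB,  56 -> 59.84 dB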

  4. The surface-induced spatial-temporal structures in confined binary alloys

    NASA Astrophysics Data System (ADS)

    Krasnyuk, Igor B.; Taranets, Roman M.; Chugunova, Marina

    2014-12-01

    This paper examines surface-induced ordering in confined binary alloys. A hyperbolic initial boundary value problem (IBVP) is used to describe a scenario of spatiotemporal ordering in a disordered phase for the concentration of one component of the binary alloy and for the order parameter, with non-linear dynamic boundary conditions. The hyperbolic model consists of two coupled second-order differential equations for the order parameter and the concentration, and it takes into account "memory" effects on the ordering of atoms and on their densities in the alloy. The boundary conditions characterize the surface rates of change of the order parameter and the concentration, which are due to surface (super)cooling on the walls confining the binary alloy. It is shown that for large times there are three classes of dynamic non-linear boundary conditions, which lead to three different types of attractor elements for the IBVP. Namely, the attractor elements are limit periodic simple shock waves with fronts of "discontinuities" Γ. If Γ is finite, the attractor contains spatiotemporal functions of relaxation type; if Γ is infinite and countable, we observe functions of pre-turbulent type; and if Γ is infinite and uncountable, we obtain functions of turbulent type.

  5. Portmanteau Constructions, Phrase Structure, and Linearization.

    PubMed

    Chan, Brian Hok-Shing

    2015-01-01

    In bilingual code-switching which involves language-pairs with contrasting head-complement orders (i.e., head-initial vs. head-final), a head may be lexicalized from both languages with its complement sandwiched in the middle. These so-called "portmanteau" sentences (Nishimura, 1985, 1986; Sankoff et al., 1990, etc.) have been attested for decades, but they had never received a systematic, formal analysis in terms of current syntactic theory before a few recent attempts (Hicks, 2010, 2012). Notwithstanding this lack of attention, these structures are in fact highly relevant to theories of linearization and phrase structure. More specifically, they challenge binary-branching (Kayne, 1994, 2004, 2005) as well as the Antisymmetry hypothesis (ibid.). Not explained by current grammatical models of code-switching, including the Equivalence Constraint (Poplack, 1980), the Matrix Language Frame Model (Myers-Scotton, 1993, 2002, etc.), and the Bilingual Speech Model (Muysken, 2000, 2013), the portmanteau construction indeed looks uncommon or abnormal, defying any systematic account. However, the recurrence of these structures in various datasets and constraints on them do call for an explanation. This paper suggests an account which lies with syntax and also with the psycholinguistics of bilingualism. Assuming that linearization is a process at the Sensori-Motor (SM) interface (Chomsky, 2005, 2013), this paper sees that word order is not fixed in a syntactic tree but it is set in the production process, and much information of word order rests in the processor, for instance, outputting a head before its complement (i.e., head-initial word order) or the reverse (i.e., head-final word order). As for the portmanteau construction, it is the output of bilingual speakers co-activating two sets of head-complement orders which summon the phonetic forms of the same word in both languages. Under this proposal, the underlying structure of a portmanteau construction is as simple as an XP in which a head X merges with its complement YP and projects an XP (i.e., X YP → [XP X YP]).

  6. Portmanteau Constructions, Phrase Structure, and Linearization

    PubMed Central

    Chan, Brian Hok-Shing

    2015-01-01

    In bilingual code-switching which involves language-pairs with contrasting head-complement orders (i.e., head-initial vs. head-final), a head may be lexicalized from both languages with its complement sandwiched in the middle. These so-called “portmanteau” sentences (Nishimura, 1985, 1986; Sankoff et al., 1990, etc.) have been attested for decades, but they had never received a systematic, formal analysis in terms of current syntactic theory before a few recent attempts (Hicks, 2010, 2012). Notwithstanding this lack of attention, these structures are in fact highly relevant to theories of linearization and phrase structure. More specifically, they challenge binary-branching (Kayne, 1994, 2004, 2005) as well as the Antisymmetry hypothesis (ibid.). Not explained by current grammatical models of code-switching, including the Equivalence Constraint (Poplack, 1980), the Matrix Language Frame Model (Myers-Scotton, 1993, 2002, etc.), and the Bilingual Speech Model (Muysken, 2000, 2013), the portmanteau construction indeed looks uncommon or abnormal, defying any systematic account. However, the recurrence of these structures in various datasets and constraints on them do call for an explanation. This paper suggests an account which lies with syntax and also with the psycholinguistics of bilingualism. Assuming that linearization is a process at the Sensori-Motor (SM) interface (Chomsky, 2005, 2013), this paper sees that word order is not fixed in a syntactic tree but it is set in the production process, and much information of word order rests in the processor, for instance, outputting a head before its complement (i.e., head-initial word order) or the reverse (i.e., head-final word order). As for the portmanteau construction, it is the output of bilingual speakers co-activating two sets of head-complement orders which summon the phonetic forms of the same word in both languages. Under this proposal, the underlying structure of a portmanteau construction is as simple as an XP in which a head X merges with its complement YP and projects an XP (i.e., X YP → [XP X YP]). PMID:26733894

  7. Period variation studies of six contact binaries in M4

    NASA Astrophysics Data System (ADS)

    Rukmini, Jagirdar; Shanti Priya, Devarapalli

    2018-04-01

    We present the first period study of six contact binaries in the closest globular cluster, M4, using data collected from June 1995 to June 2009 and October 2012 to September 2013. New times of minima are determined for all six variables, and eclipse timing (O-C) diagrams along with quadratic fits are presented. For all the variables, the study of the (O-C) variations reveals changes in the periods. In addition, the fundamental parameters for four of the contact binaries, obtained using the Wilson-Devinney code (v2003), are presented. Planned observations of these binaries using the 3.6-m Devasthal Optical Telescope (DOT) and the 4-m International Liquid Mirror Telescope (ILMT) operated by the Aryabhatta Research Institute of Observational Sciences (ARIES; Nainital) can throw light on their evolutionary status through long-term period variation studies.

  8. The fidelity of Kepler eclipsing binary parameters inferred by the neural network

    NASA Astrophysics Data System (ADS)

    Holanda, N.; da Silva, J. R. P.

    2018-04-01

    This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves, using neural network models. We selected a random sample of 78 systems, from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cos ω and e sin ω, but orbital inclination is clearly underestimated in the neural network tests.

  9. The fidelity of Kepler eclipsing binary parameters inferred by the neural network

    NASA Astrophysics Data System (ADS)

    Holanda, N.; da Silva, J. R. P.

    2018-07-01

    This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves using neural network models. We selected a random sample with 78 systems, from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light-curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cosω and e sinω, but orbital inclination is clearly underestimated in neural network tests.

  10. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is, however, limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) how well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.

  11. Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan

    2017-01-01

    I present the results of numerical simulations of mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu's self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes, Flow-ER with a fixed cylindrical grid, and Octo-tiger with an AMR-capable Cartesian grid. The detailed comparison of the simulations suggests agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.

  12. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  13. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  14. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 for rate 1/2 guarantees that the linear-minimum-distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance increasing linearly in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  15. Visualising interacting binaries in 3D

    NASA Astrophysics Data System (ADS)

    Hynes, R. I.

    2002-01-01

    I have developed a code which allows images to be produced of a variety of interacting binaries for any system parameters. The resulting images are not only helpful in visualising the geometry of a given system but are also helpful in talks and educational work. I would like to acknowledge financial support from the Leverhulme Trust, and to thank Dan Rolfe for many discussions on how to represent interacting binaries and the users of BinSim who have provided valuable testing and feedback. BinSim would not have been possible without the efforts of Brian Paul and others responsible for the Mesa 3-D graphics library.

  16. Monte Carlo study of four dimensional binary hard hypersphere mixtures

    NASA Astrophysics Data System (ADS)

    Bishop, Marvin; Whitlock, Paula A.

    2012-01-01

    A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
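
    The elementary geometric test behind such a simulation is the hard-core overlap check between two hyperspheres; a minimal sketch in four dimensions with toy values, not the authors' multithreaded code:

        import numpy as np

        def overlaps(r1, r2, d1, d2):
            """True if two hard hyperspheres with centres r1, r2 (4D) and diameters d1, d2 overlap."""
            return np.linalg.norm(np.asarray(r1) - np.asarray(r2)) < 0.5 * (d1 + d2)

        # Example for a binary mixture with diameter ratio 0.5
        print(overlaps([0, 0, 0, 0], [0.6, 0, 0, 0], 1.0, 0.5))   # True: 0.6 < 0.75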

  17. EXODUS II: A finite element data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as for code-to-code data transfer. An EXODUS II data file is a random access, machine-independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  18. Practical applications of digital integrated circuits. Part 2: Minimization techniques, code conversion, flip-flops, and asynchronous circuits

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Here, the 7400 line of transistor to transistor logic (TTL) devices is emphasized almost exclusively where hardware is concerned. However, it should be pointed out that the logic theory contained herein applies to all hardware. Binary numbers, simplification of logic circuits, code conversion circuits, basic flip-flop theory, details about series 54/7400, and asynchronous circuits are discussed.

  19. The Linear Mixing Approximation for Planetary Ices

    NASA Astrophysics Data System (ADS)

    Bethkenhagen, M.; Meyer, E. R.; Hamel, S.; Nettelmann, N.; French, M.; Scheibe, L.; Ticknor, C.; Collins, L. A.; Kress, J. D.; Fortney, J. J.; Redmer, R.

    2017-12-01

    We investigate the validity of the widely used linear mixing approximation for the equations of state (EOS) of planetary ices, which are thought to dominate the interior of the ice giant planets Uranus and Neptune. For that purpose we perform density functional theory molecular dynamics simulations using the VASP code.[1] In particular, we compute 1:1 binary mixtures of water, ammonia, and methane, as well as their 2:1:4 ternary mixture at pressure-temperature conditions typical for the interior of Uranus and Neptune.[2,3] In addition, a new ab initio EOS for methane is presented. The linear mixing approximation is verified for the conditions present inside Uranus ranging up to 10 Mbar based on the comprehensive EOS data set. We also calculate the diffusion coefficients for the ternary mixture along different Uranus interior profiles and compare them to the values of the pure compounds. We find that deviations of the linear mixing approximation from the real mixture are generally small; for the EOS they fall within about 4% uncertainty while the diffusion coefficients deviate by up to 20%. The EOS of planetary ices are applied to adiabatic models of Uranus. It turns out that a deep interior of almost pure ices is consistent with the gravity field data, in which case the planet becomes rather cold (T_core ≈ 4000 K). [1] G. Kresse and J. Hafner, Physical Review B 47, 558 (1993). [2] R. Redmer, T.R. Mattsson, N. Nettelmann and M. French, Icarus 211, 798 (2011). [3] N. Nettelmann, K. Wang, J. J. Fortney, S. Hamel, S. Yellamilli, M. Bethkenhagen and R. Redmer, Icarus 275, 107 (2016).
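
    A minimal sketch of the linear (additive-volume) mixing rule being tested, assuming it is applied to mass fractions x_i and pure-compound densities rho_i evaluated at the same pressure and temperature; the numbers below are placeholders, not the VASP results.

        def linear_mixing_density(mass_fractions, pure_densities):
            """Additive-volume mixing: 1/rho_mix = sum_i x_i / rho_i at fixed P and T."""
            return 1.0 / sum(x / rho for x, rho in zip(mass_fractions, pure_densities))

        # Toy 1:1 water-ammonia mixture by mole (mass fractions ~0.51/0.49), toy densities in g/cm^3
        print(linear_mixing_density([0.51, 0.49], [2.5, 2.0]))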

  20. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  1. MBSSAS: A code for the computation of margules parameters and equilibrium relations in binary solid-solution aqueous-solution systems

    USGS Publications Warehouse

    Glynn, P.D.

    1991-01-01

    The computer code MBSSAS uses two-parameter Margules-type excess-free-energy of mixing equations to calculate thermodynamic equilibrium, pure-phase saturation, and stoichiometric saturation states in binary solid-solution aqueous-solution (SSAS) systems. Lippmann phase diagrams, Roozeboom diagrams, and distribution-coefficient diagrams can be constructed from the output data files, and also can be displayed by MBSSAS (on IBM-PC compatible computers). MBSSAS also will calculate accessory information, such as the location of miscibility gaps, spinodal gaps, critical-mixing points, alyotropic extrema, Henry's law solid-phase activity coefficients, and limiting distribution coefficients. Alternatively, MBSSAS can use such information (instead of the Margules, Guggenheim, or Thompson and Waldbaum excess-free-energy parameters) to calculate the appropriate excess-free-energy of mixing equation for any given SSAS system. © 1991.

  2. Extra Solar Planet Science With a Non Redundant Mask

    NASA Astrophysics Data System (ADS)

    Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi

    2017-01-01

    To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high-contrast imaging. I simulated NIRISS data of stars with and without planets, and ran these through the code that measures interferometric image properties to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.

  3. Thermodynamic properties of gaseous fluorocarbons and isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon

    NASA Technical Reports Server (NTRS)

    Talcott, N. A., Jr.

    1977-01-01

    Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8) and pressure-enthalpy diagrams are presented for CBrF3.

  4. A Chemical Alphabet for Macromolecular Communications.

    PubMed

    Giannoukos, Stamatios; McGuiness, Daniel Tunç; Marshall, Alan; Smith, Jeremy; Taylor, Stephen

    2018-06-08

    Molecular communications in macroscale environments is an emerging field of study driven by the intriguing prospect of sending coded information over olfactory networks. For the first time, this article reports two signal modulation techniques (on-off keying-OOK, and concentration shift keying-CSK) which have been used to encode and transmit digital information using odors over distances of 1-4 m. Molecular transmission of digital data was experimentally investigated for the letter "r" with a binary value of 01110010 (ASCII) for a gas stream network channel (up to 4 m) using mass spectrometry (MS) as the main detection-decoding system. The generation and modulation of the chemical signals was achieved using an automated odor emitter (OE) which is based on the controlled evaporation of a chemical analyte and its diffusion into a carrier gas stream. The chemical signals produced propagate within a confined channel to reach the demodulator-MS. Experiments were undertaken for a range of volatile organic compounds (VOCs) with different diffusion coefficient values in air at ambient conditions. Representative compounds investigated include acetone, cyclopentane, and n-hexane. For the first time, the binary code ASCII (American Standard Code for Information Interchange) is combined with chemical signaling to generate a molecular representation of the English alphabet. Transmission experiments of fixed-width molecular signals corresponding to letters of the alphabet over varying distances are shown. A binary message corresponding to the word "ion" was synthesized using chemical signals and transmitted within a physical channel over a distance of 2 m.
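
    The digital side of the scheme is ordinary ASCII encoding followed by on-off keying; a small sketch (timing and chemistry omitted) that reproduces the bit pattern quoted above:

        def word_to_ook_bits(word):
            """Map each character to its 8-bit ASCII code; 1 = release odor, 0 = no release."""
            bits = []
            for ch in word:
                bits.extend(int(b) for b in format(ord(ch), "08b"))
            return bits

        print(format(ord("r"), "08b"))   # '01110010', the value quoted in the abstract
        print(word_to_ook_bits("ion"))   # 24-bit stream for the transmitted word "ion"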

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sánchez-Salcedo, F. J.; Chametla, Raul O., E-mail: jsanchez@astro.unam.mx

    Using time-dependent linear theory, we investigate the morphology of the gravitational wake induced by a binary whose center of mass moves at velocity V_cm against a uniform background of gas. For simplicity, we assume that the components of the binary are on circular orbits about their common center of mass. The consequences of dynamical friction are twofold. First, gas dynamical friction may drag the center of mass of the binary and cause the binary to migrate. Second, drag forces also induce a braking torque, which causes the orbits of the components of the binary to shrink. We compute the drag forces acting on one component of the binary due to the gravitational interaction with its own wake. We show that the dynamical friction force responsible for decelerating the center of mass of the binary is smaller than it is in the point-mass case because of the loss of gravitational focusing. We show that the braking internal torque depends on the Mach numbers of each binary component about their center of mass, and also on the Mach number of the center of mass of the binary. In general, the internal torque decreases with increasing velocity of the binary relative to the ambient gas cloud. However, this is not always the case. We also discuss the relevance of our results to the period distribution of binaries.

  6. The modelling of heat, mass and solute transport in solidification systems

    NASA Technical Reports Server (NTRS)

    Voller, V. R.; Brent, A. D.; Prakash, C.

    1989-01-01

    The aim of this paper is to explore the range of possible one-phase models of binary alloy solidification. Starting from a general two-phase description, based on the two-fluid model, three limiting cases are identified which result in one-phase models of binary systems. Each of these models can be readily implemented in standard single phase flow numerical codes. Differences between predictions from these models are examined. In particular, the effects of the models on the predicted macro-segregation patterns are evaluated.

  7. Binary CFG Rebuilt of Self-Modifying Codes

    DTIC Science & Technology

    2016-10-03

    Final report covering 12 May 2014 to 11 May 2016. Only fragments of the abstract survive in this record: antivirus software is based on binary signatures; a popular method in industry to analyze malware is dynamic analysis in a sandbox; alternatively, the authors apply a hybrid method combining concolic testing (dynamic symbolic ...).

  8. surrkick: Black-hole kicks from numerical-relativity surrogate models

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Hébert, François; Stein, Leo C.

    2018-04-01

    surrkick quickly and reliably extracts the recoils imparted to generic, precessing black hole binaries. It uses a numerical-relativity surrogate model to obtain the gravitational waveform for a given set of binary parameters, and from this waveform it directly integrates the gravitational-wave linear momentum flux. This entirely bypasses the need for the fitting formulae that are typically used to model black-hole recoils in astrophysical contexts.

  9. Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1975-01-01

    It is shown that (n sub 0, k sub 0) convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate k sub 0/n sub 0 and same number 2(sup Mk sub 0) of encoder states, where M is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with k sub 0 and n sub 0 relatively prime, for all Mk sub 0 less than or equal to 6 and for R = 1/2, 1/3, 1/4, 2/3. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems.

  10. Coding and decoding in a point-to-point communication using the polarization of the light beam.

    PubMed

    Kavehvash, Z; Massoumian, F

    2008-05-10

    A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by the detection of the received beam's polarization. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication where coding and decoding is done through polarization will be discussed.

  11. LIGO GW150914 and GW151226 gravitational wave detection and generalized gravitation theory (MOG)

    NASA Astrophysics Data System (ADS)

    Moffat, J. W.

    2016-12-01

    The nature of gravitational waves in a generalized gravitation theory is investigated. The linearized field equations, the metric tensor quadrupole moment power, and the decrease in radius of an inspiralling binary system of two compact objects are derived. The generalized Kerr metric describing a spinning black hole is determined by its mass M and the spin parameter a = cS/GM^2. The LIGO-Virgo collaboration data are fitted with smaller binary black hole masses, in agreement with the current electromagnetically observed upper bound on X-ray binary black hole masses, M ≲ 10 M⊙.

  12. Generalized Bezout's Theorem and its applications in coding theory

    NASA Technical Reports Server (NTRS)

    Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.

  13. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    PubMed

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America

  14. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  15. Linear Stability of Binary Alloy Solidification for Unsteady Growth Rates

    NASA Technical Reports Server (NTRS)

    Mazuruk, K.; Volz, M. P.

    2010-01-01

    An extension of the Mullins and Sekerka (MS) linear stability analysis to the unsteady growth rate case is considered for dilute binary alloys. In particular, the stability of the planar interface during the initial solidification transient is studied in detail numerically. The rapid solidification case, when the system is traversing through the unstable region defined by the MS criterion, has also been treated. It has been observed that the onset of instability is quite accurately defined by the "quasi-stationary MS criterion", when the growth rate and other process parameters are taken as constants at a particular time of the growth process. A singular behavior of the governing equations for the perturbed quantities at the constitutional supercooling demarcation line has been observed. However, when the solidification process, during its transient, crosses this demarcation line, a planar interface is stable according to the linear analysis performed.

  16. Wind-accelerated orbital evolution in binary systems with giant stars

    NASA Astrophysics Data System (ADS)

    Chen, Zhuo; Blackman, Eric G.; Nordhaus, Jason; Frank, Adam; Carroll-Nellenback, Jonathan

    2018-01-01

    Using 3D radiation-hydrodynamic simulations and analytic theory, we study the orbital evolution of asymptotic giant branch (AGB) binary systems for various initial orbital separations and mass ratios, and thus different initial accretion modes. The time evolution of binary separations and orbital periods are calculated directly from the averaged mass-loss rate, accretion rate and angular momentum loss rate. We separately consider spin-orbit synchronized and zero-spin AGB cases. We find that the angular momentum carried away by the mass loss together with the mass transfer can effectively shrink the orbit when accretion occurs via wind-Roche lobe overflow. In contrast, the larger fraction of mass lost in Bondi-Hoyle-Lyttleton accreting systems acts to enlarge the orbit. Synchronized binaries tend to experience stronger orbital period decay in close binaries. We also find that orbital period decay is faster when we account for the non-linear evolution of the accretion mode as the binary starts to tighten. This can increase the fraction of binaries that result in common envelope, luminous red novae, Type Ia supernovae and planetary nebulae with tight central binaries. The results also imply that planets in the habitable zone around white dwarfs are unlikely to be found.

  17. P-Code-Enhanced Encryption-Mode Processing of GPS Signals

    NASA Technical Reports Server (NTRS)

    Young, Lawrence; Meehan, Thomas; Thomas, Jess B.

    2003-01-01

    A method of processing signals in a Global Positioning System (GPS) receiver has been invented to enable the receiver to recover some of the information that is otherwise lost when GPS signals are encrypted at the transmitters. The need for this method arises because, at the option of the military, precision GPS code (P-code) is sometimes encrypted by a secret binary code, denoted the A code. Authorized users can recover the full signal with knowledge of the A-code. However, even in the absence of knowledge of the A-code, one can track the encrypted signal by use of an estimate of the A-code. The present invention is a method of making and using such an estimate. In comparison with prior such methods, this method makes it possible to recover more of the lost information and obtain greater accuracy.

  18. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.

  19. SPECKLE INTERFEROMETRY AT SOAR IN 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2015-08-15

    The results of speckle interferometric observations at the Southern Astrophysical Research (SOAR) telescope in 2014 are given. A total of 1641 observations were taken, yielding 1636 measurements of 1218 resolved binary and multiple stars and 577 non-resolutions of 441 targets. We resolved for the first time 56 pairs, including some nearby astrometric or spectroscopic binaries and ten new subsystems in previously known visual binaries. The calibration of the data is checked by linear fits to the positions of 41 wide binaries observed at SOAR over several seasons. The typical calibration accuracy is 0.°1 in angle and 0.3% in pixel scale, while the measurement errors are on the order of 3 mas. The new data are used here to compute 194 binary star orbits, 148 of which are improvements on previous orbital solutions and 46 are first-time orbits.

  20. Asymmetric distances for binary embeddings.

    PubMed

    Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana

    2014-01-01

    In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
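
    A generic numpy illustration of the symmetric versus asymmetric setting described above (random toy data, not the specific distances proposed in the paper): the database signatures are binarized while the query is left real-valued.

        import numpy as np

        rng = np.random.default_rng(0)
        database = rng.standard_normal((1000, 64))   # real-valued database embeddings
        query = rng.standard_normal(64)              # real-valued query embedding

        db_codes = np.sign(database)                 # binarized database signatures (+1/-1)

        # Symmetric: binarize the query as well, then use Hamming distance
        symmetric = np.count_nonzero(np.sign(query) != db_codes, axis=1)

        # Asymmetric: keep the query real-valued and compare it directly to the binary codes
        asymmetric = np.linalg.norm(query - db_codes, axis=1)

        print(symmetric[:5], asymmetric[:5])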

  1. Eclipsing Binary V1178 Tau: A Reddening Independent Determination of the Age and Distance to NGC 1817

    NASA Astrophysics Data System (ADS)

    Hedlund, Anne; Sandquist, Eric L.; Arentoft, Torben; Brogaard, Karsten; Grundahl, Frank; Stello, Dennis; Bedin, Luigi R.; Libralato, Mattia; Malavolta, Luca; Nardiello, Domenico; Molenda-Zakowicz, Joanna; Vanderburg, Andrew

    2018-06-01

    V1178 Tau is a double-lined spectroscopic eclipsing binary in NGC1817, one of the more massive clusters observed in the K2 mission. We have determined the orbital period (P = 2.20 d) for the first time, and we model radial velocity measurements from the HARPS and ALFOSC spectrographs, light curves collected by Kepler, and ground based light curves using the Eclipsing Light Curve code (ELC, Orosz & Hauschildt 2000). We present masses and radii for the stars in the binary, allowing for a reddening-independent means of determining the cluster age. V1178 Tau is particularly useful for calculating the age of the cluster because the stars are close to the cluster turnoff, providing a more precise age determination. Furthermore, because one of the stars in the binary is a delta Scuti variable, the analysis provides improved insight into their pulsations.

  2. Topology of black hole binary-single interactions

    NASA Astrophysics Data System (ADS)

    Samsing, Johan; Ilan, Teva

    2018-05-01

    We present a study on how the outcomes of binary-single interactions involving three black holes (BHs) distribute as a function of the initial conditions; a distribution we refer to as the topology. Using a N-body code that includes BH finite sizes and gravitational wave (GW) emission in the equation of motion (EOM), we perform more than a million binary-single interactions to explore the topology of both the Newtonian limit and the limit at which general relativistic (GR) effects start to become important. From these interactions, we are able to describe exactly under which conditions BH collisions and eccentric GW capture mergers form, as well as how GR in general modifies the Newtonian topology. This study is performed on both large- and microtopological scales. We further describe how the inclusion of GW emission in the EOM naturally leads to scenarios where the binary-single system undergoes two successive GW mergers.

  3. Ffuzz: Towards full system high coverage fuzz testing on binary executables.

    PubMed

    Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug-finding tool, Ffuzz, built on top of fuzz testing and selective symbolic execution. It targets full-system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck in either fuzz testing or symbolic execution. We also propose two key optimizations to improve the efficiency of full-system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory-corruption-vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.

  4. The Effects of Single and Close Binary Evolution on the Stellar Mass Function

    NASA Astrophysics Data System (ADS)

    Schneider, R. N. F.; Izzard, G. R.; de Mink, S.; Langer, N.; Stolte, A.; de Koter, A.; Gvaramadze, V. V.; Hussmann, B.; Liermann, A.; Sana, H.

    2013-06-01

    Massive stars are almost exclusively born in star clusters, where stars in a cluster are expected to be born quasi-simultaneously and with the same chemical composition. The distribution of their birth masses favors lower over higher stellar masses, such that the most massive stars are rare, and the existence of a stellar upper mass limit is still debated. The majority of massive stars are born as members of close binary systems and most of them will exchange mass with a close companion during their lifetime. We explore the influence of single and binary star evolution on the high mass end of the stellar mass function using a rapid binary evolution code. We apply our results to two massive Galactic star clusters and show how the shape of their mass functions can be used to determine cluster ages and comment on the stellar upper mass limit in view of our new findings.

  5. Examining the Error of Mis-Specifying Nonlinear Confounding Effect With Application on Accelerometer-Measured Physical Activity.

    PubMed

    Lee, Paul H

    2017-06-01

    Some confounders are nonlinearly associated with dependent variables, but they are often adjusted using a linear term. The purpose of this study was to examine the error of mis-specifying the nonlinear confounding effect. We carried out a simulation study to investigate the effect of adjusting for a nonlinear confounder in the estimation of a causal relationship between the exposure and outcome in 3 ways: using a linear term, binning into 5 equal-size categories, or using a restricted cubic spline of the confounder. Continuous, binary, and survival outcomes were simulated. We examined the confounder across varying measurement error. In addition, we performed a real data analysis examining the 3 strategies to handle the nonlinear effects of accelerometer-measured physical activity in the National Health and Nutrition Examination Survey 2003-2006 data. The mis-specification of a nonlinear confounder had little impact on causal effect estimation for continuous outcomes. For binary and survival outcomes, this mis-specification introduced bias, which could be eliminated using spline adjustment only when there is small measurement error in the confounder. Real data analysis showed that the associations between high blood pressure, high cholesterol, and diabetes and mortality adjusted for physical activity with a restricted cubic spline were about 3% to 11% larger than their counterparts adjusted with a linear term. For continuous outcomes, confounders with nonlinear effects can be adjusted with a linear term. Spline adjustment should be used for binary and survival outcomes on confounders with small measurement error.
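
    A toy version of the comparison in the abstract, under assumed simulation settings (quadratic confounder effect, logistic outcome) that are not the paper's: the confounder is adjusted either with a linear term or with five equal-size bins.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        z = rng.normal(size=n)                    # confounder (e.g., physical activity)
        x = 0.5 * z + rng.normal(size=n)          # exposure associated with the confounder
        logit = 0.3 * x + z**2 - 1.0              # nonlinear confounder effect on the outcome
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        def exposure_coef(design):
            """Fit a logistic model and return the coefficient of the exposure (first column)."""
            return LogisticRegression(max_iter=1000).fit(design, y).coef_[0][0]

        bins = np.digitize(z, np.quantile(z, [0.2, 0.4, 0.6, 0.8]))   # 5 equal-size categories
        bin_dummies = np.eye(5)[bins][:, 1:]                          # 4 dummy columns

        print("linear adjustment :", exposure_coef(np.column_stack([x, z])))
        print("binned adjustment :", exposure_coef(np.column_stack([x, bin_dummies])))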

  6. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.

  7. Simulated Assessment of Interference Effects in Direct Sequence Spread Spectrum (DSSS) QPSK Receiver

    DTIC Science & Technology

    2014-03-27

    Only fragments of this record's text survive, drawn from the report's acronym list and figures: BER (bit error rate), BPSK (binary phase shift keying), CDMA (code division multiple access), CSI (comb spectrum interference), CW (continuous wave), DPSK (differential ...). The spreading code shared by CDMA and GPS systems is a Gold code, generated by a modulo-2 operation between two different preferred m-sequences. [Figure 3.26 caption: comparison of the input SNR_Sim and the output SNR_Out of the band-pass RF filter (SNR_RF) ...]

  8. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m sub 0 binary memory cells and k sub 0 (greater than m sub 0) inputs, a state diagram of 2(sup k sub 0) states was used for the transfer function bound. A reduced state diagram of 2(sup m sub 0) + 1 states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  9. Did ASAS-SN Kill the Supermassive Black Hole Binary Candidate PG1302-102?

    NASA Astrophysics Data System (ADS)

    Liu, Tingting; Gezari, Suvi; Miller, M. Coleman

    2018-05-01

    Graham et al. reported a periodically varying quasar and supermassive black hole binary candidate, PG1302-102 (hereafter PG1302), which was discovered in the Catalina Real-time Transient Survey (CRTS). Its combined Lincoln Near-Earth Asteroid Research (LINEAR) and CRTS optical light curve is well fitted to a sinusoid of an observed period of ≈1884 days and well modeled by the relativistic Doppler boosting of the secondary mini-disk. However, the LINEAR+CRTS light curve from MJD ≈52,700 to MJD ≈56,400 covers only ∼2 cycles of periodic variation, which is a short baseline that can be highly susceptible to normal, stochastic quasar variability. In this Letter, we present a reanalysis of PG1302 using the latest light curve from the All-sky Automated Survey for Supernovae (ASAS-SN), which extends the observational baseline to the present day (MJD ≈58,200), and adopting a maximum likelihood method that searches for a periodic component in addition to stochastic quasar variability. When the ASAS-SN data are combined with the previous LINEAR+CRTS data, the evidence for periodicity decreases. For genuine periodicity one would expect that additional data would strengthen the evidence, so the decrease in significance may be an indication that the binary model is disfavored.

  10. Orbits for 18 Visual Binaries and Two Double-line Spectroscopic Binaries Observed with HRCAM on the CTIO SOAR 4 m Telescope, Using a New Bayesian Orbit Code Based on Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.

    2017-11-01

    We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ˜ 0.1 {M}⊙ . Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope+HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data (“orbital parallax”), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from 10 to seven dimensions—including parallax—in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
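
    A generic random-walk Metropolis-Hastings sketch of the kind of sampler described above (not the authors' code); log_post stands for any log-posterior over the reduced parameter vector, exercised here on a toy three-parameter Gaussian.

        import numpy as np

        def metropolis(log_post, theta0, step, n_samples, seed=0):
            """Random-walk Metropolis sampler returning the chain of parameter samples."""
            rng = np.random.default_rng(seed)
            theta = np.array(theta0, dtype=float)
            log_p = log_post(theta)
            chain = np.empty((n_samples, theta.size))
            for i in range(n_samples):
                proposal = theta + step * rng.standard_normal(theta.size)
                log_p_new = log_post(proposal)
                if np.log(rng.random()) < log_p_new - log_p:   # accept with prob min(1, ratio)
                    theta, log_p = proposal, log_p_new
                chain[i] = theta
            return chain

        chain = metropolis(lambda t: -0.5 * np.sum(t**2), [0.0, 0.0, 0.0], 0.5, 2000)
        print(chain.mean(axis=0))   # close to zero for the toy posterior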

  11. Fundamental parameters of massive stars in multiple systems: The cases of HD 17505A and HD 206267A

    NASA Astrophysics Data System (ADS)

    Raucq, F.; Rauw, G.; Mahy, L.; Simón-Díaz, S.

    2018-06-01

    Context. Many massive stars are part of binary or higher multiplicity systems. The present work focusses on two higher multiplicity systems: HD 17505A and HD 206267A. Aims: Determining the fundamental parameters of the components of the inner binary of these systems is mandatory to quantify the impact of binary or triple interactions on their evolution. Methods: We analysed high-resolution optical spectra to determine new orbital solutions of the inner binary systems. After subtracting the spectrum of the tertiary component, a spectral disentangling code was applied to reconstruct the individual spectra of the primary and secondary. We then analysed these spectra with the non-LTE model atmosphere code CMFGEN to establish the stellar parameters and the CNO abundances of these stars. Results: The inner binaries of these systems have eccentric orbits with e 0.13 despite their relatively short orbital periods of 8.6 and 3.7 days for HD 17505Aa and HD 206267Aa, respectively. Slight modifications of the CNO abundances are found in both components of each system. The components of HD 17505Aa are both well inside their Roche lobe, whilst the primary of HD 206267Aa nearly fills its Roche lobe around periastron passage. Whilst the rotation of the primary of HD 206267Aa is in pseudo-synchronization with the orbital motion, the secondary displays a rotation rate that is higher. Conclusions: The CNO abundances and properties of HD 17505Aa can be explained by single star evolutionary models accounting for the effects of rotation, suggesting that this system has not yet experienced binary interaction. The properties of HD 206267Aa suggest that some intermittent binary interaction might have taken place during periastron passages, but is apparently not operating anymore. Based on observations collected with the TIGRE telescope (La Luz, Mexico), the 1.93 m telescope at Observatoire de Haute Provence (France), the Nordic Optical Telescope at the Observatorio del Roque de los Muchachos (La Palma, Spain), and the Canada-France-Hawaii telescope (Mauna Kea, Hawaii).

  12. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.

  13. Progressive video coding for noisy channels

    NASA Astrophysics Data System (ADS)

    Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.

    1998-10-01

    We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rate and bad channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.

  14. Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV

    NASA Astrophysics Data System (ADS)

    Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.

    2005-06-01

    The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10^-5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model combined with the pre-equilibrium and evaporation models were used for the calculations. The number of defects produced by recoil nuclei in materials was calculated with the Norgett-Robinson-Torrens model and with an approach combining calculations using the binary collision approximation model and the results of molecular dynamics simulations. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code and the IOTA code.
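
    For reference, the Norgett-Robinson-Torrens (NRT) estimate mentioned above gives the number of stable displacements from the damage energy T_dam and the displacement threshold E_d; a worked example using a commonly quoted threshold for tungsten (E_d of about 90 eV, an assumption here):

        def nrt_displacements(t_dam_ev, e_d_ev):
            """NRT model: 0 below E_d, 1 up to 2.5*E_d, then 0.8*T_dam/(2*E_d)."""
            if t_dam_ev < e_d_ev:
                return 0.0
            if t_dam_ev < 2.5 * e_d_ev:
                return 1.0
            return 0.8 * t_dam_ev / (2.0 * e_d_ev)

        print(nrt_displacements(10_000.0, 90.0))   # ~44 displacements for a 10 keV recoil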

  15. ASTROP2-LE: A Mistuned Aeroelastic Analysis System Based on a Two Dimensional Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral

    2002-01-01

    An aeroelastic analysis system for flutter and forced response analysis of turbomachines based on a two-dimensional linearized unsteady Euler solver has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two dimensional aerodynamic model with a three dimensional structural model. The code was modified to include forced response capability. The formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE for ASTROP2 code using Linearized Euler aerodynamics, is validated by comparing the predictions with those obtained using linear unsteady aerodynamic solutions.

  16. QCA Gray Code Converter Circuits Using LTEx Methodology

    NASA Astrophysics Data System (ADS)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-07-01

    Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm considered capable of continuing computation in the deep sub-micron regime. QCA realizations of several multilevel arithmetic logic unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, no attention has been paid to the QCA instantiation of Gray code converters, which are anticipated to be used in 8-bit, 16-bit, 32-bit or wider addressable machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. A defect-tolerance analysis of the two-input LTEx module is carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area-delay-efficient B2G and G2B converters that can be used exclusively in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay and Cost α for the n-bit converter layouts.
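
    The Boolean relations that the B2G and G2B converter circuits implement are the standard reflected binary Gray code conversions, shown here in Python for clarity (a functional reference only, not a QCA layout):

        def binary_to_gray(b):
            return b ^ (b >> 1)

        def gray_to_binary(g):
            b = 0
            while g:
                b ^= g
                g >>= 1
            return b

        assert all(gray_to_binary(binary_to_gray(n)) == n for n in range(256))
        print([format(binary_to_gray(n), "04b") for n in range(4)])   # ['0000', '0001', '0011', '0010']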

  17. QCA Gray Code Converter Circuits Using LTEx Methodology

    NASA Astrophysics Data System (ADS)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-04-01

    Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm considered capable of continuing computation in the deep sub-micron regime. QCA realizations of several multilevel arithmetic logic unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, no attention has been paid to the QCA instantiation of Gray code converters, which are anticipated to be used in 8-bit, 16-bit, 32-bit or wider addressable machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. A defect-tolerance analysis of the two-input LTEx module is carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area-delay-efficient B2G and G2B converters that can be used exclusively in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay and Cost α for the n-bit converter layouts.

  18. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

    We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 − P2 when P1 − P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.

  19. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing

    PubMed Central

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.

    2014-01-01

    We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051

  20. Photometric Solutions of Three Eclipsing Binary Stars Observed from Dome A, Antarctica

    NASA Astrophysics Data System (ADS)

    Liu, N.; Fu, J. N.; Zong, W.; Wang, L. Z.; Uddin, S. A.; Zhang, X. B.; Zhang, Y. P.; Cang, T. Q.; Li, G.; Yang, Y.; Yang, G. C.; Mould, J.; Morrell, N.

    2018-04-01

    Based on spectroscopic observations for the eclipsing binaries CSTAR 036162 and CSTAR 055495 with the WiFeS/2.3 m telescope at SSO and CSTAR 057775 with the Mage/Magellan I at LCO in 2017, stellar parameters are derived. More than 100 nights of almost-continuous light curves reduced from the time-series photometric observations by CSTAR at Dome A of Antarctic in i in 2008 and in g and r in 2009, respectively, are applied to find photometric solutions for the three binaries with the Wilson–Devinney code. The results show that CSTAR 036162 is a detached configuration with the mass ratio q = 0.354 ± 0.0009, while CSTAR 055495 is a semi-detached binary system with the unusual q = 0.946 ± 0.0006, which indicates that CSTAR 055495 may be a rare binary system with mass ratio close to one and the secondary component filling its Roche Lobe. This implies that a mass-ratio reversal has just occurred and CSTAR 055495 is in a rapid mass-transfer stage. Finally, CSTAR 057775 is believed to be an A-type W UMa binary with q = 0.301 ± 0.0008 and a fill-out factor of f = 0.742(8).

  1. Describing three-class task performance: three-class linear discriminant analysis and three-class ROC analysis

    NASA Astrophysics Data System (ADS)

    He, Xin; Frey, Eric C.

    2007-03-01

    Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal to noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision theoretic point of view.
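
    A small numpy sketch of the two-class building block referred to above (toy data, not the paper's 3-HO): for Gaussian equal-covariance classes the Hotelling template is w = S^{-1}(mu1 - mu0), the squared SNR is the two-class Hotelling trace, and the AUC follows from it.

        import numpy as np
        from scipy.stats import norm

        mu0, mu1 = np.zeros(5), np.full(5, 0.5)          # class means (toy values)
        S = np.diag(np.linspace(0.5, 1.5, 5))            # shared covariance matrix

        w = np.linalg.solve(S, mu1 - mu0)                # Hotelling template
        snr2 = (mu1 - mu0) @ w                           # SNR^2 = two-class Hotelling trace
        auc = norm.cdf(np.sqrt(snr2 / 2.0))              # AUC for Gaussian equal-covariance data

        print("template:", w)
        print("SNR^2:", snr2, " AUC:", auc)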

  2. Thermodynamic study of complex formation between Ce3+ and cryptand 222 in some binary mixed nonaqueous solvents

    NASA Astrophysics Data System (ADS)

    Rounaghi, G. H.; Dolatshahi, S.; Tarahomi, S.

    2014-12-01

    The stoichiometry, stability and thermodynamic parameters of complex formation between the cerium(III) cation and cryptand 222 (4,7,13,16,21,24-hexaoxa-1,10-diazabicyclo[8.8.8]-hexacosane) were studied by the conductometric titration method in some binary solvent mixtures of dimethylformamide (DMF), 1,2-dichloroethane (DCE), ethyl acetate (EtOAc) and methyl acetate (MeOAc) with methanol (MeOH), at 288, 298, 308, and 318 K. A model based on 1:1 stoichiometry has been used to analyze the conductivity data. The data have been fitted by a non-linear least-squares analysis that provides the stability constant, Kf, of the cation-ligand inclusion complex. The results revealed that the stability order of the [Ce(cryptand 222)]3+ complex changes with the nature and composition of the solvent system. A non-linear relationship was observed between the stability constant (log Kf) of the [Ce(cryptand 222)]3+ complex and the composition of the binary mixed solvent. Standard thermodynamic values, obtained from the temperature dependence of the stability constant of the complex, show that the studied complexation process is mainly entropy governed and is influenced by the nature and composition of the binary mixed solvent.
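
    A sketch of the temperature-dependence step described above: a van't Hoff fit of ln Kf against 1/T yields the standard enthalpy and entropy of complexation (the Kf values below are placeholders, not the measured ones).

        import numpy as np

        R = 8.314                                        # J mol^-1 K^-1
        T = np.array([288.0, 298.0, 308.0, 318.0])       # study temperatures (K)
        Kf = np.array([3.2e4, 2.6e4, 2.1e4, 1.8e4])      # illustrative stability constants

        slope, intercept = np.polyfit(1.0 / T, np.log(Kf), 1)   # ln Kf = -dH/(R T) + dS/R
        dH = -R * slope
        dS = R * intercept
        print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")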

  3. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    DOE PAGES

    Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...

    2015-01-01

    This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
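
    An illustration, in Python rather than the paper's Fortran 2008 coarrays, of the algorithmic change described above: replacing a sequential running sum with a binary-tree (pairwise) reduction, whose levels can be evaluated in parallel.

        def tree_sum(values):
            """Pairwise (binary-tree) reduction of a sequence of numbers."""
            vals = list(values)
            if not vals:
                return 0.0
            while len(vals) > 1:
                nxt = [a + b for a, b in zip(vals[::2], vals[1::2])]   # each level is parallelizable
                if len(vals) % 2:
                    nxt.append(vals[-1])                               # carry the odd element forward
                vals = nxt
            return vals[0]

        print(tree_sum(range(1, 101)))   # 5050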

  4. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.

  5. Computer simulation of radiation damage in gallium arsenide

    NASA Technical Reports Server (NTRS)

    Stith, John J.; Davenport, James C.; Copeland, Randolph L.

    1989-01-01

    A version of the binary-collision simulation code MARLOWE was used to study the spatial characteristics of radiation damage in proton and electron irradiated gallium arsenide. Comparisons made with the experimental results proved to be encouraging.

  6. APPLICATION OF GAS DYNAMICAL FRICTION FOR PLANETESIMALS. II. EVOLUTION OF BINARY PLANETESIMALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grishin, Evgeni; Perets, Hagai B.

    2016-04-01

    One of the first stages of planet formation is the growth of small planetesimals and their accumulation into large planetesimals and planetary embryos. This early stage occurs long before the dispersal of most of the gas from the protoplanetary disk. At this stage gas–planetesimal interactions play a key role in the dynamical evolution of single intermediate-mass planetesimals (m_p ∼ 10^21–10^25 g) through gas dynamical friction (GDF). A significant fraction of all solar system planetesimals (asteroids and Kuiper-belt objects) are known to be binary planetesimals (BPs). Here, we explore the effects of GDF on the evolution of BPs embedded in a gaseous disk using an N-body code with a fiducial external force accounting for GDF. We find that GDF can induce binary mergers on timescales shorter than the disk lifetime for masses above m_p ≳ 10^22 g at 1 au, independent of the binary initial separation and eccentricity. Such mergers can affect the structure of merger-formed planetesimals, and the GDF-induced binary inspiral can play a role in the evolution of the planetesimal disk. In addition, binaries on eccentric orbits around the star may evolve in the supersonic regime, where the torque reverses and the binary expands, which would enhance the cross section for planetesimal encounters with the binary. Highly inclined binaries with small mass ratios evolve due to the combined effects of Kozai–Lidov (KL) cycles with GDF, which lead to chaotic evolution. Prograde binaries go through semi-regular KL evolution, while retrograde binaries frequently flip their inclination and ∼50% of them are destroyed.

  7. Wolf-Rayet stars, black holes and the first detected gravitational wave source

    NASA Astrophysics Data System (ADS)

    Bogomazov, A. I.; Cherepashchuk, A. M.; Lipunov, V. M.; Tutukov, A. V.

    2018-01-01

    The recently discovered burst of gravitational waves GW150914 provides a good new chance to verify the current view on the evolution of close binary stars. Modern population synthesis codes help to study this evolution from two main sequence stars up to the formation of two final remnants (degenerate dwarfs, neutron stars or black holes; Masevich and Tutukov, 1988). To study the evolution of the GW150914 predecessor we use the "Scenario Machine" code presented by Lipunov et al. (1996). The scenario modeling conducted in this study allowed us to describe the evolution of systems for which the final stage is a massive BH+BH merger. We find that the initial mass of the primary component can be 100–140 M⊙ and the initial separation of the components can be 50–350 R⊙. Our calculations show the plausibility of modern evolutionary scenarios for binary stars and of the population synthesis modeling based on them.

  8. Fringe image processing based on structured light series

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Li, Hongyan

    2009-11-01

    The code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed and reliability of the measurement processing. Exploiting the self-normalizing characteristic, a fringe image processing method based on structured light is proposed. In this method, a series of projective patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black strips can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.

  9. Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Alias, Siti Nor Shadila

    2014-07-01

    For many years Malaysia has faced drug addiction issues. The most serious case is the relapse phenomenon among treated drug addicts (drug addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: either relapse (Yes, coded as 1) or not (No, coded as 0). The predictors involved are age, age at first taking drugs, family history, education level, family crisis, community support and self motivation. The total sample size is 200, with data provided by the AADK (National Antidrug Agency). The findings of the study revealed that age and self motivation are statistically significant predictors of relapse.
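
    A minimal sketch of the kind of binary logistic regression described above, using the statsmodels formula interface. The column names (relapse, age, self_motivation), the simulated data and the coefficients are all assumptions for illustration; this is not the study's analysis of the AADK data.

      # Illustrative binary logistic regression sketch; data and names are made up.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 200
      age = rng.integers(18, 60, size=n)
      self_motivation = rng.integers(1, 11, size=n)          # assumed 1-10 scale
      logit = 1.5 - 0.02 * age - 0.35 * self_motivation      # toy coefficients
      relapse = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      df = pd.DataFrame({"relapse": relapse, "age": age,
                         "self_motivation": self_motivation})

      model = smf.logit("relapse ~ age + self_motivation", data=df).fit(disp=False)
      print(model.summary())      # coefficients and p-values for each predictor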

  10. Mapping the Milky Way Galaxy with LISA

    NASA Technical Reports Server (NTRS)

    McKinnon, Jose A.; Littenberg, Tyson

    2012-01-01

    Gravitational wave detectors in the mHz band (such as the Laser Interferometer Space Antenna, or LISA) will observe thousands of compact binaries in the galaxy which can be used to better understand the structure of the Milky Way. To test the effectiveness of LISA to measure the distribution of the galaxy, we simulated the Close White Dwarf Binary (CWDB) gravitational wave sky using different models for the Milky Way. To do so, we have developed a galaxy density distribution modeling code based on the Markov Chain Monte Carlo method. The code uses different distributions to construct realizations of the galaxy. We then use the Fisher Information Matrix to estimate the variance and covariance of the recovered parameters for each detected CWDB. This is the first step toward characterizing the capabilities of space-based gravitational wave detectors to constrain models for galactic structure, such as the size and orientation of the bar in the center of the Milky Way.

  11. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and, if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October 2017.

  12. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  13. A Linear Variable-θ Model for Measuring Individual Differences in Response Precision

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2011-01-01

    Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…

  14. SETI-EC: SETI Encryption Code

    NASA Astrophysics Data System (ADS)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
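
    The following is a small, hypothetical sketch of the PBM-to-bit-string step described above. It is written in Python like the original tool but is not taken from the SETI-EC source; it parses a plain-text (P1) portable bitmap and regroups its pixels into fixed-width bit strings, with the 359-bit row width of the abstract used only as an assumed parameter, and "glyph.pbm" as a made-up input file name.

      # Hypothetical sketch: read an ASCII (P1) PBM and regroup its pixels into
      # fixed-width bit strings, loosely mirroring the message-packing step.
      def read_p1_pbm(path):
          with open(path) as fh:
              tokens = []
              for line in fh:
                  line = line.split("#", 1)[0]          # strip PBM comments
                  tokens.extend(line.split())
          assert tokens[0] == "P1", "expected an ASCII PBM file"
          width, height = int(tokens[1]), int(tokens[2])
          bits = "".join(tokens[3:])[:width * height]   # pixels as a 0/1 string
          return width, height, bits

      def to_bit_strings(bits, row_len=359):            # 359-bit rows (assumed)
          rows = [bits[i:i + row_len] for i in range(0, len(bits), row_len)]
          if rows and len(rows[-1]) < row_len:          # pad the final partial row
              rows[-1] = rows[-1].ljust(row_len, "0")
          return rows

      if __name__ == "__main__":
          w, h, bits = read_p1_pbm("glyph.pbm")         # hypothetical input file
          for row in to_bit_strings(bits):
              print(row)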

  15. Flexible digital modulation and coding synthesis for satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John

    1991-01-01

    An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis-coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q squared PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.

  16. Hele-Shaw scaling properties of low-contrast Saffman-Taylor flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiFrancesco, M. W.; Maher, J. V.

    1989-07-01

    We have measured variations of Saffman-Taylor flows by changing dimensionless surface tension B alone and by changing B in conjunction with changes in dimensionless viscosity contrast A. Our low-aspect-ratio cell permits close study of the linear- and early nonlinear-flow regimes. Our critical binary-liquid sample allows study of very low values of A. The predictions of linear stability analysis work well for predicting which length scales are important, but discrepancies are observed for growth rates. We observe an empirical scaling law for growth of the Fourier modes of the patterns in the linear regime. The observed front propagation velocity for side-wall disturbances is constantly 2±1 in dimensionless units, a value consistent with the predictions of Langer and of van Saarloos. Patterns in both the linear and nonlinear regimes collapse impressively under the scaling suggested by the Hele-Shaw equations. Violations of scaling due to wetting phenomena are not evident here, presumably because the wetting properties of the two phases of the critical binary liquid are so similar; thus direct comparison with large-scale Hele-Shaw simulations should be meaningful.

  17. Periodic binary sequence generators: VLSI circuits considerations

    NASA Technical Reports Server (NTRS)

    Perlman, M.

    1984-01-01

    Feedback shift registers are efficient periodic binary sequence generators. Polynomials of degree r over the Galois field of characteristic 2, GF(2), characterize the behavior of shift registers with linear logic feedback. The algorithmic determination of the trinomial of lowest degree, when it exists, that contains a given irreducible polynomial over GF(2) as a factor is presented. This corresponds to embedding the behavior of an r-stage shift register with linear logic feedback into that of an n-stage shift register with a single two-input modulo-2 summer (i.e., Exclusive-OR gate) in its feedback. This leads to Very Large Scale Integrated (VLSI) circuit architectures of maximal regularity (i.e., identical cells) with intercell communications serialized to a maximal degree.
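
    A brief Python sketch, not part of the original report, of the kind of feedback register discussed above: a shift register whose feedback is defined by the trinomial x^7 + x^3 + 1 (an assumed example), so that the only feedback logic is a single two-input XOR.

      # Illustrative LFSR sketch (not from the report): feedback defined by the
      # trinomial x^7 + x^3 + 1, i.e. a single two-input XOR gate.
      def lfsr_period(seed=0b0000001, n=7, tap=3):
          # Recurrence s[t+7] = s[t+3] XOR s[t], from x^7 + x^3 + 1 over GF(2).
          state = [(seed >> i) & 1 for i in range(n)]
          assert any(state), "seed must be nonzero"
          start, steps = list(state), 0
          while True:
              state = state[1:] + [state[tap] ^ state[0]]   # shift in the XOR bit
              steps += 1
              if state == start:
                  return steps

      if __name__ == "__main__":
          # A maximal-length register of degree 7 cycles through all 127 nonzero states.
          print(lfsr_period())    # 127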

  18. Binary power multiplier for electromagnetic energy

    DOEpatents

    Farkas, Zoltan D.

    1988-01-01

    A technique for converting electromagnetic pulses to higher power amplitude and shorter duration, in binary multiples, splits an input pulse into two channels and subjects the pulses in the two channels to a number of binary pulse compression operations. Each pulse compression operation entails combining the pulses in both input channels and selectively steering the combined power to one output channel during the leading half of the pulses and to the other output channel during the trailing half of the pulses, and then delaying the pulse in the first output channel by an amount equal to half the initial pulse duration. Apparatus for carrying out each binary multiplication operation preferably includes a four-port coupler (such as a 3 dB hybrid), which operates on power inputs at a pair of input ports by directing the combined power to either of a pair of output ports, depending on the relative phase of the inputs. Therefore, by appropriately phase coding the pulses prior to any of the pulse compression stages, the entire pulse compression (with associated binary power multiplication) can be carried out solely with passive elements.

  19. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, remained inactive for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
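
    To make the idea of a trellis for a linear block code concrete, here is a short, illustrative Python sketch (not taken from the book summarized above) that builds the syndrome trellis of the [7,4] Hamming code from its parity-check matrix and runs hard-decision Viterbi decoding over it; the specific H matrix and received word are assumptions made for the example.

      # Illustrative sketch: syndrome trellis for a binary linear block code plus
      # hard-decision Viterbi decoding.  Example code: the [7,4] Hamming code.

      H = [  # parity-check matrix, one row per check (assumed example)
          [1, 0, 1, 0, 1, 0, 1],
          [0, 1, 1, 0, 0, 1, 1],
          [0, 0, 0, 1, 1, 1, 1],
      ]

      def columns(H):
          # Pack each column of H into an integer so a trellis state is one int.
          n = len(H[0])
          return [sum(H[r][i] << r for r in range(len(H))) for i in range(n)]

      def viterbi_decode(received, H):
          cols = columns(H)
          # survivors: state (partial syndrome) -> (Hamming metric, bit path)
          survivors = {0: (0, [])}
          for i, col in enumerate(cols):
              nxt = {}
              for state, (metric, path) in survivors.items():
                  for bit in (0, 1):
                      s = state ^ (col if bit else 0)
                      m = metric + (bit != received[i])
                      if s not in nxt or m < nxt[s][0]:
                          nxt[s] = (m, path + [bit])
              survivors = nxt
          return survivors[0][1]        # best path ending in the zero syndrome

      if __name__ == "__main__":
          r = [1, 1, 1, 1, 0, 0, 1]     # received word containing one bit error
          print(viterbi_decode(r, H))   # -> [1, 1, 0, 1, 0, 0, 1]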

  20. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
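
    As a rough Python sketch of the comparison described above (not the authors' simulation code), the snippet below clusters a small binary participant-by-code matrix with both hierarchical clustering and K-means; the matrix contents, code probabilities, and number of clusters are all assumptions for illustration.

      # Illustrative sketch: clustering a binary participant-by-code matrix with
      # hierarchical clustering and K-means.  Data and cluster count are made up.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      # 50 participants x 12 qualitative codes drawn from two latent profiles.
      probs = np.array([[0.8] * 6 + [0.2] * 6,     # profile A code probabilities
                        [0.2] * 6 + [0.8] * 6])    # profile B code probabilities
      labels = rng.integers(0, 2, size=50)
      X = (rng.random((50, 12)) < probs[labels]).astype(int)

      # Hierarchical clustering on Jaccard distances between binary profiles.
      hier = fcluster(linkage(pdist(X, metric="jaccard"), method="average"),
                      t=2, criterion="maxclust")

      # K-means applied directly to the 0/1 matrix.
      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

      print("true:  ", labels[:10])
      print("hier:  ", hier[:10])
      print("kmeans:", km[:10])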

  1. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.

  2. Simulated single molecule microscopy with SMeagol.

    PubMed

    Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan

    2016-08-01

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  3. On the Existence of t-Identifying Codes in Undirected De Bruijn Networks

    DTIC Science & Technology

    2015-08-04

    Several cases are settled, while the remaining cases remain open. Additionally, we show that the eccentricity of the undirected non-binary de Bruijn graph B(d, n) is n for d ≥ 3.

  4. Minimum Total-Squared-Correlation Quaternary Signature Sets: New Bounds and Optimal Designs

    DTIC Science & Technology

    2009-12-01


  5. Research on odor interaction between aldehyde compounds via a partial differential equation (PDE) model.

    PubMed

    Yan, Luchun; Liu, Jiemin; Qu, Chen; Gu, Xingye; Zhao, Xia

    2015-01-28

    In order to explore the odor interaction of binary odor mixtures, a series of odor intensity evaluation tests were performed using both individual components and binary mixtures of aldehydes. Based on the linear relation between the logarithm of odor activity value and odor intensity of individual substances, the relationship between concentrations of individual constituents and their joint odor intensity was investigated by employing a partial differential equation (PDE) model. The obtained results showed that the binary odor interaction was mainly influenced by the mixing ratio of two constituents, but not the concentration level of an odor sample. Besides, an extended PDE model was also proposed on the basis of the above experiments. Through a series of odor intensity matching tests for several different binary odor mixtures, the extended PDE model was proved effective at odor intensity prediction. Furthermore, odorants of the same chemical group and similar odor type exhibited similar characteristics in the binary odor interaction. The overall results suggested that the PDE model is a more interpretable way of demonstrating the odor interactions of binary odor mixtures.

  6. 17 CFR 232.11 - Definition of terms used in part 232.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., PDF, and static graphic files. Such code may be in binary (machine language) or in script form... Act means the Trust Indenture Act of 1939. Unofficial PDF copy. The term unofficial PDF copy means an...

  7. A direct temporal domain approach for ultrafast optical signal processing and its implementation using planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Xia, Bing

    Ultrafast optical signal processing, which shares the same fundamental principles as electrical signal processing, can realize numerous important functionalities required in both academic research and industry. Due to the extremely fast processing speed, all-optical signal processing and pulse shaping have been widely used in ultrafast telecommunication networks, photonically assisted RF/microwave waveform generation, microscopy, biophotonics, and studies on transient and nonlinear properties of atoms and molecules. In this thesis, we investigate two types of optical spectrally-periodic (SP) filters that can be fabricated on planar lightwave circuits (PLC) to perform pulse repetition rate multiplication (PRRM) and arbitrary optical waveform generation (AOWG). First, we present a direct temporal domain approach for PRRM using SP filters. We show that the repetition rate of an input pulse train can be multiplied by a factor N using an optical filter with a free spectral range that does not need to be constrained to an integer multiple of N. Furthermore, the amplitude of each individual output pulse can be manipulated separately to form an arbitrary envelope at the output by optimizing the impulse response of the filter. Next, we use lattice-form Mach-Zehnder interferometers (LF-MZI) to implement the temporal domain approach for PRRM. The simulation results show that PRRM with uniform profiles, binary-code profiles and triangular profiles can be achieved. Three silica-based LF-MZIs are designed and fabricated, which incorporate multi-mode interference (MMI) couplers and phase shifters. The experimental results show that 40 GHz pulse trains with a uniform envelope pattern, a binary code pattern "1011" and a binary code pattern "1101" are generated from a 10 GHz input pulse train. Finally, we investigate 2D ring resonator arrays (RRA) for ultrafast optical signal processing. We design 2D RRAs to generate a pair of pulse trains with different binary-code patterns simultaneously from a single pulse train at a low repetition rate. We also design 2D RRAs for AOWG using the modified direct temporal domain approach. To demonstrate the approach, we provide numerical examples to illustrate the generation of two very different waveforms (square waveform and triangular waveform) from the same hyperbolic secant input pulse train. This powerful technique based on SP filters can be very useful for ultrafast optical signal processing and pulse shaping.

  8. Real-time minimal-bit-error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  9. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  10. Real coded genetic algorithm for fuzzy time series prediction

    NASA Astrophysics Data System (ADS)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    The Genetic Algorithm (GA) forms a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (AI). Some variants of GA are binary GA, real GA, messy GA, micro GA, saw-tooth GA, and differential evolution. This research article presents a real-coded GA for predicting enrollments of the University of Alabama. The enrollment data of the University of Alabama form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and a genetic algorithm optimizes the fuzzy intervals. Results are compared with other published works and found satisfactory, indicating that real-coded GAs are fast and accurate.
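
    As an illustration of the real-coded GA machinery mentioned above (not the authors' implementation, and not using the Alabama enrollment data), the sketch below evolves real-valued chromosomes, here simply minimizing a toy objective that stands in for the fuzzy-interval fitness, using tournament selection, blend crossover, and Gaussian mutation.

      # Illustrative real-coded GA sketch: tournament selection, blend crossover
      # and Gaussian mutation on real-valued chromosomes.  Objective is a toy.
      import random

      DIM, POP, GENS = 5, 40, 200
      LO, HI = 0.0, 10.0

      def objective(x):                   # toy fitness: minimize sum of squares
          return sum((xi - 3.7) ** 2 for xi in x)

      def tournament(pop, k=3):
          return min(random.sample(pop, k), key=objective)

      def crossover(a, b):                # blend crossover within the parent range
          return [random.uniform(min(ai, bi), max(ai, bi)) for ai, bi in zip(a, b)]

      def mutate(x, sigma=0.3, rate=0.2):
          return [min(HI, max(LO, xi + random.gauss(0, sigma)))
                  if random.random() < rate else xi for xi in x]

      def run():
          pop = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(POP)]
          for _ in range(GENS):
              elite = min(pop, key=objective)          # simple elitism
              pop = [mutate(crossover(tournament(pop), tournament(pop)))
                     for _ in range(POP - 1)] + [elite]
          return min(pop, key=objective)

      if __name__ == "__main__":
          best = run()
          print([round(v, 2) for v in best], round(objective(best), 4))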

  11. Formal specification and verification of Ada software

    NASA Technical Reports Server (NTRS)

    Hird, Geoffrey R.

    1991-01-01

    The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is described. The Penelope user inputs mathematical definitions, Larch-style specifications and Ada code and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques assisting the reuse of a verification effort on modified code.
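
    As a small illustration of the specification-plus-code pairing described above, written in Python with runtime assertions rather than Larch traits and Ada, so purely a sketch of the idea rather than the verified example itself, here is a binary search whose pre- and postconditions are checked at run time.

      # Illustrative sketch: binary search with its specification expressed as
      # pre/postcondition checks (standing in for Larch-style annotations).
      def binary_search(a, key):
          # Precondition: a is sorted in non-decreasing order.
          assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "input must be sorted"
          lo, hi = 0, len(a) - 1
          result = -1
          while lo <= hi:
              mid = (lo + hi) // 2
              if a[mid] == key:
                  result = mid
                  break
              elif a[mid] < key:
                  lo = mid + 1
              else:
                  hi = mid - 1
          # Postcondition: either key was found at 'result', or key is absent.
          assert (result != -1 and a[result] == key) or key not in a
          return result

      if __name__ == "__main__":
          print(binary_search([2, 3, 5, 7, 11, 13], 7))    # -> 3
          print(binary_search([2, 3, 5, 7, 11, 13], 6))    # -> -1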

  12. High Fill-out, Extreme Mass Ratio Overcontact Binary Systems. X. The Newly Discovered Binary XY Leonis Minoris

    NASA Astrophysics Data System (ADS)

    Qian, S.-B.; Liu, L.; Zhu, L.-Y.; He, J.-J.; Yang, Y.-G.; Bernasconi, L.

    2011-05-01

    The newly discovered short-period close binary star, XY LMi, has been monitored photometrically since 2006. Its light curves are typical EW-type light curves and show complete eclipses with durations of about 80 minutes. Photometric solutions were determined through an analysis of the complete B, V, R, and I light curves using the 2003 version of the Wilson-Devinney code. XY LMi is a high fill-out, extreme mass ratio overcontact binary system with a mass ratio of q = 0.148 and a fill-out factor of f = 74.1%, suggesting that it is in the late evolutionary stage of late-type tidal-locked binary stars. As observed in other overcontact binary stars, evidence for the presence of two dark spots on both components is given. Based on our 19 epochs of eclipse times, we found that the orbital period of the overcontact binary is decreasing continuously at a rate of dP/dt = -1.67 × 10^-7 days yr^-1, which may be caused by mass transfer from the primary to the secondary and/or angular momentum loss via magnetic stellar wind. The decrease of the orbital period may result in the increase of the fill-out, and finally, it will evolve into a single rapid-rotation star when the fluid surface reaches the outer critical Roche lobe.

  13. Fundamental finite key limits for one-way information reconciliation in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Martinez-Mateo, Jesus; Pacher, Christoph; Elkouss, David

    2017-11-01

    The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that one-way information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during information reconciliation is not generally valid. We propose an improved approximation that takes into account finite key effects and numerically test it against codes for two probability distributions, that we call binary-binary and binary-Gaussian, that typically appear in quantum key distribution protocols.

  14. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approach these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.

  15. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    PubMed

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100  Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7 of large girth. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Soumya D.; Nayak, Rajesh K.

    The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10^-4 to 10^-3 Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude, with the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources, in the presence of white Gaussian noise.

  17. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113) × BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.

  18. Analysis of Binary Adherence Data in the Setting of Polypharmacy: A Comparison of Different Approaches

    PubMed Central

    Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.

    2009-01-01

    Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
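
    A minimal sketch, under assumed variable names (adherent, age, n_meds, subject_id) and simulated data, of the GEE approach the authors recommend, using the statsmodels formula interface with a binomial family and an exchangeable working correlation; it is illustrative only and not the study's analysis code.

      # Illustrative GEE sketch for per-medication binary adherence clustered by person.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_subj, meds_each = 100, 5                    # 100 people, 5 medications each
      subject_id = np.repeat(np.arange(n_subj), meds_each)
      age = np.repeat(rng.integers(65, 90, size=n_subj), meds_each)
      n_meds = np.repeat(rng.integers(3, 12, size=n_subj), meds_each)
      p = 1.0 / (1.0 + np.exp(-(2.0 - 0.02 * age - 0.05 * n_meds)))
      adherent = rng.binomial(1, p)                 # one 0/1 outcome per medication

      df = pd.DataFrame({"subject_id": subject_id, "age": age,
                         "n_meds": n_meds, "adherent": adherent})

      model = smf.gee("adherent ~ age + n_meds", groups="subject_id", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())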

  19. Monte Carlo Particle Lists: MCPL

    NASA Astrophysics Data System (ADS)

    Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.

    2017-09-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.

  20. The Early Development of Programmable Machinery.

    ERIC Educational Resources Information Center

    Collins, Martin D.

    1985-01-01

    Programmable equipment innovations, precursors of today's technology, are examined, including the development of the binary code and feedback control systems, such as temperature sensing devices, interchangeable parts, punched cards carrying instructions, continuous flow oil refining process, assembly lines for mass production, and the…

  1. Ffuzz: Towards full system high coverage fuzz testing on binary executables

    PubMed Central

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, such as fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool—Ffuzz—on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both the user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck in either fuzz testing or symbolic execution. We also propose two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently. PMID:29791469
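
    As a toy illustration of the fuzz-testing half of the approach (not Ffuzz itself, and far simpler than full-system fuzzing combined with selective symbolic execution), the Python sketch below mutates a seed input and records inputs that crash a target; the target function and crash condition are stand-ins invented for the example.

      # Toy mutation-based fuzzing loop (illustrative only; not Ffuzz).
      import random

      def target(data: bytes):
          """Hypothetical stand-in for the program under test."""
          if len(data) > 3 and data[0] == 0x42 and data[3] == 0x99:
              raise RuntimeError("simulated memory corruption")

      def mutate(seed: bytes) -> bytes:
          buf = bytearray(seed)
          for _ in range(random.randint(1, 4)):
              buf[random.randrange(len(buf))] = random.randrange(256)
          return bytes(buf)

      def fuzz(seed: bytes, iterations: int = 100_000):
          crashes = []
          for _ in range(iterations):
              data = mutate(seed)
              try:
                  target(data)
              except Exception as exc:        # a "crash" in the stand-in target
                  crashes.append((data, exc))
          return crashes

      if __name__ == "__main__":
          found = fuzz(b"\x00" * 8)
          print(f"{len(found)} crashing inputs found")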

  2. Photometric studies of two solar type marginal contact binaries in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Shanti Priya, Devarapalli; Rukmini, Jagirdar

    2018-04-01

    Using the Optical Gravitational Lensing Experiment catalogue, two contact binaries were studied using data in the V and I bands. The photometric solutions for the V and I bands are presented for the two contact binaries OGLE 003835.24-735413.2 (V1) and OGLE 004619.65-725056.2 (V2) in the Small Magellanic Cloud. The presented light curves are analyzed using the Wilson-Devinney code. The results show that the variables are in good thermal and marginal geometrical contact, with features like the O’Connell effect in V1. The absolute dimensions are estimated and the dynamical evolution is inferred; both appear to be solar-type marginal contact binaries. The 3.6-m Devasthal Optical Telescope and the 4.0-m International Liquid Mirror Telescope of the Aryabhatta Research Institute of Observational Sciences (ARIES, Nainital) can facilitate the continuous monitoring of such objects, which will help in finding the reasons behind their period changes and their impact on the evolution of the clusters.

  3. Respecting variations in embodiment as well as gender: Beyond the presumed 'binary' of sex.

    PubMed

    Saewyc, Elizabeth M

    2017-01-01

    Although societies and health care systems are increasingly recognizing gender outside traditional binary categories, the notion persists of two, and only two sexes, 'naturally' aligned between chromosomes and phenotypic body. Yet there are more than a dozen documented genetic or phenotypic variations that do not completely fit the two simplistic categories, and together, they may comprise 1%-2% of the population worldwide. In this commentary, I consider how adherence to binary notions of sex has created and maintained social and health care structures that perpetuate health care inequities, and may well violate our nursing codes of ethics. I provide some current examples in law and health care systems that create difficulties for people with variations in sex development. I describe our responsibility to challenge the societally promoted but scientifically inaccurate perspective of sex as a binary. I conclude by briefly suggesting a few implications for action within nursing research, nursing education, nursing practice, and in advocacy as a profession. © 2017 John Wiley & Sons Ltd.

  4. Episodic accretion in binary protostars emerging from self-gravitating solar mass cores

    NASA Astrophysics Data System (ADS)

    Riaz, R.; Vanaverbeke, S.; Schleicher, D. R. G.

    2018-06-01

    Observations show a large spread in the luminosities of young protostars, which is frequently explained in the context of episodic accretion. We tested this scenario with numerical simulations that follow the collapse of a solar mass molecular cloud using the GRADSPH code, thereby varying the strength of the initial perturbations and the temperature of the cores. A specific emphasis of this paper is to investigate the role of binaries and multiple systems in the context of episodic accretion and to compare their evolution to the evolution in isolated fragments. Our models form a variety of low-mass protostellar objects including single, binary, and triple systems, in which binaries are more active in exhibiting episodic accretion than isolated protostars. We also find a general decreasing trend in the average mass accretion rate over time, suggesting that the majority of the protostellar mass is accreted within the first 10^5 years. This result can potentially help to explain the surprisingly low average luminosities in the majority of the protostellar population.

  5. New variable stars discovered in the fields of three Galactic open clusters using the VVV survey

    NASA Astrophysics Data System (ADS)

    Palma, T.; Minniti, D.; Dékány, I.; Clariá, J. J.; Alonso-García, J.; Gramajo, L. V.; Ramírez Alegría, S.; Bonatto, C.

    2016-11-01

    This project is a massive near-infrared (NIR) search for variable stars in highly reddened and obscured open cluster (OC) fields projected on regions of the Galactic bulge and disk. The search is performed using photometric NIR data in the J, H and Ks bands obtained from the Vista Variables in the Vía Láctea (VVV) Survey. We performed in each cluster field a variability search using Stetson's variability statistics to select the variable candidates. Later, those candidates were subjected to a frequency analysis using the Generalized Lomb-Scargle and the Phase Dispersion Minimization algorithms. The number of independent observations ranges between 63 and 73. The newly discovered variables in this study, 157 in total in three different known OCs, are classified based on their light curve shapes, periods, amplitudes and their location in the corresponding color-magnitude (J - Ks, Ks) and color-color (H - Ks, J - H) diagrams. We found 5 possible Cepheid stars which, based on the period-luminosity relation, are very likely type II Cepheids located behind the bulge. Among the newly discovered variables, there are eclipsing binaries and δ Scuti stars, as well as background RR Lyrae stars. Using the new version of the Wilson & Devinney code as well as the "Physics Of Eclipsing Binaries" (PHOEBE) code, we analyzed some of the best eclipsing binaries we discovered. Our results show that the studied systems range from detached to double-contact binaries, with low eccentricities and high inclinations of approximately 80°. Their surface temperatures range between 3500 K and 8000 K.

  6. Spectral characteristics of convolutionally coded digital signals

    NASA Technical Reports Server (NTRS)

    Divsalar, D.

    1979-01-01

    The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.

  7. Binary Code Extraction and Interface Identification for Security Applications

    DTIC Science & Technology

    2009-10-02

    The evaluation covers the functions extracted from the end-to-end applications mentioned in Sections 5.1 through 5.3, together with some additional functions extracted from the OpenSSL library for evaluation purposes. For the OpenSSL functions, the false positives and negatives are measured by comparison with the original C source code; for the malware samples, no source is available.

  8. Input data requirements for special processors in the computation system containing the VENTURE neutronics code. [LMFBR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1979-07-01

    User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.

  9. New coding technique for computer generated holograms.

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  10. Nonequilibrium Concentration Fluctuations in Binary Liquid Systems Induced by the Soret Effect

    NASA Astrophysics Data System (ADS)

    Sengers, Jan V.; Ortiz de Zárate, José M.

    When a binary liquid system is brought into a stationary thermal nonequilibrium state by the imposition of a temperature gradient, the Soret effect induces long-range concentration fluctuations even in the absence of any convective instability. The physical origin of the nonequilibrium concentration fluctuations is elucidated and it is shown how the intensity of these concentration fluctuations can be derived from the linearized random Boussinesq equations. Relevant experimental information is also discussed.

  11. TORUS: Radiation transport and hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Harries, Tim

    2014-04-01

    TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard Gnu makefile. The code is parallelized using both MPI and OMP, and can use these parallel sections either separately or in a hybrid mode.

  12. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
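
    To make the accuracy index concrete, here is a short, illustrative Python snippet (with made-up counts) computing sensitivity, specificity, and linear combinations of them such as Youden's index from a 2x2 table; it is not the sequential design itself, only the fixed-sample quantities that such a design monitors.

      # Illustrative snippet: sensitivity, specificity and their linear combinations
      # (e.g. Youden's index) from a 2x2 table with hypothetical counts.
      def accuracy_indices(tp, fn, tn, fp, weight=0.5):
          sens = tp / (tp + fn)              # true positive fraction
          spec = tn / (tn + fp)              # true negative fraction
          youden = sens + spec - 1           # equal-weight linear combination
          weighted = weight * sens + (1 - weight) * spec
          return sens, spec, youden, weighted

      if __name__ == "__main__":
          sens, spec, youden, weighted = accuracy_indices(tp=42, fn=8, tn=45, fp=5)
          print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
                f"Youden={youden:.2f} weighted={weighted:.2f}")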

  13. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. It requires 3 to 4 million multiplications and additions per second. It combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  14. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
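
    The sketch below illustrates, in Python, the general protograph-lifting step underlying such constructions: each edge of a small base graph is replaced by a Z×Z circulant permutation, producing a larger LDPC parity-check matrix. The base matrix, lift size Z, and circulant shifts are arbitrary assumptions for illustration and are not the patented rate-compatible families.

      # Illustrative protograph lifting sketch: expand a small base (proto) matrix
      # into an LDPC parity-check matrix by replacing each edge with a ZxZ
      # circulant permutation.  Base matrix, shifts and Z are assumed examples.
      import numpy as np

      def circulant(shift, Z):
          return np.roll(np.eye(Z, dtype=int), shift, axis=1)

      def lift(base, shifts, Z):
          # base[i][j] marks an edge (0 or 1); shifts[i][j] gives the circulant
          # offset used for each nonzero entry of the base matrix.
          rows = []
          for i, brow in enumerate(base):
              blocks = [circulant(shifts[i][j], Z) if b else
                        np.zeros((Z, Z), dtype=int)
                        for j, b in enumerate(brow)]
              rows.append(np.hstack(blocks))
          return np.vstack(rows)

      if __name__ == "__main__":
          base   = [[1, 1, 1, 0],          # 2 x 4 protograph (assumed)
                    [0, 1, 1, 1]]
          shifts = [[1, 3, 0, 0],
                    [0, 2, 4, 1]]
          H = lift(base, shifts, Z=5)      # 10 x 20 parity-check matrix
          print(H.shape, "column weights:", H.sum(axis=0))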

  15. Implementing the LIM code: the structural basis for cell type-specific assembly of LIM-homeodomain complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhati, Mugdha; Lee, Christopher; Nancarrow, Amy L.

    2008-09-03

    LIM-homeodomain (LIM-HD) transcription factors form a combinatorial 'LIM code' that contributes to the specification of cell types. In the ventral spinal cord, the binary LIM homeobox protein 3 (Lhx3)/LIM domain-binding protein 1 (Ldb1) complex specifies the formation of V2 interneurons. The additional expression of islet-1 (Isl1) in adjacent cells instead specifies the formation of motor neurons through assembly of a ternary complex in which Isl1 contacts both Lhx3 and Ldb1, displacing Lhx3 as the binding partner of Ldb1. However, little is known about how this molecular switch occurs. Here, we have identified the 30-residue Lhx3-binding domain on Isl1 (Isl1-LBD). Although the LIM interaction domain of Ldb1 (Ldb1-LID) and Isl1-LBD share low levels of sequence homology, X-ray and NMR structures reveal that they bind Lhx3 in an identical manner, that is, Isl1-LBD mimics Ldb1-LID. These data provide a structural basis for the formation of cell type-specific protein-protein interactions in which unstructured linear motifs with diverse sequences compete to bind protein partners. The resulting alternate protein complexes can target different genes to regulate key biological events.

  16. Simultaneous message framing and error detection

    NASA Technical Reports Server (NTRS)

    Frey, A. H., Jr.

    1968-01-01

    Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang Quntao; Gao Xun; Yu Qingjuan, E-mail: yuqj@pku.edu.cn

    In this paper, we study possible signatures of binary planets or exomoons on the Rossiter-McLaughlin (R-M) effect. Our analyses show that the R-M effect for a binary planet or an exomoon during its complete transit phase can be divided into two parts. The first is the conventional one similar to the R-M effect from the transit of a single planet, of which the mass and the projected area are the combinations of the binary components; the second is caused by the orbital rotation of the binary components, which may add a sine- or linear-mode deviation to the stellar radial velocity curve. We find that the latter effect can be up to several tens of m s^-1. Our numerical simulations as well as analyses illustrate that the distribution and dispersion of the latter effects obtained from multiple transit events can be used to constrain the dynamical configuration of the binary planet, such as how the inner orbit of the binary planet is inclined to its orbit rotating around the central star. We find that the signatures caused by the orbital rotation of the binary components are more likely to be revealed if the two components of a binary planet have different masses and mass densities, especially if the heavy one has a high mass density and the light one has a low density. Similar signatures on the R-M effect may also be revealed in a hierarchical triple star system containing a dark compact binary and a tertiary star.

  18. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.

  19. Robust watermarking scheme for binary images using a slice-based large-cluster algorithm with a Hamming Code

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Yuan; Liu, Chen-Chung

    2006-01-01

    The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.

  20. Optical analogue of relativistic Dirac solitons in binary waveguide arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tran, Truong X., E-mail: truong.tran@mpl.mpg.de; Max Planck Institute for the Science of Light, Günther-Scharowsky str. 1, 91058 Erlangen; Longhi, Stefano

    2014-01-15

    We study analytically and numerically an optical analogue of Dirac solitons in binary waveguide arrays in the presence of Kerr nonlinearity. Pseudo-relativistic soliton solutions of the coupled-mode equations describing dynamics in the array are analytically derived. We demonstrate that with the found soliton solutions, the coupled-mode equations can be converted into the nonlinear relativistic 1D Dirac equation. This paves the way for using binary waveguide arrays as a classical simulator of quantum nonlinear effects arising from the Dirac equation, something that is thought to be impossible to achieve in conventional (i.e. linear) quantum field theory. Highlights: • An optical analogue of Dirac solitons in nonlinear binary waveguide arrays is suggested. • Analytical solutions to pseudo-relativistic solitons are presented. • A correspondence of optical coupled-mode equations with the nonlinear relativistic Dirac equation is established.

  1. Observations, Roche Lobe Analysis, and Period Study of the Eclipsing Contact Binary System GM Canum Venaticorum

    NASA Astrophysics Data System (ADS)

    Alton, K. B.; Nelson, R. H.

    2018-06-01

    GM CVn is an eclipsing W UMa binary system (P = 0.366984 d) which has been largely neglected since its variability was first detected during the ROTSE campaign (1999-2000). Other than a single unfiltered light curve (LC) no other photometric data have been published. LC data collected in three bandpasses (B, V, and Rc) at UnderOak Observatory (UO) produced three new times of minimum for GM CVn. These along with other eclipse timings from the literature were used to update the linear ephemeris. Roche modeling to produce synthetic LC fits to the observed data was accomplished using binary maker 3, wdwint56a, and phoebe v.31a. Newly acquired radial velocity data were pivotal to defining the absolute and geometric parameters for this partially eclipsing binary system. An unspotted solution achieved the best Roche model fits for the multi-color LCs collected in 2013.

  2. Nearly ideal binary communication in squeezed channels

    NASA Astrophysics Data System (ADS)

    Paris, Matteo G.

    2001-07-01

    We analyze the effect of squeezing the channel in binary communication based on Gaussian states. We show that for coding on pure states, squeezing increases the detection probability at fixed size of the strategy, actually saturating the optimal bound already for moderate signal energy. Using Neyman-Pearson lemma for fuzzy hypothesis testing we are able to analyze also the case of mixed states, and to find the optimal amount of squeezing that can be effectively employed. It results that optimally squeezed channels are robust against signal mixing, and largely improve the strategy power by comparison with coherent ones.

  3. Combinatorics associated with inflections and bitangents of plane quartics

    NASA Astrophysics Data System (ADS)

    Gizatullin, M. Kh

    2013-08-01

    After a preliminary survey and a description of some small Steiner systems from the standpoint of the theory of invariants of binary forms, we construct a binary Golay code (of length 24) using ideas from J. Grassmann's thesis of 1875. One of our tools is a pair of disjoint Fano planes. Another application of such pairs and properties of plane quartics is a construction of a new block design on 28 objects. This block design is a part of a dissection of the set of 288 Aronhold sevens. The dissection distributes the Aronhold sevens into 8 disjoint block designs of this type.

  4. Heats of Segregation of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  5. Simulations of Tidally Driven Formation of Binary Planet Systems

    NASA Astrophysics Data System (ADS)

    Murray, R. Zachary P.; Guillochon, James

    2018-01-01

    In the last decade there have been hundreds of exoplanets discovered by Kepler, CoRoT and many other initiatives. This wealth of data suggests the possibility of detecting exoplanets with large satellites. This project seeks to model the interactions between orbiting planets using the FLASH hydrodynamics code developed by the Flash Center for Computational Science at the University of Chicago. We model a wide variety of encounter scenarios and initial conditions, including variations in encounter depth, mass ratio, and encounter velocity, and attempt to constrain what sorts of binary planet configurations are possible and stable.

  6. Pairing Heterocyclic Cations with closo-dodecafluorododecaborate (2-) Synthesis of Binary Heterocyclium (1+) Salts and a Ag4(heterocycle)8(4+) Salt of B12F12(2-)

    DTIC Science & Technology

    2011-01-01

    Journal article published in the Journal of Fluorine Chemistry, Vol. 132, Nov 2011. Eight new binary salts that pair the icosahedral ...

  7. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  8. SP_Ace: A new code to estimate Teff, log g, and elemental abundances

    NASA Astrophysics Data System (ADS)

    Boeche, C.

    2016-09-01

    SP_Ace is a FORTRAN95 code that derives stellar parameters and elemental abundances from stellar spectra. To derive these parameters, SP_Ace does not measure equivalent widths of lines, nor does it use templates of synthetic spectra; instead it employs a new method based on a library of General Curve-Of-Growths. To date SP_Ace works in the wavelength ranges 5212-6860 Å and 8400-8921 Å, and at spectral resolutions R=2000-20000. Extensions of these limits are possible. SP_Ace is a highly automated code suitable for application to large spectroscopic surveys. A web front end to this service is publicly available at http://dc.g-vo.org/SP_ACE together with the library and the binary code.

  9. Effect of a Nonplanar Melt-Solid Interface On Lateral Compositional Distribution During Unidirectional Solidification of a Binary Alloy with a Constant Growth Velocity V. Pt. 1; Theory

    NASA Technical Reports Server (NTRS)

    Wang, Jai-Ching; Watring, D.; Lehoczky, S. L.; Su, C. H.; Gillies, D.; Szofran, F.; Sha, Y. G.

    1999-01-01

    Infrared detector materials, such as Hg(1-x)Cd(x)Te and Hg(1-x)Zn(x)Te, have energy gaps almost linearly proportional to their composition. Due to the wide separation of the liquidus and solidus curves of their phase diagram, there is compositional segregation in both the axial and radial directions of these crystals grown in the Bridgman system unidirectionally with a constant growth rate. It is important to understand the mechanisms which affect lateral segregation so that crystals with large radially uniform composition can be produced. Following Coriell et al.'s treatment, we have developed a theory to study the effect of a curved melt-solid interface shape on the lateral composition distribution. The model is a cylindrical system with azimuthal symmetry and a curved melt-solid interface shape which can be expressed as a linear combination of a series of Bessel functions. The results show that the melt-solid interface shape has a dominant effect on the lateral composition distribution of these systems. For small values of beta, the solute concentration at the melt-solid interface scales linearly with interface shape with a proportionality constant equal to the product of beta and (1 - k), where beta = VR/D, with V the growth velocity, R the sample radius, D the diffusion constant and k the distribution constant. A detailed theory will be presented. A computer code has been developed and simulations have been performed and compared with experimental results. These will be published in another paper.

  10. Effect of a Nonplanar Melt-Solid Interface on Lateral Compositional Distribution during Unidirectional Solidification of a Binary Alloy with a Constant Growth Velocity V. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Wang, Jai-Ching; Watring, Dale A.; Lehoczky, Sandor L.; Su, Ching-Hua; Gillies, Don; Szofran, Frank

    1999-01-01

    Infrared detector materials, such as Hg(1-x)Cd(x)Te and Hg(1-x)Zn(x)Te, have energy gaps almost linearly proportional to their composition. Due to the wide separation of the liquidus and solidus curves of their phase diagram, there are compositional segregations in both the axial and radial directions of these crystals grown in the Bridgman system unidirectionally with a constant growth rate. It is important to understand the mechanisms which affect lateral segregation so that crystals with large radially uniform composition can be produced. Following Coriell et al.'s treatment, we have developed a theory to study the effect of a curved melt-solid interface shape on the lateral composition distribution. The system is considered to be a cylindrical system with azimuthal symmetry and a curved melt-solid interface shape which can be expressed as a linear combination of a series of Bessel functions. The results show that the melt-solid interface shape has a dominant effect on the lateral composition distribution of these systems. For small values of b, the solute concentration at the melt-solid interface scales linearly with interface shape with a proportionality constant equal to the product of b and (1 - k), where b = VR/D, with V the growth velocity, R the sample radius, D the diffusion constant and k the distribution constant. A detailed theory will be presented. A computer code has been developed and simulations have been performed and compared with experimental results. These will be published in another paper.
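
    To make the quoted scaling concrete, the sketch below evaluates b = VR/D and the resulting relative lateral solute variation b*(1 - k) times a given interface deflection amplitude; all numerical values are hypothetical and chosen only for illustration.

      # Back-of-the-envelope sketch of the scaling described above: for small b,
      # the lateral solute variation at the interface scales as b*(1 - k) times
      # the interface deflection.  All numbers are hypothetical.

      V = 2.0e-6      # growth velocity, m/s
      R = 5.0e-3      # sample radius, m
      D = 5.0e-9      # solute diffusion coefficient in the melt, m^2/s
      k = 0.3         # distribution constant
      delta = 0.1     # dimensionless amplitude of the interface deflection

      b = V * R / D
      relative_lateral_segregation = b * (1.0 - k) * delta
      print(f"b = {b:.2f}")
      print(f"relative lateral concentration variation ~ {relative_lateral_segregation:.3f}")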

  11. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
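
    For context, the sketch below shows the soft-decision likelihood metric being optimized: under BPSK signalling on an AWGN channel, the maximum-likelihood codeword maximizes the correlation between the received reals and the antipodally mapped codeword. The example decodes a (7,4) Hamming code by exhaustive search, which is exactly the search that the A* algorithm prunes; it is not the article's implementation.

      import itertools
      import numpy as np

      # Systematic generator matrix of the (7,4) Hamming code.
      G = np.array([[1,0,0,0, 1,1,0],
                    [0,1,0,0, 1,0,1],
                    [0,0,1,0, 0,1,1],
                    [0,0,0,1, 1,1,1]], dtype=np.uint8)

      def all_codewords(G):
          k = G.shape[0]
          for msg in itertools.product([0, 1], repeat=k):
              yield (np.array(msg, dtype=np.uint8) @ G) % 2

      def ml_decode(received):
          """Exhaustive maximum-likelihood soft-decision decoding.

          With BPSK mapping 0 -> +1, 1 -> -1 over an AWGN channel, the ML
          codeword maximizes the correlation between the received reals and
          the mapped codeword."""
          best, best_metric = None, -np.inf
          for c in all_codewords(G):
              s = 1.0 - 2.0 * c.astype(float)        # BPSK mapping
              metric = float(np.dot(received, s))    # correlation metric
              if metric > best_metric:
                  best, best_metric = c, metric
          return best

      # Noisy soft observations of the all-zero codeword (+1 everywhere).
      rng = np.random.default_rng(0)
      r = 1.0 + 0.8 * rng.standard_normal(7)
      print(ml_decode(r))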

  12. Electro-Optic Analog/Digital Converter.

    DTIC Science & Technology

    ... electro-optic material and a source of linearly polarized light is arranged to transmit its light energy along each of the optical waveguides. Electrodes are disposed contiguous to the optical waveguides for impressing electric fields thereacross. An input signal potential is applied to the electrodes to produce electric fields of intensity relative to each of the waveguides such as to cause a phase shift and resultant change of polarization which can be detected as representative of a binary ’one’ or binary ’zero’ for each of the optical channels ...

  13. Soret motion in non-ionic binary molecular mixtures

    NASA Astrophysics Data System (ADS)

    Leroyer, Yves; Würger, Alois

    2011-08-01

    We study the Soret coefficient of binary molecular mixtures with dispersion forces. Relying on standard transport theory for liquids, we derive explicit expressions for the thermophoretic mobility and the Soret coefficient. Their sign depends on composition, the size ratio of the two species, and the ratio of Hamaker constants. Our results account for several features observed in experiment, such as a linear variation with the composition; they confirm the general rule that small molecules migrate to the warm, and large ones to the cold.

  14. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
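
    As a point of reference for the checksum idea the paper generalizes, the sketch below implements the classical sum-checksum (full-checksum) scheme for matrix multiplication: encode A with a column-checksum row and B with a row-checksum column, multiply, and verify that the checksums of the result are consistent. This is only the simplest special case, not the general linear codes identified in the paper.

      import numpy as np

      def column_checksum(A):
          """Append a row holding the column sums of A."""
          return np.vstack([A, A.sum(axis=0)])

      def row_checksum(B):
          """Append a column holding the row sums of B."""
          return np.hstack([B, B.sum(axis=1, keepdims=True)])

      def checked_matmul(A, B, tol=1e-9):
          """Multiply checksum-encoded matrices and verify the result's checksums.

          If C = A @ B, then column_checksum(A) @ row_checksum(B) is the
          full-checksum matrix of C, so its last row/column must equal the
          column/row sums of C."""
          Cf = column_checksum(A) @ row_checksum(B)
          C = Cf[:-1, :-1]
          ok = (np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol) and
                np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol))
          return C, ok

      A = np.random.rand(3, 4)
      B = np.random.rand(4, 2)
      C, ok = checked_matmul(A, B)
      print("checksums consistent:", ok)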

  15. Combining multiple decisions: applications to bioinformatics

    NASA Astrophysics Data System (ADS)

    Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
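
    The decoding step of the ECOC framework mentioned above can be sketched in a few lines: each class is assigned a row of a binary code matrix, the binary classifiers produce a bit vector for a test sample, and the predicted class is the row at minimum Hamming distance. The code matrix and classifier outputs below are hypothetical, and the weighted and probabilistic variants reviewed in the article are not reproduced.

      import numpy as np

      # Hypothetical ECOC code matrix: 4 classes x 5 binary classifiers.
      # Row c gives the target outputs of the 5 binary classifiers for class c.
      code_matrix = np.array([[0, 0, 1, 1, 0],
                              [1, 0, 0, 1, 1],
                              [0, 1, 0, 0, 1],
                              [1, 1, 1, 0, 0]])

      def ecoc_decode(binary_outputs, code_matrix):
          """Assign the class whose code word is closest in Hamming distance
          to the vector of binary classifier decisions."""
          distances = (code_matrix != binary_outputs).sum(axis=1)
          return int(np.argmin(distances))

      # Suppose the 5 binary classifiers output this bit vector for a test
      # sample (one bit flipped relative to class 1's code word).
      outputs = np.array([1, 0, 0, 0, 1])
      print("predicted class:", ecoc_decode(outputs, code_matrix))   # -> 1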

  16. Using an Iterative Fourier Series Approach in Determining Orbital Elements of Detached Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2007-12-01

    We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.

  17. Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.

    PubMed

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.

  18. Absolute and geometric parameters of contact binary GW Cnc

    NASA Astrophysics Data System (ADS)

    Gürol, B.; Gökay, G.; Saral, G.; Gürsoytrak, S. H.; Cerit, S.; Terzioğlu, Z.

    2016-07-01

    We present the results of our investigation of the geometrical and physical parameters of the W UMa type binary system GW Cnc. We analyzed the photometric data obtained in 2010 and 2011 at Ankara University Observatory (AUO) and the spectroscopic data obtained in 2010 at TUBITAK National Observatory (TUG) by using the Wilson-Devinney (2013 revision) code to obtain the absolute and geometrical parameters. We derived masses and radii of the eclipsing system to be M1 = 0.257 M⊙, M2 = 0.971 M⊙, R1 = 0.526 R⊙ and R2 = 0.961 R⊙ with an orbital inclination i = 83.38° ± 0.25°, and we determined the GW Cnc system to be a W-type W UMa over-contact binary with a mass ratio of q = 3.773 ± 0.007.

  19. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  20. Laser Linewidth Requirements for Optical Bpsk and Qpsk Heterodyne Lightwave Systems.

    NASA Astrophysics Data System (ADS)

    Boukli-Hacene, Mokhtar

    In this dissertation, optical Binary Phase-Shift Keying (BPSK) and Quadrature Phase-Shift Keying (QPSK) heterodyne communication receivers are investigated. The main objective of this research work is to analyze the performance of these receivers in the presence of laser phase noise and shot noise. The heterodyne optical BPSK receiver is based on the square-law carrier recovery (SLCR) scheme for phase detection. The BPSK heterodyne receiver is analyzed assuming a second-order linear phase-locked loop (PLL) subsystem and a small phase error. The noise properties are analyzed and the problem of minimizing the effect of noise is addressed. The performance of the receiver is evaluated in terms of the bit error rate (BER), which leads to the analysis of the BER versus the laser linewidth and the number of photons/bit required to achieve good performance. Since the pure carrier component cannot be tracked in the presence of noise, a non-linear model is used to solve the problem of carrier recovery. The non-linear system is analyzed in the presence of a low signal-to-noise ratio (SNR). The non-Gaussian noise model, represented by its probability density function (PDF), is used to analyze the performance of the receiver, especially the phase error. In addition, the effect of the PLL is analyzed by studying the cycle slippage (cs). Finally, the research effort is expanded from BPSK to QPSK systems. The heterodyne optical QPSK receiver based on the fourth-power multiplier scheme (FPMS), in conjunction with linear and non-linear PLL models, is investigated. The optimum loop and the power penalty in the presence of phase noise and shot noise are analyzed. It is shown that the QPSK system yields a high-speed and high-sensitivity coherent means for transmission of information, accompanied by a small degradation in the laser linewidth. Comparative analysis of BPSK and QPSK systems leads us to conclude that, in terms of laser linewidth, bit rate, phase error and power penalty, the QPSK system is more sensitive than the BPSK system and suffers less from power penalty. The BPSK and QPSK heterodyne receivers used in the uncoded scheme demand a realistic laser linewidth. Since the laser linewidth is the critical measure of the performance of a receiver, a convolutional code applied to the QPSK system is used to improve its sensitivity. The effect of coding is particularly important as a means of relaxing the laser linewidth requirement. The validity and usefulness of the analysis presented in the dissertation is supported by computer simulations.
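
    As a baseline for the BER analysis described above, the sketch below evaluates the textbook additive-white-Gaussian-noise bit error rate for ideal coherent BPSK, Q(sqrt(2 Eb/N0)); per bit, ideal Gray-coded QPSK has the same expression. This ignores laser phase noise and the PLL, so it is only the reference curve against which the dissertation's penalties would be measured.

      from math import erfc, sqrt

      def q_function(x):
          """Gaussian tail probability Q(x)."""
          return 0.5 * erfc(x / sqrt(2.0))

      def ber_bpsk(ebn0_db):
          """Textbook AWGN bit error rate for ideal coherent BPSK: Q(sqrt(2 Eb/N0))."""
          ebn0 = 10.0 ** (ebn0_db / 10.0)
          return q_function(sqrt(2.0 * ebn0))

      # Per bit, ideal Gray-coded QPSK has the same AWGN BER as BPSK.
      for ebn0_db in (4, 6, 8, 10):
          print(ebn0_db, "dB ->", f"{ber_bpsk(ebn0_db):.2e}")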

  1. Nanoporous hard data: optical encoding of information within nanoporous anodic alumina photonic crystals

    NASA Astrophysics Data System (ADS)

    Santos, Abel; Law, Cheryl Suwen; Pereira, Taj; Losic, Dusan

    2016-04-01

    Herein, we present a method for storing binary data within the spectral signature of nanoporous anodic alumina photonic crystals. A rationally designed multi-sinusoidal anodisation approach makes it possible to engineer the photonic stop band of nanoporous anodic alumina with precision. As a result, the transmission spectrum of these photonic nanostructures can be engineered to feature well-resolved and selectively positioned characteristic peaks across the UV-visible spectrum. Using this property, we implement an 8-bit binary code and assess the versatility and capability of this system by a series of experiments aiming to encode different information within the nanoporous anodic alumina photonic crystals. The obtained results reveal that the proposed nanosized platform is robust, chemically stable, versatile and has a set of unique properties for data storage, opening new opportunities for developing advanced nanophotonic tools for a wide range of applications, including sensing, photonic tagging, self-reporting drug releasing systems and secure encoding of information.

  2. Multi-Messenger Astronomy: White Dwarf Binaries, LISA and GAIA

    NASA Astrophysics Data System (ADS)

    Bueno, Michael; Breivik, Katelyn; Larson, Shane L.

    2017-01-01

    The discovery of gravitational waves has ushered in a new era in astronomy. The low-frequency band covered by the future LISA detector provides unprecedented opportunities for multi-messenger astronomy. With the Global Astrometric Interferometer for Astrophysics (GAIA) mission, we expect to discover about 1,000 eclipsing binary systems composed of a WD and a main sequence star - a sizeable increase from the approximately 34 currently known binaries of this type. In advance of the first GAIA data release and the launch of LISA within the next decade, we used the Binary Stellar Evolution (BSE) code to simulate the evolution of White Dwarf Binaries (WDBs) in a fixed galaxy population of about 196,000 sources. Our goal is to assess the detectability of a WDB by LISA and GAIA: using the parameters from our population synthesis, we calculate the GW strain h and the apparent GAIA magnitude G. We can then use a scale factor to predict how many multi-messenger sources we expect to be detectable by both LISA and GAIA in a galaxy the size of the Milky Way. We realize the binary population 10 times to ensure randomness in the distance assignment and average our results. We then determine whether or not an astronomical chirp, defined as the difference between the total chirp and the GW chirp, is present. With the astronomical chirp and simulations of mass transfer and tides, we can gather more information about the internal astrophysics of stars in ultra-compact binary systems.

  3. An adjoint-based framework for maximizing mixing in binary fluids

    NASA Astrophysics Data System (ADS)

    Eggl, Maximilian; Schmid, Peter

    2017-11-01

    Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above mentioned regime (Reynolds number Re = 1000 , Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing shape and rotational speed of the stirrers will be demonstrated.

  4. RX GEMINORUM: PHOTOMETRIC SOLUTIONS, (NEARLY UNIFORM) GAINER ROTATION, DONOR RADIAL VELOCITY SOLUTION, NON-LTE ACCRETION DISK MODELS OF Hα EMISSION PROFILES, AND SECULAR LIGHT CURVE CHANGES IN THE 20TH CENTURY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Edward C.; Etzel, Paul B., E-mail: olsoneco@aol.com, E-mail: pbetzel@mail.sdsu.edu

    We obtained full-orbit Iybvu intermediate-band photometry and CCD spectroscopy of the long-period Algol eclipsing binary RX Geminorum. Photometric solutions using the Wilson–Devinney code give a gainer rotation (hotter, mass-accreting component) about 15 times the synchronous rate. We describe a simple technique to detect departures from uniform rotation of the hotter component. These binaries radiate double-peaked Hα emission from a low-mass accretion disk around the gainer. We used an approximate non-LTE disk code to predict models in fair agreement with observations, except in the far wings of the emission profile, where the star–inner disk boundary layer emits extra radiation. Variations in Hα emission derive from modulations in the transfer rate. A study of times of minima during the 20th century suggests that a perturbing third body is present near RX Gem.

  5. Robust Modeling of Stellar Triples in PHOEBE

    NASA Astrophysics Data System (ADS)

    Conroy, Kyle E.; Prsa, Andrej; Horvat, Martin; Stassun, Keivan G.

    2017-01-01

    The number of known mutually-eclipsing stellar triple and multiple systems has increased greatly during the Kepler era. These systems provide significant opportunities to both determine fundamental stellar parameters of benchmark systems to unprecedented precision as well as to study the dynamical interaction and formation mechanisms of stellar and planetary systems. Modeling these systems to their full potential, however, has not been feasible until recently. Most existing available codes are restricted to the two-body binary case and those that do provide N-body support for more components make sacrifices in precision by assuming no stellar surface distortion. We have completely redesigned and rewritten the PHOEBE binary modeling code to incorporate support for triple and higher-order systems while also robustly modeling data with Kepler precision. Here we present our approach, demonstrate several test cases based on real data, and discuss the current status of PHOEBE's support for modeling these types of systems. PHOEBE is funded in part by NSF grant #1517474.

  6. Robust GRMHD Evolutions of Merging Black-Hole Binaries in Magnetized Plasma

    NASA Astrophysics Data System (ADS)

    Kelly, Bernard; Etienne, Zachariah; Giacomazzo, Bruno; Baker, John

    2016-03-01

    Black-hole binary (BHB) mergers are expected to be powerful sources of gravitational radiation at stellar and galactic scales. A typical astrophysical environment for these mergers will involve magnetized plasmas accreting onto each hole; the strong-field gravitational dynamics of the merger may churn this plasma in ways that produce characteristic electromagnetic radiation visible to high-energy EM detectors on and above the Earth. Here we return to a cutting-edge GRMHD simulation of equal-mass BHBs in a uniform plasma, originally performed with the Whisky code. Our new tool is the recently released IllinoisGRMHD, a compact, highly-optimized ideal GRMHD code that meshes with the Einstein Toolkit. We establish consistency of IllinoisGRMHD results with the older Whisky results, and investigate the robustness of these results to changes in initial configuration of the BHB and the plasma magnetic field, and discuss the interpretation of the ``jet-like'' features seen in the Poynting flux post-merger. Work supported in part by NASA Grant 13-ATP13-0077.

  7. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on a GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576^3 computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, it is demonstrated that the computation with the GPU is 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.

  8. Maximization Network Throughput Based on Improved Genetic Algorithm and Network Coding for Optical Multicast Networks

    NASA Astrophysics Data System (ADS)

    Wei, Chengying; Xiong, Cuilian; Liu, Huanlin

    2017-12-01

    The maximal multicast stream algorithm based on network coding (NC) can improve throughput for wavelength-division multiplexing (WDM) networks; however, the achieved throughput remains far below the network's theoretical maximum. Moreover, existing multicast stream algorithms do not give the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is put forward to maximize the optical multicast throughput by NC and to determine the multicast stream distribution through hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes, where the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. The simulation results show that the proposed method is far superior to typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.

  9. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
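
    A quick numerical check of the kind of counting result discussed above can be done by brute force on tiny binary codes: the product (tensor) code has generator matrix equal to the Kronecker product of the component generators, its minimum distance is the product of the component minimum distances, and over GF(2) its number of minimum-weight codewords is the product of the component counts. The component codes chosen below are illustrative.

      import itertools
      import numpy as np

      def codewords(G):
          """All codewords of the binary linear code with generator matrix G."""
          k = G.shape[0]
          return [(np.array(m, dtype=np.uint8) @ G) % 2
                  for m in itertools.product([0, 1], repeat=k)]

      def min_weight_count(G):
          ws = [int(c.sum()) for c in codewords(G) if c.any()]
          d = min(ws)
          return d, ws.count(d)

      # Two small binary [3,2] single-parity-check codes (illustrative choice).
      G1 = np.array([[1, 0, 1],
                     [0, 1, 1]], dtype=np.uint8)
      G2 = G1.copy()

      # Generator of the product (tensor) code is the Kronecker product G1 (x) G2.
      Gp = np.kron(G1, G2) % 2

      d1, A1 = min_weight_count(G1)
      d2, A2 = min_weight_count(G2)
      dp, Ap = min_weight_count(Gp)
      print(f"component codes: d={d1}, #min-weight={A1}")
      print(f"product code:    d={dp} (= d1*d2 = {d1*d2}), #min-weight={Ap} (= A1*A2 = {A1*A2})")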

  10. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  11. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent replacement of batteries is usually not desirable, so the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a long duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and hence bi-level image coding methods can efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm, so there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduces the information amount in the change frame. However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e. image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the results of the three compression techniques. In this way the compression performance of the BVC will never become worse than that of image coding. We conclude that the compression efficiency of the BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
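
    The selection logic of the BVC described above can be sketched compactly: compute the change frame as the exclusive OR of consecutive bi-level frames, encode both the current frame and the change frame, and keep whichever bit stream is smaller. The run-length encoder below is only a toy stand-in for the actual bi-level codec, and ROI coding is omitted.

      import numpy as np

      def run_length_encode(bits):
          """Toy stand-in for a bi-level image codec: run lengths of a flattened bitmap."""
          bits = bits.flatten()
          runs, count, current = [], 1, bits[0]
          for b in bits[1:]:
              if b == current:
                  count += 1
              else:
                  runs.append(count)
                  current, count = b, 1
          runs.append(count)
          return bits[0], runs

      def encode_frame(prev_frame, frame):
          """Pick the smaller of 'image coding' and 'change coding' (XOR with previous frame)."""
          image_code = run_length_encode(frame)
          change = np.bitwise_xor(prev_frame, frame)
          change_code = run_length_encode(change)
          if len(change_code[1]) <= len(image_code[1]):
              return "change", change_code
          return "image", image_code

      rng = np.random.default_rng(1)
      prev_frame = (rng.random((8, 8)) > 0.5).astype(np.uint8)
      frame = prev_frame.copy()
      frame[2, 3] ^= 1                      # a single changed pixel between frames
      mode, code = encode_frame(prev_frame, frame)
      print(mode, "coding chosen,", len(code[1]), "runs")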

  12. Study of parameter of nonlinearity in 2-chloroethanol with 2-dimethylethanolamine/2-diethylethanolamine at different temperatures

    NASA Astrophysics Data System (ADS)

    Awasthi, Anjali; Awasthi, Aashees

    2017-06-01

    The acoustic non-linearity parameter (B/A) for binary mixtures of 2-chloroethanol with 2-dimethylethanolamine (2-DMAE) and 2-diethylethanolamine (2-DEAE) is evaluated using Tong Dong, Beyer and Beyer-Tong Dong coefficients at varying concentrations and temperatures ranging from 293.15 to 313.15 K. The nonlinearity parameter is used to calculate various molecular properties such as internal pressure, cohesive energy density, the van der Waals constant, the distance of closest approach, the diffusion coefficient and the rotational correlation time. Additionally, intermediate quantities such as the temperature and pressure derivatives of the sound velocity and the phase-shift parameter as a function of temperature are also deduced. The extent of intermolecular interactions, anharmonicity and structural configuration of the binaries under investigation are discussed in terms of the excess non-linearity parameter (B/A)E.

  13. Thermodiffusion Coefficient Analysis of n-Dodecane /n-Hexane Mixture at Different Mass Fractions and Pressure Conditions

    NASA Astrophysics Data System (ADS)

    Lizarraga, Ion; Bou-Ali, M. Mounir; Santamaría, C.

    2018-03-01

    In this study, the thermodiffusion coefficient of the n-dodecane/n-hexane binary mixture at 25 °C mean temperature was determined for several pressure conditions and mass fractions. The experimental technique used to determine the thermodiffusion coefficient was the thermogravitational column of cylindrical configuration. In turn, thermophysical properties such as density, thermal expansion, mass expansion and dynamic viscosity up to 10 MPa were also determined. The results obtained in this work show a linear relation between the thermophysical properties and the pressure. The thermodiffusion coefficient values confirm a linear effect as the pressure increases. Additionally, a new correlation based on the thermodiffusion coefficient for the nC12/nC6 binary mixture at 25 °C for any mass fraction and pressure, which reproduces the data within the experimental error, is proposed.

  14. Treatment Effect Estimation Using Nonlinear Two-Stage Instrumental Variable Estimators: Another Cautionary Note.

    PubMed

    Chapman, Cole G; Brooks, John M

    2016-12-01

    To examine the settings of simulation evidence supporting use of nonlinear two-stage residual inclusion (2SRI) instrumental variable (IV) methods for estimating average treatment effects (ATE) using observational data and investigate potential bias of 2SRI across alternative scenarios of essential heterogeneity and uniqueness of marginal patients. Potential bias of linear and nonlinear IV methods for ATE and local average treatment effects (LATE) is assessed using simulation models with a binary outcome and binary endogenous treatment across settings varying by the relationship between treatment effectiveness and treatment choice. Results show that nonlinear 2SRI models produce estimates of ATE and LATE that are substantially biased when the relationships between treatment and outcome for marginal patients are unique from relationships for the full population. Bias of linear IV estimates for LATE was low across all scenarios. Researchers are increasingly opting for nonlinear 2SRI to estimate treatment effects in models with binary and otherwise inherently nonlinear dependent variables, believing that it produces generally unbiased and consistent estimates. This research shows that positive properties of nonlinear 2SRI rely on assumptions about the relationships between treatment effect heterogeneity and choice. © Health Research and Educational Trust.
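
    For readers unfamiliar with the estimator being evaluated, the sketch below shows the basic two-stage residual inclusion (2SRI) recipe on simulated data: regress the binary treatment on the instrument (a linear first stage is used here for brevity), keep the residuals, and include them alongside the treatment in a nonlinear (logit) outcome model. It assumes the statsmodels package is available, and the data-generating values are illustrative, not those of the paper's scenarios.

      import numpy as np
      import statsmodels.api as sm

      # Minimal 2SRI sketch on simulated data; all parameter values are illustrative.
      rng = np.random.default_rng(42)
      n = 5000
      z = rng.binomial(1, 0.5, n)                                    # instrument
      u = rng.normal(size=n)                                         # unobserved confounder
      t = (0.8 * z + u + rng.normal(size=n) > 0.5).astype(float)     # binary endogenous treatment
      y = (1.0 * t + u + rng.normal(size=n) > 1.0).astype(float)     # binary outcome

      # Stage 1: model the treatment on the instrument, keep the residuals.
      X1 = sm.add_constant(z)
      stage1 = sm.OLS(t, X1).fit()
      resid = t - stage1.predict(X1)

      # Stage 2: nonlinear (logit) outcome model including the treatment and the stage-1 residual.
      X2 = sm.add_constant(np.column_stack([t, resid]))
      stage2 = sm.Logit(y, X2).fit(disp=0)
      print(stage2.params)   # coefficient on t is the 2SRI treatment effect (logit scale)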

  15. Quantitative measurement of indomethacin crystallinity in indomethacin-silica gel binary system using differential scanning calorimetry and X-ray powder diffractometry.

    PubMed

    Pan, Xiaohong; Julian, Thomas; Augsburger, Larry

    2006-02-10

    Differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) methods were developed for the quantitative analysis of the crystallinity of indomethacin (IMC) in IMC and silica gel (SG) binary system. The DSC calibration curve exhibited better linearity than that of XRPD. No phase transformation occurred in the IMC-SG mixtures during DSC measurement. The major sources of error in DSC measurements were inhomogeneous mixing and sampling. Analyzing the amount of IMC in the mixtures using high-performance liquid chromatography (HPLC) could reduce the sampling error. DSC demonstrated greater sensitivity and had less variation in measurement than XRPD in quantifying crystalline IMC in the IMC-SG binary system.

  16. Non-linear hydrodynamical evolution of rotating relativistic stars: numerical methods and code tests

    NASA Astrophysics Data System (ADS)

    Font, José A.; Stergioulas, Nikolaos; Kokkotas, Kostas D.

    2000-04-01

    We present numerical hydrodynamical evolutions of rapidly rotating relativistic stars, using an axisymmetric, non-linear relativistic hydrodynamics code. We use four different high-resolution shock-capturing (HRSC) finite-difference schemes (based on approximate Riemann solvers) and compare their accuracy in preserving uniformly rotating stationary initial configurations in long-term evolutions. Among these four schemes, we find that the third-order piecewise parabolic method scheme is superior in maintaining the initial rotation law in long-term evolutions, especially near the surface of the star. It is further shown that HRSC schemes are suitable for the evolution of perturbed neutron stars and for the accurate identification (via Fourier transforms) of normal modes of oscillation. This is demonstrated for radial and quadrupolar pulsations in the non-rotating limit, where we find good agreement with frequencies obtained with a linear perturbation code. The code can be used for studying small-amplitude or non-linear pulsations of differentially rotating neutron stars, while our present results serve as testbed computations for three-dimensional general-relativistic evolution codes.

  17. Viscosities of Fe-Ni, Fe-Co and Ni-Co binary melts

    NASA Astrophysics Data System (ADS)

    Sato, Yuzuru; Sugisawa, Koji; Aoki, Daisuke; Yamamura, Tsutomu

    2005-02-01

    Viscosities of three binary molten alloys consisting of the iron group elements, Fe, Ni and Co, have been measured by using an oscillating cup viscometer over the entire composition range from liquidus temperatures up to 1600 °C with high precision and excellent reproducibility. The viscosities measured showed good Arrhenius linearity for all the compositions. The viscosities of Fe, Ni and Co as a function of temperature are as follows: log η = -0.6074 + 2493/T for Fe; log η = -0.5695 + 2157/T for Ni; and log η = -0.6620 + 2430/T for Co. The isothermal viscosities of Fe-Ni and Fe-Co binary melts increase monotonically with increasing Fe content. On the other hand, in Ni-Co binary melt, the isothermal viscosity decreases slightly and then increases with increasing Co. The activation energy of Fe-Co binary melt increased slightly on mixing, and those of Fe-Ni and Ni-Co melts decreased monotonically with increasing Ni content. The above behaviour is discussed based on the thermodynamic properties of the alloys.
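
    The Arrhenius fits quoted above are straightforward to evaluate; the sketch below computes the viscosity of each pure metal at a chosen temperature from log η = a + b/T. The temperature is assumed to be in kelvin, and since the abstract does not state the viscosity unit of the fits, only ratios between the metals should be read from the output.

      # Evaluate the Arrhenius viscosity fits quoted above at a given temperature.
      # Temperature is assumed to be in kelvin; the viscosity unit is that of the
      # original fits (not stated in the abstract), so only ratios are meaningful.

      FITS = {          # log10(eta) = a + b / T
          "Fe": (-0.6074, 2493.0),
          "Ni": (-0.5695, 2157.0),
          "Co": (-0.6620, 2430.0),
      }

      def viscosity(metal, T_kelvin):
          a, b = FITS[metal]
          return 10.0 ** (a + b / T_kelvin)

      T = 1600.0 + 273.15
      for metal in FITS:
          print(f"{metal}: eta({T:.0f} K) = {viscosity(metal, T):.3f}")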

  18. Extreme close approaches in hierarchical triple systems with comparable masses

    NASA Astrophysics Data System (ADS)

    Haim, Niv; Katz, Boaz

    2018-06-01

    We study close approaches in hierarchical triple systems with comparable masses using full N-body simulations, motivated by a recent model for type Ia supernovae involving direct collisions of white dwarfs (WDs). For stable hierarchical systems where the inner binary components have equal masses, we show that the ability of the inner binary to achieve very close approaches, where the separation between the components of the inner binary reaches values which are orders of magnitude smaller than the semi-major axis, can be analytically predicted from initial conditions. The rate of close approaches is found to be roughly linear with the mass of the tertiary. The rate increases in systems with unequal inner binaries by a marginal factor of ≲ 2 for mass ratios 0.5 ≤ m1/m2 ≤ 1 relevant for the inner white-dwarf binaries. For an average tertiary mass of ˜0.3M⊙ which is representative of typical M-dwarfs, the chance for clean collisions is ˜1% setting challenging constraints on the collisional model for type Ia's.

  19. The formation of protostellar binaries in primordial minihalos

    NASA Astrophysics Data System (ADS)

    Riaz, R.; Bovino, S.; Vanaverbeke, S.; Schleicher, D. R. G.

    2018-06-01

    The first stars are known to form in primordial gas, either in minihalos of about 10^6 M⊙ or in so-called atomic cooling halos of about 10^8 M⊙. Simulations have shown that gravitational collapse and disk formation in primordial gas yield dense stellar clusters. In this paper, we focus particularly on the formation of protostellar binary systems, and aim to quantify their properties during the early stage of their evolution. For this purpose, we combine the smoothed particle hydrodynamics code GRADSPH with the astrochemistry package KROME. The GRADSPH-KROME framework is employed to investigate the collapse of primordial clouds in the high-density regime, exploring the fragmentation process and the formation of binary systems. We observe a strong dependence of fragmentation on the strength of the turbulent Mach number M and the rotational support parameter β. Rotating clouds show significant fragmentation, and have produced several Pop III proto-binary systems. We report maximum and minimum mass accretion rates of 2.31 × 10^-1 M⊙ yr^-1 and 2.18 × 10^-4 M⊙ yr^-1. The mass spectrum of the individual Pop III proto-binary components ranges from 0.88 M⊙ to 31.96 M⊙ and has a sensitive dependence on the Mach number M as well as on the rotational parameter β.

  20. Physical Properties and Evolution of the Eclipsing Binary System XZ Canis Minoris

    NASA Astrophysics Data System (ADS)

    Poochaum, R.; Komonjinda, S.; Soonthornthum, B.; Rattanasoon, S.

    2010-07-01

    This research aims to study an eclipsing binary system so that its physical properties and evolution can be determined and used as an example to teach high school astronomy. The study of the eclipsing binary system XZ Canis Minoris (XZ CMi) was done at Sirindhorn Observatory, Chiang Mai University, using a 0.5-meter reflecting telescope with a CCD photometric system (2184×1417 pixels) in the B, V and R bands of the UBV system. The data obtained were used to construct the light curve for each wavelength band and to compute the times of its light minima. New elements were derived from a linear fit to all available minima. The resulting linear ephemeris is HJD_min I = 2450515.32126 (±0.00107) + 0.578808948 (±0.000000121) × E, and the new orbital period of XZ CMi is 0.578808948 ± 0.000000121 days. These values were used together with previously published times of minima to obtain the O-C curve of XZ CMi. The result revealed that the orbital period of XZ CMi is continuously decreasing at a rate of 0.00731 ± 0.00057 s/year, which indicates that the binary components are continuously moving closer together. The O-C residuals show significant changes that may indicate the existence of a third body or a magnetic activity cycle on the star. However, further analysis of the physical properties of XZ CMi is required.

  1. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  2. Intra-binary Shock Heating of Black Widow Companions

    NASA Astrophysics Data System (ADS)

    Romani, Roger W.; Sanchez, Nicolas

    2016-09-01

    The low-mass companions of evaporating binary pulsars (black widows and similar) are strongly heated on the side facing the pulsar. However, in high-quality photometric and spectroscopic data, the heating pattern does not match that expected for direct pulsar illumination. Here we explore a model where the pulsar power is intercepted by an intra-binary shock (IBS) before heating the low-mass companion. We develop a simple analytic model and implement it in the popular “ICARUS” light curve code. The model is parameterized by the wind momentum ratio β and the companion wind speed f_v v_orb, and assumes that the reprocessed pulsar wind emits prompt particles or radiation to heat the companion surface. We illustrate an interesting range of light curve asymmetries controlled by these parameters. The code also computes the IBS synchrotron emission pattern, and thus can model black widow X-ray light curves. As a test, we apply the results to the high-quality asymmetric optical light curves of PSR J2215+5135; the resulting fit gives a substantial improvement upon direct heating models and produces an X-ray light curve consistent with that seen. The IBS model parameters imply that at the present loss rate, the companion evaporation has a characteristic timescale of τ_evap ≈ 150 Myr. Still, the model is not fully satisfactory, indicating that there are additional unmodeled physical effects.

  3. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    NASA Astrophysics Data System (ADS)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    Linear codes are among the most basic and useful codes in coding theory. Generally, a linear code is a code over a finite field equipped with the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is particularly important, because it contains some of the best known error-correcting codes. The concept of the Hamming metric has been extended to the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric differs from the Euclidean inner product used to define duality in the Hamming metric, and most codes that are self-dual in the Hamming metric are not self-dual in the RT-metric. The generator matrix is central to constructing a code because it contains a basis of the code. Therefore, in this paper we give some theorems and methods for constructing self-dual codes in the RT-metric by considering properties of the inner product and of the generator matrix, and we illustrate examples for each kind of construction.

  4. Non-conservative evolution in Algols: where is the matter?

    NASA Astrophysics Data System (ADS)

    Deschamps, R.; Braun, K.; Jorissen, A.; Siess, L.; Baes, M.; Camps, P.

    2015-05-01

    Context. There is indirect evidence of non-conservative evolutions in Algols. However, the systemic mass-loss rate is poorly constrained by observations and generally set as a free parameter in binary-star evolution simulations. Moreover, systemic mass loss may lead to observational signatures that still need to be found. Aims: Within the "hotspot" ejection mechanism, some of the material that is initially transferred from the companion star via an accretion stream is expelled from the system due to the radiative energy released on the gainer's surface by the impacting material. The objective of this paper is to retrieve observable quantities from this process and to compare them with observations. Methods: We investigate the impact of the outflowing gas and the possible presence of dust grains on the spectral energy distribution (SED). We used the 1D plasma code Cloudy and compared the results with the 3D Monte-Carlo radiative transfer code Skirt for dusty simulations. The circumbinary mass-distribution and binary parameters were computed with state-of-the-art binary calculations done with the Binstar evolution code. Results: The outflowing material reduces the continuum flux level of the stellar SED in the optical and UV. Because of the time-dependence of this effect, it may help to distinguish between different ejection mechanisms. If present, dust leads to observable infrared excesses, even with low dust-to-gas ratios, and traces the cold material at large distances from the star. By searching for this dust emission in the WISE catalogue, we found a small number of Algols showing infrared excesses, among which the two rather surprising objects SX Aur and CZ Vel. We find that some binary B[e] stars show the same strong Balmer continuum as we predict with our models. However, direct evidence of systemic mass loss is probably not observable in genuine Algols, since these systems no longer eject mass through the hotspot mechanism. Furthermore, owing to its high velocity, the outflowing material dissipates in a few hundred years. If hot enough, the hotspot may produce highly ionised species, such as Si iv, and observable characteristics that are typical of W Ser systems. Conclusions: If present, systemic mass loss leads to clear observational imprints. These signatures are not to be found in genuine Algols but in the closely related β Lyraes, W Serpentis stars, double periodic variables, symbiotic Algols, and binary B[e] stars. We emphasise the need for further observations of such objects where systemic mass loss is most likely to occur. Appendices are available in electronic form at http://www.aanda.org

  5. Finite-connectivity spin-glass phase diagrams and low-density parity check codes.

    PubMed

    Migliorini, Gabriele; Saad, David

    2006-02-01

    We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one step replica symmetry breaking theory (RSB), extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate , an RS critical transition point at while the critical RSB transition point is located at , to be compared with the corresponding Shannon bound . For the binary symmetric channel we show that the low temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of the state-of-the-art error correcting codes are discussed.

  6. Addressing the [O III] / Hβ offset in metal poor star forming galaxies found in the RESOLVE survey and ECO catalog

    NASA Astrophysics Data System (ADS)

    Richardson, Chris T.; Kannappan, Sheila; Moffett, Amanda J.; RESOLVE survey team

    2018-06-01

    Metal poor star forming galaxies sit on the far left wing of the BPT diagram, just below traditional demarcation lines. The basic approach to reproducing their emission lines by coupling photoionization models to stellar population synthesis models underestimates the observed [O III] / Hβ ratio by 0.3-0.5 dex. We classified galaxies as metal poor in the REsolved Spectroscopy of a Local VolumE (RESOLVE) survey and the Environmental COntext (ECO) catalog by using the IZI code, which is based on Bayesian inference. We used a variety of stellar population synthesis codes to generate SEDs covering a range of starburst ages and metallicities, including both secular and binary stellar evolution. Here, we show that multiple SPS codes can produce SEDs hard enough to reduce the offset assuming that simple, and perhaps unjustified, nebular conditions hold. Adopting more realistic nebular conditions shows that, despite the recent emphasis placed on binary evolution to fit high [O III]/Hβ ratios, none of our SEDs can reduce the offset. We propose several new solutions, including using ensembles of nebular clouds and improved microphysics, to address this issue. This work is supported by National Science Foundation award OCI-1053575, through XSEDE award TG-AST140040, and NSF awards AST-0955368 and CISE/ACI-1156614.

  7. Eclipsing Binaries From the CSTAR Project at Dome A, Antarctica

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Zhang, Hui; Wang, Songhu; Zhou, Ji-Lin; Zhou, Xu; Wang, Lingzhi; Wang, Lifan; Wittenmyer, R. A.; Liu, Hui-Gen; Meng, Zeyang; Ashley, M. C. B.; Storey, J. W. V.; Bayliss, D.; Tinney, Chris; Wang, Ying; Wu, Donghong; Liang, Ensi; Yu, Zhouyi; Fan, Zhou; Feng, Long-Long; Gong, Xuefei; Lawrence, J. S.; Liu, Qiang; Luong-Van, D. M.; Ma, Jun; Wu, Zhenyu; Yan, Jun; Yang, Huigen; Yang, Ji; Yuan, Xiangyan; Zhang, Tianmeng; Zhu, Zhenxi; Zou, Hu

    2015-04-01

    The Chinese Small Telescope ARray (CSTAR) has observed an area around the Celestial South Pole at Dome A since 2008. About 20,000 light curves in the i band were obtained during the observation season lasting from 2008 March to July. The photometric precision reaches about 4 mmag at i = 7.5 and 20 mmag at i = 12 within a 30 s exposure time. These light curves are analyzed using Lomb-Scargle, Phase Dispersion Minimization, and Box Least Squares methods to search for periodic signals. False positives may appear as a variable signature caused by contaminating stars and the observation mode of CSTAR. Therefore, the period and position of each variable candidate are checked to eliminate false positives. Eclipsing binaries are identified by visual inspection, frequency spectrum analysis, and a locally linear embedding technique. We identify 53 eclipsing binaries in the field of view of CSTAR, comprising 24 detached binaries, 8 semi-detached binaries, 18 contact binaries, and 3 ellipsoidal variables. To derive the parameters of these binaries, we use the Eclipsing Binaries via Artificial Intelligence method. The primary and secondary eclipse timing variations (ETVs) for semi-detached and contact systems are analyzed. Correlated primary and secondary ETVs confirmed by false alarm tests may indicate an unseen perturbing companion. Through ETV analysis, we identify two triple systems (CSTAR J084612.64-883342.9 and CSTAR J220502.55-895206.7). The orbital parameters of the third body in CSTAR J220502.55-895206.7 are derived using a simple dynamical model.
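
    The kind of periodic-signal search described above can be illustrated with a short sketch. The example below is a minimal Lomb-Scargle period search, assuming the astropy package and a synthetic light curve; the array names, time span, and injected period are illustrative and are not CSTAR data.

      # Minimal sketch of a Lomb-Scargle period search of the kind described above.
      # Assumes astropy; the synthetic light curve below is illustrative only.
      import numpy as np
      from astropy.timeseries import LombScargle

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0.0, 30.0, 500))          # observation times (days)
      true_period = 2.7                                  # injected period (days)
      y = 0.05 * np.sin(2 * np.pi * t / true_period) + 0.01 * rng.standard_normal(t.size)

      # Compute the periodogram and read off the highest peak as the candidate period.
      frequency, power = LombScargle(t, y).autopower(minimum_frequency=0.05,
                                                     maximum_frequency=5.0)
      best_period = 1.0 / frequency[np.argmax(power)]
      print(f"best period: {best_period:.3f} d (injected {true_period} d)")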

  8. Dynamics of binary-disk interaction. 1: Resonances and disk gap sizes

    NASA Technical Reports Server (NTRS)

    Artymowicz, Pawel; Lubow, Stephen H.

    1994-01-01

    We investigate the gravitational interaction of a generally eccentric binary star system with circumbinary and circumstellar gaseous disks. The disks are assumed to be coplanar with the binary, geometrically thin, and primarily governed by gas pressure and (turbulent) viscosity but not self-gravity. Both ordinary and eccentric Lindblad resonances are primarily responsible for truncating the disks in binaries with arbitrary eccentricity and nonextreme mass ratio. Starting from a smooth disk configuration, after the gravitational field of the binary truncates the disk on the dynamical timescale, a quasi-equilibrium is achieved, in which the resonant and viscous torques balance each other and any changes in the structure of the disk (e.g., due to global viscous evolution) occur slowly, preserving the average size of the gap. We analytically compute the approximate sizes of disks (or disk gaps) as a function of binary mass ratio and eccentricity in this quasi-equilibrium. Comparing the gap sizes with results of direct simulations using smoothed particle hydrodynamics (SPH), we obtain good agreement. As a by-product of the computations, we verify that standard SPH codes can adequately represent the dynamics of disks with moderate viscosity, Reynolds number R ≈ 10³. For typical viscous disk parameters, and with a denoting the binary semimajor axis, the inner edge location of a circumbinary disk varies from 1.8a to 2.6a as the binary eccentricity increases from 0 to 0.25. For eccentricities 0 < e < 0.75, the minimum separation between a component star and the circumbinary disk inner edge is greater than a. Our calculations are relevant, among others, to protobinary stars and the recently discovered T Tau pre-main-sequence binaries. We briefly examine the case of the pre-main-sequence spectroscopic binary GW Ori and conclude that circumbinary disk truncation to the size required by one proposed spectroscopic model cannot be due to Lindblad resonances, even if the disk is nonviscous.

  9. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values.

    PubMed

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-05

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL⁻¹ for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, using seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
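
    As a rough illustration of the linear calibration step described above, the sketch below builds a PLS model on synthetic overlapped spectra of a two-component mixture. It assumes scikit-learn and NumPy; the band shapes, wavelength grid, and concentration ranges are invented for the example and are not the authors' data.

      # Sketch of PLS calibration for a binary mixture from overlapped spectra.
      # Assumes scikit-learn; the synthetic spectra and concentrations are illustrative only.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      wavelengths = np.linspace(250, 350, 101)           # nm

      def band(center):
          """Gaussian absorption band on the wavelength grid."""
          return np.exp(-0.5 * ((wavelengths - center) / 12.0) ** 2)

      pure_A, pure_B = band(280), band(300)              # two overlapped pure spectra
      conc = rng.uniform(1.0, 20.0, size=(40, 2))        # training concentrations (µg/mL)
      X = conc @ np.vstack([pure_A, pure_B]) + 0.01 * rng.standard_normal((40, 101))

      pls = PLSRegression(n_components=2).fit(X, conc)
      test = np.array([[5.0, 12.0]]) @ np.vstack([pure_A, pure_B])
      print("predicted concentrations:", pls.predict(test).round(2))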

  10. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL⁻¹ for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, using seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.

  11. A Sequence of Sorting Strategies.

    ERIC Educational Resources Information Center

    Duncan, David R.; Litwiller, Bonnie H.

    1984-01-01

    Describes eight increasingly sophisticated and efficient sorting algorithms including linear insertion, binary insertion, shellsort, bubble exchange, shakersort, quick sort, straight selection, and tree selection. Provides challenges for the reader and the student to program these efficiently. (JM)
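
    One of the strategies named above, binary insertion sort, is small enough to sketch directly. The following example (plain Python, not taken from the article) locates each insertion point with a binary search via the standard bisect module:

      # Binary insertion sort: insertion sort where the insertion point is located
      # with a binary search (bisect), one of the strategies named in the record above.
      from bisect import insort

      def binary_insertion_sort(items):
          """Return a new sorted list, inserting each item at its binary-searched position."""
          result = []
          for x in items:
              insort(result, x)   # binary search for the slot, then insert
          return result

      print(binary_insertion_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]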

  12. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves reducing the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results obtained were compared with those obtained by a classical HPLC method, and it was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.

  13. Object tracking on mobile devices using binary descriptors

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton

    2015-03-01

    With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications, such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation is done using Android's Native Development Kit (NDK), which gives the performance benefits of using native code as well as access to legacy libraries. The iOS implementation was created using both the native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
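
    As a sketch of how such binary descriptors are typically matched, the example below uses OpenCV's ORB detector and brute-force Hamming-distance matching on two grayscale frames. It assumes the opencv-python package; the file names are placeholders, and this is a generic illustration rather than the authors' NDK/iOS implementation.

      # Sketch of binary-descriptor matching with ORB and Hamming distance using OpenCV.
      # Assumes opencv-python and two grayscale frames on disk; the file names are
      # placeholders, not part of the original work.
      import cv2

      frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
      frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

      orb = cv2.ORB_create(nfeatures=500)
      kp1, des1 = orb.detectAndCompute(frame1, None)
      kp2, des2 = orb.detectAndCompute(frame2, None)

      # Binary descriptors are compared with the Hamming distance.
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
      print(f"{len(matches)} matches; best distance {matches[0].distance}")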

  14. Determination of Orbital Parameters for Visual Binary Stars Using a Fourier-Series Approach

    NASA Astrophysics Data System (ADS)

    Brown, D. E.; Prager, J. R.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2001-12-01

    We expand on the Fourier transform method of Monet (ApJ 234, 275, 1979) to infer the orbital parameters of visual binary stars, and we present results for several systems, both simulated and real. Although originally developed to address binary systems observed through at least one complete period, we have extended the method to deal explicitly with cases where the orbital data is less complete. This is especially useful in cases where the period is so long that only a fragment of the orbit has been recorded. We utilize Fourier-series fitting methods appropriate to data sets covering less than one period and containing random measurement errors. In so doing, we address issues of over-determination in fitting the data and the reduction of other deleterious Fourier-series artifacts. We developed our algorithm using the MAPLE mathematical software code, and tested it on numerous "synthetic" systems, and several real binaries, including Xi Boo, 24 Aqr, and Bu 738. This work was supported at Lehigh University by the Delaware Valley Space Grant Consortium and by NSF-REU grant PHY-9820301.

  15. Irradiation-driven Mass Transfer Cycles in Compact Binaries

    NASA Astrophysics Data System (ADS)

    Büning, A.; Ritter, H.

    2005-08-01

    We elaborate on the analytical model of Ritter, Zhang, & Kolb (2000) which describes the basic physics of irradiation-driven mass transfer cycles in semi-detached compact binary systems. In particular, we take into account a contribution to the thermal relaxation of the donor star which is unrelated to irradiation and which was neglected in previous studies. We present results of simulations of the evolution of compact binaries undergoing mass transfer cycles, in particular also of systems with a nuclear evolved donor star. These computations have been carried out with a stellar evolution code which computes mass transfer implicitly and models irradiation of the donor star in a point source approximation, thereby allowing for much more realistic simulations than were hitherto possible. We find that low-mass X-ray binaries (LMXBs) and cataclysmic variables (CVs) with orbital periods ⪉ 6hr can undergo mass transfer cycles only for low angular momentum loss rates. CVs containing a giant donor or one near the terminal age main sequence are more stable than previously thought, but can possibly also undergo mass transfer cycles.

  16. Practical low-cost visual communication using binary images for deaf sign language.

    PubMed

    Manoranjan, M D; Robinson, J A

    2000-03-01

    Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low bandwidth telephone lines, provide very low temporal resolution of pictures, of the order of less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.

  17. First Photometric Investigation of the Neglected EW-type Binary System V502 Her

    NASA Astrophysics Data System (ADS)

    Zhao, Ergang; Qian, Shengbang; Liao, Wenping; He, Jiajia; Shi, Xiangdong; Zhang, Jia

    2018-04-01

    V502 Her is a neglected EW-type binary that has been known for more than 60 years. The first multi-color CCD photometric light curves and spectroscopic observations of the contact binary V502 Her were obtained. Based on LAMOST data, its spectral type is found to be F5. Together with the solutions of the light curves obtained using the Wilson-Devinney code, this implies that V502 Her is an A-type W UMa contact binary system with a mass ratio of q = 0.313 and a filling factor of f = 38.1%. Based on all times of minimum from the literature and our observations, the orbital period was analyzed and a long-term increase with a periodic change (P₃ = 26.8 years) was found. The orbital period increase may be caused by mass transfer from the less-massive component to the more massive one, which indicates that V502 Her is in the thermal relaxation oscillation (TRO) controlled stage, while the light-travel time effect (LTTE) due to the presence of a cool third body may explain the periodic variation.

  18. Binary Black Holes, Gravitational Waves, and Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Centrella, Joan

    2007-01-01

    Massive black hole (MBH) binaries are found at the centers of most galaxies. MBH mergers trace galaxy mergers and are strong sources of gravitational waves. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes were plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This presentation shows how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. The focus is on the recent advances that reveal these waveforms, and the potential for discoveries that arises when these sources are observed by LIGO and LISA.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L.; Qian, S.-B.; Liao, W.-P.

    This paper analyzes the first obtained four-color light curves of V396 Mon using the 2003 version of the W-D code. It is confirmed that V396 Mon is a shallow W-type contact binary system with a mass ratio q = 2.554 (±0.004) and a degree of contact f = 18.9% (±1.2%). A period investigation based on all available data shows that the period of the system includes a long-term decrease (dP/dt = -8.57 × 10⁻⁸ days yr⁻¹) and an oscillation (A₃ = 0.0160 day, T₃ = 42.4 yr). They are caused by angular momentum loss and the light-time effect, respectively. The suspected third body is possibly a small M-type star (about 0.31 solar masses). Although some observations indicate that this system has strong magnetic activity, our analysis shows that the Applegate mechanism cannot explain the periodic changes. This binary is an especially important system according to Qian's statistics of contact binaries, as its mass ratio lies near the proposed pivot point about which the physical structure of contact binaries supposedly oscillates.

  20. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optics communication systems the fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD) algorithm; rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate adaptive non-binary LDPC coding technique, and demonstrate its flexibility and good performance exhibiting no error floor at BER down to 10-15 in entire code rate range, by FPGA-based emulation, making it a viable solution in the next-generation high-speed intelligent aggregation networks.

  1. Error-Detecting Identification Codes for Algebra Students.

    ERIC Educational Resources Information Center

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
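
    A concrete example of such an error-detecting identification code is the ISBN-10 check digit, which is chosen so that a weighted sum of all ten digits is divisible by 11. The short sketch below (plain Python, not taken from the article) computes and verifies the check digit:

      # Worked example of an error-detecting identification code: the ISBN-10 check digit.
      # The check digit makes the weighted sum of all ten digits divisible by 11.
      def isbn10_check_digit(first_nine: str) -> str:
          total = sum((i + 1) * int(d) for i, d in enumerate(first_nine))
          r = total % 11
          return "X" if r == 10 else str(r)

      def isbn10_is_valid(isbn: str) -> bool:
          digits = isbn.replace("-", "")
          return isbn10_check_digit(digits[:9]) == digits[9].upper()

      print(isbn10_is_valid("0-306-40615-2"))   # True
      print(isbn10_is_valid("0-306-40615-3"))   # False (single-digit error detected)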

  2. The GS (genetic selection) Principle.

    PubMed

    Abel, David L

    2009-01-01

    The GS (Genetic Selection) Principle states that biological selection must occur at the nucleotide-sequencing molecular-genetic level of 3'5' phosphodiester bond formation. After-the-fact differential survival and reproduction of already-living phenotypic organisms (ordinary natural selection) does not explain polynucleotide prescription and coding. All life depends upon literal genetic algorithms. Even epigenetic and "genomic" factors such as regulation by DNA methylation, histone proteins and microRNAs are ultimately instructed by prior linear digital programming. Biological control requires selection of particular configurable switch-settings to achieve potential function. This occurs largely at the level of nucleotide selection, prior to the realization of any integrated biofunction. Each selection of a nucleotide corresponds to the setting of two formal binary logic gates. The setting of these switches only later determines folding and binding function through minimum-free-energy sinks. These sinks are determined by the primary structure of both the protein itself and the independently prescribed sequencing of chaperones. The GS Principle distinguishes selection of existing function (natural selection) from selection for potential function (formal selection at decision nodes, logic gates and configurable switch-settings).

  3. Numerical simulation of advective-dispersive multisolute transport with sorption, ion exchange and equilibrium chemistry

    USGS Publications Warehouse

    Lewis, F.M.; Voss, C.I.; Rubin, Jacob

    1986-01-01

    A model was developed that can simulate the effect of certain chemical and sorption reactions simultaneously among solutes involved in advective-dispersive transport through porous media. The model is based on a methodology that utilizes physical-chemical relationships in the development of the basic solute mass-balance equations; however, the form of these equations allows their solution to be obtained by methods that do not depend on the chemical processes. The chemical environment is governed by the condition of local chemical equilibrium, and may be defined either by the linear sorption of a single species and two soluble complexation reactions which also involve that species, or by binary ion exchange and one complexation reaction involving a common ion. Partial differential equations that describe solute mass balance entirely in the liquid phase are developed for each tenad (a chemical entity whose total mass is independent of the reaction process) in terms of their total dissolved concentration. These equations are solved numerically in two dimensions through the modification of an existing groundwater flow/transport computer code. (Author's abstract)

  4. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0 norm based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within a limited number of iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
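
    To make the two objectives concrete, the sketch below scores a binary endmember-selection vector against a synthetic library by (i) the reconstruction error of a least-squares fit restricted to the selected signatures and (ii) the number of selected signatures (the l0 term). It assumes NumPy; the library, pixel, and mask are invented for the illustration, and this is not the authors' optimizer.

      # Sketch of scoring a binary selection vector over a spectral library by the two
      # objectives named above: reconstruction error and l0 sparsity. Assumes NumPy;
      # the library A, pixel y, and selection mask are illustrative only.
      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.random((100, 20))                  # spectral library: 100 bands x 20 signatures
      true_abund = np.zeros(20)
      true_abund[[3, 7]] = [0.6, 0.4]
      y = A @ true_abund                         # observed pixel spectrum

      def objectives(selection):
          """Return (reconstruction error, number of selected endmembers) for a 0/1 mask."""
          idx = np.flatnonzero(selection)
          if idx.size == 0:
              return np.linalg.norm(y), 0
          x, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)   # abundances of selected members
          return np.linalg.norm(A[:, idx] @ x - y), idx.size

      mask = np.zeros(20, dtype=int)
      mask[[3, 7]] = 1
      print(objectives(mask))                    # near-zero error, sparsity 2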

  5. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.

  6. Binary Black Hole Mergers from Field Triples: Properties, Rates, and the Impact of Stellar Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonini, Fabio; Toonen, Silvia; Hamers, Adrian S.

    We consider the formation of binary black hole (BH) mergers through the evolution of field massive triple stars. In this scenario, favorable conditions for the inspiral of a BH binary are initiated by its gravitational interaction with a distant companion, rather than by a common-envelope phase invoked in standard binary evolution models. We use a code that follows self-consistently the evolution of massive triple stars, combining the secular triple dynamics (Lidov–Kozai cycles) with stellar evolution. After a BH triple is formed, its dynamical evolution is computed using either the orbit-averaged equations of motion, or a high-precision direct integrator for triples with weaker hierarchies for which the secular perturbation theory breaks down. Most BH mergers in our models are produced in the latter non-secular dynamical regime. We derive the properties of the merging binaries and compute a BH merger rate in the range (0.3–1.3) Gpc⁻³ yr⁻¹, or up to ≈2.5 Gpc⁻³ yr⁻¹ if the BH orbital planes have initially random orientation. Finally, we show that BH mergers from the triple channel have significantly higher eccentricities than those formed through the evolution of massive binaries or in dense star clusters. Measured eccentricities could therefore be used to uniquely identify binary mergers formed through the evolution of triple stars. While our results suggest up to ≈10 detections per year with Advanced LIGO, the high eccentricities could render the merging binaries harder to detect with planned space-based interferometers such as LISA.

  7. Unwinding the amplituhedron in binary

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Thomas, Hugh; Trnka, Jaroslav

    2018-01-01

    We present new, fundamentally combinatorial and topological characterizations of the amplituhedron. Upon projecting external data through the amplituhedron, the resulting configuration of points has a specified (and maximal) generalized "winding number". Equivalently, the amplituhedron can be fully described in binary: canonical projections of the geometry down to one dimension have a specified (and maximal) number of "sign flips" of the projected data. The locality and unitarity of scattering amplitudes are easily derived as elementary consequences of this binary code. Minimal winding defines a natural "dual" of the amplituhedron. This picture gives us an avatar of the amplituhedron purely in the configuration space of points in vector space (momentum-twistor space in the physics), a new interpretation of the canonical amplituhedron form, and a direct bosonic understanding of the scattering super-amplitude in planar N = 4 SYM as a differential form on the space of physical kinematical data.

  8. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  9. Kottamia 74-inch telescope discovery of the new eclipsing binary KAO-EGYPT J225702.44+523222.1.: First CCD photometry and light curve analysis

    NASA Astrophysics Data System (ADS)

    Shokry, A.; Darwish, M. S.; Saad, S. M.; Eldepsy, M.; Zead, I.

    2017-08-01

    We present the first multicolor CCD photometry for the newly discovered binary system KAO-EGYPT J225702.44+523222.1. New times of light minimum and a new ephemeris were obtained. The VRI light curves were analyzed using the WD code; the difference in maximum light at phase 0.25 is modeled with a cool spot on the secondary component. The solution shows that the system is an A-subtype overcontact binary with a fill-out factor of 42% and a low mass ratio, q = 0.275. The two components are of spectral types K0 and K1, and the primary component is the more massive one. The positions of both components on the M-L and M-R relations reveal that the primary component is a main-sequence star while the secondary is an evolved component.

  10. Agrobacterium-mediated transformation of Easter lily (Lilium longiflorum cv. Nellie White)

    USDA-ARS?s Scientific Manuscript database

    Conditions were optimized for transient transformation of Lilium longiflorum cv. Nellie White using Agrobacterium tumefaciens. Bulb scale and basal meristem explants were inoculated with A. tumefaciens strain AGL1 containing the binary vector pCAMBIA 2301 which has the uidA gene that codes for ß-gl...

  11. Optical triple-in digital logic using nonlinear optical four-wave mixing

    NASA Astrophysics Data System (ADS)

    Widjaja, Joewono; Tomita, Yasuo

    1995-08-01

    A new programmable optical processor is proposed for implementing triple-in combinatorial digital logic that uses four-wave mixing. Binary-coded decimal-to-octal decoding is experimentally demonstrated by use of a photorefractive BaTiO₃ crystal. The result confirms the feasibility of the proposed system.

  12. Period variations of Algol-type eclipsing binaries AD And, TWCas and IV Cas

    NASA Astrophysics Data System (ADS)

    Parimucha, Štefan; Gajdoš, Pavol; Kudak, Viktor; Fedurco, Miroslav; Vaňko, Martin

    2018-04-01

    We present new analyses of variations in O – C diagrams of three Algol-type eclipsing binary stars: AD And, TW Cas and IV Cas. We have used all published minima times (including visual and photographic) as well as newly determined ones from our and SuperWasp observations. We determined orbital parameters of 3rd bodies in the systems with statistically significant errors, using our code based on genetic algorithms and Markov chain Monte Carlo simulations. We confirmed the multiple nature of AD And and the triple-star model of TW Cas, and we proposed a quadruple-star model of IV Cas.

  13. VizieR Online Data Catalog: Disk of {upsilon} Sgr (Netolicky+, 2009)

    NASA Astrophysics Data System (ADS)

    Netolicky, M.; Bonneau, D.; Chesneau, O.; Harmanec, P.; Koubsky, P.; Mourard, D.; Stee, P.

    2010-01-01

    The aim of this paper is to determine the properties of the dusty environment of the hydrogen-deficient binary system upsilon Sgr, whose binary properties and other characteristics are poorly known. We obtained the first mid-IR interferometric observations of upsilon Sgr using the MIDI instrument of the VLTI with different pairs of 1.8m and 8m telescopes. The calibrated visibilities, the N band spectrum, and the SED were compared with disk models computed with the MC3D code to determine the geometry and chemical composition of the envelope. (Note: the authors never supplied the tabular material announced in the paper.)

  14. Assessing Potential Energy Cost Savings from Increased Energy Code Compliance in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.

    The US Department of Energy's most recent commercial energy code compliance evaluation efforts focused on determining a percent compliance rating for states to help them meet requirements under the American Recovery and Reinvestment Act (ARRA) of 2009. That approach included a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. With its binary approach to compliance determination, the previous methodology failed to answer some important questions. In particular, how much energy cost could be saved by better compliance with the commercial energy code, and what are the relative priorities of code requirements from an energy cost savings perspective? This paper explores an analytical approach and pilot study using a single building type and climate zone to answer those questions.

  15. Method for coding low entropy data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1995-01-01

    A method of lossless data compression for efficient coding of an electronic signal of information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set (s_0, s_1, s_2, ..., s_(N-1)) of N symbols with s_i = i. The difference between binary digital data is mapped into symbol set S. Consecutive symbols in symbol set S are then paired into a new symbol set Gamma, which defines a non-negative symbol set containing the symbols (gamma_m) obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code, which is defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows a direct coding and decoding of the n-bit positive integer digital data differences without the use of codebooks.
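
    The comma-code idea can be sketched in a few lines: every codeword ends with the same comma pattern (here a single '1'), so a bitstream can be parsed without a codebook. The mapping below is an illustrative example of such a code, not necessarily the exact tables of the disclosed method.

      # Sketch of a comma code of the kind described above: every codeword ends with the
      # same comma pattern (a single '1'), so the stream can be parsed without a codebook.
      # The mapping is illustrative, not the patent's exact scheme.
      def comma_encode(symbol: int) -> str:
          """Encode a non-negative symbol as `symbol` zeros followed by the comma '1'."""
          return "0" * symbol + "1"

      def comma_decode(bits: str):
          """Split the bitstream at each comma and recover the run lengths."""
          return [len(word) for word in bits.split("1")[:-1]]

      stream = "".join(comma_encode(s) for s in [0, 3, 1, 0, 2])
      print(stream)                 # 1000101001
      print(comma_decode(stream))   # [0, 3, 1, 0, 2]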

  16. Nonlinear Transient Problems Using Structure Compatible Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2000-01-01

    The report documents a recent effort to enhance a transient linear heat transfer code so that it can solve nonlinear problems. The linear heat transfer code was originally developed by Dr. Kim Bey of NASA Langley and is called the Structure-Compatible Heat Transfer (SCHT) code. The report includes four parts. The first part outlines the formulation of the heat transfer problem of concern. The second and third parts give detailed procedures for constructing the nonlinear finite element equations and the required Jacobian matrices for the nonlinear iterative method, the Newton-Raphson method. The final part summarizes the results of the numerical experiments on the newly enhanced SCHT code.
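
    The Newton-Raphson iteration referred to above can be sketched on a toy two-equation nonlinear system: assemble the residual and the Jacobian, then repeatedly solve for the update. The example assumes NumPy and stands in for the (much larger) nonlinear finite element equations of the SCHT code.

      # Minimal sketch of the Newton-Raphson iteration named above: solve R(u) = 0 by
      # repeatedly assembling the residual R and Jacobian J and applying u -= J^{-1} R.
      # The two-equation system here is a toy stand-in for the nonlinear FE equations.
      import numpy as np

      def residual(u):
          x, y = u
          return np.array([x**2 + y - 3.0, x + y**3 - 5.0])

      def jacobian(u):
          x, y = u
          return np.array([[2.0 * x, 1.0],
                           [1.0, 3.0 * y**2]])

      u = np.array([1.0, 1.0])                      # initial guess
      for it in range(20):
          R = residual(u)
          if np.linalg.norm(R) < 1e-12:
              break
          u = u - np.linalg.solve(jacobian(u), R)   # Newton update
      print(it, u, residual(u))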

  17. MESA models of the evolutionary state of the interacting binary epsilon Aurigae

    NASA Astrophysics Data System (ADS)

    Gibson, Justus L.; Stencel, Robert E.

    2018-06-01

    Using the MESA code (Modules for Experiments in Stellar Astrophysics, version 9575), an evaluation was made of the evolutionary state of the epsilon Aurigae binary system (HD 31964, F0Iap + disc). We sought to satisfy several observational constraints: (1) requiring evolutionary tracks to pass close to the current temperature and luminosity of the primary star; (2) obtaining a period near the observed value of 27.1 years; (3) matching a mass function of 3.0; (4) concurrent Roche lobe overflow and mass transfer; (5) an isotopic ratio 12C/13C = 5; and (6) matching the interferometrically determined angular diameter. A MESA model starting with binary masses of 9.85 + 4.5 M⊙, with a 100 d initial period, produces a 1.2 + 10.6 M⊙ result having a 547 d period, and a single-digit 12C/13C ratio. These values were reached near an age of 20 Myr, when the donor star comes close to the observed luminosity and temperature for epsilon Aurigae A, as a post-RGB/pre-AGB star. Contemporaneously, the accretor then appears as an upper main-sequence, early B-type star. This benchmark model can provide a basis for further exploration of this interacting binary, and of other long-period binary stars.

  18. Robust image region descriptor using local derivative ordinal binary pattern

    NASA Astrophysics Data System (ADS)

    Shang, Jun; Chen, Chuanbo; Pei, Xiaobing; Liang, Hu; Tang, He; Sarem, Mudar

    2015-05-01

    Binary image descriptors have received a lot of attention in recent years, since they provide numerous advantages, such as low memory footprint and efficient matching strategy. However, they utilize intermediate representations and are generally less discriminative than floating-point descriptors. We propose an image region descriptor, namely local derivative ordinal binary pattern, for object recognition and image categorization. In order to preserve more local contrast and edge information, we quantize the intensity differences between the central pixels and their neighbors of the detected local affine covariant regions in an adaptive way. These differences are then sorted and mapped into binary codes and histogrammed with a weight of the sum of the absolute value of the differences. Furthermore, the gray level of the central pixel is quantized to further improve the discriminative ability. Finally, we combine them to form a joint histogram to represent the features of the image. We observe that our descriptor preserves more local brightness and edge information than traditional binary descriptors. Also, our descriptor is robust to rotation, illumination variations, and other geometric transformations. We conduct extensive experiments on the standard ETHZ and Kentucky datasets for object recognition and PASCAL for image classification. The experimental results show that our descriptor outperforms existing state-of-the-art methods.
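
    The core idea, ranking the neighbour-centre differences and turning the ranking into a binary code, can be sketched as follows. This is a simplified illustration assuming NumPy and a single 3×3 patch; it is not the exact LDOBP construction of the paper, which also quantizes adaptively, weights the histogram, and encodes the centre gray level.

      # Simplified sketch of the ordinal-binary-pattern idea described above: take the
      # differences between a pixel's 8 neighbours and the centre, rank them, and turn
      # the ranking into a binary code. Illustrates the principle only, not the exact
      # LDOBP descriptor from the paper.
      import numpy as np

      def ordinal_binary_pattern(patch3x3):
          """Return a binary string encoding the rank order of neighbour-centre differences."""
          patch = np.asarray(patch3x3, dtype=float)
          centre = patch[1, 1]
          neighbours = np.delete(patch.ravel(), 4)          # 8 neighbours, row-major order
          diffs = neighbours - centre
          ranks = np.argsort(np.argsort(diffs))             # ordinal ranks of the differences
          # one bit per neighbour: 1 if its difference ranks in the top half, else 0
          return "".join("1" if r >= 4 else "0" for r in ranks)

      patch = [[52, 60, 58],
               [49, 55, 70],
               [47, 66, 61]]
      print(ordinal_binary_pattern(patch))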

  19. Application of a new non-linear least squares velocity curve analysis technique for spectroscopic binary stars

    NASA Astrophysics Data System (ADS)

    Karami, K.; Mohebi, R.; Soltanzadeh, M. M.

    2008-11-01

    Using measured radial velocity data of nine double lined spectroscopic binary systems NSV 223, AB And, V2082 Cyg, HS Her, V918 Her, BV Dra, BW Dra, V2357 Oph, and YZ Cas, we find corresponding orbital and spectroscopic elements via the method introduced by Karami and Mohebi (Chin. J. Astron. Astrophys. 7:558, 2007a) and Karami and Teimoorinia (Astrophys. Space Sci. 311:435, 2007). Our numerical results are in good agreement with those obtained by others using more traditional methods.
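
    A generic non-linear least-squares fit of a radial-velocity curve can be sketched for the simplest case of a circular orbit, V(t) = γ + K sin(2π(t − t0)/P). The example below assumes SciPy and synthetic data; it illustrates the general fitting approach rather than the specific technique of Karami and Mohebi cited above.

      # Sketch of a non-linear least-squares fit to a radial-velocity curve, assuming a
      # circular orbit V(t) = gamma + K*sin(2*pi*(t - t0)/P). Assumes SciPy; the data
      # are synthetic and the method is generic, not that of the cited paper.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      P_true, K_true, gamma_true, t0_true = 3.2, 45.0, -10.0, 0.7
      t = np.sort(rng.uniform(0.0, 20.0, 60))
      v_obs = (gamma_true + K_true * np.sin(2 * np.pi * (t - t0_true) / P_true)
               + 1.5 * rng.standard_normal(t.size))

      def resid(p):
          P, K, gamma, t0 = p
          return gamma + K * np.sin(2 * np.pi * (t - t0) / P) - v_obs

      fit = least_squares(resid, x0=[3.0, 40.0, 0.0, 0.5])
      print("P, K, gamma, t0 =", fit.x.round(3))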

  20. Plunge waveforms from inspiralling binary black holes.

    PubMed

    Baker, J; Brügmann, B; Campanelli, M; Lousto, C O; Takahashi, R

    2001-09-17

    We study the coalescence of nonspinning binary black holes from near the innermost stable circular orbit down to the final single rotating black hole. We use a technique that combines the full numerical approach to solve the Einstein equations, applied in the truly nonlinear regime, and linearized perturbation theory around the final distorted single black hole at later times. We compute the plunge waveforms, which present a non-negligible signal lasting for t approximately 100M showing early nonlinear ringing, and we obtain estimates for the total gravitational energy and angular momentum radiated.
