NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
Binary translation using peephole translation rules
Bansal, Sorav; Aiken, Alex
2010-05-04
An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current stare-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence, however these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, or a specific TCM decoder which applies these considerations to achieve high-speed decoding.
Universal Noiseless Coding Subroutines
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A. P.; Rice, R. F.
1986-01-01
Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. Purpose of this type of coding to achieve data compression in sense that coded data represents original data perfectly (noiselessly) while taking fewer bits to do so. Routines universal because they apply to virtually any "real-world" data source.
Interactive Exploration for Continuously Expanding Neuron Databases.
Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting
2017-02-15
This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, the continuously expanding neuron databases provide a rich source of information to associate neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. In the coarse-level, for efficiency in large-scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when adding new neurons. To solve this problem, we extend binary coding with online updating schemes, which only considers the newly added neurons and update the model on-the-fly, without accessing the whole neuron databases. In the fine-grained level, we introduce domain experts/users in the framework, which can give relevance feedback for the binary coding based retrieval results. This interactive strategy can improve the retrieval performance through re-ranking the above coarse results, where we design a new similarity measure and take the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons. Copyright © 2017 Elsevier Inc. All rights reserved.
Schroedinger’s code: Source code availability and transparency in astrophysics
NASA Astrophysics Data System (ADS)
Ryan, PW; Allen, Alice; Teuben, Peter
2018-01-01
Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download.As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approach these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Black Hole Accretion Discs on a Moving Mesh
NASA Astrophysics Data System (ADS)
Ryan, Geoffrey
2017-01-01
We present multi-dimensional numerical simulations of black hole accretion disks relevant for the production of electromagnetic counterparts to gravitational wave sources. We perform these simulations with a new general relativistic version of the moving-mesh magnetohydrodynamics code DISCO which we will present. This open-source code, GR-DISCO uses an orbiting and shearing mesh which moves with the dominant flow velocity, greatly improving the numerical accuracy of the thermodynamic variables in supersonic flows while also reducing numerical viscosity and greatly increasing computational efficiency by allowing for a larger time step. We have used GR-DISCO to study black hole accretion discs subject to gravitational torques from a binary companion, relevant for both current and future supermassive binary black hole searches and also as a possible electromagnetic precursor mechanism for LIGO events. Binary torques in these discs excite spiral shockwaves which effectively transport angular momentum in the disc and propagate through the innermost stable orbit, leading to stress corresponding to an alpha-viscosity of 10-2. We also present three-dimensional GRMHD simulations of neutrino dominated accretion flows (NDAFs) occurring after a binary neutron star merger in order to elucidate the conditions for electromagnetic transient production accompanying these gravitational waves sources expected to be detected by LIGO in the near future.
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interferences and false detection. Barker codes are the ones that satisfy these requirements and they have lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal or less than 13) while many applications including low probability of intercept radar, and spread spectrum communication, require much higher code lengths. The conventional techniques of finding binary phased codes in literatures include exhaust search, neural network, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to find binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low sidelobe binary phased codes (code length >500) with reasonable computational cost.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley
2018-05-01
We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records as according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
Binary Code Extraction and Interface Identification for Security Applications
2009-10-02
the functions extracted during the end-to-end applications and at the bottom some additional functions extracted from the OpenSSL library. fact that as...mentioned in Section 5.1 through Section 5.3 and some additional functions that we extract from the OpenSSL library for evaluation purposes. The... OpenSSL functions, the false positives and negatives are measured by comparison with the original C source code. For the malware samples, no source is
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile themmore » with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.« less
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through correlation properties and also presents their performance over additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using gray and inverse gray codes. In this paper, a n-bit gray code appended by its n-bit inverse gray code to construct the 2n-length binary user codes are discussed. Like Walsh codes, these binary user codes are available in sizes of power of two and additionally code sets of length 6 and their even multiples are also available. The simple construction technique and generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and gold codes are considered for comparison in this paper as these are popularly used for synchronous and asynchronous multi user communications respectively. In the current work the auto and cross correlation properties of the proposed codes are compared with those of Walsh codes and gold codes. Performance of the proposed binary user codes for both synchronous and asynchronous direct sequence CDMA communication over AWGN channel is also discussed in this paper. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
Digital Controller For Emergency Beacon
NASA Technical Reports Server (NTRS)
Ivancic, William D.
1990-01-01
Prototype digital controller intended for use in 406-MHz emergency beacon. Undergoing development according to international specifications, 406-MHz emergency beacon system includes satellites providing worldwide monitoring of beacons, with Doppler tracking to locate each beacon within 5 km. Controller turns beacon on and off and generates binary codes identifying source (e.g., ship, aircraft, person, or vehicle on land). Codes transmitted by phase modulation. Knowing code, monitor attempts to communicate with user, monitor uses code information to dispatch rescue team appropriate to type and locations of carrier.
Simulations of binary black hole mergers
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2017-01-01
Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potentials for future applications.
A programmable metasurface with dynamic polarization, scattering and focusing control
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-01-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potentials for future applications. PMID:27774997
A programmable metasurface with dynamic polarization, scattering and focusing control.
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-24
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potentials for future applications.
The Need for Vendor Source Code at NAS. Revised
NASA Technical Reports Server (NTRS)
Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The Numerical Aerodynamic Simulation (NAS) Facility has a long standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain a smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS has a need to maintain support capabilities when vendors are not able; diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel support is justified. This note provides an informal history of these activities at NAS, and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.
Uncertainty Analysis Principles and Methods
2007-09-01
error source . The Data Processor converts binary coded numbers to values, performs D/A curve fitting and applies any correction factors that may be...describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical... sources , gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
Massive black hole (MBH) binaries are found at the centers of most galaxies. MBH mergers trace galaxy mergers and are strong sources of gravitational waves. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities. causing them to crash well before the black hole:, in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This presentation shows how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. Focus is on the recent advances that that reveal these waveforms, and the potential for discoveries that arises when these sources are observed by LIGO and LISA.
Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.
Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong
2017-05-01
Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state-of-the-arts.
Learning Compact Binary Face Descriptor for Face Recognition.
Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie
2015-10-01
Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose non-binary-LDPC-coded modulation scheme because, compared to bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong
2018-07-01
We propose a binary image encryption method in joint transform correlator (JTC) by aid of the run-length encoding (RLE) and Quick Response (QR) code, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain the highly compressed data, and then the compressed binary image is further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into one QR code that will be finally encrypted in JTC. The proposed method successfully, for the first time to our best knowledge, encodes a binary image into a QR code with the identical size of it, and therefore may probe a new way for extending the application of QR code in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, append an additional security level on JTC. We present digital results that confirm our approach.
Method for coding low entrophy data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1995-01-01
A method of lossless data compression for efficient coding of an electronic signal of information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set, (s(sub 0), s(sub 1), s(sub 2), ..., s(sub N-1)) of N symbols with s(sub i) = i. The difference between binary digital data is mapped into symbol set S. Consecutive symbols in symbol set S are then paired into a new symbol set Gamma which defines a non-negative symbol set containing the symbols (gamma(sub m)) obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code which is defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows a direct coding and decoding of the n-bit positive integer digital data differences without the use of codebooks.
NASA Astrophysics Data System (ADS)
Valsecchi, Francesca
Binary star systems hosting black holes, neutron stars, and white dwarfs are unique laboratories for investigating both extreme physical conditions, and stellar and binary evolution. Black holes and neutron stars are observed in X-ray binaries, where mass accretion from a stellar companion renders them X-ray bright. Although instruments like Chandra have revolutionized the field of X-ray binaries, our theoretical understanding of their origin and formation lags behind. Progress can be made by unravelling the evolutionary history of observed systems. As part of my thesis work, I have developed an analysis method that uses detailed stellar models and all the observational constraints of a system to reconstruct its evolutionary path. This analysis models the orbital evolution from compact-object formation to the present time, the binary orbital dynamics due to explosive mass loss and a possible kick at core collapse, and the evolution from the progenitor's Zero Age Main Sequence to compact-object formation. This method led to a theoretical model for M33 X-7, one of the most massive X-ray binaries known and originally marked as an evolutionary challenge. Compact objects are also expected gravitational wave (GW) sources. In particular, double white dwarfs are both guaranteed GW sources and observed electromagnetically. Although known systems show evidence of tidal deformation and a successful GW astronomy requires realistic models of the sources, detached double white dwarfs are generally approximated to point masses. For the first time, I used realistic models to study tidally-driven periastron precession in eccentric binaries. I demonstrated that its imprint on the GW signal yields constrains on the components' masses and that the source would be misclassified if tides are neglected. Beyond this adiabatic precession, tidal dissipation creates a sink of orbital angular momentum. Its efficiency is strongest when tides are dynamic and excite the components' free oscillation modes. Accounting for this effect will determine whether our interpretation of current and future observations will constrain the sources' true physical properties. To investigate dynamic tides I have developed CAFein, a novel code that calculates forced non-adiabatic stellar oscillations using a highly stable and efficient numerical method.
Simulated single molecule microscopy with SMeagol.
Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan
2016-08-01
SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net johan.elf@icm.uu.se Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep
2017-05-01
Binary to octal and octal to binary code converter is a device that allows placing digital information from many inputs to many outputs. Any application of combinational logic circuit can be implemented by using external gates. In this paper, binary to octal and octal to binary code converter is proposed using electro-optic effect inside lithium-niobate based Mach-Zehnder interferometers (MZIs). The MZI structures have powerful capability to switching an optical input signal to a desired output port. The paper constitutes a mathematical description of the proposed device and thereafter simulation using MATLAB. The study is verified using beam propagation method (BPM).
Mal-Xtract: Hidden Code Extraction using Memory Analysis
NASA Astrophysics Data System (ADS)
Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah
2017-01-01
Software packer has been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature based anti malware software to detect malicious code inside the executable. A new method of written and rewritten memory section is introduced to to detect the exact end time of unpacking routine and extract original code from packed binary executable using Memory Analysis running in an software emulated environment. Our experiment results show that at least 97% of the original code from the various binary executable packed with different software packers could be extracted. The proposed method has also been successfully extracted hidden code from recent malware family samples.
Multi-Messenger Astronomy: White Dwarf Binaries, LISA and GAIA
NASA Astrophysics Data System (ADS)
Bueno, Michael; Breivik, Katelyn; Larson, Shane L.
2017-01-01
The discovery of gravitational waves has ushered in a new era in astronomy. The low-frequency band covered by the future LISA detector provides unprecedented opportunities for multi-messenger astronomy. With the Global Astrometric Interferometer for Astrophysics (GAIA) mission, we expect to discover about 1,000 eclipsing binary systems composed of a WD and a main sequence star - a sizeable increase from the approximately 34 currently known binaries of this type. In advance of the first GAIA data release and the launch of LISA within the next decade, we used the Binary Stellar Evolution (BSE) code simulate the evolution of White Dwarf Binaries (WDB) in a fixed galaxy population of about 196,000 sources. Our goal is to assess the detectability of a WDB by LISA and GAIA using the parameters from our population synthesis, we calculate GW strength h, and apparent GAIA magnitude G. We can then use a scale factor to make a prediction of how many multi- messenger sources we expect to be detectable by both LISA and GAIA in a galaxy the size of the Milky Way. We create binaries 10 times to ensure randomness in distance assignment and average our results. We then determined whether or not astronomical chirp is the difference between the total chirp and the GW chirp. With Astronomical chirp and simulations of mass transfer and tides, we can gather more information about the internal astrophysics of stars in ultra-compact binary systems.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, John
2007-01-01
The final merger of two black holes is expected to be the strongest gravitational wave source for ground-based interferometers such as LIGO, VIRGO, and GE0600, as well as the space-based interferometer LISA. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new simulations that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, data analysis, and astrophysics.
Binary Black Holes: Mergers, Dynamics, and Waveforms
NASA Astrophysics Data System (ADS)
Centrella, Joan
2007-04-01
The final merger of two black holes is expected to be the strongest gravitational wave source for ground-based interferometers such as LIGO, VIRGO, and GEO600, as well as the space-based interferometer LISA. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new simulations that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, data analysis, and astrophysics.
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize its throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either impressive reduction in computational complexity or striking improvements in coding gain depending on the constellation size. As the paper presents, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs, which are achieving the target BER performance and providing maximum possible throughput both over the entire lifetime of the OTN, better.
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing at the questions: (i) How well do models capture binary's late-inspiral where they lack a-priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning the compact binary codes to preserve semantic information given by labels. The overwhelming majority of these methods are similarity preserving approaches which approximate pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in hash learning process, which results in hash codes from different classes undistinguished, and therefore reduces the accuracy and robustness for the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates the hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. And then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval task.
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search, and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million of GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best method among tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of themore » accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).« less
DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information
ERIC Educational Resources Information Center
McCallister, Gary
2005-01-01
The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits the computational efficiency since the computation of similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be dispreserving. Besides, we propose a new searching strategy to find target features based on the cross-indexing in the binary SIFT space and original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on large-scale partial duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeGendre, M.
2012-04-01
We are seeking a code review of patches against DyninstAPI 8.0. DyninstAPI is an open source binary instrumentation library from the University of Wisconsin and University of Maryland. Our patches port DyninstAPI to the BlueGene/P and BlueGene/Q systems, as well as fix DyninstAPI bugs and implement minor new features in DyninstAPI.
Wolf-Rayet stars, black holes and the first detected gravitational wave source
NASA Astrophysics Data System (ADS)
Bogomazov, A. I.; Cherepashchuk, A. M.; Lipunov, V. M.; Tutukov, A. V.
2018-01-01
The recently discovered burst of gravitational waves GW150914 provides a good new chance to verify the current view on the evolution of close binary stars. Modern population synthesis codes help to study this evolution from two main sequence stars up to the formation of two final remnant degenerate dwarfs, neutron stars or black holes (Masevich and Tutukov, 1988). To study the evolution of the GW150914 predecessor we use the ;Scenario Machine; code presented by Lipunov et al. (1996). The scenario modeling conducted in this study allowed to describe the evolution of systems for which the final stage is a massive BH+BH merger. We find that the initial mass of the primary component can be 100÷140M⊙ and the initial separation of the components can be 50÷350R⊙. Our calculations show the plausibility of modern evolutionary scenarios for binary stars and the population synthesis modeling based on it.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised `2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2008-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields. We need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LIGO and LISA.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2008-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities. causing them to crash well before the black hole:, in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LIGO and LISA.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2008-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LIGO and LISA.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2009-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LIGO and LISA.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LIGO and LISA
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Recently this situation has changed dramatically, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simutation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LIGO and LISA.
Binary Black Holes, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Centrella, Joan
2006-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. This situation has changed dramatically in the past year, with a series of amazing breakthroughs. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LISA and LIGO.
NASA Astrophysics Data System (ADS)
Gao, Jian; Wang, Yongkang
2018-01-01
Structural properties of u-constacyclic codes over the ring F_p+u{F}_p are given, where p is an odd prime and u^2=1. Under a special Gray map from F_p+u{F}_p to F_p^2, some new non-binary quantum codes are obtained by this class of constacyclic codes.
RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations
NASA Astrophysics Data System (ADS)
Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy
RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, are used to accelerate convergence. RMG is a cross platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPU's per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and MacIntosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method in local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for absorbing boundary condition. A hybrid-style programming using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability to allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC) are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.[Figure not available: see fulltext.
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray level intensity image can be embedded into a binary Fresnel hologram by error diffusion method or bit truncation coding method. However, the fidelity of the retrieved watermark image from binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. By this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of retrieved intensity image with our proposed method is superior to the state-of-the-art work reported.
AirShow 1.0 CFD Software Users' Guide
NASA Technical Reports Server (NTRS)
Mohler, Stanley R., Jr.
2005-01-01
AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.
MULTI-CHANNEL ELECTRIC PULSE HEIGHT ANALYZER
Gallagher, J.D. et al.
1960-11-22
An apparatus is given for converting binary information into coded decimal form comprising means, in combination with a binary adder, a live memory and a source of bigit pulses, for synchronizing the bigit pulses and the adder output pulses; a source of digit pulses synchronized with every fourth bigit pulse; means for generating a conversion pulse in response to the time coincidence of the adder output pulse and a digit pulse: means having a delay equal to two bigit pulse periods coupling the adder output with the memory; means for promptly impressing said conversion pulse on the input of said memory: and means having a delay equal to one bigit pulse period for again impressing the conversion pulse on the input of the memory whereby a fourth bigit adder pulse results in the insertion into the memory of second, third and fourth bigits.
TANDEM: matching proteins with tandem mass spectra.
Craig, Robertson; Beavis, Ronald C
2004-06-12
Tandem mass spectra obtained from fragmenting peptide ions contain some peptide sequence specific information, but often there is not enough information to sequence the original peptide completely. Several proprietary software applications have been developed to attempt to match the spectra with a list of protein sequences that may contain the sequence of the peptide. The application TANDEM was written to provide the proteomics research community with a set of components that can be used to test new methods and algorithms for performing this type of sequence-to-data matching. The source code and binaries for this software are available at http://www.proteome.ca/opensource.html, for Windows, Linux and Macintosh OSX. The source code is made available under the Artistic License, from the authors.
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straight-forward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channel where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy[InlineEquation not available: see fulltext.]. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has been proved an attractive technique for fast nearest neighbor search over big data. Compared with the projection based hashing methods, prototype-based ones own stronger power to generate discriminative binary codes for the data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from the ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys the fast training linear to the number of the training data. We further devise a distributed framework for the large-scale learning, which can significantly speed up the training of ABQ in the distributed environment that has been widely deployed in many areas nowadays. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with up to 58.84% performance gains relatively.
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
orthAgogue: an agile tool for the rapid prediction of orthology relations.
Ekseth, Ole Kristian; Kuiper, Martin; Mironov, Vladimir
2014-03-01
The comparison of genes and gene products across species depends on high-quality tools to determine the relationships between gene or protein sequences from various species. Although some excellent applications are available and widely used, their performance leaves room for improvement. We developed orthAgogue: a multithreaded C application for high-speed estimation of homology relations in massive datasets, operated via a flexible and easy command-line interface. The orthAgogue software is distributed under the GNU license. The source code and binaries compiled for Linux are available at https://code.google.com/p/orthagogue/.
Digital plus analog output encoder
NASA Technical Reports Server (NTRS)
Hafle, R. S. (Inventor)
1976-01-01
The disclosed encoder is adapted to produce both digital and analog output signals corresponding to the angular position of a rotary shaft, or the position of any other movable member. The digital signals comprise a series of binary signals constituting a multidigit code word which defines the angular position of the shaft with a degree of resolution which depends upon the number of digits in the code word. The basic binary signals are produced by photocells actuated by a series of binary tracks on a code disc or member. The analog signals are in the form of a series of ramp signals which are related in length to the least significant bit of the digital code word. The analog signals are derived from sine and cosine tracks on the code disc.
PopCORN: Hunting down the differences between binary population synthesis codes
NASA Astrophysics Data System (ADS)
Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.
2014-02-01
Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org
On the existence of binary simplex codes. [using combinatorial construction
NASA Technical Reports Server (NTRS)
Taylor, H.
1977-01-01
Using a simple combinatorial construction, the existence of a binary simplex code with m codewords for all m is greater than or equal to 1 is proved. The problem of the shortest possible length is left open.
Robust GRMHD Evolutions of Merging Black-Hole Binaries in Magnetized Plasma
NASA Astrophysics Data System (ADS)
Kelly, Bernard; Etienne, Zachariah; Giacomazzo, Bruno; Baker, John
2016-03-01
Black-hole binary (BHB) mergers are expected to be powerful sources of gravitational radiation at stellar and galactic scales. A typical astrophysical environment for these mergers will involve magnetized plasmas accreting onto each hole; the strong-field gravitational dynamics of the merger may churn this plasma in ways that produce characteristic electromagnetic radiation visible to high-energy EM detectors on and above the Earth. Here we return to a cutting-edge GRMHD simulation of equal-mass BHBs in a uniform plasma, originally performed with the Whisky code. Our new tool is the recently released IllinoisGRMHD, a compact, highly-optimized ideal GRMHD code that meshes with the Einstein Toolkit. We establish consistency of IllinoisGRMHD results with the older Whisky results, and investigate the robustness of these results to changes in initial configuration of the BHB and the plasma magnetic field, and discuss the interpretation of the ``jet-like'' features seen in the Poynting flux post-merger. Work supported in part by NASA Grant 13-ATP13-0077.
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithm based on network coding (NC) can improve the network's throughput for wavelength-division multiplexing (WDM) networks, which however is far less than the network's maximal throughput in terms of theory. And the existing multicast stream algorithms do not give the information distribution pattern and routing in the meantime. In the paper, an improved genetic algorithm is brought forward to maximize the optical multicast throughput by NC and to determine the multicast stream distribution by hybrid chromosomes construction for multicast with single source and multiple destinations. The proposed hybrid chromosomes are constructed by the binary chromosomes and integer chromosomes, while the binary chromosomes represent optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decoding multicast streams. The simulation results showed that the proposed method is far superior over the typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.
POPCORN: A comparison of binary population synthesis codes
NASA Astrophysics Data System (ADS)
Claeys, J. S. W.; Toonen, S.; Mennekens, N.
2013-01-01
We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that when equalizing the assumptions the results are similar. The main differences arise from deviating physical input.
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
Crowd-Sourced Help with Emergent Knowledge for Optimized Formal Verification (CHEKOFV)
2016-03-01
up game Binary Fission, which was deployed during Phase Two of CHEKOFV. Xylem: The Code of Plants is a casual game for players using mobile ...there are the design and engineering challenges of building a game infrastructure that integrates verification technology with crowd participation...the backend processes that annotate the originating software. Allowing players to construct their own equations opened up the flexibility to receive
NASA Astrophysics Data System (ADS)
Kumar, Santosh
2017-07-01
Binary to Binary coded decimal (BCD) converter is a basic building block for BCD processing. The last few decades have witnessed exponential rise in applications of binary coded data processing in the field of optical computing thus there is an eventual increase in demand of acceptable hardware platform for the same. Keeping this as an approach a novel design exploiting the preeminent feature of Mach-Zehnder Interferometer (MZI) is presented in this paper. Here, an optical 4-bit binary to binary coded decimal (BCD) converter utilizing the electro-optic effect of lithium niobate based MZI has been demonstrated. It exhibits the property of switching the optical signal from one port to the other, when a certain appropriate voltage is applied to its electrodes. The projected scheme is implemented using the combinations of cascaded electro-optic (EO) switches. Theoretical description along with mathematical formulation of the device is provided and the operation is analyzed through finite difference-Beam propagation method (FD-BPM). The fabrication techniques to develop the device are also discussed.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates difference in performance for different decompositions of some codes. The third paper investigates the bit error probability code for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report is concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent image's characteristics for fast image retrieval. We propose supervised rotation invariant compact discriminative binary descriptors through combining convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer for representing latent concepts that dominate on class labels. A loss function is proposed to minimize the difference between binary descriptors that describe reference image and the rotated one. Compared with some other supervised methods, the proposed network doesn't have to require pair-wised inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
Optimal periodic binary codes of lengths 28 to 64
NASA Technical Reports Server (NTRS)
Tyler, S.; Keston, R.
1980-01-01
Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are where (1) the peak sidelobe in the autocorrelation function is small and (2) the sum of the squares of the sidelobes in the autocorrelation function is small.
INSPECTION MEANS FOR INDUCTION MOTORS
Williams, A.W.
1959-03-10
an appartus is descripbe for inspcting electric motors and more expecially an appartus for detecting falty end rings inn suqirrel cage inductio motors while the motor is running. In its broua aspects, the mer would around ce of reference tedtor means also itons in the phase ition of the An electronic circuit for conversion of excess-3 binary coded serial decimal numbers to straight binary coded serial decimal numbers is reported. The converter of the invention in its basic form generally coded pulse words of a type having an algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance preceding a y algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance. A switching martix is coupled to said input circuit and is internally connected to produce serial straight binary coded pulse groups indicative of the excess-3 coded input. A stepping circuit is coupled to the switching matrix and to a synchronous counter having a plurality of x decimal digit and plurality of y decimal digit indicator terminals. The stepping circuit steps the counter in synchornism with the serial binary pulse group output from the switching matrix to successively produce pulses at corresponding ones of the x and y decimal digit indicator terminals. The combinations of straight binary coded pulse groups and corresponding decimal digit indicator signals so produced comprise a basic output suitable for application to a variety of output apparatus.
FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.
Zou, Ding; Djordjevic, Ivan B
2015-06-01
In this paper, we propose a soft-decision-based FEC scheme that is the concatenation of a non-binary LDPC code and hard-decision FEC code. The proposed NB-LDPC + RS with overhead of 27.06% provides a superior NCG of 11.9dB at a post-FEC BER of 10-15. As a result, the proposed NB-LDPC codes represent the strong FEC candidate of soft-decision FEC for beyond 100Gb/s optical transmission systems.
Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent
2013-12-01
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
Irradiation-driven Mass Transfer Cycles in Compact Binaries
NASA Astrophysics Data System (ADS)
Büning, A.; Ritter, H.
2005-08-01
We elaborate on the analytical model of Ritter, Zhang, & Kolb (2000) which describes the basic physics of irradiation-driven mass transfer cycles in semi-detached compact binary systems. In particular, we take into account a contribution to the thermal relaxation of the donor star which is unrelated to irradiation and which was neglected in previous studies. We present results of simulations of the evolution of compact binaries undergoing mass transfer cycles, in particular also of systems with a nuclear evolved donor star. These computations have been carried out with a stellar evolution code which computes mass transfer implicitly and models irradiation of the donor star in a point source approximation, thereby allowing for much more realistic simulations than were hitherto possible. We find that low-mass X-ray binaries (LMXBs) and cataclysmic variables (CVs) with orbital periods ⪉ 6hr can undergo mass transfer cycles only for low angular momentum loss rates. CVs containing a giant donor or one near the terminal age main sequence are more stable than previously thought, but can possibly also undergo mass transfer cycles.
Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
NASA Astrophysics Data System (ADS)
Sison, Virgilio; Remillion, Monica
2017-10-01
Let {{{F}}}2 be the binary field and ℤ2 r the residue class ring of integers modulo 2 r , where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces {{{F}}}24 and {{{F}}}28, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4+ uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u 2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ2 r + uℤ2 r , where u 2 = 0, for any positive integer r, using the generalized Gray map from ℤ2 r to {{{F}}}2{2r-1}.
Binary black holes, gravitational waves, and numerical relativity
NASA Astrophysics Data System (ADS)
Centrella, Joan M.; Baker, John G.; Boggs, William D.; Kelly, Bernard J.; McWilliams, Sean T.; van Meter, James R.
2007-07-01
The final merger of comparable mass binary black holes produces an intense burst of gravitational radiation and is one of the strongest sources for both ground-based and space-based gravitational wave detectors. Since the merger occurs in the strong-field dynamical regime of general relativity, numerical relativity simulations of the full Einstein equations in 3-D are required to calculate the resulting gravitational dynamics and waveforms. While this problem has been pursued for more than 30 years, the numerical codes have long been plagued by various instabilities and, overall, progress was incremental. Recently, however, dramatic breakthrough have occurred, resulting in robust simulations of merging black holes. In this paper, we examine these developments and the exciting new results that are emerging.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high rate protograph based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high data rate transmission, the structure of codes are based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
Cognitive Code-Division Channelization
2011-04-01
22] G. N. Karystinos and D. A. Pados, “New bounds on the total squared correlation and optimum design of DS - CDMA binary signature sets,” IEEE Trans...Commun., vol. 51, pp. 48-51, Jan. 2003. [23] C. Ding, M. Golin, and T. Klve, “Meeting the Welch and Karystinos- Pados bounds on DS - CDMA binary...receiver pair coexisting with a primary code-division multiple-access ( CDMA ) system. Our objective is to find the optimum transmitting power and code
ProteoCloud: a full-featured open source proteomics cloud computing pipeline.
Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart
2013-08-02
We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. Copyright © 2012 Elsevier B.V. All rights reserved.
DeNovoGUI: An Open Source Graphical User Interface for de Novo Sequencing of Tandem Mass Spectra
2013-01-01
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com. PMID:24295440
DeNovoGUI: an open source graphical user interface for de novo sequencing of tandem mass spectra.
Muth, Thilo; Weilnböck, Lisa; Rapp, Erdmann; Huber, Christian G; Martens, Lennart; Vaudel, Marc; Barsnes, Harald
2014-02-07
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com .
Gravitational Waves from Black Hole Mergers
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
The final merger of two black holes is expected to be the strongest gravitational wave source for ground-based interferometers such as LIGO, VIRGO, and GEO600, as well as the space-based interferometer LISA. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new simulations that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, data analysis, and astrophysics.
Simulating Gravitational Wave Emission from Massive Black Hole Binaries
NASA Technical Reports Server (NTRS)
Centrella, Joan
2008-01-01
The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. In the past few years, this situation has changed dramatically, with a series of amazing breakthroughs. This talk will focus on the recent advances that are revealing these waveforms. highlighting their astrophysical consequences and the dramatic new potential for discovery that arises when merging black holes will be observed using gravitational waves.
Optical Variability Signatures from Massive Black Hole Binaries
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Frank, Koby Alexander; Lidz, Adam
2017-01-01
The hierarchical merging of dark matter halos and their associated galaxies should lead to a population of supermassive black hole binaries (MBHBs). We consider plausible optical variability signatures from MBHBs at sub-parsec separations and search for these using data from the Catalina Real-Time Transient Survey (CRTS). Specifically, we model the impact of relativistic Doppler beaming on the accretion disk emission from the less massive, secondary black hole. We explore whether this Doppler modulation may be separated from other sources of stochastic variability in the accretion flow around the MBHBs, which we describe as a damped random walk (DRW). In the simple case of a circular orbit, relativistic beaming leads to a series of broad peaks — located at multiples of the orbital frequency — in the fluctuation power spectrum. We extend our analysis to the case of elliptical orbits and discuss the effect of beaming on the flux power spectrum and auto-correlation function using simulations. We present a code to model an observed light curve as a stochastic DRW-type time series modulated by relativistic beaming and apply the code to CRTS data.
Improvements to the construction of binary black hole initial data
NASA Astrophysics Data System (ADS)
Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald P.; Boyle, Michael; Szilágyi, Béla
2015-12-01
Construction of binary black hole initial data is a prerequisite for numerical evolutions of binary black holes. This paper reports improvements to the binary black hole initial data solver in the spectral Einstein code, to allow robust construction of initial data for mass-ratio above 10:1, and for dimensionless black hole spins above 0.9, while improving efficiency for lower mass-ratios and spins. We implement a more flexible domain decomposition, adaptive mesh refinement and an updated method for choosing free parameters. We also introduce a new method to control and eliminate residual linear momentum in initial data for precessing systems, and demonstrate that it eliminates gravitational mode mixing during the evolution. Finally, the new code is applied to construct initial data for hyperbolic scattering and for binaries with very small separation.
Nonlinear, nonbinary cyclic group codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
New cyclic group codes of length 2(exp m) - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2 exp m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
Binary Black Hole Mergers, Gravitational Waves, and LISA
NASA Technical Reports Server (NTRS)
Centrella, Joan; Baker, J.; Boggs, W.; Kelly, B.; McWilliams, S.; vanMeter, J.
2008-01-01
The final merger of comparable mass binary black holes is expected to be the strongest source of gravitational waves for LISA. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. We will present the results of new simulations of black hole mergers with unequal masses and spins, focusing on the gravitational waves emitted and the accompanying astrophysical "kicks." The magnitude of these kicks has bearing on the production and growth of supermassive black holes during the epoch of structure formation, and on the retention of black holes in stellar clusters.
Black-Hole Binaries, Gravitational Waves, and Numerical Relativity
NASA Technical Reports Server (NTRS)
Kelly, Bernard J.; Centrella, Joan; Baker, John G.; Kelly, Bernard J.; vanMeter, James R.
2010-01-01
Understanding the predictions of general relativity for the dynamical interactions of two black holes has been a long-standing unsolved problem in theoretical physics. Black-hole mergers are monumental astrophysical events ' releasing tremendous amounts of energy in the form of gravitational radiation ' and are key sources for both ground- and spacebased gravitational wave detectors. The black-hole merger dynamics and the resulting gravitational waveforms can only he calculated through numerical simulations of Einstein's equations of general relativity. For many years, numerical relativists attempting to model these mergers encountered a host of problems, causing their codes to crash after just a fraction of a binary orbit cnuld be simulated. Recently ' however, a series of dramatic advances in numerical relativity has ' for the first time, allowed stable / robust black hole merger simulations. We chronicle this remarkable progress in the rapidly maturing field of numerical relativity, and the new understanding of black-hole binary dynamics that is emerging. We also discuss important applications of these fundamental physics results to astrophysics, to gravitationalwave astronomy, and in other areas.
BHDD: Primordial black hole binaries code
NASA Astrophysics Data System (ADS)
Kavanagh, Bradley J.; Gaggero, Daniele; Bertone, Gianfranco
2018-06-01
BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.
Understanding and Capturing People’s Mobile App Privacy Preferences
2013-10-28
The entire apps’ metadata takes up about 500MB of storage space when stored in a MySQL database and all the binary files take approximately 300GB of...functionality that can de- compile Dalvik bytecodes to Java source code faster than other de-compilers. Given the scale of the app analysis we planned on... java libraries, such as parser, sql connectors, etc Targeted Ads 137 admob, adwhirl, greystripe… Provided by mobile behavioral ads company to
Binary Arithmetic From Hariot (CA, 1600 A.D.) to the Computer Age.
ERIC Educational Resources Information Center
Glaser, Anton
This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Benzout, Barlow,…
Learning Rotation-Invariant Local Binary Descriptor.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
Constructing binary black hole initial data with high mass ratios and spins
NASA Astrophysics Data System (ADS)
Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald; Szilagyi, Bela; Simulating Extreme Spacetimes Collaboration
2015-04-01
Binary black hole systems have now been successfully modelled in full numerical relativity by many groups. In order to explore high-mass-ratio (larger than 1:10), high-spin systems (above 0.9 of the maximal BH spin), we revisit the initial-data problem for binary black holes. The initial-data solver in the Spectral Einstein Code (SpEC) was not able to solve for such initial data reliably and robustly. I will present recent improvements to this solver, among them adaptive mesh refinement and control of motion of the center of mass of the binary, and will discuss the much larger region of parameter space this code can now address.
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software, hardware, and leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and our scientific understanding of environmental processes that have occurred over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premiere tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert code to a modern widely accepted, open-source, high-performance computing (hpc) code; and (2) convert model input and output files to modern widely accepted, open-source, data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented, hpc code with dynamic semantics that has become one of the most popular open-source languages. While python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and shown good agreement and similar execution times while using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications and improvement upon execution times by incorporating an intelligent network change detection tool is currently underway, and preliminary results will be presented.
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
Golay sequences coded coherent optical OFDM for long-haul transmission
NASA Astrophysics Data System (ADS)
Qin, Cui; Ma, Xiangrong; Hua, Tao; Zhao, Jing; Yu, Huilong; Zhang, Jian
2017-09-01
We propose to use binary Golay sequences in coherent optical orthogonal frequency division multiplexing (CO-OFDM) to improve the long-haul transmission performance. The Golay sequences are generated by binary Reed-Muller codes, which have low peak-to-average power ratio and certain error correction capability. A low-complexity decoding algorithm for the Golay sequences is then proposed to recover the signal. Under same spectral efficiency, the QPSK modulated OFDM with binary Golay sequences coding with and without discrete Fourier transform (DFT) spreading (DFTS-QPSK-GOFDM and QPSK-GOFDM) are compared with the normal BPSK modulated OFDM with and without DFT spreading (DFTS-BPSK-OFDM and BPSK-OFDM) after long-haul transmission. At a 7% forward error correction code threshold (Q2 factor of 8.5 dB), it is shown that DFTS-QPSK-GOFDM outperforms DFTS-BPSK-OFDM by extending the transmission distance by 29% and 18%, in non-dispersion managed and dispersion managed links, respectively.
X1908+075: An X-Ray Binary with a 4.4 Day Period
NASA Astrophysics Data System (ADS)
Wen, Linqing; Remillard, Ronald A.; Bradt, Hale V.
2000-04-01
X1908+075 is an optically unidentified and highly absorbed X-ray source that appeared in early surveys such as Uhuru, OSO 7, Ariel 5, HEAO-1, and the EXOSAT Galactic Plane Survey. These surveys measured a source intensity in the range 2-12 mcrab at 2-10 keV, and the position was localized to ~0.5d. We use the Rossi X-Ray Timing Explorer (RXTE) All-Sky Monitor (ASM) to confirm our expectation that a particular Einstein/IPC detection (1E 1908.4+0730) provides the correct position for X1908+075. The analysis of the coded mask shadows from the ASM for the position of 1E 1908.4+0730 yields a persistent intensity ~8 mcrab (1.5-12 keV) over a 3 yr interval beginning in 1996 February. Furthermore, we detect a period of 4.400+/-0.001 days with a false-alarm probability less than 10-7. The folded light curve is roughly sinusoidal, with an amplitude that is 26% of the mean flux. The X-ray period may be attributed to the scattering and absorption of X-rays through a stellar wind combined with the orbital motion in a binary system. We suggest that X1908+075 is an X-ray binary with a high-mass companion star.
Composite hot subdwarf binaries - I. The spectroscopically confirmed sdB sample
NASA Astrophysics Data System (ADS)
Vos, Joris; Németh, Péter; Vučković, Maja; Østensen, Roy; Parsons, Steven
2018-01-01
Hot subdwarf-B (sdB) stars in long-period binaries are found to be on eccentric orbits, even though current binary-evolution theory predicts that these objects are circularized before the onset of Roche lobe overflow (RLOF). To increase our understanding of binary interaction processes during the RLOF phase, we started a long-term observing campaign to study wide sdB binaries. In this paper, we present a sample of composite binary sdBs, and the results of the spectral analysis of nine such systems. The grid search in stellar parameters (GSSP) code is used to derive atmospheric parameters for the cool companions. To cross-check our results and also to characterize the hot subdwarfs, we used the independent XTGRID code, which employs TLUSTY non-local thermodynamic equilibrium models to derive atmospheric parameters for the sdB component and PHOENIX synthetic spectra for the cool companions. The independent GSSP and XTGRID codes are found to show good agreement for three test systems that have atmospheric parameters available in the literature. Based on the rotational velocity of the companions, we make an estimate for the mass accreted during the RLOF phase and the minimum duration of that phase. We find that the mass transfer to the companion is minimal during the subdwarf formation.
NASA Astrophysics Data System (ADS)
Ma, Fanghui; Gao, Jian; Fu, Fang-Wei
2018-06-01
Let R={F}_q+v{F}_q+v2{F}_q be a finite non-chain ring, where q is an odd prime power and v^3=v. In this paper, we propose two methods of constructing quantum codes from (α +β v+γ v2)-constacyclic codes over R. The first one is obtained via the Gray map and the Calderbank-Shor-Steane construction from Euclidean dual-containing (α +β v+γ v2)-constacyclic codes over R. The second one is obtained via the Gray map and the Hermitian construction from Hermitian dual-containing (α +β v+γ v2)-constacyclic codes over R. As an application, some new non-binary quantum codes are obtained.
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide for the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space, which have only been explored cursorily in the past: the high mass-ratio regime as well as the comparable mass-ratio + high spin regime.s Using the SpEC code, six q = 7 simulations with aligned-spins and lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations
Motl, Patrick M.; Frank, Juhan; Staff, Jan; ...
2017-03-29
There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume "grid" code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code aremore » chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. Here, we also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.« less
Post-Newtonian Dynamical Modeling of Supermassive Black Holes in Galactic-scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rantala, Antti; Pihajoki, Pauli; Johansson, Peter H.
We present KETJU, a new extension of the widely used smoothed particle hydrodynamics simulation code GADGET-3. The key feature of the code is the inclusion of algorithmically regularized regions around every supermassive black hole (SMBH). This allows for simultaneously following global galactic-scale dynamical and astrophysical processes, while solving the dynamics of SMBHs, SMBH binaries, and surrounding stellar systems at subparsec scales. The KETJU code includes post-Newtonian terms in the equations of motions of the SMBHs, which enables a new SMBH merger criterion based on the gravitational wave coalescence timescale, pushing the merger separation of SMBHs down to ∼0.005 pc. Wemore » test the performance of our code by comparison to NBODY7 and rVINE. We set up dynamically stable multicomponent merger progenitor galaxies to study the SMBH binary evolution during galaxy mergers. In our simulation sample the SMBH binaries do not suffer from the final-parsec problem, which we attribute to the nonspherical shape of the merger remnants. For bulge-only models, the hardening rate decreases with increasing resolution, whereas for models that in addition include massive dark matter halos, the SMBH binary hardening rate becomes practically independent of the mass resolution of the stellar bulge. The SMBHs coalesce on average 200 Myr after the formation of the SMBH binary. However, small differences in the initial SMBH binary eccentricities can result in large differences in the SMBH coalescence times. Finally, we discuss the future prospects of KETJU, which allows for a straightforward inclusion of gas physics in the simulations.« less
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga \\emph{et al.} analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called \\emph{hard decision decoder} which, however, exploits \\emph{soft information} of the transition probabilities of discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga \\emph{et al.}, binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
M2Lite: An Open-source, Light-weight, Pluggable and Fast Proteome Discoverer MSF to mzIdentML Tool.
Aiyetan, Paul; Zhang, Bai; Chen, Lily; Zhang, Zhen; Zhang, Hui
2014-04-28
Proteome Discoverer is one of many tools used for protein database search and peptide to spectrum assignment in mass spectrometry-based proteomics. However, the inadequacy of conversion tools makes it challenging to compare and integrate its results to those of other analytical tools. Here we present M2Lite, an open-source, light-weight, easily pluggable and fast conversion tool. M2Lite converts proteome discoverer derived MSF files to the proteomics community defined standard - the mzIdentML file format. M2Lite's source code is available as open-source at https://bitbucket.org/paiyetan/m2lite/src and its compiled binaries and documentation can be freely downloaded at https://bitbucket.org/paiyetan/m2lite/downloads.
PyCorrFit-generic data evaluation for fluorescence correlation spectroscopy.
Müller, Paul; Schwille, Petra; Weidemann, Thomas
2014-09-01
We present a graphical user interface (PyCorrFit) for the fitting of theoretical model functions to experimental data obtained by fluorescence correlation spectroscopy (FCS). The program supports many data file formats and features a set of tools specialized in FCS data evaluation. The Python source code is freely available for download from the PyCorrFit web page at http://pycorrfit.craban.de. We offer binaries for Ubuntu Linux, Mac OS X and Microsoft Windows. © The Author 2014. Published by Oxford University Press.
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques of breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm; there is an access to the pairs of source and encoded images. Our numerical results show that for the accurate recovery by numerical brute-force attack, someone needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption DRPE with digital holography, we have proposed four simple criteria for guaranteed and accurate data recovery. These criteria can be applied, if the grayscale, binary (including QR-codes) or color images are used as a source.
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective to stabilize the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM is available on our web site. tamada@ims.u-tokyo.ac.jp.
Binary Neutron Stars with Arbitrary Spins in Numerical Relativity
NASA Astrophysics Data System (ADS)
Pfeiffer, Harald; Tacik, Nick; Foucart, Francois; Haas, Roland; Kaplan, Jeffrey; Muhlberger, Curran; Duez, Matt; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela
2015-04-01
We present a code to construct initial data for binary neutron star where the stars are rotating. Our code, based on the formalism developed by Tichy, allows for arbitrary rotation axes of the neutron stars and is able to achieve rotation rates near rotational breakup. We demonstrate that orbital eccentricity of the binary neutron stars can be controlled to ~ 0 . 1 % . Preliminary evolutions show that spin- and orbit-precession of Neutron stars is well described by post-Newtonian approximation. The neutron stars show quasi-normal mode oscillations at an amplitude which increases with the rotation rate of the stars.
Binary neutron stars with arbitrary spins in numerical relativity
NASA Astrophysics Data System (ADS)
Tacik, Nick; Foucart, Francois; Pfeiffer, Harald P.; Haas, Roland; Ossokine, Serguei; Kaplan, Jeff; Muhlberger, Curran; Duez, Matt D.; Kidder, Lawrence E.; Scheel, Mark A.; Szilágyi, Béla
2015-12-01
We present a code to construct initial data for binary neutron star systems in which the stars are rotating. Our code, based on a formalism developed by Tichy, allows for arbitrary rotation axes of the neutron stars and is able to achieve rotation rates near rotational breakup. We compute the neutron star angular momentum through quasilocal angular momentum integrals. When constructing irrotational binary neutron stars, we find a very small residual dimensionless spin of ˜2 ×10-4 . Evolutions of rotating neutron star binaries show that the magnitude of the stars' angular momentum is conserved, and that the spin and orbit precession of the stars is well described by post-Newtonian approximation. We demonstrate that orbital eccentricity of the binary neutron stars can be controlled to ˜0.1 % . The neutron stars show quasinormal mode oscillations at an amplitude which increases with the rotation rate of the stars.
A biclustering algorithm for extracting bit-patterns from binary datasets.
Rodriguez-Baena, Domingo S; Perez-Pulido, Antonio J; Aguilar-Ruiz, Jesus S
2011-10-01
Binary datasets represent a compact and simple way to store data about the relationships between a group of objects and their possible properties. In the last few years, different biclustering algorithms have been specially developed to be applied to binary datasets. Several approaches based on matrix factorization, suffix trees or divide-and-conquer techniques have been proposed to extract useful biclusters from binary data, and these approaches provide information about the distribution of patterns and intrinsic correlations. A novel approach to extracting biclusters from binary datasets, BiBit, is introduced here. The results obtained from different experiments with synthetic data reveal the excellent performance and the robustness of BiBit to density and size of input data. Also, BiBit is applied to a central nervous system embryonic tumor gene expression dataset to test the quality of the results. A novel gene expression preprocessing methodology, based on expression level layers, and the selective search performed by BiBit, based on a very fast bit-pattern processing technique, provide very satisfactory results in quality and computational cost. The power of biclustering in finding genes involved simultaneously in different cancer processes is also shown. Finally, a comparison with Bimax, one of the most cited binary biclustering algorithms, shows that BiBit is faster while providing essentially the same results. The source and binary codes, the datasets used in the experiments and the results can be found at: http://www.upo.es/eps/bigs/BiBit.html dsrodbae@upo.es Supplementary data are available at Bioinformatics online.
On the development and applications of automated searches for eclipsing binary stars
NASA Astrophysics Data System (ADS)
Devor, Jonathan
Eclipsing binary star systems provide the most accurate method of measuring both the masses and radii of stars. Moreover, they enable testing tidal synchronization and circularization theories, as well as constraining models of stellar structure and dynamics. With the recent availability of large-scale multi-epoch photometric datasets, we are able to study eclipsing binary stars en masse. In this thesis, we analyzed 185,445 light curves from ten TrES fields, and 218,699 light curves from the OGLE II bulge fields. In order to manage such large quantities of data, we developed a pipeline with which we systematically identified eclipsing binaries, solved for their geometric orientations, and then found their components' absolute properties. Following this analysis, we assembled catalogs of eclipsing binaries with their models, computed statistical distributions of their properties, and located rare cases for further follow-up. Of particular importance are low-mass eclipsing binaries, which are rare, yet critical for resolving the ongoing mass-radius discrepancy between theoretical models and observations. To this end, we have discovered over a dozen new low-mass eclipsing binary candidates, and spectroscopically confirmed the masses of five of them. One of these confirmed candidates, T-Lyr1-17236, is especially interesting because of its uniquely long orbital period. We examined T-Lyr1-17236 in detail and found that it is consistent with the magnetic disruption hypothesis for explaining the observed mass-radius discrepancy. Both the source code of our pipeline and the complete list of our candidates are freely available.
Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.
ERIC Educational Resources Information Center
Pradels, Jean Louis
Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
ERIC Educational Resources Information Center
Haro, Elizabeth K.; Haro, Luis S.
2014-01-01
The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…
Modeling mergers of known galactic systems of binary neutron stars
NASA Astrophysics Data System (ADS)
Feo, Alessandra; De Pietri, Roberto; Maione, Francesco; Löffler, Frank
2017-02-01
We present a study of the merger of six different known galactic systems of binary neutron stars (BNS) of unequal mass with a mass ratio between 0.75 and 0.99. Specifically, these systems are J1756-2251, J0737-3039A, J1906 + 0746, B1534 + 12, J0453 + 1559 and B1913 + 16. We follow the dynamics of the merger from the late stage of the inspiral process up to ∼20ms after the system has merged, either to form a hyper-massive neutron star (NS) or a rotating black hole (BH), using a semi-realistic equation of state (EOS), namely the seven-segment piece-wise polytropic SLy with a thermal component. For the most extreme of these systems (q = 0.75, J0453 + 1559), we also investigate the effects of different EOSs: APR4, H4, and MS1. Our numerical simulations are performed using only publicly available open source code such as, the Einstein toolkit code deployed for the dynamical evolution and the LORENE code for the generation of the initial models. We show results on the gravitational wave signals, spectrogram and frequencies of the BNS after the merger and the BH properties in the two cases in which the system collapses within the simulated time.
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code ( companion) that identifies bound systems of particles in O(NlogN) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute force binary search methods scale as O(N) while full hierarchy searches can be as expensive as O(N), making analysis highly inefficient for multiple data sets with N≳10. A simple test case is provided to illustrate the method. Timing tests demonstrating O(NlogN) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2013-02-01
Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
Characteristic Evolution and Matching
NASA Astrophysics Data System (ADS)
Winicour, Jeffrey
2012-01-01
I review the development of numerical evolution codes for general relativity based upon the characteristic initial-value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D-axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black-hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black-hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.
A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motl, Patrick M.; Frank, Juhan; Clayton, Geoffrey C.
2017-04-01
There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume “grid” code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to matchmore » as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.« less
Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity
NASA Astrophysics Data System (ADS)
Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration
2016-03-01
Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems is not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.
Extending the capability of GYRE to calculate tidally forced stellar oscillations
NASA Astrophysics Data System (ADS)
Guo, Zhao; Gies, Douglas R.
2016-01-01
Tidally forced oscillations have been observed in many eccentric binary systems, such as KOI-54 and many other 'heart beat stars'. The tidal response of the star can be calculated by solving a revised stellar oscillations equations.The open-source stellar oscillation code GYRE (Townsend & Teitler 2013) can be used to solve the free stellar oscillation equations in both adiabatic and non-adiabatic cases. It uses a novel matrix exponential method which avoids many difficulties of the classical shooting and relaxation method. The new version also includes the effect of rotation in traditional approximation.After showing the code flow of GYRE, we revise its subroutines and extend its capability to calculate tidallyforced oscillations in both adiabatic and non-adiabatic cases following the procedure in the CAFein code (Valsecchi et al. 2013). In the end, we compare the tidal eigenfunctions with those calculated from CAFein.More details of the revision and a simple version of the code in MATLAB can be obtained upon request.
NASA Astrophysics Data System (ADS)
Kaplan, Jeffrey Daniel
2014-01-01
Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recounts work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state effect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised in Chapter 8 in greater mathematical detail.
MODELING MULTI-WAVELENGTH STELLAR ASTROMETRY. I. SIM LITE OBSERVATIONS OF INTERACTING BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coughlin, Jeffrey L.; Harrison, Thomas E.; Gelino, Dawn M.
Interacting binaries (IBs) consist of a secondary star that fills or is very close to filling its Roche lobe, resulting in accretion onto the primary star, which is often, but not always, a compact object. In many cases, the primary star, secondary star, and the accretion disk can all be significant sources of luminosity. SIM Lite will only measure the photocenter of an astrometric target, and thus determining the true astrometric orbits of such systems will be difficult. We have modified the Eclipsing Light Curve code to allow us to model the flux-weighted reflex motions of IBs, in a codemore » we call REFLUX. This code gives us sufficient flexibility to investigate nearly every configuration of IB. We find that SIM Lite will be able to determine astrometric orbits for all sufficiently bright IBs where the primary or secondary star dominates the luminosity. For systems where there are multiple components that comprise the spectrum in the optical bandpass accessible to SIM Lite, we find it is possible to obtain absolute masses for both components, although multi-wavelength photometry will be required to disentangle the multiple components. In all cases, SIM Lite will at least yield accurate inclinations and provide valuable information that will allow us to begin to understand the complex evolution of mass-transferring binaries. It is critical that SIM Lite maintains a multi-wavelength capability to allow for the proper deconvolution of the astrometric orbits in multi-component systems.« less
PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prša, A.; Conroy, K. E.; Horvat, M.
The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of the missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, insofar buried in noise, started showing up routinely in the data, but weremore » not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.« less
Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1999-01-01
An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.
NASA Astrophysics Data System (ADS)
Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.
2017-08-01
We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition are tested and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.
A flexible surface wetness sensor using a RFID technique.
Yang, Cheng-Hao; Chien, Jui-Hung; Wang, Bo-Yan; Chen, Ping-Hei; Lee, Da-Sheng
2008-02-01
This paper presents a flexible wetness sensor whose detection signal, converted to a binary code, is transmitted through radio-frequency (RF) waves from a radio-frequency identification integrated circuit (RFID IC) to a remote reader. The flexible sensor, with a fixed operating frequency of 13.56 MHz, contains a RFID IC and a sensor circuit that is fabricated on a flexible printed circuit board (FPCB) using a Micro-Electro-Mechanical-System (MEMS) process. The sensor circuit contains a comb-shaped sensing area surrounded by an octagonal antenna with a width of 2.7 cm. The binary code transmitted from the RFIC to the reader changes if the surface conditions of the detector surface changes from dry to wet. This variation in the binary code can be observed on a digital oscilloscope connected to the reader.
Einstein observations of selected close binaries and shell stars
NASA Technical Reports Server (NTRS)
Guinan, E. F.; Koch, R. H.; Plavec, M. J.
1984-01-01
Several evolved close binaries and shell stars were observed with the IPC aboard the HEAO 2 Einstein Observatory. No eclipsing target was detected, and only two of the shell binaries were detected. It is argued that there is no substantial difference in L(X) for eclipsing and non-eclipsing binaries. The close binary and shell star CX Dra was detected as a moderately strong source, and the best interpretation is that the X-ray flux arises primarily from the corona of the cool member of the binary at about the level of Algol-like or RS CVn-type sources. The residual visible-band light curve of this binary has been modeled so as to conform as well as possible with this interpretation. HD 51480 was detected as a weak source. Substantial background information from IUE and ground scanner measurements are given for this binary. The positions and flux values of several accidentally detected sources are given.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code which is based on binary dichotomic algorithms (BDAs). A BDA-generated model is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Centrella, John
2009-01-01
The final merger of two black holes is expected to be the strongest gravitational wave source for ground-based interferometers such as LIGO, VIRGO, and GEO600, as well as the space-based LISA. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. And, when the black holes merge in the presence of gas and magnetic fields, various types of electromagnetic signals may also be produced. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new simulations that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, testing general relativity, and astrophysics.
NASA Astrophysics Data System (ADS)
Centrella, Joan
2009-05-01
The final merger of two black holes is expected to be the strongest gravitational wave source for ground-based interferometers such as LIGO, VIRGO, and GEO600, as well as the space-based LISA. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. And, when the black holes merge in the presence of gas and magnetic fields, various types of electromagnetic signals may also be produced. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new simulations that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, testing general relativity, and astrophysics.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained and appropriate comparisons against a reference system designed for no channel error were rendered.
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1997-01-01
Melt convection, along with species diffusion and segregation on the solidification interface are the primary factors responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore influence of microgravity level, double-diffusion and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of binary alloy is based on the general equations of heat and mass transfer in two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numericial problem. A Rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal, solutal convection and a double diffusion source of mass transport.
A Multiple-star Combined Solution Program - Application to the Population II Binary μ Cas
NASA Astrophysics Data System (ADS)
Gudehus, D. H.
2001-05-01
A multiple-star combined-solution computer program which can simultaneously fit astrometric, speckle, and spectroscopic data, and solve for the orbital parameters, parallax, proper motion, and masses has been written and is now publicly available. Some features of the program are the ability to scale the weights at run time, hold selected parameters constant, handle up to five spectroscopic subcomponents for the primary and the secondary each, account for the light travel time across the system, account for apsidal motion, plot the results, and write the residuals in position to a standard file for further analysis. The spectroscopic subcomponent data can be represented by reflex velocities and/or by independent measurements. A companion editing program which can manage the data files is included in the package. The program has been applied to the Population II binary μ Cas to derive improved masses and an estimate of the primordial helium abundance. The source code, executables, sample data files, and documentation for OpenVMS and Unix, including Linux, are available at http://www.chara.gsu.edu/\\rlap\\ \\ gudehus/binary.html.
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
NASA Astrophysics Data System (ADS)
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer.To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantites (density, temperature, opacity) in any given point inside the mesh. The radiaitive transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.
NASA Astrophysics Data System (ADS)
Eldridge, John J.; Stanway, Elizabeth R.
2012-01-01
Young, massive stars dominate the rest-frame ultraviolet (UV) spectra of star-forming galaxies. At high redshifts (z > 2), these rest-frame UV features are shifted into the observed-frame optical and a combination of gravitational lensing, deep spectroscopy and spectral stacking analysis allows the stellar population characteristics of these sources to be investigated. We use our stellar population synthesis code Binary Population and Spectral Synthesis (BPASS) to fit two strong rest-frame UV spectral features in published Lyman-break galaxy spectra, taking into account the effects of binary evolution on the stellar spectrum. In particular, we consider the effects of quasi-homogeneous evolution (arising from the rotational mixing of rapidly rotating stars), metallicity and the relative abundance of carbon and oxygen on the observed strengths of He IIλ1640 Å and C IVλ1548, 1551 Å spectral lines. We find that Lyman-break galaxy spectra at z ˜ 2-3 are best fitted with moderately sub-solar metallicities, and with a depleted carbon-to-oxygen ratio. We also find that the spectra of the lowest metallicity sources are best fitted with model spectra in which the He II emission line is boosted by the inclusion of the effect of massive stars being spun-up during binary mass transfer so these rapidly rotating stars experience quasi-homogeneous evolution.
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for"fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requestedmore » an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.« less
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity check code (NB-LDPC) scheme, which can estimate the time-varying noise variance by using prior information of watermark symbols, to improve the performance of NB-LDPC codes. And compared with the prior-art counterpart, the watermark scheme can bring about 0.25 dB improvement in net coding gain (NCG) at bit error rate (BER) of 1e-6 and 36.8-81% reduction of the iteration numbers. Obviously, the proposed scheme shows great potential in terms of error correction performance and decoding efficiency.
Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin
2009-01-01
Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (11), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
Finger Vein Recognition Based on Local Directional Code
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-01-01
Finger vein patterns are considered as one of the most promising biometric authentication methods for its security and convenience. Most of the current available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods are proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Webber Local Descriptor (WLD), this paper represents a new direction based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
Finger vein recognition based on local directional code.
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-11-05
Finger vein patterns are considered as one of the most promising biometric authentication methods for its security and convenience. Most of the current available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods are proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Webber Local Descriptor (WLD), this paper represents a new direction based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.
A m-ary linear feedback shift register with binary logic
NASA Technical Reports Server (NTRS)
Perlman, M. (Inventor)
1973-01-01
A family of m-ary linear feedback shift registers with binary logic is disclosed. Each m-ary linear feedback shift register with binary logic generates a binary representation of a nonbinary recurring sequence, producible with a m-ary linear feedback shift register without binary logic in which m is greater than 2. The state table of a m-ary linear feedback shift register without binary logic, utilizing sum modulo m feedback, is first tubulated for a given initial state. The entries in the state table are coded in binary and the binary entries are used to set the initial states of the stages of a plurality of binary shift registers. A single feedback logic unit is employed which provides a separate feedback binary digit to each binary register as a function of the states of corresponding stages of the binary registers.
DLRS: gene tree evolution in light of a species tree.
Sjöstrand, Joel; Sennblad, Bengt; Arvestad, Lars; Lagergren, Jens
2012-11-15
PrIME-DLRS (or colloquially: 'Delirious') is a phylogenetic software tool to simultaneously infer and reconcile a gene tree given a species tree. It accounts for duplication and loss events, a relaxed molecular clock and is intended for the study of homologous gene families, for example in a comparative genomics setting involving multiple species. PrIME-DLRS uses a Bayesian MCMC framework, where the input is a known species tree with divergence times and a multiple sequence alignment, and the output is a posterior distribution over gene trees and model parameters. PrIME-DLRS is available for Java SE 6+ under the New BSD License, and JAR files and source code can be downloaded from http://code.google.com/p/jprime/. There is also a slightly older C++ version available as a binary package for Ubuntu, with download instructions at http://prime.sbc.su.se. The C++ source code is available upon request. joel.sjostrand@scilifelab.se or jens.lagergren@scilifelab.se. PrIME-DLRS is based on a sound probabilistic model (Åkerborg et al., 2009) and has been thoroughly validated on synthetic and biological datasets (Supplementary Material online).
The National Transport Code Collaboration Module Library
NASA Astrophysics Data System (ADS)
Kritz, A. H.; Bateman, G.; Kinsey, J.; Pankin, A.; Onjun, T.; Redd, A.; McCune, D.; Ludescher, C.; Pletzer, A.; Andre, R.; Zakharov, L.; Lodestro, L.; Pearlstein, L. D.; Jong, R.; Houlberg, W.; Strand, P.; Wiley, J.; Valanju, P.; John, H. St.; Waltz, R.; Mandrekas, J.; Mau, T. K.; Carlsson, J.; Braams, B.
2004-12-01
This paper reports on the progress in developing a library of code modules under the auspices of the National Transport Code Collaboration (NTCC). Code modules are high quality, fully documented software packages with a clearly defined interface. The modules provide a variety of functions, such as implementing numerical physics models; performing ancillary functions such as I/O or graphics; or providing tools for dealing with common issues in scientific programming such as portability of Fortran codes. Researchers in the plasma community submit code modules, and a review procedure is followed to insure adherence to programming and documentation standards. The review process is designed to provide added confidence with regard to the use of the modules and to allow users and independent reviews to validate the claims of the modules' authors. All modules include source code; clear instructions for compilation of binaries on a variety of target architectures; and test cases with well-documented input and output. All the NTCC modules and ancillary information, such as current standards and documentation, are available from the NTCC Module Library Website http://w3.pppl.gov/NTCC. The goal of the project is to develop a resource of value to builders of integrated modeling codes and to plasma physics researchers generally. Currently, there are more than 40 modules in the module library.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
C2x: A tool for visualisation and input preparation for CASTEP and other electronic structure codes
NASA Astrophysics Data System (ADS)
Rutter, M. J.
2018-04-01
The c2x code fills two distinct roles. Its first role is in acting as a converter between the binary format .check files from the widely-used CASTEP [1] electronic structure code and various visualisation programs. Its second role is to manipulate and analyse the input and output files from a variety of electronic structure codes, including CASTEP, ONETEP and VASP, as well as the widely-used 'Gaussian cube' file format. Analysis includes symmetry analysis, and manipulation arbitrary cell transformations. It continues to be under development, with growing functionality, and is written in a form which would make it easy to extend it to working directly with files from other electronic structure codes. Data which c2x is capable of extracting from CASTEP's binary checkpoint files include charge densities, spin densities, wavefunctions, relaxed atomic positions, forces, the Fermi level, the total energy, and symmetry operations. It can recreate .cell input files from checkpoint files. Volumetric data can be output in formats useable by many common visualisation programs, and c2x will itself calculate integrals, expand data into supercells, and interpolate data via combinations of Fourier and trilinear interpolation. It can extract data along arbitrary lines (such as lines between atoms) as 1D output. C2x is able to convert between several common formats for describing molecules and crystals, including the .cell format of CASTEP. It can construct supercells, reduce cells to their primitive form, and add specified k-point meshes. It uses the spglib library [2] to report symmetry information, which it can add to .cell files. C2x is a command-line utility, so is readily included in scripts. It is available under the GPL and can be obtained from http://www.c2x.org.uk. It is believed to be the only open-source code which can read CASTEP's .check files, so it will have utility in other projects.
2012-03-01
advanced antenna systems AMC adaptive modulation and coding AWGN additive white Gaussian noise BPSK binary phase shift keying BS base station BTC ...QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding ( BTC ), zero-terminating
General Relativistic Simulations of Magnetized Plasmas Around Merging Supermassive Black Holes
NASA Technical Reports Server (NTRS)
Giacomazzo, Bruno; Baker, John G.; Miller, M. Coleman; Reynolds, Christopher S.; van Meter, James R.
2012-01-01
Coalescing supermassive black hole binaries are produced by the mergers of galaxies and are the most powerful sources of gravitational waves accessible to space-based gravitational observatories. Some such mergers may occur in the presence of matter and magnetic fields and hence generate an electromagnetic counterpart. In this paper we present the first general relativistic simulations of magnetized plasma around merging supermassive black holes using the general relativistic magnetohydrodynamic code Whisky. By considering different magnetic field strengths, going from non-magnetically dominated to magnetically dominated regimes, we explore how magnetic fields affect the dynamics of the plasma and the possible emission of electromagnetic signals. In particular we observe, total amplification of the magnetic field of approx 2 orders of magnitude which is driven by the accretion onto the binary and that leads to stronger electromagnetic signals than in the force-free regime where such amplifications are not possible.
Studying Tidal Effects In Planetary Systems With Posidonius. A N-Body Simulator Written In Rust.
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, Sergi; Bolmont, Emeline
2017-10-01
Planetary systems with several planets in compact orbital configurations such as TRAPPIST-1 are surely affected by tidal effects. Its study provides us with important insight about its evolution. We developed a second generation of a N-body code based on the tidal model used in Mercury-T, re-implementing and improving its functionalities using Rust as programming language (including a Python interface for easy use) and the WHFAST integrator. The new open source code ensures memory safety, reproducibility of numerical N-body experiments, it improves the spin integration compared to Mercury-T and allows to take into account a new prescription for the dissipation of tidal inertial waves in the convective envelope of stars. Posidonius is also suitable for binary system simulations with evolving stars.
JADAMILU: a software code for computing selected eigenvalues of large sparse symmetric matrices
NASA Astrophysics Data System (ADS)
Bollhöfer, Matthias; Notay, Yvan
2007-12-01
A new software code for computing selected eigenvalues and associated eigenvectors of a real symmetric matrix is described. The eigenvalues are either the smallest or those closest to some specified target, which may be in the interior of the spectrum. The underlying algorithm combines the Jacobi-Davidson method with efficient multilevel incomplete LU (ILU) preconditioning. Key features are modest memory requirements and robust convergence to accurate solutions. Parameters needed for incomplete LU preconditioning are automatically computed and may be updated at run time depending on the convergence pattern. The software is easy to use by non-experts and its top level routines are written in FORTRAN 77. Its potentialities are demonstrated on a few applications taken from computational physics. Program summaryProgram title: JADAMILU Catalogue identifier: ADZT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 101 359 No. of bytes in distributed program, including test data, etc.: 7 493 144 Distribution format: tar.gz Programming language: Fortran 77 Computer: Intel or AMD with g77 and pgf; Intel EM64T or Itanium with ifort; AMD Opteron with g77, pgf and ifort; Power (IBM) with xlf90. Operating system: Linux, AIX RAM: problem dependent Word size: real:8; integer: 4 or 8, according to user's choice Classification: 4.8 Nature of problem: Any physical problem requiring the computation of a few eigenvalues of a symmetric matrix. Solution method: Jacobi-Davidson combined with multilevel ILU preconditioning. Additional comments: We supply binaries rather than source code because JADAMILU uses the following external packages: MC64. This software is copyrighted software and not freely available. COPYRIGHT (c) 1999 Council for the Central Laboratory of the Research Councils. AMD. Copyright (c) 2004-2006 by Timothy A. Davis, Patrick R. Amestoy, and Iain S. Duff. Source code is distributed by the authors under the GNU LGPL licence. BLAS. The reference BLAS is a freely-available software package. It is available from netlib via anonymous ftp and the World Wide Web. LAPACK. The complete LAPACK package or individual routines from LAPACK are freely available on netlib and can be obtained via the World Wide Web or anonymous ftp. For maximal benefit to the community, we added the sources we are proprietary of to the tar.gz file submitted for inclusion in the CPC library. However, as explained in the README file, users willing to compile the code instead of using binaries should first obtain the sources for the external packages mentioned above (email and/or web addresses are provided). Running time: Problem dependent; the test examples provided with the code only take a few seconds to run; timing results for large scale problems are given in Section 5.
Visibility of Active Galactic Nuclei in the Illustris Simulation
NASA Astrophysics Data System (ADS)
Hutchinson-Smith, Tenley; Kelley, Luke; Moreno, Jorge; Hernquist, Lars; Illustris Collaboration
2018-01-01
Active galactic nuclei (AGN) are the very bright, luminous regions surrounding supermassive black holes (SMBH) located at the centers of galaxies. Supermassive black holes are the source of AGN feedback, which occurs once the SMBH reaches a certain critical mass. Almost all large galaxies contain a SMBH, but SMBH binaries are extremely rare. Finding these binary systems are important because it can be a source of gravitational waves if the two SMBH collide. In order to study supermassive black holes, astronomers will often rely on the AGN’s light in order to locate them, but this can be difficult due to the extinction of light caused by the dust and gas surrounding the AGN. My research project focuses on determining the fraction of light we can observe from galactic centers using the Illustris simulation, one of the most advanced cosmological simulations of the universe which was created using a hydrodynamic code and consists of a moving mesh. Measuring the fraction of light observable from galactic centers will help us know what fraction of the time we can observe dual and binary AGN in different galaxies, which would also imply a binary SMBH system. In order to find how much light is being blocked or scattered by the gas and dust surrounding the AGN, we calculated the density of the gas and dust along the lines of sight. I present results including the density of gas along different lines of sight and how it correlates with the image of the galaxy. Future steps include taking an average of the column densities for all the galaxies in Illustris and studying them as a function of galaxy type (before merger, during merger, and post-merger), which will give us information on how this can also affect the AGN luminosity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wen-Cong; Podsiadlowski, Philipp, E-mail: chenwc@pku.edu.cn
2016-10-20
It is generally believed that ultracompact X-ray binaries (UCXBs) evolved from binaries consisting of a neutron star accreting from a low-mass white dwarf (WD) or helium star where mass transfer is driven by gravitational radiation. However, the standard WD evolutionary channel cannot produce the relatively long-period (40–60 minutes) UCXBs with a high time-averaged mass-transfer rate. In this work, we explore an alternative evolutionary route toward UCXBs, where the companions evolve from intermediate-mass Ap/Bp stars with an anomalously strong magnetic field (100–10,000 G). Including the magnetic braking caused by the coupling between the magnetic field and an irradiation-driven wind induced bymore » the X-ray flux from the accreting component, we show that intermediate-mass X-ray binaries (IMXBs) can evolve into UCXBs. Using the MESA code, we have calculated evolutionary sequences for a large number of IMXBs. The simulated results indicate that, for a small wind-driving efficiency f = 10{sup −5}, the anomalous magnetic braking can drive IMXBs to an ultra-short period of 11 minutes. Comparing our simulated results with the observed parameters of 15 identified UCXBs, the anomalous magnetic braking evolutionary channel can account for the formation of seven and eight sources with f = 10{sup −3}, and 10{sup −5}, respectively. In particular, a relatively large value of f can fit three of the long-period, persistent sources with a high mass-transfer rate. Though the proportion of Ap/Bp stars in intermediate-mass stars is only 5%, the lifetime of the UCXB phase is ≳2 Gyr, producing a relatively high number of observable systems, making this an alternative evolutionary channel for the formation of UCXBs.« less
Compact binary hashing for music retrieval
NASA Astrophysics Data System (ADS)
Seo, Jin S.
2014-03-01
With the huge volume of music clips available for protection, browsing, and indexing, there is an increased attention to retrieve the information contents of the music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing is increased, so the storage cost in retrieval systems is becoming a serious problem. This paper deals with the storage problem by extending the supervector concept with the binary hashing. We utilize the similarity-preserving binary embedding in generating a hash code from the supervector of each music clip. Especially we compare the performance of the various binary hashing methods for music retrieval tasks on the widely-used genre dataset and the in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves the retrieval performance.
A new technique for calculations of binary stellar evolution, with application to magnetic braking
NASA Technical Reports Server (NTRS)
Rappaport, S.; Joss, P. C.; Verbunt, F.
1983-01-01
The development of appropriate computer programs has made it possible to conduct studies of stellar evolution which are more detailed and accurate than the investigations previously feasible. However, the use of such programs can also entail some serious drawbacks which are related to the time and expense required for the work. One approach for overcoming these drawbacks involves the employment of simplified stellar evolution codes which incorporate the essential physics of the problem of interest without attempting either great generality or maximal accuracy. Rappaport et al. (1982) have developed a simplified code to study the evolution of close binary stellar systems composed of a collapsed object and a low-mass secondary. The present investigation is concerned with a more general, but still simplified, technique for calculating the evolution of close binary systems with collapsed binaries and mass-losing secondaries.
Be discs in coplanar circular binaries: Phase-locked variations of emission lines
NASA Astrophysics Data System (ADS)
Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo
2018-01-01
In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.
Gene-specific cell labeling using MiMIC transposons
Gnerer, Joshua P.; Venken, Koen J. T.; Dierick, Herman A.
2015-01-01
Binary expression systems such as GAL4/UAS, LexA/LexAop and QF/QUAS have greatly enhanced the power of Drosophila as a model organism by allowing spatio-temporal manipulation of gene function as well as cell and neural circuit function. Tissue-specific expression of these heterologous transcription factors relies on random transposon integration near enhancers or promoters that drive the binary transcription factor embedded in the transposon. Alternatively, gene-specific promoter elements are directly fused to the binary factor within the transposon followed by random or site-specific integration. However, such insertions do not consistently recapitulate endogenous expression. We used Minos-Mediated Integration Cassette (MiMIC) transposons to convert host loci into reliable gene-specific binary effectors. MiMIC transposons allow recombinase-mediated cassette exchange to modify the transposon content. We developed novel exchange cassettes to convert coding intronic MiMIC insertions into gene-specific binary factor protein-traps. In addition, we expanded the set of binary factor exchange cassettes available for non-coding intronic MiMIC insertions. We show that binary factor conversions of different insertions in the same locus have indistinguishable expression patterns, suggesting that they reliably reflect endogenous gene expression. We show the efficacy and broad applicability of these new tools by dissecting the cellular expression patterns of the Drosophila serotonin receptor gene family. PMID:25712101
Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.
Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting
2018-02-12
Recently released large-scale neuron morphological data has greatly facilitated the research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, where the 3D neurons are first projected into binary images and then learned features using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Considering the exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which can help users take a deep exploration of neuron morphologies in an interactive and immersive manner.
The atmospheric structures of the companion stars of eclipsing binary x ray sources
NASA Technical Reports Server (NTRS)
Clark, George W.
1992-01-01
This investigation was aimed at determining structural features of the atmospheres of the massive early-type companion stars of eclipse x-ray pulsars by measurement of the attenuation of the x-ray spectrum during eclipse transitions and in deep eclipse. Several extended visits were made to ISAS in Japan by G. Clark and his graduate student, Jonathan Woo to coordinate the Ginga observations and preliminary data reduction, and to work with the Japanese host scientist, Fumiaki Nagase, in the interpretation of the data. At MIT extensive developments were made in software systems for data interpretation. In particular, a Monte Carlo code was developed for a 3-D simulation of the propagation of x-rays from the neutron star through the ionized atmosphere of the companion. With this code it was possible to determine the spectrum of Compton-scattered x-rays in deep eclipse and to subtract that component from the observed spectra, thereby isolating the software component that is attributable in large measure to x-rays that have been scattered by interstellar grains. This research has culminated in the submission of paper to the Astrophysical Journal on the determination of properties of the atmosphere of QV Nor, the BOI companion of 4U 1538-52, and the properties of interstellar dust grains along the line of sight from the source. The latter results were an unanticipated byproduct of the investigation. Data from Ginga observations of the Magellanic binaries SMC X-1 and LMC X-4 are currently under investigation as the PhD thesis project of Jonathan Woo who anticipated completion in the spring of 1993.
Chopper-stabilized phase detector
NASA Technical Reports Server (NTRS)
Hopkins, P. M.
1978-01-01
Phase-detector circuit for binary-tracking loops and other binary-data acquisition systems minimizes effects of drift, gain imbalance, and voltage offset in detector circuitry. Input signal passes simultaneously through two channels where it is mixed with early and late codes that are alternately switched between channels. Code switching is synchronized with polarity switching of detector output of each channel so that each channel uses each detector for half time. Net result is that dc offset errors are canceled, and effect of gain imbalance is simply change in sensitivity.
SPV: a JavaScript Signaling Pathway Visualizer.
Calderone, Alberto; Cesareni, Gianni
2018-03-24
The visualization of molecular interactions annotated in web resources is useful to offer to users such information in a clear intuitive layout. These interactions are frequently represented as binary interactions that are laid out in free space where, different entities, cellular compartments and interaction types are hardly distinguishable. SPV (Signaling Pathway Visualizer) is a free open source JavaScript library which offers a series of pre-defined elements, compartments and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions without neglecting simple protein-protein interaction networks. freely available under Apache version 2 license; Source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; Web technology: Scalable Vector Graphics; Libraries: D3.js. sinnefa@gmail.com.
Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data
NASA Astrophysics Data System (ADS)
Palumbo, Francesco; D'Enza, Alfonso Iodice
The attention towards binary data coding increased consistently in the last decade due to several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with the empirical evidence.
Unipro UGENE: a unified bioinformatics toolkit.
Okonechnikov, Konstantin; Golosova, Olga; Fursov, Mikhail
2012-04-15
Unipro UGENE is a multiplatform open-source software with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the usage of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment. UGENE is written in C++ using the Qt framework. The built-in plugin system and structured UGENE API make it possible to extend the toolkit with new functionality. UGENE binaries are freely available for MS Windows, Linux and Mac OS X at http://ugene.unipro.ru/download.html. UGENE code is licensed under the GPLv2; the information about the code licensing and copyright of integrated tools can be found in the LICENSE.3rd_party file provided with the source bundle.
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2 3 and 3 4 . Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder. 44 Data rate (Mbps) Modulation Coding Rate Coded bits per subcarrier...binary convolutional code . A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
SENR /NRPy + : Numerical relativity in singular curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.
2018-03-01
We report on a new open-source, user-friendly numerical relativity code package called SENR /NRPy + . Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy + provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy + into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time—in the context of moving puncture black hole evolutions—we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.
DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.
Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R
2015-01-01
Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures block or permit us to pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks by patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.
GPU accelerated manifold correction method for spinning compact binaries
NASA Astrophysics Data System (ADS)
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm based on the compute unified device architecture (CUDA) technology is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy on GPU execution of manifold corrections method has a good agreement with the execution of codes on merely central processing unit (CPU-based) method. The acceleration ability when the codes are implemented on GPU can increase enormously through the use of shared memory and register optimization techniques without additional hardware costs, implying that the speedup is nearly 13 times as compared with the codes executed on CPU for phase space scan (including 314 × 314 orbits). In addition, GPU-accelerated manifold correction method is used to numerically study how dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary system.
Binary Black Holes and Gravitational Waves
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
The final merger of two black holes releases a tremendous amount of energy, more than the combined light from all the stars in the visible universe. This energy is emitted in the form of gravitational waves, and observing these sources with gravitational wave detectors such as LIGO and LISA requires that we know the pattern or fingerprint of the radiation emitted. Since black hole mergers take place in regions of extreme gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these wave patterns. For more than 30 years, scientists have tried to compute these wave patterns. However, their computer codes have been plagued by problems that caused them to crash. This situation has changed dramatically in the past 2 years, with a series of amazing breakthroughs. This discussion examines these gravitational patterns, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. The focus is on recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by the space-based gravitational wave detector LISA.
Binary Black Holes, Numerical Relativity, and Gravitational Waves
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
The final merger of two black holes releases a tremendous amount of energy, more than the combined light from all the stars in the visible universe. This energy is emitted in the form of gravitational waves, and observing these sources with gravitational wave detectors such as LISA requires that we know the pattern or fingerprint of the radiation emitted. Since black hole mergers take place in regions of extreme gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these wave patterns. For more than 30 years, scientists have tried to compute these wave patterns. However, their computer codes have been plagued by problems that caused them to crash. This situation has changed dramatically in the past 2 years, with a series of amazing breakthroughs. This talk will take you on this quest for these gravitational wave patterns, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will be observed by LISA
Cosmic Messengers: Binary Black Holes and Gravitational Waves
NASA Technical Reports Server (NTRS)
Centrella, Joan
2007-01-01
The final merger of two black holes releases a tremendous amount of energy, more than the combined light from all the stars in the visible universe. This energy is emitted in the form of gravitational waves, and observing these sources with gravitational wave detectors such as LISA requires that we know the pattern or fingerprint of the radiation emitted. Since black hole mergers take place in regions of extreme gravitational fields, we need to solve Einstein s equations of general relativity on a computer in order to calculate these wave patterns. For more than 30 years, scientists have tried to compute these wave patterns. However, their computer codes have been plagued by problems that caused them to crash. . This situation has changed dramatically in the past 2 years, with a series of amazing breakthroughs. This talk will take you on this quest for these gravitational wave patterns, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources will. be observed by LISA.
2001-09-01
Rate - compatible punctured convolutional codes (RCPC codes ) and their applications,” IEEE...ABSTRACT In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with...INTENTIONALLY LEFT BLANK i EXECUTIVE SUMMARY In this dissertation, the bit error rates of serially concatenated convolutional codes
Rotating and Binary Stars in General Relativit
NASA Astrophysics Data System (ADS)
Shapiro, Stuart
The inspiral and coalescence of compact binary stars is one of the most challenging problems in theoretical astrophysics. Only recently have advances in numerical relativity made it possible to explore this topic in full general relativity (GR). The mergers of compact binaries have important consequences for the detection of gravitational waves. In addition, the coalescence of binary neutron stars (NSNSs) and binary black-hole neutron stars (BHNSs) may hold the key for resolving other astrophysical puzzles, such as the origin of short-hard gamma-ray bursts (GRBs). While simulations of these systems in full GR are now possible, only the most idealized treatments have been performed to date. More detailed physics, including magnetic fields, black hole spin, a realistic hot, nuclear equation of state and neutrino transport must be incorporated. Only then will we be able to identify reliably future sources that may be detected simultaneously in gravitational waves and as GRBs. Likewise, the coalescence of binary black holes (BHBHs) is now a solved problem in GR, but only in vacuum. Simulating the coalescence of BHBHs in the gaseous environments likely to be found in nearby galaxy cores or in merging galaxies is crucial to identifying an electromagnetic signal that might accompany the gravitational waves produced during the merger. The coalescence of a binary white dwarf-neutron star (WDNS) has only recently been treated in GR, but GR is necessary to explore tidal disruption scenarios in which the capture of WD debris by the NS may lead to catastrophic collapse. Alternatively, the NS may survive and the merger might result in the formation of pulsar planets. The stability of rotating neutron stars in these and other systems has not been fully explored in GR, and the final fate of unstable stars has not been determined in many cases, especially in the presence of magnetic fields and differential rotation. These systems will be probed observationally by current NASA instruments, such as HST, CHANDRA, SWIFT and FERMI, and by future NASA detectors, such as NuStar, ASTRO-H, GEMS, JWST, and, possibly, GEN-X and SGO (a Space-Based Gravitational-Wave Observatory). Treating all of these phenomena theoretically requires the same computational machinery: a fully relativistic code that simultaneously solves Einstein s equations for the gravitational field, Maxwell s equations for the electromagnetic field and the equations of relativistic magnetohydrodynamics for the plasma, all in three spatial dimensions plus time. Recent advances we have made in constructing such a code now make it possible for us to solve these fundamental, closely related computational problems, some for the first time.
Invited Article: Mask-modulated lensless imaging with multi-angle illuminations
NASA Astrophysics Data System (ADS)
Zhang, Zibang; Zhou, You; Jiang, Shaowei; Guo, Kaikai; Hoshino, Kazunori; Zhong, Jingang; Suo, Jinli; Dai, Qionghai; Zheng, Guoan
2018-06-01
The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.
Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.
2017-08-23
Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a texturalmore » approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO 2) migration and resulting saturation distribution.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.
Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a texturalmore » approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO 2) migration and resulting saturation distribution.« less
Castro-Chavez, Fernando
2014-01-01
Objective The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient “Book of Changes” or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. Methods The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64-hexagrams of the ancient I Ching, as if they were the 64-codons or words of the genetic code. Results In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids, to the directionality of their resulting blocks of codons grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from Humans versus Neanderthal, as well as to Negadi’s work on the importance of the number 384 within the genetic code. Conclusions Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid, displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus neanderthal compatible to neanderthal), while further exploring the wondrous biodiversity of nature for educational purposes. PMID:25340175
Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base
NASA Astrophysics Data System (ADS)
Savage, B.; Snoke, J. A.
2017-12-01
The Seismic Analysis Code (SAC) is the result of toil of many developers over almost a 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size from 16 to 32, in the 1980s, and 32 to 64 in 2009; as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, have tracked changes in hardware and operating systems including the advent of Linux in the early 1990, the emergence and demise of Sun/Solaris, variants of OSX processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in source code and associated comments. A major concern while improving and maintaining a routinely used, legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either source code or binaries for specific system. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was setup for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion. For the past thirty-plus years, SAC files contained a fixed-length header. Time and distance-related values are stored in single precision, which has become a problem with the increase in desired precision for data compared to thirty years ago. A future goal is to address this precision problem, but in a backward compatible manner. We would also like to transition SAC to a more open source license.
EVOLUTION OF CATACLYSMIC VARIABLES AND RELATED BINARIES CONTAINING A WHITE DWARF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalomeni, B.; Rappaport, S.; Molnar, M.
We present a binary evolution study of cataclysmic variables (CVs) and related systems with white dwarf (WD) accretors, including for example, AM CVn systems, classical novae, supersoft X-ray sources (SXSs), and systems with giant donor stars. Our approach intentionally avoids the complications associated with population synthesis algorithms, thereby allowing us to present the first truly comprehensive exploration of all of the subsequent binary evolution pathways that zero-age CVs might follow (assuming fully non-conservative, Roche-lobe overflow onto an accreting WD) using the sophisticated binary stellar evolution code MESA. The grid consists of 56,000 initial models, including 14 WD accretor masses, 43more » donor-star masses (0.1–4.7 M {sub ⊙}), and 100 orbital periods. We explore evolution tracks in the orbital period and donor-mass ( P {sub orb}– M {sub don}) plane in terms of evolution dwell times, masses of the WD accretor, accretion rate, and chemical composition of the center and surface of the donor star. We report on the differences among the standard CV tracks, those with giant donor stars, and ultrashort period systems. We show where in parameter space one can expect to find SXSs, present a diagnostic to distinguish among different evolutionary paths to forming AM CVn binaries, quantify how the minimum orbital period in CVs depends on the chemical composition of the donor star, and update the P {sub orb}( M {sub wd}) relation for binaries containing WDs whose progenitors lost their envelopes via stable Roche-lobe overflow. Finally, we indicate where in the P {sub orb}– M {sub don} the accretion disks will tend to be stable against the thermal-viscous instability, and where gravitational radiation signatures may be found with LISA.« less
Evolution of Cataclysmic Variables and Related Binaries Containing a White Dwarf
NASA Astrophysics Data System (ADS)
Kalomeni, B.; Nelson, L.; Rappaport, S.; Molnar, M.; Quintin, J.; Yakut, K.
2016-12-01
We present a binary evolution study of cataclysmic variables (CVs) and related systems with white dwarf (WD) accretors, including for example, AM CVn systems, classical novae, supersoft X-ray sources (SXSs), and systems with giant donor stars. Our approach intentionally avoids the complications associated with population synthesis algorithms, thereby allowing us to present the first truly comprehensive exploration of all of the subsequent binary evolution pathways that zero-age CVs might follow (assuming fully non-conservative, Roche-lobe overflow onto an accreting WD) using the sophisticated binary stellar evolution code MESA. The grid consists of 56,000 initial models, including 14 WD accretor masses, 43 donor-star masses (0.1-4.7 M ⊙), and 100 orbital periods. We explore evolution tracks in the orbital period and donor-mass (P orb-M don) plane in terms of evolution dwell times, masses of the WD accretor, accretion rate, and chemical composition of the center and surface of the donor star. We report on the differences among the standard CV tracks, those with giant donor stars, and ultrashort period systems. We show where in parameter space one can expect to find SXSs, present a diagnostic to distinguish among different evolutionary paths to forming AM CVn binaries, quantify how the minimum orbital period in CVs depends on the chemical composition of the donor star, and update the P orb(M wd) relation for binaries containing WDs whose progenitors lost their envelopes via stable Roche-lobe overflow. Finally, we indicate where in the P orb-M don the accretion disks will tend to be stable against the thermal-viscous instability, and where gravitational radiation signatures may be found with LISA.
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of AVS 2.0[1] for engines of the Context-based Binary Arithmetic Coding (CBAC)[2] in the AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] in the HEVC[4]. For fair comparison, the CABAC is embedded in the reference code RD10.1 because the CBAC is in the HEVC in our previous work[5]. The rate estimation table is employed only for RDOQ in the RD code. To reduce the computation complexity of the video encoder, therefore we modified the RD code so that the rate estimation table is employed for all RDO decision. Furthermore, we also simplify the complexity of rate estimation table by reducing the bit depth of its fractional part to 2 from 8. The simulation result shows that the CABAC has the BD-rate loss of about 0.7% compared to the CBAC. It seems that the CBAC is a little more efficient than that the CABAC in the AVS 2.0.
Synergism and Combinatorial Coding for Binary Odor Mixture Perception in Drosophila
Chakraborty, Tuhin Subhra; Siddiqi, Obaid
2016-01-01
Most odors in the natural environment are mixtures of several compounds. Olfactory receptors housed in the olfactory sensory neurons detect these odors and transmit the information to the brain, leading to decision-making. But whether the olfactory system detects the ingredients of a mixture separately or treats mixtures as different entities is not well understood. Using Drosophila melanogaster as a model system, we have demonstrated that fruit flies perceive binary odor mixtures in a manner that is heavily dependent on both the proportion and the degree of dilution of the components, suggesting a combinatorial coding at the peripheral level. This coding strategy appears to be receptor specific and is independent of interneuronal interactions. PMID:27588303
Candidate Binary Microlensing Events from the MACHO Project
NASA Astrophysics Data System (ADS)
Becker, A. C.; Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.; King, L. J.; Lehner, M. J.; Marshall, S. L.; Minniti, D.; Peterson, B. A.; Popowski, P.; Pratt, M. R.; Quinn, P. J.; Rodgers, A. W.; Stubbs, C. W.; Sutherland, W.; Tomaney, A.; Vandehei, T.; Welch, D. L.; Baines, D.; Brakel, A.; Crook, B.; Howard, J.; Leach, T.; McDowell, D.; McKeown, S.; Mitchell, J.; Moreland, J.; Pozza, E.; Purcell, P.; Ring, S.; Salmon, A.; Ward, K.; Wyper, G.; Heller, A.; Kaspi, S.; Kovo, O.; Maoz, D.; Retter, A.; Rhie, S. H.; Stetson, P.; Walker, A.; MACHO Collaboration
1998-12-01
We present the lightcurves of 22 gravitational microlensing events from the first six years of the MACHO Project gravitational microlensing survey which are likely examples of lensing by binary systems. These events were selected from a total sample of ~ 300 events which were either detected by the MACHO Alert System or discovered through retrospective analyses of the MACHO database. Many of these events appear to have undergone a caustic or cusp crossing, and 2 of the events are well fit with lensing by binary systems with large mass ratios, indicating secondary companions of approximately planetary mass. The event rate is roughly consistent with predictions based upon our knowledge of the properties of binary stars. The utility of binary lensing in helping to solve the Galactic dark matter problem is demonstrated with analyses of 3 binary microlensing events seen towards the Magellanic Clouds. Source star resolution during caustic crossings in 2 of these events allows us to estimate the location of the lensing systems, assuming each source is a single star and not a short period binary. * MACHO LMC-9 appears to be a binary lensing event with a caustic crossing partially resolved in 2 observations. The resulting lens proper motion appears too small for a single source and LMC disk lens. However, it is considerably less likely to be a single source star and Galactic halo lens. We estimate the a priori probability of a short period binary source with a detectable binary character to be ~ 10 %. If the source is also a binary, then we currently have no constraints on the lens location. * The most recent of these events, MACHO 98-SMC-1, was detected in real-time. Follow-up observations by the MACHO/GMAN, PLANET, MPS, EROS and OGLE microlensing collaborations lead to the robust conclusion that the lens likely resides in the SMC.
Context-Aware Local Binary Feature Learning for Face Recognition.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2018-05-01
In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.
Gene-specific cell labeling using MiMIC transposons.
Gnerer, Joshua P; Venken, Koen J T; Dierick, Herman A
2015-04-30
Binary expression systems such as GAL4/UAS, LexA/LexAop and QF/QUAS have greatly enhanced the power of Drosophila as a model organism by allowing spatio-temporal manipulation of gene function as well as cell and neural circuit function. Tissue-specific expression of these heterologous transcription factors relies on random transposon integration near enhancers or promoters that drive the binary transcription factor embedded in the transposon. Alternatively, gene-specific promoter elements are directly fused to the binary factor within the transposon followed by random or site-specific integration. However, such insertions do not consistently recapitulate endogenous expression. We used Minos-Mediated Integration Cassette (MiMIC) transposons to convert host loci into reliable gene-specific binary effectors. MiMIC transposons allow recombinase-mediated cassette exchange to modify the transposon content. We developed novel exchange cassettes to convert coding intronic MiMIC insertions into gene-specific binary factor protein-traps. In addition, we expanded the set of binary factor exchange cassettes available for non-coding intronic MiMIC insertions. We show that binary factor conversions of different insertions in the same locus have indistinguishable expression patterns, suggesting that they reliably reflect endogenous gene expression. We show the efficacy and broad applicability of these new tools by dissecting the cellular expression patterns of the Drosophila serotonin receptor gene family. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
SCRAM: a pipeline for fast index-free small RNA read alignment and visualization.
Fletcher, Stephen J; Boden, Mikael; Mitter, Neena; Carroll, Bernard J
2018-03-15
Small RNAs play key roles in gene regulation, defense against viral pathogens and maintenance of genome stability, though many aspects of their biogenesis and function remain to be elucidated. SCRAM (Small Complementary RNA Mapper) is a novel, simple-to-use short read aligner and visualization suite that enhances exploration of small RNA datasets. The SCRAM pipeline is implemented in Go and Python, and is freely available under MIT license. Source code, multiplatform binaries and a Docker image can be accessed via https://sfletc.github.io/scram/. s.fletcher@uq.edu.au. Supplementary data are available at Bioinformatics online.
CytoSPADE: high-performance analysis and visualization of high-dimensional cytometry data
Linderman, Michael D.; Simonds, Erin F.; Qiu, Peng; Bruggner, Robert V.; Sheode, Ketaki; Meng, Teresa H.; Plevritis, Sylvia K.; Nolan, Garry P.
2012-01-01
Motivation: Recent advances in flow cytometry enable simultaneous single-cell measurement of 30+ surface and intracellular proteins. CytoSPADE is a high-performance implementation of an interface for the Spanning-tree Progression Analysis of Density-normalized Events algorithm for tree-based analysis and visualization of this high-dimensional cytometry data. Availability: Source code and binaries are freely available at http://cytospade.org and via Bioconductor version 2.10 onwards for Linux, OSX and Windows. CytoSPADE is implemented in R, C++ and Java. Contact: michael.linderman@mssm.edu Supplementary Information: Additional documentation available at http://cytospade.org. PMID:22782546
Pritoni, Marco; Ford, Rebecca; Karlin, Beth; Sanguinetti, Angela
2018-02-01
Policymakers worldwide are currently discussing whether to include home energy management (HEM) products in their portfolio of technologies to reduce carbon emissions and improve grid reliability. However, very little data is available about these products. Here we present the results of an extensive review including 308 HEM products available on the US market in 2015-2016. We gathered these data from publicly available sources such as vendor websites, online marketplaces and other vendor documents. A coding guide was developed iteratively during the data collection and utilized to classify the devices. Each product was coded based on 96 distinct attributes, grouped into 11 categories: Identifying information, Product components, Hardware, Communication, Software, Information - feedback, Information - feedforward, Control, Utility interaction, Additional benefits and Usability. The codes describe product features and functionalities, user interaction and interoperability with other devices. A mix of binary attributes and more descriptive codes allow to sort and group data without losing important qualitative information. The information is stored in a large spreadsheet included with this article, along with an explanatory coding guide. This dataset is analyzed and described in a research article entitled "Categories and functionality of smart home technology for energy management" (Ford et al., 2017) [1].
The Evolution of Compact Binary Star Systems.
Postnov, Konstantin A; Yungelson, Lev R
2006-01-01
We review the formation and evolution of compact binary stars consisting of white dwarfs (WDs), neutron stars (NSs), and black holes (BHs). Binary NSs and BHs are thought to be the primary astrophysical sources of gravitational waves (GWs) within the frequency band of ground-based detectors, while compact binaries of WDs are important sources of GWs at lower frequencies to be covered by space interferometers (LISA). Major uncertainties in the current understanding of properties of NSs and BHs most relevant to the GW studies are discussed, including the treatment of the natal kicks which compact stellar remnants acquire during the core collapse of massive stars and the common envelope phase of binary evolution. We discuss the coalescence rates of binary NSs and BHs and prospects for their detections, the formation and evolution of binary WDs and their observational manifestations. Special attention is given to AM CVn-stars - compact binaries in which the Roche lobe is filled by another WD or a low-mass partially degenerate helium-star, as these stars are thought to be the best LISA verification binary GW sources.
Binary Multidimensional Scaling for Hashing.
Huang, Yameng; Lin, Zhouchen
2017-10-04
Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted on this task, the results are not satisfactory mainly due to the oversimplified model. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online mode. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between current query example and those we have already learned, rather than only focusing on current data chunk. It is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients be less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for representing the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated, the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
MzJava: An open source library for mass spectrometry data processing.
Horlacher, Oliver; Nikitin, Frederic; Alocci, Davide; Mariethoz, Julien; Müller, Markus; Lisacek, Frederique
2015-11-03
Mass spectrometry (MS) is a widely used and evolving technique for the high-throughput identification of molecules in biological samples. The need for sharing and reuse of code among bioinformaticians working with MS data prompted the design and implementation of MzJava, an open-source Java Application Programming Interface (API) for MS related data processing. MzJava provides data structures and algorithms for representing and processing mass spectra and their associated biological molecules, such as metabolites, glycans and peptides. MzJava includes functionality to perform mass calculation, peak processing (e.g. centroiding, filtering, transforming), spectrum alignment and clustering, protein digestion, fragmentation of peptides and glycans as well as scoring functions for spectrum-spectrum and peptide/glycan-spectrum matches. For data import and export MzJava implements readers and writers for commonly used data formats. For many classes support for the Hadoop MapReduce (hadoop.apache.org) and Apache Spark (spark.apache.org) frameworks for cluster computing was implemented. The library has been developed applying best practices of software engineering. To ensure that MzJava contains code that is correct and easy to use the library's API was carefully designed and thoroughly tested. MzJava is an open-source project distributed under the AGPL v3.0 licence. MzJava requires Java 1.7 or higher. Binaries, source code and documentation can be downloaded from http://mzjava.expasy.org and https://bitbucket.org/sib-pig/mzjava. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Coding/modulation trade-offs for Shuttle wideband data links
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.; Trumpis, B. D.
1974-01-01
This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.
Direct-Sequence Spread Spectrum System
1990-06-01
by directly modulating a conventional narrowband frequency-modulated (FM) carrier by a high rate digital code. The direct modulation is binary phase ...specification of the DSSS system will not be developed. The results of the evaluation phase of this research will be compared against theoretical...spread spectrum is called binary phase -shift keying 19 (BPSK). BPSK is a modulation in which a binary Ŕ" represents a 0-degree relative phase
Numerical Simulations of Dynamical Mass Transfer in Binaries
NASA Astrophysics Data System (ADS)
Motl, P. M.; Frank, J.; Tohline, J. E.
1999-05-01
We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99 of filling its Roche lobe for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations are tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.
Fault-Tolerant Coding for State Machines
NASA Technical Reports Server (NTRS)
Naegle, Stephanie Taft; Burke, Gary; Newell, Michael
2008-01-01
Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
NASA Astrophysics Data System (ADS)
Bhattachryya, Arunava; Kumar Gayen, Dilip; Chattopadhyay, Tanay
2013-04-01
All-optical 4-bit binary to binary coded decimal (BCD) converter has been proposed and described, with the help of semiconductor optical amplifier (SOA)-assisted Sagnac interferometric switches in this manuscript. The paper describes all-optical conversion scheme using a set of all-optical switches. BCD is common in computer systems that display numeric values, especially in those consisting solely of digital logic with no microprocessor. In many personal computers, the basic input/output system (BIOS) keep the date and time in BCD format. The operations of the circuit are studied theoretically and analyzed through numerical simulations. The model accounts for the SOA small signal gain, line-width enhancement factor and carrier lifetime, the switching pulse energy and width, and the Sagnac loop asymmetry. By undertaking a detailed numerical simulation the influence of these key parameters on the metrics that determine the quality of switching is thoroughly investigated.
Thermal Timescale Mass Transfer In Binary Population Synthesis
NASA Astrophysics Data System (ADS)
Justham, S.; Kolb, U.
2004-07-01
Studies of binary evolution have, until recently, neglected thermal timescale mass transfer (TTMT). Recent work has suggested that this previously poorly studied area is crucial in the understanding of systems across the compact binary spectrum. We use the state-of-the-art binary population synthesis code BiSEPS (Willems and Kolb, 2002, MNRAS 337 1004-1016). However, the present treatment of TTMT is incomplete due to the nonlinear behaviour of stars in their departure from gravothermal `equilibrium'. Here we show work that should update the ultrafast stellar evolution algorithms within BiSEPS to make it the first pseudo-analytic code that can follow TTMT properly. We have generated fits to a set of over 300 Case B TTMT sequences with a range of intermediate-mass donors. These fits produce very good first approximations to both HR diagrams and mass-transfer rates (see figures 1 and 2), which we later hope to improve and extend. They are already a significant improvement over the previous fits.
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
Can Binary Population Synthesis Models Be Tested With Hot Subdwarfs ?
NASA Astrophysics Data System (ADS)
Kopparapu, Ravi Kumar; Wade, R. A.; O'Shaughnessy, R.
2007-12-01
Models of binary star interactions have been successful in explaining the origin of field hot subdwarf (sdB) stars in short period systems. The hydrogen envelopes around these core He-burning stars are removed in a "common envelope" evolutionary phase. Reasonably clean samples of short-period sdB+WD or sdB+dM systems exist, that allow the common envelope ejection efficiency to be estimated for wider use in binary population synthesis (BPS) codes. About one-third of known sdB stars, however, are found in longer-period systems with a cool G or K star companion. These systems may have formed through Roche-lobe overflow (RLOF) mass transfer from the present sdB to its companion. They have received less attention, because the existing catalogues are believed to have severe selection biases against these systems, and because their long, slow orbits are difficult to measure. Are these known sdB+cool systems worth intense observational effort? That is, can they be used to make a valid and useful test of the RLOF process in BPS codes? We use the Binary Stellar Evolution (BSE) code of Hurley et al. (2002), mapping sets of initial binaries into present-day binaries that include sdBs, and distinguishing "observable" sdBs from "hidden" ones. We aim to find out whether (1) the existing catalogues of sdBs are sufficiently fair samples of the kinds of sdB binaries that theory predicts, to allow testing or refinement of RLOF models; or instead whether (2) large predicted hidden populations mandate the construction of new catalogues, perhaps using wide-field imaging surveys such as 2MASS, SDSS, and Galex. This work has been partially supported by NASA grant NNG05GE11G and NSF grants PHY 03-26281, PHY 06-00953 and PHY 06-53462. This work is also supported by the Center for Gravitational Wave Physics, which is supported by the National Science Foundation under cooperative agreement PHY 01-14375.
NASA Astrophysics Data System (ADS)
Parikh, A. S.; Wijnands, R.; Degenaar, N.; Ootes, L.; Page, D.
2018-05-01
We report on two new quiescent XMM-Newton observations (in addition to the earlier Swift/XRT and XMM-Newton coverage) of the cooling neutron star crust in the low-mass X-ray binary 1RXS J180408.9-342058. Its crust was heated during the ˜4.5 month accretion outburst of the source. From our quiescent observations, fitting the spectra with a neutron star atmosphere model, we found that the crust had cooled from ˜100 to ˜73 eV from ˜8 to ˜479 d after the end of its outburst. However, during the most recent observation, taken ˜860 d after the end of the outburst, we found that the crust appeared not to have cooled further. This suggested that the crust had returned to thermal equilibrium with the neutron star core. We model the quiescent thermal evolution with the theoretical crustal cooling code NSCool and find that the source requires a shallow heat source, in addition to the standard deep crustal heating processes, contributing ˜0.9 MeV per accreted nucleon during outburst to explain its observed temperature decay. Our high quality XMM-Newton data required an additional hard component to adequately fit the spectra. This slightly complicates our interpretation of the quiescent data of 1RXS J180408.9-342058. The origin of this component is not fully understood.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end matlab packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low density parity check) codes . Realizing that the power of LDPC codes come at the price of decoder complexity, we also...Channel Coding Binary Convolution Code or LDPC Packet Length 0 - 216-1, bytes Coding Rate 1/2, 2/3, 3/4, 5/6 MIMO Channel Training Length 0 - 4, symbols
Binary Sources and Binary Lenses in Microlensing Surveys of MACHOs
NASA Astrophysics Data System (ADS)
Petrovic, N.; Di Stefano, R.; Perna, R.
2003-12-01
Microlensing is an intriguing phenomenon which may yield information about the nature of dark matter. Early observational searches identified hundreds of microlensing light curves. The data set consisted mainly of point-lens light curves and binary-lens events in which the light curves exhibit caustic crossings. Very few mildly perturbed light curves were observed, although this latter type should constitute the majority of binary lens light curves. Di Stefano (2001) has suggested that the failure to take binary effects into account may have influenced the estimates of optical depth derived from microlensing surveys. The work we report on here is the first step in a systematic analysis of binary lenses and binary sources and their impact on the results of statistical microlensing surveys. In order to asses the problem, we ran Monte-Carlo simulations of various microlensing events involving binary stars (both as the source and as the lens). For each event with peak magnification > 1.34, we sampled the characteristic light curve and recorded the chi squared value when fitting the curve with a point lens model; we used this to asses the perturbation rate. We also recorded the parameters of each system, the maximum magnification, the times at which each light curve started and ended and the number of caustic crossings. We found that both the binarity of sources and the binarity of lenses increased the lensing rate. While the binarity of sources had a negligible effect on the perturbation rates of the light curves, the binarity of lenses had a notable effect. The combination of binary sources and binary lenses produces an observable rate of interesting events exhibiting multiple "repeats" in which the magnification rises above and dips below 1.34 several times. Finally, the binarity of lenses impacted both the durations of the events and the maximum magnifications. This work was supported in part by the SAO intern program (NSF grant AST-9731923) and NASA contracts NAS8-39073 and NAS8-38248 (CXC).
Formation of the first three gravitational-wave observations through isolated binary evolution
Stevenson, Simon; Vigna-Gómez, Alejandro; Mandel, Ilya; Barrett, Jim W.; Neijssel, Coenraad J.; Perkins, David; de Mink, Selma E.
2017-01-01
During its first four months of taking data, Advanced LIGO has detected gravitational waves from two binary black hole mergers, GW150914 and GW151226, along with the statistically less significant binary black hole merger candidate LVT151012. Here we use the rapid binary population synthesis code COMPAS to show that all three events can be explained by a single evolutionary channel—classical isolated binary evolution via mass transfer including a common envelope phase. We show all three events could have formed in low-metallicity environments (Z=0.001) from progenitor binaries with typical total masses ≳160M⊙, ≳60M⊙ and ≳90M⊙, for GW150914, GW151226 and LVT151012, respectively. PMID:28378739
BigWig and BigBed: enabling browsing of large distributed datasets.
Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D
2010-09-01
BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating systems files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
NASA Technical Reports Server (NTRS)
Brodbeck, C.; Bouanich, J.-P.; Nguyen, Van Thanh; Borysow, Aleksandra
1999-01-01
Collision-induced absorption (CIA) is the major source of the infrared opacity of dense planetary atmospheres which are composed of nonpolar molecules. Knowledge of CIA absorption spectra of H2-H2 pairs is important for modelling the atmospheres of planets and cold stars that are mainly composed of hydrogen. The spectra of hydrogen in the region of the second overtone at 0.8 microns have been recorded at temperatures of 298 and 77.5 K for gas densities ranging from 100 to 800 amagats. By extrapolation to zero density of the absorption coefficient measured every 10 cm(exp -1) in the spectral range from 11100 to 13800 cm(exp -1), we have determined the binary absorption coefficient. These extrapolated measurements are compared with calculations based on a model that was obtained by using simple computer codes and lineshape profiles. In view of the very weak absorption of the second overtone band, we find the agreement between results of the model and experiment to be reasonable.
Block-based scalable wavelet image codec
NASA Astrophysics Data System (ADS)
Bao, Yiliang; Kuo, C.-C. Jay
1999-10-01
This paper presents a high performance block-based wavelet image coder which is designed to be of very low implementational complexity yet with rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block only consists of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed in between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has a very good coding performance even the block size is (16,16).
Analysis and Defense of Vulnerabilities in Binary Code
2008-09-29
language . We demonstrate our techniques by automatically generating input filters from vulnerable binary programs. vi Acknowledgments I thank my wife, family...21 2.2 The Vine Intermediate Language . . . . . . . . . . . . . . . . . . . . . . 21 ix 2.2.1 Normalized Memory...The Traditional Weakest Precondition Semantics . . . . . . . . . . . . . 44 3.2.1 The Guarded Command Language . . . . . . . . . . . . . . . . . 44
Distribution of compact object mergers around galaxies
NASA Astrophysics Data System (ADS)
Bulik, T.; Belczyński, K.; Zbijewski, W.
1999-09-01
Compact object mergers are one of the favoured models of gamma ray bursts (GRB). Using a binary population synthesis code we calculate properties of the population of compact object binaries; e.g. lifetimes and velocities. We then propagate them in galactic potentials and find their distribution in relation to the host.
A Biosequence-based Approach to Software Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oehmen, Christopher S.; Peterson, Elena S.; Phillips, Aaron R.
For many applications, it is desirable to have some process for recognizing when software binaries are closely related without relying on them to be identical or have identical segments. Some examples include monitoring utilization of high performance computing centers or service clouds, detecting freeware in licensed code, and enforcing application whitelists. But doing so in a dynamic environment is a nontrivial task because most approaches to software similarity require extensive and time-consuming analysis of a binary, or they fail to recognize executables that are similar but nonidentical. Presented herein is a novel biosequence-based method for quantifying similarity of executable binaries.more » Using this method, it is shown in an example application on large-scale multi-author codes that 1) the biosequence-based method has a statistical performance in recognizing and distinguishing between a collection of real-world high performance computing applications better than 90% of ideal; and 2) an example of using family tree analysis to tune identification for a code subfamily can achieve better than 99% of ideal performance.« less
Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results
NASA Astrophysics Data System (ADS)
Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.
2005-12-01
We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel coded binary and[InlineEquation not available: see fulltext.]-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than is the length of the spreading sequences) with good bit error rate system performance.
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.
Advances in Black-Hole Mergers: Spins and Unequal Masses
NASA Technical Reports Server (NTRS)
Kelly, Bernard
2007-01-01
The last two years have seen incredible development in numerical relativity: from fractions of an orbit, evolutions of an equal-mass binary have reached multiple orbits, and convergent gravitational waveforms have been produced from several research groups and numerical codes. We are now able to move our attention from pure numerics to astrophysics, and address scenarios relevant to current and future gravitational-wave detectors.Over the last 12 months at NASA Goddard, we have extended the accuracy of our Hahn-Dol code, and used it to move toward these goals. We have achieved high-accuracy simulations of black-hole binaries of low initial eccentricity, with enough orbits of inspiral before merger to allow us to produce hybrid waveforms that reflect accurately the entire lifetime of the BH binary. We are extending this work, looking at the effects of unequal masses and spins.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
NASA Astrophysics Data System (ADS)
Yakut, Kadri
2015-08-01
We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component.Archive Kepler satellite data were combined with newly obtained spectroscopic data with 4.2\\,m William Herschel Telescope(WHT). This allowed us to determine rather precise orbital and physical parameters of this long period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals.We modelled the detached binary system assuming non-conservative evolution models with the Cambridge STARS(TWIN) code.
Contamination of RR Lyrae stars from Binary Evolution Pulsators
NASA Astrophysics Data System (ADS)
Karczmarek, Paulina; Pietrzyński, Grzegorz; Belczyński, Krzysztof; Stępień, Kazimierz; Wiktorowicz, Grzegorz; Iłkiewicz, Krystian
2016-06-01
Binary Evolution Pulsator (BEP) is an extremely low-mass member of a binary system, which pulsates as a result of a former mass transfer to its companion. BEP mimics RR Lyrae-type pulsations but has different internal structure and evolution history. We present possible evolution channels to produce BEPs, and evaluate the contamination value, i.e. how many objects classified as RR Lyrae stars can be undetected BEPs. In this analysis we use population synthesis code StarTrack.
Photometric Mapping of Two Kepler Eclipsing Binaries: KIC11560447 and KIC8868650
NASA Astrophysics Data System (ADS)
Senavci, Hakan Volkan; Özavci, I.; Isik, E.; Hussain, G. A. J.; O'Neal, D. O.; Yilmaz, M.; Selam, S. O.
2018-04-01
We present the surface maps of two eclipsing binary systems KIC11560447 and KIC8868650, using the Kepler light curves covering approximately 4 years. We use the code DoTS, which is based on maximum entropy method in order to reconstruct the surface maps. We also perform numerical tests of DoTS to check the ability of the code in terms of tracking phase migration of spot clusters. The resulting latitudinally averaged maps of KIC11560447 show that spots drift towards increasing orbital longitudes, while the overall behaviour of spots on KIC8868650 drifts towards decreasing latitudes.
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
Polar codes for achieving the classical capacity of a quantum channel
NASA Astrophysics Data System (ADS)
Guha, Saikat; Wilde, Mark
2012-02-01
We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by ``channel combining'' and ``channel splitting,'' in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
Error Correcting Codes and Related Designs
1990-09-30
Theory, IT-37 (1991), 1222-1224. 6. Codes and designs, existence and uniqueness, Discrete Math ., to appear. 7. (with R. Brualdi and N. Cai), Orphan...structure of the first order Reed-Muller codes, Discrete Math ., to appear. 8. (with J. H. Conway and N.J.A. Sloane), The binary self-dual codes of length up...18, 1988. 4. "Codes and Designs," Mathematics Colloquium, Technion, Haifa, Israel, March 6, 1989. 5. "On the Covering Radius of Codes," Discrete Math . Group
ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems
Expósito, Roberto R.
2018-01-01
Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search of interesting biclusters on binary datasets, which are very popular on different fields such as genetics, marketing or text mining. It is based on the state-of-the-art sequential Java tool BiBit, which has been proved accurate by several studies, especially on scenarios that result on many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/. PMID:29608567
ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems.
González-Domínguez, Jorge; Expósito, Roberto R
2018-01-01
Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search of interesting biclusters on binary datasets, which are very popular on different fields such as genetics, marketing or text mining. It is based on the state-of-the-art sequential Java tool BiBit, which has been proved accurate by several studies, especially on scenarios that result on many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/.
Binary Black Hole Mergers, Gravitational Waves, and LISA
NASA Astrophysics Data System (ADS)
Centrella, Joan; Baker, J.; Boggs, W.; Kelly, B.; McWilliams, S.; van Meter, J.
2007-12-01
The final merger of comparable mass binary black holes is expected to be the strongest source of gravitational waves for LISA. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. We will present the results of new simulations of black hole mergers with unequal masses and spins, focusing on the gravitational waves emitted and the accompanying astrophysical "kicks.” The magnitude of these kicks has bearing on the production and growth of supermassive blackholes during the epoch of structure formation, and on the retention of black holes in stellar clusters. This work was supported by NASA grant 06-BEFS06-19, and the simulations were carried out using Project Columbia at the NASA Advanced Supercomputing Division (Ames Research Center) and at the NASA Center for Computational Sciences (Goddard Space Flight Center).
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it is advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it retains as an open problem to directly preserve the manifold structure by hashing. In particular, it first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimized. Besides, the off-line learning is extremely time and memory consuming, which needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. The DLLH directly reconstructs the manifold structure in the Hamming space, which learns optimal hash codes to maintain the local linear relationship of data points. To learn discrete locally linear embeddingcodes, we further propose a discrete optimization algorithm with an iterative parameters updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over the state-of-the-art approaches.
Colaert, Niklaas; Barsnes, Harald; Vaudel, Marc; Helsens, Kenny; Timmerman, Evy; Sickmann, Albert; Gevaert, Kris; Martens, Lennart
2011-08-05
The Thermo Proteome Discoverer program integrates both peptide identification and quantification into a single workflow for peptide-centric proteomics. Furthermore, its close integration with Thermo mass spectrometers has made it increasingly popular in the field. Here, we present a Java library to parse the msf files that constitute the output of Proteome Discoverer. The parser is also implemented as a graphical user interface allowing convenient access to the information found in the msf files, and in Rover, a program to analyze and validate quantitative proteomics information. All code, binaries, and documentation is freely available at http://thermo-msf-parser.googlecode.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ˜ 20µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the Finite Element based code Moose for solving a microsopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering scale neutronics model coupled to an adiabatic heating model. Themore » macroscopic model provides local power densities and neutron energy spectra to the microscpic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications on the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paegert, Martin; Stassun, Keivan G.; Burger, Dan M.
2014-08-01
We describe a new neural-net-based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as the Large Synoptic Survey Telescope. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together withmore » a chi-square parameter to capture higher-order morphology information results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes, including RR Lyrae, Mira, and delta Scuti. However, the classifier currently has difficulty discriminating between different sub-classes of eclipsing binaries, and suffers a relatively low (∼60%) retrieval rate for multi-mode delta Cepheid stars. We find that it is imperative to train the classifier's neural network with exemplars that include the full range of light curve quality to which the classifier will be expected to perform; the classifier performs well on noisy light curves only when trained with noisy exemplars. The classifier source code, ancillary programs, a trained neural net, and a guide for use, are provided.« less
Discovery of a Probable BH-HMXB and Cyg X-1 Progenitor System
NASA Astrophysics Data System (ADS)
Grindlay, Jonathan E.; Gomez, Sebastian; Hong, Jaesub; Zhang, Shuo; Hailey, Charles; Mori, Kaya; Tomsick, John
2017-08-01
We report the discovery of a probable black hole High Mass X-ray Binary (BH-HMXB), a 5.3d single line spectroscopic binary (SB1) HD96670 in the Carina OB association. We initiated a search for such systems for which the O star primary was still on the main sequence, in stark contrast to Cyg X-1 with its evolved supergiant O star companion, since such systems must be ~10-30 times more numerous given their longer lifetimes. HD96670 had been found to be a SB1 with binary period ~5.5d and mass function ~0.125Msun. With a ~150ksec NuSTAR observation of HD96670 over 3 segments, we found a significant detection of a variable source best fit with a PL spectrum with photon index between 2.4 and 2.6 for the brightest vs. faintest observations. Weak 6.4 - 6.7 keV emission was also detected. We conducted extensive optical photometry and spectroscopy to better measure the binary system parameters and have fit the the combined data with an ellipsoidal modulation code (Wilson and Devinney) to find that the binary companion is best fit by a ~4.5 Msun BH accreting from the weak wind primary O star with luminosity Lx ~3 x 10^32 erg/s, which cannot be due to a colliding wind or intrinsic Ostar emission. . A B4V or B5V main sequence star companion can be ruled out by the very low accretion luminosity and lack of colliding wind expected. Full details, including the direct measurement of a triple companion B1V star previously reported (Sanna et al 2014) for HD96670, will appear in two forthcoming papers to be summarized in this talk.
X-ray backscatter radiography with lower open fraction coded masks
NASA Astrophysics Data System (ADS)
Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David
2017-09-01
Single sided radiographic imaging would find great utility for medical, aerospace and security applications. While coded apertures can be used to form such an image from backscattered X-rays they suffer from near field limitations that introduce noise. Several theoretical studies have indicated that for an extended source the images signal to noise ratio may be optimised by using a low open fraction (<0.5) mask. However, few experimental results have been published for such low open fraction patterns and details of their formulation are often unavailable or are ambiguous. In this paper we address this process for two types of low open fraction mask, the dilute URA and the Singer set array. For the dilute URA the procedure for producing multiple 2D array patterns from given 1D binary sequences (Barker codes) is explained. Their point spread functions are calculated and their imaging properties are critically reviewed. These results are then compared to those from the Singer set and experimental exposures are presented for both type of pattern; their prospects for near field imaging are discussed.
Brenes-Camacho, Gilbert
2013-01-01
The article's main goal is to study the relationship between subjective perception of own economic situation and objective measures of economic well-being -sources of income, home ownership, education level, and informal family transfers- among the elderly in two Latin American countries: Mexico and Costa Rica. The data come from two surveys about ageing: CRELES in Costa Rica and MHAS in Mexico. The most important dependent variables is derived from the answer to the question "How would you rate your current economic situation? in Costa Rica, and "Would you say that your current economic situation is…?" in Mexico. For both surveys, the answers were coded as a binary variable; code 0 represents the Excellent, Very Good, and Good categories, while the code 1 represents the Fair or Bad categories. The analysis finds that retirement pension income is an important factor for defining self-rated economic situation in both countries. In Costa Rica, spouse's income and home ownership are relevant predictors for the perception of well-being, while in Mexico, receiving transfer income is associated with this perception.
Using Model Point Spread Functions to Identifying Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light and the angular separation between binary BDs can be small enough to appear as a single, unresolved object in images, according to Rayleigh Criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects, we will use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, its challenges and other possible uses in this poster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablimit, Iminhaji; Maeda, Keiichi; Li, Xiang-Dong
Binary population synthesis (BPS) studies provide a comprehensive way to understand the evolution of binaries and their end products. Close white dwarf (WD) binaries have crucial characteristics for examining the influence of unresolved physical parameters on binary evolution. In this paper, we perform Monte Carlo BPS simulations, investigating the population of WD/main-sequence (WD/MS) binaries and double WD binaries using a publicly available binary star evolution code under 37 different assumptions for key physical processes and binary initial conditions. We considered different combinations of the binding energy parameter ( λ {sub g}: considering gravitational energy only; λ {sub b}: considering bothmore » gravitational energy and internal energy; and λ {sub e}: considering gravitational energy, internal energy, and entropy of the envelope, with values derived from the MESA code), CE efficiency, critical mass ratio, initial primary mass function, and metallicity. We find that a larger number of post-CE WD/MS binaries in tight orbits are formed when the binding energy parameters are set by λ {sub e} than in those cases where other prescriptions are adopted. We also determine the effects of the other input parameters on the orbital periods and mass distributions of post-CE WD/MS binaries. As they contain at least one CO WD, double WD systems that evolved from WD/MS binaries may explode as type Ia supernovae (SNe Ia) via merging. In this work, we also investigate the frequency of two WD mergers and compare it to the SNe Ia rate. The calculated Galactic SNe Ia rate with λ = λ {sub e} is comparable to the observed SNe Ia rate, ∼8.2 × 10{sup 5} yr{sup 1} – ∼4 × 10{sup 3} yr{sup 1} depending on the other BPS parameters, if a DD system does not require a mass ratio higher than ∼0.8 to become an SNe Ia. On the other hand, a violent merger scenario, which requires the combined mass of two CO WDs ≥ 1.6 M {sub ⊙} and a mass ratio >0.8, results in a much lower SNe Ia rate than is observed.« less
Getting Astrophysical Information from LISA Data
NASA Technical Reports Server (NTRS)
Stebbins, R. T.; Bender, P. L.; Folkner, W. M.
1997-01-01
Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected to be present: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals, except the isotropic backgrounds and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.
1977-01-01
An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
The IntAct molecular interaction database in 2012
Kerrien, Samuel; Aranda, Bruno; Breuza, Lionel; Bridge, Alan; Broackes-Carter, Fiona; Chen, Carol; Duesbury, Margaret; Dumousseau, Marine; Feuermann, Marc; Hinz, Ursula; Jandrasits, Christine; Jimenez, Rafael C.; Khadake, Jyoti; Mahadevan, Usha; Masson, Patrick; Pedruzzi, Ivo; Pfeiffenberger, Eric; Porras, Pablo; Raghunath, Arathi; Roechert, Bernd; Orchard, Sandra; Hermjakob, Henning
2012-01-01
IntAct is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. Two levels of curation are now available within the database, with both IMEx-level annotation and less detailed MIMIx-compatible entries currently supported. As from September 2011, IntAct contains approximately 275 000 curated binary interaction evidences from over 5000 publications. The IntAct website has been improved to enhance the search process and in particular the graphical display of the results. New data download formats are also available, which will facilitate the inclusion of IntAct's data in the Semantic Web. IntAct is an active contributor to the IMEx consortium (http://www.imexconsortium.org). IntAct source code and data are freely available at http://www.ebi.ac.uk/intact. PMID:22121220
Supernova and Prompt Gravitational-wave Precursors to LIGO Gravitational-wave Sources and Short GRBs
NASA Astrophysics Data System (ADS)
Michaely, Erez; Perets, Hagai B.
2018-03-01
Binary black holes (BBHs) and binary neutron stars (BNSs) mergers have been recently detected through their gravitational-wave (GW) emission. A post-merger electromagnetic counterpart for the first BNS merger has been detected from seconds up to weeks after the merger. While such post-merger electromagnetic counterparts had been anticipated theoretically, far fewer electromagnetic precursors to GW sources have been proposed, and non have been observed. Here we show that a fraction of a few ×10‑3 (for a standard model) GW sources and short gamma-ray bursts (GRBs) observed by the Laser Interferometer Gravitational-wave Observatory (LIGO) could have been preceded by supernova (SN) explosions from years up to decades before the mergers. The GW sources are produced following the preceding binary evolution, the supernovae involved in the final formation of the GW source progenitors, and the natal kicks that likely accompany them. Together, these determine the orbits of surviving binaries, and hence the delay-time between the birth of the compact binary and its final merger through GW emission. We use data from binary evolution population-synthesis models to show that the delay-time distribution has a non-negligible tail of ultra-short delay-times between 1 and 100 years, thereby giving rise to potentially observable supernovae precursors to GW sources. Moreover, future LISA/DECIGO GW space-detectors will enable the detection of GW inspirals in the pre-merger stage weeks to decades before the final merger. These sources could therefore produce a unique type of promptly appearing LISA/DECIGO GW sources accompanied by coincident supernovae. The archival (and/or direct) detection of precursor (coincident) SNe with GW and/or short GRBs will provide unprecedented characterizations of the merging binaries, and their prior evolution through supernovae and natal kicks, otherwise inaccessible through other means.
Coding Local and Global Binary Visual Features Extracted From Video Sequences.
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
Coding Local and Global Binary Visual Features Extracted From Video Sequences
NASA Astrophysics Data System (ADS)
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.
Discrete X-Ray Source Populations and Star-Formation History in Nearby Galaxies
NASA Technical Reports Server (NTRS)
Zezas, Andreas
2004-01-01
This program aims in understanding the connection between the discrete X-ray source populations observed in nearby galaxies and the history of star-formation in these galaxies. The ultimate goal is to use this knowledge in order to constrain X-ray binary evolution channels. For this reason although the program is primarily observational it has a significant modeling component. During the first year of this study we focused on the definition of a pilot sample of galaxies with well know star-formation histories. A small part of this sample has already been observed and we performed initial analysis of the data. However, the majority of the objects in our sample either have not been observed at all, or the detection limit of the existing observations is not low enough to probe the bulk of their young X-ray binary populations. For this reason we successfully proposed for additional Chandra observations of three targets in Cycle-5. These observations are currently being performed. The analysis of the (limited) archival data for this sample indicated that the X-ray luminosity functions (XLF) of the discrete sources in these galaxies may not have the same shape as is widely suggested. However, any solid conclusions are hampered by the small number of detected sources. For this reason during the second year of this study, we will try to extend the sample in order to include more objects in each evolutionary stage. In addition we are completing the analysis of the Chandra monitoring observations of the Antennae galaxies. The results from this work, apart from important clues on the nature of the most luminous sources (Ultra-luminous X-ray sources; ULXs) provide evidence that source spectral and/or temporal variability does not significantly affect the shape of their X-ray luminosity functions. This is particularly important for comparisons between the XLFs of different galaxies and comparisons with predictions from theoretical models. Results from this work have been presented in several conferences. Refereed journal papers presenting these conclusions are currently in preparation. An important part of this study is the Chandra survey of the Small Magellanic Cloud, our second nearest star- forming galaxy. So far we have been awarded 5 Chandra observations of the central youngest part of the galaxy. These observations will help to study the very faint end of the young X-ray binary populations which is not possible to probe in more distant objects. Results from this study have been presented in several conferences and two papers are in preparation. In addition during year-2 we are planning of undertaking the task of identifying optical counterparts to the X-ray sources, which will help us to isolate interlopers (sources not associated with the SMC) and classify the X-ray binaries which are found to be associated with the SMC. In the theoretical front, the Star-Track X-ray binary population synthesis code which will be used for the modeling of the X-ray binary populations (led by co-I V. Kalogera and C. Belczynski), is complete. A first test using the XLF of the star-forming galaxy NGC-1569 showed remarkable agreement between the observed and the modeled XLF. These results are presented in an ApJ. Letters paper (Belczynski et al, 2004, 601, 147). During year-2 of this study we are planning of performing a parameter study in order to investigate which parameters are most important for the shape of the XLF. In addition we will perform comparisons with observations of other galaxies from our sample as they become available.
The NASA Neutron Star Grand Challenge: The coalescences of Neutron Star Binary System
NASA Astrophysics Data System (ADS)
Suen, Wai-Mo
1998-04-01
NASA funded a Grand Challenge Project (9/1996-1999) for the development of a multi-purpose numerical treatment for relativistic astrophysics and gravitational wave astronomy. The coalescence of binary neutron stars is chosen as the model problem for the code development. The institutes involved in it are the Argonne Lab, Livermore lab, Max-Planck Institute at Potsdam, StonyBrook, U of Illinois and Washington U. We have recently succeeded in constructing a highly optimized parallel code which is capable of solving the full Einstein equations coupled with relativistic hydrodynamics, running at over 50 GFLOPS on a T3E (the second milestone point of the project). We are presently working on the head-on collisions of two neutron stars, and the inclusion of realistic equations of state into the code. The code will be released to the relativity and astrophysics community in April of 1998. With the full dynamics of the spacetime, relativistic hydro and microphysics all combined into a unified 3D code for the first time, many interesting large scale calculations in general relativistic astrophysics can now be carried out on massively parallel computers.
BIT BY BIT: A Game Simulating Natural Language Processing in Computers
ERIC Educational Resources Information Center
Kato, Taichi; Arakawa, Chuichi
2008-01-01
BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
UNDERSTANDING X-RAY STARS:. The Discovery of Binary X-ray Sources
NASA Astrophysics Data System (ADS)
Schreier, E. J.; Tananbaum, H.
2000-09-01
The discovery of binary X-ray sources with UHURU introduced many new concepts to astronomy. It provided the canonical model which explained X-ray emission from a large class of galactic X-ray sources: it confirmed the existence of collapsed objects as the source of intense X-ray emission; showed that such collapsed objects existed in binary systems, with mass accretion as the energy source for the X-ray emission; and provided compelling evidence for the existence of black holes. This model also provided the basis for explaining the power source of AGNs and QSOs. The process of discovery and interpretation also established X-ray astronomy as an essential sub-discipline of astronomy, beginning its incorporation into the mainstream of astronomy.
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures of Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and working implementation. Much attention has been paid to optimise the demand of hardware resources especially memory size. The aim of design was to get as short binary stream as possible in this standard. The Huffman encoder with whole audio-video system has been implemented in FPGA devices.
The Base 32 Method: An Improved Method for Coding Sibling Constellations.
ERIC Educational Resources Information Center
Perfetti, Lawrence J. Carpenter
1990-01-01
Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)
Dynamic fisheye grids for binary black hole simulations
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Noble, Scott C.
2014-03-01
We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system, used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system—in which the equations of motion are solved—to the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system’s implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. Like we have done with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.
Emission-line diagnostics of nearby H II regions including interacting binary populations
NASA Astrophysics Data System (ADS)
Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.
2018-06-01
We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the BPASS (Binary Population and Spectral Synthesis) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary-star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/H β ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies, but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of H β emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways, which produce more hot Wolf-Rayet and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
Very high energy gamma-ray binary stars.
Lamb, R C; Weekes, T C
1987-12-11
One of the major astronomical discoveries of the last two decades was the detection of luminous x-ray binary star systems in which gravitational energy from accretion is released by the emission of x-ray photons, which have energies in the range of 0.1 to 10 kiloelectron volts. Recent observations have shown that some of these binary sources also emit photons in the energy range of 10(12) electron volts and above. Such sources contain a rotating neutron star that is accreting matter from a companion. Techniques to detect such radiation are ground-based, simple, and inexpensive. Four binary sources (Hercules X-1, 4U0115+63, Vela X-1, and Cygnus X-3) have been observed by at least two independent groups. Although the discovery of such very high energy "gamma-ray binaries" was not theoretically anticipated, models have now been proposed that attempt to explain the behavior of one or more of the sources. The implications of these observations is that a significant portion of the more energetic cosmic rays observed on Earth may arise from the action of similar sources within the galaxy during the past few million years.
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
2011-03-01
Karystinos and D. A. Pados, “New bounds on the total squared correlation and optimum design of DS - CDMA binary signature sets,” IEEE Trans. Commun...vol. 51, pp. 48-51, Jan. 2003. [99] C. Ding, M. Golin, and T. Klφve, “Meeting the Welch and Karystinos-Pados bounds on DS - CDMA binary signature sets...Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [100] V. P. Ipatov, “On the Karystinos-Pados bounds and optimal binary DS - CDMA
2015-07-09
49, pp. 873-885, Apr. 2003. [23] G. N. Karystinos and D. A. Pados, “New bounds on the total squared correlation and optimum design of DS - CDMA binary...bounds on DS - CDMA binary signature sets,” Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [25] V. P. Ipatov, “On the Karystinos-Pados...bounds and optimal binary DS - CDMA signature ensembles,” IEEE Commun. Letters, vol. 8, pp. 81-83, Feb. 2004. [26] G. N. Karystinos and D. A. Pados
The Evolution of Compact Binary Star Systems.
Postnov, Konstantin A; Yungelson, Lev R
2014-01-01
We review the formation and evolution of compact binary stars consisting of white dwarfs (WDs), neutron stars (NSs), and black holes (BHs). Mergings of compact-star binaries are expected to be the most important sources for forthcoming gravitational-wave (GW) astronomy. In the first part of the review, we discuss observational manifestations of close binaries with NS and/or BH components and their merger rate, crucial points in the formation and evolution of compact stars in binary systems, including the treatment of the natal kicks, which NSs and BHs acquire during the core collapse of massive stars and the common envelope phase of binary evolution, which are most relevant to the merging rates of NS-NS, NS-BH and BH-BH binaries. The second part of the review is devoted mainly to the formation and evolution of binary WDs and their observational manifestations, including their role as progenitors of cosmologically-important thermonuclear SN Ia. We also consider AM CVn-stars, which are thought to be the best verification binary GW sources for future low-frequency GW space interferometers.
The Coding of Biological Information: From Nucleotide Sequence to Protein Recognition
NASA Astrophysics Data System (ADS)
Štambuk, Nikola
The paper reviews the classic results of Swanson, Dayhoff, Grantham, Blalock and Root-Bernstein, which link genetic code nucleotide patterns to the protein structure, evolution and molecular recognition. Symbolic representation of the binary addresses defining particular nucleotide and amino acid properties is discussed, with consideration of: structure and metric of the code, direct correspondence between amino acid and nucleotide information, and molecular recognition of the interacting protein motifs coded by the complementary DNA and RNA strands.
PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns
ERIC Educational Resources Information Center
Paterson, J. H.; Cheng, K. F.; Haddow, J.
2009-01-01
PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…
NASA Technical Reports Server (NTRS)
Robinson-Saba, J. L.
1983-01-01
Observations of the binary X-ray source Circinus X-1 provide samples of a range of spectral and temporal behavior whose variety is thought to reflect a broad continuum of accretion conditions in an eccentric binary system. The data support an identification of three or more X-ray spectral components, probably associated with distinct emission regions.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
PARALLAX AND ORBITAL EFFECTS IN ASTROMETRIC MICROLENSING WITH BINARY SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nucita, A. A.; Paolis, F. De; Ingrosso, G.
2016-06-01
In gravitational microlensing, binary systems may act as lenses or sources. Identifying lens binarity is generally easy, in particular in events characterized by caustic crossing since the resulting light curve exhibits strong deviations from a smooth single-lensing light curve. In contrast, light curves with minor deviations from a Paczyński behavior do not allow one to identify the source binarity. A consequence of gravitational microlensing is the shift of the position of the multiple image centroid with respect to the source star location — the so-called astrometric microlensing signal. When the astrometric signal is considered, the presence of a binary sourcemore » manifests with a path that largely differs from that expected for single source events. Here, we investigate the astrometric signatures of binary sources taking into account their orbital motion and the parallax effect due to the Earth’s motion, which turn out not to be negligible in most cases. We also show that considering the above-mentioned effects is important in the analysis of astrometric data in order to correctly estimate the lens-event parameters.« less
Classification of X-ray sources in the direction of M31
NASA Astrophysics Data System (ADS)
Vasilopoulos, G.; Hatzidimitriou, D.; Pietsch, W.
2012-01-01
M31 is our nearest spiral galaxy, at a distance of 780 kpc. Identification of X-ray sources in nearby galaxies is important for interpreting the properties of more distant ones, mainly because we can classify nearby sources using both X-ray and optical data, while more distant ones via X-rays alone. The XMM-Newton Large Project for M31 has produced an abundant sample of about 1900 X-ray sources in the direction of M31. Most of them remain elusive, giving us little signs of their origin. Our goal is to classify these sources using criteria based on properties of already identified ones. In particular we construct candidate lists of high mass X-ray binaries, low mass X-ray binaries, X-ray binaries correlated with globular clusters and AGN based on their X-ray emission and the properties of their optical counterparts, if any. Our main methodology consists of identifying particular loci of X-ray sources on X-ray hardness ratio diagrams and the color magnitude diagrams of their optical counterparts. Finally, we examined the X-ray luminosity function of the X-ray binaries populations.
EGG: Empirical Galaxy Generator
NASA Astrophysics Data System (ADS)
Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; MichaÅowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.
2018-04-01
The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10(-5), the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
Wong, Wing Chung; Kim, Dewey; Carter, Hannah; Diekhans, Mark; Ryan, Michael C; Karchin, Rachel
2011-08-01
Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. MySQL database, source code and binaries freely available for academic/government use at http://wiki.chasmsoftware.org, Source in Python and C++. Requires 32 or 64-bit Linux system (tested on Fedora Core 8,10,11 and Ubuntu 10), 2.5*≤ Python <3.0*, MySQL server >5.0, 60 GB available hard disk space (50 MB for software and data files, 40 GB for MySQL database dump when uncompressed), 2 GB of RAM.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
esATAC: An Easy-to-use Systematic pipeline for ATAC-seq data analysis.
Wei, Zheng; Zhang, Wei; Fang, Huan; Li, Yanda; Wang, Xiaowo
2018-03-07
ATAC-seq is rapidly emerging as one of the major experimental approaches to probe chromatin accessibility genome-wide. Here, we present "esATAC", a highly integrated easy-to-use R/Bioconductor package, for systematic ATAC-seq data analysis. It covers essential steps for full analyzing procedure, including raw data processing, quality control and downstream statistical analysis such as peak calling, enrichment analysis and transcription factor footprinting. esATAC supports one command line execution for preset pipelines, and provides flexible interfaces for building customized pipelines. esATAC package is open source under the GPL-3.0 license. It is implemented in R and C ++. Source code and binaries for Linux, MAC OS X and Windows are available through Bioconductor https://www.bioconductor.org/packages/release/bioc/html/esATAC.html). xwwang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online.
Detection and period measurements of GX1+4 at hard x ray energies with the SIGMA telescope
NASA Technical Reports Server (NTRS)
Laurent, PH.; Salotti, L.; Lebrun, F.; Paul, J.; Denis, M.; Barret, D.; Jourdain, E.; Roques, J. P.; Churazov, E.; Gilfanov, M.
1992-01-01
The galactic Low Mass X ray Binary GX1+4 was detected by the coded aperture hard X ray gamma ray SIGMA telescope during the Feb. to April 1991 observations of the galactic center regions. The source, whose emission varied during the survey of a factor greater than 40 pct., reached a maximum luminosity in the 40 to 140 energy range of 1.03 x 10(exp 37) erg/s (D = 8.5 kpc), thus approaching the emission level of the 1970 to 1980 high state. Two minute flux pulsations were detected on Mar. 22 and on Mar. 31 and Apr. 1. Comparison with the last period measurements shows that the current spin-down phase of GX1+4 is ending. Concerning the proposed association of this source with the galactic center 511 keV annihilation emission, upper limits were derived.
Software support for SBGN maps: SBGN-ML and LibSBGN.
van Iersel, Martijn P; Villéger, Alice C; Czauderna, Tobias; Boyd, Sarah E; Bergmann, Frank T; Luna, Augustin; Demir, Emek; Sorokin, Anatoly; Dogrusoz, Ugur; Matsuoka, Yukiko; Funahashi, Akira; Aladjem, Mirit I; Mi, Huaiyu; Moodie, Stuart L; Kitano, Hiroaki; Le Novère, Nicolas; Schreiber, Falk
2012-08-01
LibSBGN is a software library for reading, writing and manipulating Systems Biology Graphical Notation (SBGN) maps stored using the recently developed SBGN-ML file format. The library (available in C++ and Java) makes it easy for developers to add SBGN support to their tools, whereas the file format facilitates the exchange of maps between compatible software applications. The library also supports validation of maps, which simplifies the task of ensuring compliance with the detailed SBGN specifications. With this effort we hope to increase the adoption of SBGN in bioinformatics tools, ultimately enabling more researchers to visualize biological knowledge in a precise and unambiguous manner. Milestone 2 was released in December 2011. Source code, example files and binaries are freely available under the terms of either the LGPL v2.1+ or Apache v2.0 open source licenses from http://libsbgn.sourceforge.net. sbgn-libsbgn@lists.sourceforge.net.
Improved magnetic encoding device and method for making the same. [Patent application
Fox, R.J.
A magnetic encoding device and method for making the same are provided for use as magnetic storage media in identification control applications that give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications. 7 figures.
Method for making an improved magnetic encoding device
Fox, Richard J.
1981-01-01
A magnetic encoding device and method for making the same are provided for use as magnetic storage mediums in identification control applications which give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications.
Chandra Observation of Luminous and Ultraluminous X-ray Binaries in M101
NASA Technical Reports Server (NTRS)
Mukai, K.; Pence, W. D.; Snowden, S. L.; Kuntz, K. D.; White, Nicholas E. (Technical Monitor)
2002-01-01
X-ray binaries in the Milky Way are among the brightest objects on the X-ray sky. With the increasing sensitivity of recent missions, it is now possible to study X-ray binaries in nearby galaxies. We present data on six ultraluminous binaries in the nearby spiral galaxy, M101, obtained with Chandra ACIS-S. Of these, five appear to be similar to ultraluminous sources in other galaxies, while the brightest source, P098, shows some unique characteristics. We present our interpretation of the data in terms of an optically thick outflow, and discuss implications.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method for double domain in the context of ADM 3 +1 approach in numerical relativity. This work enables us to perform high resolution calculations for initial sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. Thus, we display features of the conformal factor for different masses, spins and linear momenta.
NASA Astrophysics Data System (ADS)
Götberg, Y.; de Mink, S. E.; Groh, J. H.
2017-11-01
Understanding ionizing fluxes of stellar populations is crucial for various astrophysical problems including the epoch of reionization. Short-lived massive stars are generally considered as the main stellar sources. We examine the potential role of less massive stars that lose their envelope through interaction with a binary companion. Here, we focus on the role of metallicity (Z). For this purpose we used the evolutionary code MESA and created tailored atmosphere models with the radiative transfer code CMFGEN. We show that typical progenitors, with initial masses of 12 M⊙, produce hot and compact stars ( 4 M⊙, 60-80 kK, 1 R⊙). These stripped stars copiously produce ionizing photons, emitting 60-85% and 30-60% of their energy as HI and HeI ionizing radiation, for Z = 0.0001-0.02, respectively. Their output is comparable to what massive stars emit during their Wolf-Rayet phase, if we account for their longer lifetimes and the favorable slope of the initial mass function. Their relative importance for reionization may be further favored since they emit their photons with a time delay ( 20 Myr after birth in our fiducial model). This allows time for the dispersal of the birth clouds, allowing the ionizing photons to escape into the intergalactic medium. At low Z, we find that Roche stripping fails to fully remove the H-rich envelope, because of the reduced opacity in the subsurface layers. This is in sharp contrast with the assumption of complete stripping that is made in rapid population synthesis simulations, which are widely used to simulate the binary progenitors of supernovae and gravitational waves. Finally, we discuss the urgency to increase the observed sample of stripped stars to test these models and we discuss how our predictions can help to design efficient observational campaigns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.
The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, and binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 Mmore » $${}_{\\odot }$$, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. Here, these two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.« less
Advanced LIGO constraints on neutron star mergers and r-process sites
Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.; ...
2017-02-20
The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, and binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 Mmore » $${}_{\\odot }$$, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. Here, these two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.« less
Castro-Chavez, Fernando
2012-01-01
Background Three binary representations of the genetic code according to the ancient I Ching of Fu-Xi will be presented, depending on their defragging capabilities by pairing based on three biochemical properties of the nucleic acids: H-bonds, Purine/Pyrimidine rings, and the Keto-enol/Amino-imino tautomerism, yielding the last pair a 32/32 single-strand self-annealed genetic code and I Ching tables. Methods Our working tool is the ancient binary I Ching's resulting genetic code chromosomes defragged by vertical and by horizontal pairing, reverse engineered into non-binaries of 2D rotating 4×4×4 circles and 8×8 squares and into one 3D 100% symmetrical 16×4 tetrahedron coupled to a functional tetrahedron with apical signaling and central hydrophobicity (codon formula: 4[1(1)+1(3)+1(4)+4(2)]; 5:5, 6:6 in man) forming a stella octangula, and compared to Nirenberg's 16×4 codon table (1965) pairing the first two nucleotides of the 64 codons in axis y. Results One horizontal and one vertical defragging had the start Met at the center. Two, both horizontal and vertical pairings produced two pairs of 2×8×4 genetic code chromosomes naturally arranged (M and I), rearranged by semi-introversion of central purines or pyrimidines (M' and I') and by clustering hydrophobic amino acids; their quasi-identity was disrupted by amino acids with odd codons (Met and Tyr pairing to Ile and TGA Stop); in all instances, the 64-grid 90° rotational ability was restored. Conclusions We defragged three I Ching representations of the genetic code while emphasizing Nirenberg's historical finding. The synthetic genetic code chromosomes obtained reflect the protective strategy of enzymes with a similar function, having both humans and mammals a biased G-C dominance of three H-bonds in the third nucleotide of their most used codons per amino acid, as seen in one chromosome of the i, M and M' genetic codes, while a two H-bond A-T dominance was found in their complementary chromosome, as seen in invertebrates and plants. The reverse engineering of chromosome I' into 2D rotating circles and squares was undertaken, yielding a 100% symmetrical 3D geometry which was coupled to a previously obtained genetic code tetrahedron in order to differentiate the start methionine from the methionine that is acting as a codifying non-start codon. PMID:23431415
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Holographic implementation of a binary associative memory for improved recognition
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.
1998-03-01
Neural network associate memory has found wide application sin pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is sued to enhance the storage efficiency of the model. The question of imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique an the ability of correct association is enhanced in a real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. The learning and recall is possible by using digital optical matrix-vector multiplication, where full use of parallelism and connectivity of optics is made. A hologram is used in the experiment as a longer memory (LTM) for storing all input information. The short-term memory or the interconnection weight matrix required during the recall process is configured by retrieving the necessary information from the holographic LTM.
Documentation for the machine-readable character coded version of the SKYMAP catalogue
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1981-01-01
The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to fully describe the formatted tape so that users can process the data problems and guess work; it should be distributed with any character version of the catalogue.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
NASA Astrophysics Data System (ADS)
Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe
2018-06-01
The predicted and observed space density of cataclysmic variables (CVs) have been for a long time discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has been recently measured to be ρ ≲ 2 × 10-5 pc-3. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings into agreement predicted and observed space densities of CVs and period bouncers. To progress with our understanding of CV evolution it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also provided in details, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.
Advanced complex trait analysis.
Gray, A; Stewart, I; Tenesa, A
2012-12-01
The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as those datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries. For more information see http://www.epcc.ed.ac.uk/software-products/acta. a.gray@ed.ac.uk Supplementary data are available at Bioinformatics online.
Evidence for a planetary mass third body orbiting the binary star KIC 5095269
NASA Astrophysics Data System (ADS)
Getley, A. K.; Carter, B.; King, R.; O'Toole, S.
2017-07-01
In this paper, we report the evidence for a planetary mass body orbiting the close binary star KIC 5095269. This detection arose from a search for eclipse timing variations amongst the more than 2000 eclipsing binaries observed by Kepler. Light curve and periodic eclipse time variations have been analysed using systemic and a custom Binary Eclipse Timings code based on the Transit Analysis Package which indicates a 7.70 ± 0.08MJup object orbiting every 237.7 ± 0.1 d around a 1.2 M⊙ primary and a 0.51 M⊙ secondary in an 18.6 d orbit. A dynamical integration over 107 yr suggests a stable orbital configuration. Radial velocity observations are recommended to confirm the properties of the binary star components and the planetary mass of the companion.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MMK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with these type of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and into a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
Planet formation: is it good or bad to have a stellar companion?
NASA Astrophysics Data System (ADS)
Marzari, F.; Thebault, P.; Scholl, H.
2010-04-01
Planet formation in binary star systems is a complex issue due to the gravitational perturbations of the companion star. One of the crucial steps of the core-accretion model is planetesimal accretion into large protoplanets which finally coalesce into planets. In a planetesimal swarm surrounding the primary star, the average mutual impact velocity determines if larger bodies form or if the population is grinded down to dust, halting the planet formation process. This velocity is strongly influenced by the companion gravitational pull and by gas drag. The combined effect of these two forces may act in favour of or against planet formation, setting a lower or equal probability of the existence of extrasolar planets around single or binary stars. Planetesimal accretion in binaries has been studied so far with two different approaches. N-body codes based on the assumption that the disk is axisymmetric are very cost-effective since they allow the study of the mutual relative velocity with limited CPU usage. A large amount of planetesimal trajectories can be computed making it possible to outline the regions around the star where planet formation is possible. The main limitation of the N-body codes is the axisymmetric assumption. The companion perturbations affect not only the planetesimal orbits, but also the gaseous disk, by forcing spiral density waves. In addition, the overall shape of the disk changes from circular to elliptic. Hybrid codes have been recently developed which solve the equations for the disk with a hydrodynamical grid code and use the computed gas density and velocity vector to calculate an accurate value of the gas drag force on the planetesimals. These codes are more complex and may compute the trajectories of only a limited number of planetesimals.
Massive Binary Black Holes in the Cosmic Landscape
NASA Astrophysics Data System (ADS)
Colpi, Monica; Dotti, Massimo
2011-02-01
Binary black holes occupy a special place in our quest for understanding the evolution of galaxies along cosmic history. If massive black holes grow at the center of (pre-)galactic structures that experience a sequence of merger episodes, then dual black holes form as inescapable outcome of galaxy assembly, and can in principle be detected as powerful dual quasars. But, if the black holes reach coalescence, during their inspiral inside the galaxy remnant, then they become the loudest sources of gravitational waves ever in the universe. The Laser Interferometer Space Antenna is being developed to reveal these waves that carry information on the mass and spin of these binary black holes out to very large look-back times. Nature seems to provide a pathway for the formation of these exotic binaries, and a number of key questions need to be addressed: How do massive black holes pair in a merger? Depending on the properties of the underlying galaxies, do black holes always form a close Keplerian binary? If a binary forms, does hardening proceed down to the domain controlled by gravitational wave back reaction? What is the role played by gas and/or stars in braking the black holes, and on which timescale does coalescence occur? Can the black holes accrete on flight and shine during their pathway to coalescence? After outlining key observational facts on dual/binary black holes, we review the progress made in tracing their dynamics in the habitat of a gas-rich merger down to the smallest scales ever probed with the help of powerful numerical simulations. N-Body/hydrodynamical codes have proven to be vital tools for studying their evolution, and progress in this field is expected to grow rapidly in the effort to describe, in full realism, the physics of stars and gas around the black holes, starting from the cosmological large scale of a merger. If detected in the new window provided by the upcoming gravitational wave experiments, binary black holes will provide a deep view into the process of hierarchical clustering which is at the heart of the current paradigm of galaxy formation. They will also be exquisite probes for testing General Relativity, as the theory of gravity. The waveforms emitted during the inspiral, coalescence and ring-down phase carry in their shape the sign of a dynamically evolving space-time and the proof of the existence of an horizon.
Searching for Unresolved Binary Brown Dwarfs
NASA Astrophysics Data System (ADS)
Albretsen, Jacob; Stephens, Denise
2007-10-01
There are currently L and T brown dwarfs (BDs) with errors in their classification of +/- 1 to 2 spectra types. Metallicity and gravitational differences have accounted for some of these discrepancies, and recent studies have shown unresolved binary BDs may offer some explanation as well. However limitations in technology and resources often make it difficult to clearly resolve an object that may be binary in nature. Stephens and Noll (2006) identified statistically strong binary source candidates from Hubble Space Telescope (HST) images of Trans-Neptunian Objects (TNOs) that were apparently unresolved using model point-spread functions for single and binary sources. The HST archive contains numerous observations of BDs using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) that have never been rigorously analyzed for binary properties. Using methods developed by Stephens and Noll (2006), BD observations from the HST data archive are being analyzed for possible unresolved binaries. Preliminary results will be presented. This technique will identify potential candidates for future observations to determine orbital information.
Accuracy of inference on the physics of binary evolution from gravitational-wave observations
NASA Astrophysics Data System (ADS)
Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya
2018-04-01
The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
Accuracy of inference on the physics of binary evolution from gravitational-wave observations
NASA Astrophysics Data System (ADS)
Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya
2018-07-01
The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion and mass-loss rates during the luminous blue variable, and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
Synthetic Survey of the Kepler Field
NASA Astrophysics Data System (ADS)
Wells, Mark; Prša, Andrej
2018-01-01
In the era of large scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys that correspond to any survey cadences. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary catalog.
Common Envelope Light Curves. I. Grid-code Module Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.
The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been putmore » forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M {sub ⊙} red giant branch star interacts with a 0.6 M {sub ⊙} companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.« less
NASA Astrophysics Data System (ADS)
van den Berg, Maureen C.
2015-08-01
The binaries in the core of a star cluster are the energy source that prevents the cluster from experiencing core collapse. To model the dynamical evolution of a cluster, it is important to have constraints on the primordial binary content. X-ray observations of old star clusters are very efficient in detecting the close interacting binaries among the cluster members. The X-ray sources in star clusters are a mix of binaries that were dynamically formed and primordial binaries. In massive, dense star clusters, dynamical encounters play an important role in shaping the properties and numbers of the binaries. In contrast, in the low-density clusters the impact of dynamical encounters is presumed to be very small, and the close binaries detected in X-rays represent a primordial population. The lowest density globular clusters have current masses and central densities similar to those of the oldest open clusters in our Milky Way. I will discuss the results of studies with the Chandra X-ray Observatory that have nevertheless revealed a clear dichotomy: far fewer (if any at all) X-ray sources are detected in the central regions of the low-density globular clusters compared to the number of secure cluster members that have been detected in old open clusters (above a limiting X-ray luminosity of typically 4e30 erg/s). The low stellar encounter rates imply that dynamical destruction of binaries can be ignored at present, therefore an explanation must be sought elsewhere. I will discuss several factors that can shed light on the implied differences between the primordial close binary populations in the two types of star clusters.
Deep classification hashing for person re-identification
NASA Astrophysics Data System (ADS)
Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang
2018-04-01
As the development of surveillance in public, person re-identification becomes more and more important. The largescale databases call for efficient computation and storage, hashing technique is one of the most important methods. In this paper, we proposed a new deep classification hashing network by introducing a new binary appropriation layer in the traditional ImageNet pre-trained CNN models. It outputs binary appropriate features, which can be easily quantized into binary hash-codes for hamming similarity comparison. Experiments show that our deep hashing method can outperform the state-of-the-art methods on the public CUHK03 and Market1501 datasets.
Getting Started in Classroom Computing.
ERIC Educational Resources Information Center
Ahl, David H.
Written for secondary students, this booklet provides an introduction to several computer-related concepts through a set of six classroom games, most of which can be played with little more than a sheet of paper and a pencil. The games are: 1) SECRET CODES--introduction to binary coding, punched cards, and paper tape; 2) GUESS--efficient methods…
2009-09-01
184 LEGALRESR Recode-Tab:[8] State of legal voting res 72 LITHO * Litho code 473 NOFVAPA* 42a. [42a] Not used FVAP tele:Did not know 363...471 INRECNO Master SCS ID number 472 LITHO Litho code 473 QCOMPF Binary variable indicating if case compl 474 QCOMPN [QCOMPN] Questions
Factors Affecting Code Status in a University Hospital Intensive Care Unit
ERIC Educational Resources Information Center
Van Scoy, Lauren Jodi; Sherman, Michael
2013-01-01
The authors collected data on diagnosis, hospital course, and end-of-life preparedness in patients who died in the intensive care unit (ICU) with "full code" status (defined as receiving cardiopulmonary resuscitation), compared with those who didn't. Differences were analyzed using binary and stepwise logistic regression. They found no…
Alternancia entre el estado de emisión de Rayos-X y Pulsar en Sistemas Binarios Interactuantes
NASA Astrophysics Data System (ADS)
De Vito, M. A.; Benvenuto, O. G.; Horvath, J. E.
2015-08-01
Redbacks belong to the family of binary systems in which one of the components is a pulsar. Recent observations show redbacks that have switched their state from pulsar - low mass companion (where the accretion of material over the pulsar has ceased) to low mass X-ray binary system (where emission is produced by the mass accretion on the pulsar), or inversely. The irradiation effect included in our models leads to cyclic mass transfer episodes, which allow close binary systems to switch between one state to other. We apply our results to the case of PSR J1723-2837, and discuss the need to include new ingredients in our code of binary evolution to describe the observed state transitions.
NASA Technical Reports Server (NTRS)
Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.
2015-01-01
The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.
Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans; Swertz, Morris A
2016-07-15
While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human-experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect : m.a.swertz@rug.nl Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K. Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans
2016-01-01
Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human-experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Availability and Implementation: Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153686
Outflow and Infall in Star-forming Region L1221
NASA Astrophysics Data System (ADS)
Lee, Chin-Fei; Ho, Paul T. P.
2005-10-01
We have mapped the 3.3 mm continuum, CO, HCO+, N2H+, and CS emission around a nearby Class I source, IRAS 22266+6845, in the L1221 cometary dark cloud. L1221 is a complicated star-forming region. It hosts three infrared sources: a close binary consisting of an east source and a west source around the IRAS source position and a southeast source ~45" to the southeast (T. Bourke 2004, private communication). The east source is identified as the IRAS source. Continuum emission is seen around the east and southeast sources, probably tracing the dust around them. No continuum emission is seen toward the west source, probably indicating that there is not much dust there. An east-west molecular outflow is seen in CO, HCO+, and CS originated from around the binary. It is bipolar with an east lobe and a west lobe, both appearing as a wide-opening outflow shell originated from around the binary. It is likely powered by the east source, which shows a southeast extension along the outflow axis in the K' image. A ringlike envelope is seen in N2H+ around the binary surrounding the outflow waist. It is tilted with the major axis perpendicular to the outflow axis. The kinematics is well reproduced by a thin-disk model with both infall and rotation, and a column density peak in a ring. The ringlike envelope is not rotationally supported, as the rotation velocity is smaller than the infall velocity.
Multifunction audio digitizer. [producing direct delta and pulse code modulation
NASA Technical Reports Server (NTRS)
Monford, L. G., Jr. (Inventor)
1974-01-01
An illustrative embodiment of the invention includes apparatus which simultaneously produces both direct delta modulation and pulse code modulation. An input signal, after amplification, is supplied to a window comparator which supplies a polarity control signal to gate the output of a clock to the appropriate input of a binary up-down counter. The control signals provide direct delta modulation while the up-down counter output provides pulse code modulation.
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
Gravitational Waves from Coalescing Binary Black Holes: Theoretical and Experimental Challenges
Damour, Thibault
2018-05-22
A network of ground-based interferometric gravitational wave detectors (LIGO/VIRGO/GEO/...) is currently taking data near its planned sensitivity. Coalescing black hole binaries are among the most promising, and most exciting, gravitational wave sources for these detectors. The talk will review the theoretical and experimental challenges that must be met in order to successfully detect gravitational waves from coalescing black hole binaries, and to be able to reliably measure the physical parameters of the source (masses, spins, ...).
Arithmetic operations in optical computations using a modified trinary number system.
Datta, A K; Basuray, A; Mukhopadhyay, S
1989-05-01
A modified trinary number (MTN) system is proposed in which any binary number can be expressed with the help of trinary digits (1, 0, 1 ). Arithmetic operations can be performed in parallel without the need for carry and borrow steps when binary digits are converted to the MTN system. An optical implementation of the proposed scheme that uses spatial light modulators and color-coded light signals is described.
Modeling the binary circumstellar medium of Type IIb/L/n supernova progenitors
NASA Astrophysics Data System (ADS)
Kolb, Christopher; Blondin, John; Borkowski, Kazik; Reynolds, Stephen
2018-01-01
Circumstellar interaction in close binary systems can produce a highly asymmetric environment, particularly for systems with a mass outflow velocity comparable to the binary orbital speed. This asymmetric circumstellar medium (CSM) becomes visible after a supernova explosion, when SN radiation illuminates the gas and when SN ejecta collide with the CSM. We aim to better understand the development of this asymmetric CSM, particularly for binary systems containing a red supergiant progenitor, and to study its impact on supernova morphology. To achieve this, we model the asymmetric wind and subsequent supernova explosion in full 3D hydrodynamics using the shock-capturing hydro code VH-1 on a spherical yin-yang grid. Wind interaction is computed in a frame co-rotating with the binary system, and gas is accelerated using a radiation pressure-driven wind model where optical depth of the radiative force is dependent on azimuthally-averaged gas density. We present characterization of our asymmetric wind density distribution model by fitting a polar-to-equatorial density contrast function to free parameters such as binary separation distance, primary mass loss rate, and binary mass ratio.
X-ray Spectral Formation In High-mass X-ray Binaries: The Case Of Vela X-1
NASA Astrophysics Data System (ADS)
Akiyama, Shizuka; Mauche, C. W.; Liedahl, D. A.; Plewa, T.
2007-05-01
We are working to develop improved models of radiatively-driven mass flows in the presence of an X-ray source -- such as in X-ray binaries, cataclysmic variables, and active galactic nuclei -- in order to infer the physical properties that determine the X-ray spectra of such systems. The models integrate a three-dimensional time-dependent hydrodynamics capability (FLASH); a comprehensive and uniform set of atomic data, improved calculations of the line force multiplier that account for X-ray photoionization and non-LTE population kinetics, and X-ray emission-line models appropriate to X-ray photoionized plasmas (HULLAC); and a Monte Carlo radiation transport code that simulates Compton scattering and recombination cascades following photoionization. As a test bed, we have simulated a high-mass X-ray binary with parameters appropriate to Vela X-1. While the orbital and stellar parameters of this system are well constrained, the physics of X-ray spectral formation is less well understood because the canonical analytical wind velocity profile of OB stars does not account for the dynamical and radiative feedback effects due to the rotation of the system and to the irradiation of the stellar wind by X-rays from the neutron star. We discuss the dynamical wind structure of Vela X-1 as determined by the FLASH simulation, where in the binary the X-ray emission features originate, and how the spatial and spectral properties of the X-ray emission features are modified by Compton scattering, photoabsorption, and fluorescent emission. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.
NASA Astrophysics Data System (ADS)
Lopez, Martin; Batta, Aldo; Ramírez-Ruiz, Enrico
2018-01-01
Globular clusters have about a thousand times denser stellar environments than our Milky Way. This crowded setting leads to many interactions between inhabitants of the cluster and the formation of a whole myriad of exotic objects. One such object is a binary system that forms which is composed of two stellar mass black holes (BHs). Due to the recent detection of gravitational waves (GWs), we know that some of these BH binaries (BHBs) are able to merge. Upon coalescence, BHBs produce GW signals that can be measured by the Laser Interferometer Gravitational-Wave Observatory (LIGO) group on Earth. Spin is one such parameter that LIGO can estimate from the type of signals they observe and as such can be used to constrain their production site. After these BHBs are assembled in dense stellar systems they can continue to interact with other members, either through tidal interactions or physical collisions. When a BHB tidally disrupts a star, a significant fraction of the debris can be accreted by the binary, effectively altering the spin of the BH members. Therefore, although a dynamically formed BHB will initially have low randomly aligned spins, through these types of interactions their birth spins can be significantly altered both in direction and magnitude. We have used a Lagrangian 3D Smoothed Particle Hydrodynamics (SPH) code GADGET-3 to simulate these interactions. Our results allow us to understand whether accretion from a tidal disruption event can significantly alter the birth properties of dynamically assembled BHBs such as spin, mass, and orbital attributes. The implications of these results will help us constrain the properties of BHBs in dense stellar systems in anticipation of an exciting decade ahead of us.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with construction of multilevel concatenated block modulation codes using a multi-level concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation code, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
Simulating Gravitational Radiation from Binary Black Holes Mergers as LISA Sources
NASA Technical Reports Server (NTRS)
Baker, John
2005-01-01
A viewgraph presentation on the simulation of gravitational waves from Binary Massive Black Holes with LISA observations is shown. The topics include: 1) Massive Black Holes (MBHs); 2) MBH Binaries; 3) Gravitational Wavws from MBH Binaries; 4) Observing with LISA; 5) How LISA sees MBH binary mergers; 6) MBH binary inspirals to LISA; 7) Numerical Relativity Simulations; 8) Numerical Relativity Challenges; 9) Recent Successes; 10) Goddard Team; 11) Binary Black Hole Simulations at Goddard; 12) Goddard Recent Advances; 13) Baker, et al.:GSFC; 13) Starting Farther Out; 14) Comparing Initial Separation; 15) Now with AMR; and 16) Conclusion.
Luminous Binary Supersoft X-Ray Sources
NASA Technical Reports Server (NTRS)
DiStefano, Rosanne; Oliversen, Ronald J. (Technical Monitor)
2002-01-01
This grant was for the study of Luminous Supersoft X-Ray Sources (SSSs). During the first year a number of projects were completed and new projects were started. The projects include: 1) Time variability of SSSs 2) SSSs in M31; 3) Binary evolution scenarios; and 4) Acquiring new data.
Automating tasks in protein structure determination with the clipper python module
McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M.; Hoh, Soon Wen; Jenkins, Huw T.; Dodson, Eleanor
2017-01-01
Abstract Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine‐independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo‐microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. PMID:28901669
NASA Astrophysics Data System (ADS)
Boettcher, M. A.; Butt, B. M.; Klinkner, S.
2016-10-01
A major concern of a university satellite mission is to download the payload and the telemetry data from a satellite. While the ground station antennas are in general easy and with limited afford to procure, the receiving unit is most certainly not. The flexible and low-cost software-defined radio (SDR) transceiver "BladeRF" is used to receive the QPSK modulated and CCSDS compliant coded data of a satellite in the HAM radio S-band. The control software is based on the Open Source program GNU Radio, which also is used to perform CCSDS post processing of the binary bit stream. The test results show a good performance of the receiving system.
The Busot Observatory: towards a robotic autonomous telescope
NASA Astrophysics Data System (ADS)
García-Lozano, R.; Rodes, J. J.; Torrejón, J. M.; Bernabéu, G.; Berná, J. Á.
2016-12-01
We describe the Busot observatory, our project of a fully robotic autonomous telescope. This astronomical observatory, which obtained the Minor Planet Centre code MPC-J02 in 2009, includes a 14 inch MEADE LX200GPS telescope, a 2 m dome, a ST8-XME CCD camera from SBIG, with an AO-8 adaptive optics system, and a filter wheel equipped with UBVRI system. We are also implementing a spectrograph SGS ST-8 for the telescope. Currently, we are involved in long term studies of variable sources such as X-ray binaries systems, and variable stars. In this work we also present the discovery of W UMa systems and its orbital periods derived from the photometry light curve obtained at Busot Observatory.
NASA Technical Reports Server (NTRS)
Centrella, Joan
2012-01-01
The final merger of two black holes is expected to be the strongest source of gravitational waves for both ground-based detectors such as LIGO and VIRGO, as well as future. space-based detectors. Since the merger takes place in the regime of strong dynamical gravity, computing the resulting gravitational waveforms requires solving the full Einstein equations of general relativity on a computer. For many years, numerical codes designed to simulate black hole mergers were plagued by a host of instabilities. However, recent breakthroughs have conquered these instabilities and opened up this field dramatically. This talk will focus on.the resulting 'gold rush' of new results that is revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, testing general relativity, and astrophysics
NASA Technical Reports Server (NTRS)
Centrella, Joan
2010-01-01
The final merger of two black holes is expected to be the strongest source of gravitational waves for both ground-based detectors such as LIGO and VIRGO, as well as the space-based LISA. Since the merger takes place in the regime of strong dynamical gravity, computing the resulting gravitational waveforms requires solving the full Einstein equations of general relativity on a computer. For many years, numerical codes designed to simulate black hole mergers were plagued by a host of instabilities. However, recent breakthroughs have conquered these instabilities and opened up this field dramatically. This talk will focus on the resulting gold rush of new results that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wove detection, testing general relativity, and astrophysics.
NASA Technical Reports Server (NTRS)
Centrella, Joan
2010-01-01
The final merger of two black holes is expected to be the strongest source of gravitational waves for both ground-based detectors such as LIGO and VIRGO, as well as the space-based LISA. Since the merger takes place in the regime of strong dynamical gravity, computing the resulting gravitational waveforms requires solving the full Einstein equations of general relativity on a computer. For many years, numerical codes designed to simulate black hole mergers were plagued by a host of instabilities. However, recent breakthroughs have conquered these instabilities and opened up this field dramatically. This talk will focus on the resulting gold rush of new results that are revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, testing general relativity, and astrophysics.
NASA Astrophysics Data System (ADS)
Jung, Y. K.; Udalski, A.; Bond, I. A.; Yee, J. C.; Gould, A.; Han, C.; Albrow, M. D.; Lee, C.-U.; Kim, S.-L.; Hwang, K.-H.; Chung, S.-J.; Ryu, Y.-H.; Shin, I.-G.; Zhu, W.; Cha, S.-M.; Kim, D.-J.; Lee, Y.; Park, B.-G.; Kim, H.-W.; Pogge, R. W.; KMTNet Collaboration; Skowron, J.; Szymański, M. K.; Poleski, R.; Mróz, P.; Kozłowski, S.; Pietrukowicz, P.; Soszyński, I.; Ulaczyk, K.; Pawlak, M.; OGLE Collaboration; Abe, F.; Bennett, D. P.; Barry, R.; Sumi, T.; Asakura, Y.; Bhattacharya, A.; Donachie, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Koshimoto, N.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Rattenbury, N. J.; Evans, P.; Sharan, A.; Sullivan, D. J.; Suzuki, D.; Tristram, P. J.; Yamada, T.; Yamada, T.; Yonehara, A.; MOA Collaboration
2017-06-01
We report the analysis of the first resolved caustic-crossing binary-source microlensing event OGLE-2016-BLG-1003. The event is densely covered by round-the-clock observations of three surveys. The light curve is characterized by two nested caustic-crossing features, which is unusual for typical caustic-crossing perturbations. From the modeling of the light curve, we find that the anomaly is produced by a binary source passing over a caustic formed by a binary lens. The result proves the importance of high-cadence and continuous observations, and the capability of second-generation microlensing experiments to identify such complex perturbations that are previously unknown. However, the result also raises the issues of the limitations of current analysis techniques for understanding lens systems beyond two masses and of determining the appropriate multiband observing strategy of survey experiments.
Close encounters of the third-body kind. [intruding bodies in binary star systems
NASA Technical Reports Server (NTRS)
Davies, M. B.; Benz, W.; Hills, J. G.
1994-01-01
We simulated encounters involving binaries of two eccentricities: e = 0 (i.e., circular binaries) and e = 0.5. In both cases the binary contained a point mass of 1.4 solar masses (i.e., a neutron star) and a 0.8 solar masses main-sequence star modeled as a polytrope. The semimajor axes of both binaries were set to 60 solar radii (0.28 AU). We considered intruders of three masses: 1.4 solar masses (a neutron star), 0.8 solar masses (a main-sequence star or a higher mass white dwarf), and 0.64 solar masses (a more typical mass white dwarf). Our strategy was to perform a large number (40,000) of encounters using a three-body code, then to rerun a small number of cases with a three-dimensional smoothed particle hydrodynamics (SPH) code to determine the importance of hydrodynamical effects. Using the results of the three-body runs, we computed the exchange across sections, sigma(sub ex). From the results of the SPH runs, we computed the cross sections for clean exchange, denoted by sigma(sub cx); the formation of a triple system, denoted by sigma(sub trp); and the formation of a merged binary with an object formed from the merger of two of the stars left in orbit around the third star, denoted by sigma(sub mb). For encounters between either binary and a 1.4 solar masses neutron star, sigma(sub cx) approx. 0.7 sigma(sub ex) and sigma(sub mb) + sigma(sub trp) approx. 0.3 sigma(sub ex). For encounters between either binary and the 0.8 solar masses main-sequence star, sigma(sub cx) approx. 0.50 sigma(sub ex) and sigma(sub mb) + sigma(sub trp) approx. 1.0 sigma(sub ex). If the main sequence star is replaced by a main-sequence star of the same mass, we have sigma(sub cx) approx. 0.5 sigma(sub ex) and sigma(sub mb) + sigma(sub trp) approx. 1.6 sigma(sub ex). Although the exchange cross section is a sensitive function of intruder mass, we see that the cross section to produce merged binaries is roughly independent of intruder mass. The merged binaries produced have semi-major axes much larger than either those of the original binaries or those of binaries produced in clean exchanges. Coupled with their lower kick velocities, received from the encounters, their larger size will enhance their cross section, shortening the waiting time to a subsequent encounter with another single star.
Optical gravitational lensing experiment: OGLE-1999-BUL-19 - the first multipeak parallax event
NASA Astrophysics Data System (ADS)
Smith, Martin C.; Mao, Shude; Woźniak, P.; Udalski, A.; Szymański, M.; Kubiak, M.; Pietrzyński, G.; Soszyński, I.; Żebruń, K.
2002-10-01
We describe a highly unusual microlensing event, OGLE-1999-BUL-19. Unlike most standard microlensing events, this event exhibits multiple peaks in its light curve. The Einstein radius crossing time for this event is approximately 1 yr, which is unusually long. We show that the additional peaks in the light curve can be caused by the very small value for the relative transverse velocity of the lens projected on to the observer plane (). Since this value is significantly less than the speed of the orbit of the Earth around the Sun (v⊕~ 30km s-1), the motion of the Earth induces these multiple peaks in the light curve. This value for is the lowest velocity so far published and we believe that this is the first multiple-peak parallax event ever observed. We also found that the event can be somewhat better fitted by a rotating binary-source model, although this is to be expected since every parallax microlensing event can be exactly reproduced by a suitable binary-source model. A face-on rotating binary-lens model was also identified, but this provides a significantly worse fit. We conclude that the most likely cause for this multipeak behaviour is parallax microlensing rather than microlensing by a binary source. However, this event may be exhibiting a slight binary-source signature in addition to these parallax-induced multiple peaks. With spectroscopic observations it is possible to test this `parallax plus binary-source' hypothesis and (in the instance that the hypothesis turns out to be correct) to simultaneously fit both models and obtain a measurement of the lens mass. Furthermore, spectroscopic observations could also supply information regarding the lens properties, possibly providing another avenue for determining the lens mass. We also investigated the nature of the blending for this event, and found that the majority of the I-band blending is contributed by a source roughly aligned with the lensed source. This implies that most of the I-band blending is caused by light from the lens or a binary companion to the source. However, in the V band, there appears to be a second blended source 0.35 arcsec away from the lensed source. Hubble Space Telescope observations will be very useful for understanding the nature of the blends. We also suggest that a radial velocity survey of all parallax events will be very useful for further constraining the lensing kinematics and understanding the origins of these events and the excess of long events toward the bulge.
Gamma-rays from the binary system containing PSR J2032+4127 during its periastron passage
NASA Astrophysics Data System (ADS)
Bednarek, Włodek; Banasiński, Piotr; Sitarek, Julian
2018-01-01
The energetic pulsar, PSR J2032+4127, has recently been discovered in the direction of the unidentified HEGRA TeV γ-ray source (TeV J2032+4130). It is proposed that this pulsar forms a binary system with the Be type star, MT91 213, expected to reach periastron late in 2017. We performed detailed calculations of the γ-ray emission produced close to the binary system’s periastron passage by applying a simple geometrical model. Electrons accelerated at the collision region of pulsar and stellar winds initiate anisotropic inverse Compton {e}+/- pair cascades by scattering soft radiation from the massive companion. The γ-ray spectra, from such a comptonization process, are compared with the measurements of the extended TeV γ-ray emission from the HEGRA TeV γ-ray source. We discuss conditions within the binary system, at the periastron passage of the pulsar, for which the γ-ray emission from the binary can overcome the extended, steady TeV γ-ray emission from the HEGRA TeV γ-ray source.
LISA Sources in Milky Way Globular Clusters
NASA Astrophysics Data System (ADS)
Kremer, Kyle; Chatterjee, Sourav; Breivik, Katelyn; Rodriguez, Carl L.; Larson, Shane L.; Rasio, Frederic A.
2018-05-01
We explore the formation of double-compact-object binaries in Milky Way (MW) globular clusters (GCs) that may be detectable by the Laser Interferometer Space Antenna (LISA). We use a set of 137 fully evolved GC models that, overall, effectively match the properties of the observed GCs in the MW. We estimate that, in total, the MW GCs contain ˜21 sources that will be detectable by LISA. These detectable sources contain all combinations of black hole (BH), neutron star, and white dwarf components. We predict ˜7 of these sources will be BH-BH binaries. Furthermore, we show that some of these BH-BH binaries can have signal-to-noise ratios large enough to be detectable at the distance of the Andromeda galaxy or even the Virgo cluster.
LISA Sources in Milky Way Globular Clusters.
Kremer, Kyle; Chatterjee, Sourav; Breivik, Katelyn; Rodriguez, Carl L; Larson, Shane L; Rasio, Frederic A
2018-05-11
We explore the formation of double-compact-object binaries in Milky Way (MW) globular clusters (GCs) that may be detectable by the Laser Interferometer Space Antenna (LISA). We use a set of 137 fully evolved GC models that, overall, effectively match the properties of the observed GCs in the MW. We estimate that, in total, the MW GCs contain ∼21 sources that will be detectable by LISA. These detectable sources contain all combinations of black hole (BH), neutron star, and white dwarf components. We predict ∼7 of these sources will be BH-BH binaries. Furthermore, we show that some of these BH-BH binaries can have signal-to-noise ratios large enough to be detectable at the distance of the Andromeda galaxy or even the Virgo cluster.
Bondi-Hoyle-Lyttleton Accretion onto Binaries
NASA Astrophysics Data System (ADS)
Antoni, Andrea; MacLeod, Morgan; Ramírez-Ruiz, Enrico
2018-01-01
Binary stars are not rare. While only close binary stars will eventually interact with one another, even the widest binary systems interact with their gaseous surroundings. The rates of accretion and the gaseous drag forces arising in these interactions are the key to understanding how these systems evolve. This poster examines accretion flows around a binary system moving supersonically through a background gas. We perform three-dimensional hydrodynamic simulations of Bondi-Hoyle-Lyttleton accretion using the adaptive mesh refinement code FLASH. We simulate a range of values of semi-major axis of the orbit relative to the gravitational focusing impact parameter of the pair. On large scales, gas is gravitationally focused by the center-of-mass of the binary, leading to dynamical friction drag and to the accretion of mass and momentum. On smaller scales, the orbital motion imprints itself on the gas. Notably, the magnitude and direction of the forces acting on the binary inherit this orbital dependence. The long-term evolution of the binary is determined by the timescales for accretion, slow down of the center-of-mass, and decay of the orbit. We use our simulations to measure these timescales and to establish a hierarchy between them. In general, our simulations indicate that binaries moving through gaseous media will slow down before the orbit decays.
COSMIC probes into compact binary formation and evolution
NASA Astrophysics Data System (ADS)
Breivik, Katelyn
2018-01-01
The population of compact binaries in the galaxy represents the final state of all binaries that have lived up to the present epoch. Compact binaries present a unique opportunity to probe binary evolution since many of the interactions binaries experience can be imprinted on the compact binary population. By combining binary evolution simulations with catalogs of observable compact binary systems, we can distill the dominant physical processes that govern binary star evolution, as well as predict the abundance and variety of their end products.The next decades herald a previously unseen opportunity to study compact binaries. Multi-messenger observations from telescopes across all wavelengths and gravitational-wave observatories spanning several decades of frequency will give an unprecedented view into the structure of these systems and the composition of their components. Observations will not always be coincident and in some cases may be separated by several years, providing an avenue for simulations to better constrain binary evolution models in preparation for future observations.I will present the results of three population synthesis studies of compact binary populations carried out with the Compact Object Synthesis and Monte Carlo Investigation Code (COSMIC). I will first show how binary-black-hole formation channels can be understood with LISA observations. I will then show how the population of double white dwarfs observed with LISA and Gaia could provide a detailed view of mass transfer and accretion. Finally, I will show that Gaia could discover thousands black holes in the Milky Way through astrometric observations, yielding view into black-hole astrophysics that is complementary to and independent from both X-ray and gravitational-wave astronomy.
Probing the Milky Way electron density using multi-messenger astronomy
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane
2015-04-01
Multi-messenger observations of ultra-compact binaries in both gravitational waves and electromagnetic radiation supply highly complementary information, providing new ways of characterizing the internal dynamics of these systems, as well as new probes of the galaxy itself. Electron density models, used in pulsar distance measurements via the electron dispersion measure, are currently not well constrained. Simultaneous radio and gravitational wave observations of pulsars in binaries provide a method of measuring the average electron density along the line of sight to the pulsar, thus giving a new method for constraining current electron density models. We present this method and assess its viability with simulations of the compact binary component of the Milky Way using the public domain binary evolution code, BSE. This work is supported by NASA Award NNX13AM10G.
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is 3 4r = which requires...to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially...higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference of distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefitting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, due to using decimal arithmetic operations instead of bit operations, Manhattan hashing becomes a more time-consuming process, which significantly decreases the whole search efficiency. To solve this problem, we present an intuitive hash scheme which uses Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by more efficient XOR operator. The final experiments show that with a reasonable memory space growth, our FBC speeds up more than 80% averagely without any search accuracy loss when comparing to the state-of-art Manhattan hashing methods.
Amalian, Jean-Arthur; Trinh, Thanh Tam; Lutz, Jean-François; Charles, Laurence
2016-04-05
Tandem mass spectrometry was evaluated as a reliable sequencing methodology to read codes encrypted in monodisperse sequence-coded oligo(triazole amide)s. The studied oligomers were composed of monomers containing a triazole ring, a short ethylene oxide segment, and an amide group as well as a short alkyl chain (propyl or isobutyl) which defined the 0/1 molecular binary code. Using electrospray ionization, oligo(triazole amide)s were best ionized as protonated molecules and were observed to adopt a single charge state, suggesting that adducted protons were located on every other monomer unit. Upon collisional activation, cleavages of the amide bond and of one ether bond were observed to proceed in each monomer, yielding two sets of complementary product ions. Distribution of protons over the precursor structure was found to remain unchanged upon activation, allowing charge state to be anticipated for product ions in the four series and hence facilitating their assignment for a straightforward characterization of any encoded oligo(triazole amide)s.
Poynting-Flux-Driven Bubbles and Shocks Around Merging Neutron Star Binaries
NASA Astrophysics Data System (ADS)
Medvedev, M. V.; Loeb, A.
2013-04-01
Merging binaries of compact relativistic objects are thought to be progenitors of short gamma-ray bursts. Because of the strong magnetic field of one or both binary members and high orbital frequencies, these binaries are strong sources of energy in the form of Poynting flux. The steady injection of energy by the binary forms a bubble filled with matter with the relativistic equation of state, which pushes on the surrounding plasma and can drive a shock wave in it. Unlike the Sedov-von Neumann-Taylor blast wave solution for a point-like explosion, the shock wave here is continuously driven by the ever-increasing pressure inside the bubble. We calculate from the first principles the dynamics and evolution of the bubble and the shock surrounding it, demonstrate that it exhibits finite time singularity and find the corresponding analytical solution. We predict that such binaries can be observed as radio sources a few hours before and after the merger.
Detectability of gravitational waves from binary black holes: Impact of precession and higher modes
NASA Astrophysics Data System (ADS)
Calderón Bustillo, Juan; Laguna, Pablo; Shoemaker, Deirdre
2017-05-01
Gravitational wave templates used in current searches for binary black holes omit the effects of precession of the orbital plane and higher-order modes. While this omission seems not to impact the detection of sources having mass ratios and spins similar to those of GW150914, even for total masses M >200 M⊙ , we show that it can cause large fractional losses of sensitive volume for binaries with mass ratio q ≥4 and M >100 M⊙, measured in the detector frame. For the highest precessing cases, this is true even when the source is face-on to the detector. Quantitatively, we show that the aforementioned omission can lead to fractional losses of sensitive volume of ˜15 %, reaching >25 % for the worst cases studied. Loss estimates are obtained by evaluating the effectualness of the SEOBNRv2-ROM double spin model, currently used in binary black hole searches, towards gravitational wave signals from precessing binaries computed by means of numerical relativity. We conclude that, for sources with q ≥4 , a reliable search for binary black holes heavier than M >100 M⊙ needs to consider the effects of higher-order modes and precession. The latter seems especially necessary when Advanced LIGO reaches its design sensitivity.
I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.
Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S
2017-12-01
The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on culture, medicine and science of ancient China and several other countries. From the modern standpoint, I-Ching declares the importance of dyadic groups of binary numbers for the Nature. The system of I-Ching is represented by the tables with dyadic groups of 4 bigrams, 8 trigrams and 64 hexagrams, which were declared as fundamental archetypes of the Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids but this code is organized in accordance with the I-Ching: in particularly, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves for a different goal: to code the inherited algorithmic processes using the logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching book. Copyright © 2017 Elsevier Ltd. All rights reserved.
δ Scuti-type pulsation in the hot component of the Algol-type binary system BG Peg
NASA Astrophysics Data System (ADS)
Şenyüz, T.; Soydugan, E.
2014-02-01
In this study, 23 Algol-type binary systems, which were selected as candidate binaries with pulsating components, were observed at the Çanakkale Onsekiz Mart University Observatory. One of these systems was BG Peg. Its hotter component shows δ Scuti-type light variations. Physical parameters of BG Peg were derived from modelling the V light curve using the Wilson-Devinney code. The frequency analysis shows that the pulsational component of the BG Peg system pulsates in two modes with periods of 0.039 and 0.047 d. Mode identification indicates that both modes are most likely non-radial l = 2 modes.
NASA Astrophysics Data System (ADS)
Dogan, Suzan
2016-07-01
Accretion discs are common in binary systems, and they are often found to be misaligned with respect to the binary orbit. The gravitational torque from a companion induces nodal precession in misaligned disc orbits. In this study, we first calculate whether this precession is strong enough to overcome the internal disc torques communicating angular momentum. We compare the disc precession torque with the disc viscous torque to determine whether the disc should warp or break. For typical parameters precession wins: the disc breaks into distinct planes that precess effectively independently. To check our analytical findings, we perform 3D hydrodynamical numerical simulations using the PHANTOM smoothed particle hydrodynamics code, and confirm that disc breaking is widespread and enhances accretion on to the central object. For some inclinations, the disc goes through strong Kozai cycles. Disc breaking promotes markedly enhanced and variable accretion and potentially produces high-energy particles or radiation through shocks. This would have significant implications for all binary systems: e.g. accretion outbursts in X-ray binaries and fuelling supermassive black hole (SMBH) binaries. The behaviour we have discussed in this work is relevant to a variety of astrophysical systems, for example X-ray binaries, where the disc plane may be tilted by radiation warping, SMBH binaries, where accretion of misaligned gas can create effectively random inclinations and protostellar binaries, where a disc may be misaligned by a variety of effects such as binary capture/exchange, accretion after binary formation.
Marchetti, Luca; Manca, Vincenzo
2015-04-15
MpTheory Java library is an open-source project collecting a set of objects and algorithms for modeling observed dynamics by means of the Metabolic P (MP) theory, that is, a mathematical theory introduced in 2004 for modeling biological dynamics. By means of the library, it is possible to model biological systems both at continuous and at discrete time. Moreover, the library comprises a set of regression algorithms for inferring MP models starting from time series of observations. To enhance the modeling experience, beside a pure Java usage, the library can be directly used within the most popular computing environments, such as MATLAB, GNU Octave, Mathematica and R. The library is open-source and licensed under the GNU Lesser General Public License (LGPL) Version 3.0. Source code, binaries and complete documentation are available at http://mptheory.scienze.univr.it. luca.marchetti@univr.it, marchetti@cosbi.eu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Using Rose and Compass for Authentication
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, G
2009-07-09
Many recent non-proliferation software projects include a software authentication component. In this context, 'authentication' is defined as determining that a software package performs only its intended purpose and performs that purpose correctly and reliably over many years. In addition to visual inspection by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs both to aid the visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary, and have limited extensibility. An open-source, extensible tool can be customized to the unique needs of each project. ROSEmore » is an LLNL-developed robust source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C, C++, and FORTRAN. It continues to be extended to support the automated analysis of binaries (x86, ARM, and PowerPC). We continue to extend ROSE to address a number of security specific requirements and apply it to software authentication for non-proliferation projects. We will give an update on the status of our work.« less
cit: hypothesis testing software for mediation analysis in genomic applications.
Millstein, Joshua; Chen, Gary K; Breton, Carrie V
2016-08-01
The challenges of successfully applying causal inference methods include: (i) satisfying underlying assumptions, (ii) limitations in data/models accommodated by the software and (iii) low power of common multiple testing approaches. The causal inference test (CIT) is based on hypothesis testing rather than estimation, allowing the testable assumptions to be evaluated in the determination of statistical significance. A user-friendly software package provides P-values and optionally permutation-based FDR estimates (q-values) for potential mediators. It can handle single and multiple binary and continuous instrumental variables, binary or continuous outcome variables and adjustment covariates. Also, the permutation-based FDR option provides a non-parametric implementation. Simulation studies demonstrate the validity of the cit package and show a substantial advantage of permutation-based FDR over other common multiple testing strategies. The cit open-source R package is freely available from the CRAN website (https://cran.r-project.org/web/packages/cit/index.html) with embedded C ++ code that utilizes the GNU Scientific Library, also freely available (http://www.gnu.org/software/gsl/). joshua.millstein@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
"When 'Bad' is 'Good'": Identifying Personal Communication and Sentiment in Drug-Related Tweets.
Daniulaityte, Raminta; Chen, Lu; Lamy, Francois R; Carlson, Robert G; Thirunarayan, Krishnaprasad; Sheth, Amit
2016-10-24
To harness the full potential of social media for epidemiological surveillance of drug abuse trends, the field needs a greater level of automation in processing and analyzing social media content. The objective of the study is to describe the development of supervised machine-learning techniques for the eDrugTrends platform to automatically classify tweets by type/source of communication (personal, official/media, retail) and sentiment (positive, negative, neutral) expressed in cannabis- and synthetic cannabinoid-related tweets. Tweets were collected using Twitter streaming Application Programming Interface and filtered through the eDrugTrends platform using keywords related to cannabis, marijuana edibles, marijuana concentrates, and synthetic cannabinoids. After creating coding rules and assessing intercoder reliability, a manually labeled data set (N=4000) was developed by coding several batches of randomly selected subsets of tweets extracted from the pool of 15,623,869 collected by eDrugTrends (May-November 2015). Out of 4000 tweets, 25% (1000/4000) were used to build source classifiers and 75% (3000/4000) were used for sentiment classifiers. Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machines (SVM) were used to train the classifiers. Source classification (n=1000) tested Approach 1 that used short URLs, and Approach 2 where URLs were expanded and included into the bag-of-words analysis. For sentiment classification, Approach 1 used all tweets, regardless of their source/type (n=3000), while Approach 2 applied sentiment classification to personal communication tweets only (2633/3000, 88%). Multiclass and binary classification tasks were examined, and machine-learning sentiment classifier performance was compared with Valence Aware Dictionary for sEntiment Reasoning (VADER), a lexicon and rule-based method. The performance of each classifier was assessed using 5-fold cross validation that calculated average F-scores. One-tailed t test was used to determine if differences in F-scores were statistically significant. In multiclass source classification, the use of expanded URLs did not contribute to significant improvement in classifier performance (0.7972 vs 0.8102 for SVM, P=.19). In binary classification, the identification of all source categories improved significantly when unshortened URLs were used, with personal communication tweets benefiting the most (0.8736 vs 0.8200, P<.001). In multiclass sentiment classification Approach 1, SVM (0.6723) performed similarly to NB (0.6683) and LR (0.6703). In Approach 2, SVM (0.7062) did not differ from NB (0.6980, P=.13) or LR (F=0.6931, P=.05), but it was over 40% more accurate than VADER (F=0.5030, P<.001). In multiclass task, improvements in sentiment classification (Approach 2 vs Approach 1) did not reach statistical significance (eg, SVM: 0.7062 vs 0.6723, P=.052). In binary sentiment classification (positive vs negative), Approach 2 (focus on personal communication tweets only) improved classification results, compared with Approach 1, for LR (0.8752 vs 0.8516, P=.04) and SVM (0.8800 vs 0.8557, P=.045). The study provides an example of the use of supervised machine learning methods to categorize cannabis- and synthetic cannabinoid-related tweets with fairly high accuracy. Use of these content analysis tools along with geographic identification capabilities developed by the eDrugTrends platform will provide powerful methods for tracking regional changes in user opinions related to cannabis and synthetic cannabinoids use over time and across different regions.
Spectroscopic classification of X-ray sources in the Galactic Bulge Survey
NASA Astrophysics Data System (ADS)
Wevers, T.; Torres, M. A. P.; Jonker, P. G.; Nelemans, G.; Heinke, C.; Mata Sánchez, D.; Johnson, C. B.; Gazer, R.; Steeghs, D. T. H.; Maccarone, T. J.; Hynes, R. I.; Casares, J.; Udalski, A.; Wetuski, J.; Britt, C. T.; Kostrzewa-Rutkowska, Z.; Wyrzykowski, Ł.
2017-10-01
We present the classification of 26 optical counterparts to X-ray sources discovered in the Galactic Bulge Survey. We use (time-resolved) photometric and spectroscopic observations to classify the X-ray sources based on their multiwavelength properties. We find a variety of source classes, spanning different phases of stellar/binary evolution. We classify CX21 as a quiescent cataclysmic variable (CV) below the period gap, and CX118 as a high accretion rate (nova-like) CV. CXB12 displays excess UV emission, and could contain a compact object with a giant star companion, making it a candidate symbiotic binary or quiescent low-mass X-ray binary (although other scenarios cannot be ruled out). CXB34 is a magnetic CV (polar) that shows photometric evidence for a change in accretion state. The magnetic classification is based on the detection of X-ray pulsations with a period of 81 ± 2 min. CXB42 is identified as a young stellar object, namely a weak-lined T Tauri star exhibiting (to date unexplained) UX Ori-like photometric variability. The optical spectrum of CXB43 contains two (resolved) unidentified double-peaked emission lines. No known scenario, such as an active galactic nucleus or symbiotic binary, can easily explain its characteristics. We additionally classify 20 objects as likely active stars based on optical spectroscopy, their X-ray to optical flux ratios and photometric variability. In four cases we identify the sources as binary stars.
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate under strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at the BER of 10(-8). The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
THE EFFECT OF UNRESOLVED BINARIES ON GLOBULAR CLUSTER PROPER-MOTION DISPERSION PROFILES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, P.; Norris, M. A.; Ven, G. van de
2016-03-20
High-precision kinematic studies of globular clusters (GCs) require an accurate knowledge of all possible sources of contamination. Among other sources, binary stars can introduce systematic biases in the kinematics. Using a set of Monte Carlo cluster simulations with different concentrations and binary fractions, we investigate the effect of unresolved binaries on proper-motion dispersion profiles, treating the simulations like Hubble Space Telescope proper-motion samples. Since GCs evolve toward a state of partial energy equipartition, more-massive stars lose energy and decrease their velocity dispersion. As a consequence, on average, binaries have a lower velocity dispersion, since they are more-massive kinematic tracers. Wemore » show that, in the case of clusters with high binary fractions (initial binary fractions of 50%) and high concentrations (i.e., closer to energy equipartition), unresolved binaries introduce a color-dependent bias in the velocity dispersion of main-sequence stars of the order of 0.1–0.3 km s{sup −1} (corresponding to 1%−6% of the velocity dispersion), with the reddest stars having a lower velocity dispersion, due to the higher fraction of contaminating binaries. This bias depends on the ability to distinguish binaries from single stars, on the details of the color–magnitude diagram and the photometric errors. We apply our analysis to the HSTPROMO data set of NGC 7078 (M15) and show that no effect ascribable to binaries is observed, consistent with the low binary fraction of the cluster. Our work indicates that binaries do not significantly bias proper-motion velocity-dispersion profiles, but should be taken into account in the error budget of kinematic analyses.« less
Soft decoding a self-dual (48, 24; 12) code
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding under maximum likelihood.
Elder, D
1984-06-07
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatiorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.
2006-06-01
called packet binary convolutional code (PBCC), was included as an option for performance at rate of either 5.5 or 11 Mpbs. The second offshoot...and the code rate is r k n= . A general convolutional encoder can be implemented with k shift-registers and n modulo-2 adders. Higher rates can be...derived from lower rate codes by employing “ puncturing .” Puncturing is a procedure for omitting some of the encoded bits in the transmitter (thus
The fourfold way of the genetic code.
Jiménez-Montaño, Miguel Angel
2009-11-01
We describe a compact representation of the genetic code that factorizes the table in quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column-vector of bases and the corresponding row-vector V(T)=(C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the KxK group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision-tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Choudhary, Kuldeep; Kumar, Santosh
2017-05-01
The application of electro-optic effect in lithium-niobate-based Mach-Zehnder interferometer to design a 3-bit optical pseudorandom binary sequence (PRBS) generator has been proposed, which is characterized by its simplicity of generation and stability. The proposed device is optoelectronic in nature. The PBRS generator is immensely applicable for pattern generation, encryption, and coding applications in optical networks. The study is carried out by simulating the proposed device with beam propagation method.
The progenitors of supernovae Type Ia
NASA Astrophysics Data System (ADS)
Toonen, Silvia
2014-09-01
Despite the significance of Type Ia supernovae (SNeIa) in many fields in astrophysics, SNeIa lack a theoretical explanation. SNeIa are generally thought to be thermonuclear explosions of carbon/oxygen (CO) white dwarfs (WDs). The canonical scenarios involve white dwarfs reaching the Chandrasekhar mass, either by accretion from a non-degenerate companion (single-degenerate channel, SD) or by a merger of two CO WDs (double-degenerate channel, DD). The study of SNeIa progenitors is a very active field of research for binary population synthesis (BPS) studies. The strength of the BPS approach is to study the effect of uncertainties in binary evolution on the macroscopic properties of a binary population, in order to constrain binary evolutionary processes. I will discuss the expected SNeIa rate from the BPS approach and the uncertainties in their progenitor evolution, and compare with current observations. I will also discuss the results of the POPCORN project in which four BPS codes were compared to better understand the differences in the predicted SNeIa rate of the SD channel. The goal of this project is to investigate whether differences in the simulated populations are due to numerical effects or whether they can be explained by differences in the input physics. I will show which assumptions in BPS codes affect the results most and hence should be studied in more detail.
Period variation studies of six contact binaries in M4
NASA Astrophysics Data System (ADS)
Rukmini, Jagirdar; Shanti Priya, Devarapalli
2018-04-01
We present the first period study of six contact binaries in the closest globular cluster M4 the data collected from June 1995‑June 2009 and Oct 2012‑Sept 2013. New times of minima are determined for all the six variables and eclipse timing (O-C) diagrams along with the quadratic fit are presented. For all the variables, the study of (O-C) variations reveals changes in the periods. In addition, the fundamental parameters for four of the contact binaries obtained using the Wilson-Devinney code (v2003) are presented. Planned observations of these binaries using the 3.6-m Devasthal Optical Telescope (DOT) and the 4-m International Liquid Mirror Telescope (ILMT) operated by the Aryabhatta Research Institute of Observational Sciences (ARIES; Nainital) can throw light on their evolutionary status from long term period variation studies.
The fidelity of Kepler eclipsing binary parameters inferred by the neural network
NASA Astrophysics Data System (ADS)
Holanda, N.; da Silva, J. R. P.
2018-04-01
This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves using neural network models. We selected a random sample with 78 systems, from over 1400 eclipsing binary detached obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cos ω and e sin ω, but orbital inclination is clearly underestimated in neural network tests.
The fidelity of Kepler eclipsing binary parameters inferred by the neural network
NASA Astrophysics Data System (ADS)
Holanda, N.; da Silva, J. R. P.
2018-07-01
This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves using neural network models. We selected a random sample with 78 systems, from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light-curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cosω and e sinω, but orbital inclination is clearly underestimated in neural network tests.
Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects
ERIC Educational Resources Information Center
Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.
2018-01-01
The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M, and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…
Chandra reveals a black hole X-ray binary within the ultraluminous supernova remnant MF 16
NASA Astrophysics Data System (ADS)
Roberts, T. P.; Colbert, E. J. M.
2003-06-01
We present evidence, based on Chandra ACIS-S observations of the nearby spiral galaxy NGC 6946, that the extraordinary X-ray luminosity of the MF 16 supernova remnant actually arises in a black hole X-ray binary. This conclusion is drawn from the point-like nature of the X-ray source, its X-ray spectrum closely resembling the spectrum of other ultraluminous X-ray sources thought to be black hole X-ray binary systems, and the detection of rapid hard X-ray variability from the source. We briefly discuss the nature of the hard X-ray variability, and the origin of the extreme radio and optical luminosity of MF 16 in light of this identification.
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
Numerical simulations of merging black holes for gravitational-wave astronomy
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2014-03-01
Gravitational waves from merging binary black holes (BBHs) are among the most promising sources for current and future gravitational-wave detectors. Accurate models of these waves are necessary to maximize the number of detections and our knowledge of the waves' sources; near the time of merger, the waves can only be computed using numerical-relativity simulations. For optimal application to gravitational-wave astronomy, BBH simulations must achieve sufficient accuracy and length, and all relevant regions of the BBH parameter space must be covered. While great progress toward these goals has been made in the almost nine years since BBH simulations became possible, considerable challenges remain. In this talk, I will discuss current efforts to meet these challenges, and I will present recent BBH simulations produced using the Spectral Einstein Code, including a catalog of publicly available gravitational waveforms [black-holes.org/waveforms]. I will also discuss simulations of merging black holes with high mass ratios and with spins nearly as fast as possible, the most challenging regions of the BBH parameter space.
Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik
2015-01-01
The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.
CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer
Carter, Hannah; Diekhans, Mark; Ryan, Michael C.; Karchin, Rachel
2011-01-01
Summary: Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. Availability and Implementation: MySQL database, source code and binaries freely available for academic/government use at http://wiki.chasmsoftware.org, Source in Python and C++. Requires 32 or 64-bit Linux system (tested on Fedora Core 8,10,11 and Ubuntu 10), 2.5*≤ Python <3.0*, MySQL server >5.0, 60 GB available hard disk space (50 MB for software and data files, 40 GB for MySQL database dump when uncompressed), 2 GB of RAM. Contact: karchin@jhu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21685053
Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A.; Lundström, Patrik
2015-01-01
The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available. PMID:25569628
Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code
Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...
2015-06-03
Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regeneratemore » the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. Here, we abstract from a forest of concrete dependency trees which contain absolute memory addresses to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and finally by solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75 % performance improvement, four kernels from Irfan View, leading to 4.97 x performance, and one stencil from the mini GMG multigrid benchmark netting a 4.25 x improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving 1.12 x speedup without affecting the user experience.« less
Rcount: simple and flexible RNA-Seq read counting.
Schmid, Marc W; Grossniklaus, Ueli
2015-02-01
Analysis of differential gene expression by RNA sequencing (RNA-Seq) is frequently done using feature counts, i.e. the number of reads mapping to a gene. However, commonly used count algorithms (e.g. HTSeq) do not address the problem of reads aligning with multiple locations in the genome (multireads) or reads aligning with positions where two or more genes overlap (ambiguous reads). Rcount specifically addresses these issues. Furthermore, Rcount allows the user to assign priorities to certain feature types (e.g. higher priority for protein-coding genes compared to rRNA-coding genes) or to add flanking regions. Rcount provides a fast and easy-to-use graphical user interface requiring no command line or programming skills. It is implemented in C++ using the SeqAn (www.seqan.de) and the Qt libraries (qt-project.org). Source code and 64 bit binaries for (Ubuntu) Linux, Windows (7) and MacOSX are released under the GPLv3 license and are freely available on github.com/MWSchmid/Rcount. marcschmid@gmx.ch Test data, genome annotation files, useful Python and R scripts and a step-by-step user guide (including run-time and memory usage tests) are available on github.com/MWSchmid/Rcount. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
RNAiFold 2.0: a web server and software to design custom and Rfam-based RNA molecules.
Garcia-Martin, Juan Antonio; Dotu, Ivan; Clote, Peter
2015-07-01
Several algorithms for RNA inverse folding have been used to design synthetic riboswitches, ribozymes and thermoswitches, whose activity has been experimentally validated. The RNAiFold software is unique among approaches for inverse folding in that (exhaustive) constraint programming is used instead of heuristic methods. For that reason, RNAiFold can generate all sequences that fold into the target structure or determine that there is no solution. RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now defunct COMET language to C++. The new code properly extends the capabilities of its predecessor by providing a user-friendly pipeline to design synthetic constructs having the functionality of given Rfam families. In addition, the new software supports amino acid constraints, even for proteins translated in different reading frames from overlapping coding sequences; moreover, structure compatibility/incompatibility constraints have been expanded. With these features, RNAiFold 2.0 allows the user to design single RNA molecules as well as hybridization complexes of two RNA molecules. the web server, source code and linux binaries are publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold2.0. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Soft X-ray production by photon scattering in pulsating binary neutron star sources
NASA Technical Reports Server (NTRS)
Bussard, R. W.; Meszaros, P.; Alexander, S.
1985-01-01
A new mechanism is proposed as a source of soft (less than 1 keV) radiation in binary pulsating X-ray sources, in the form of photon scattering which leaves the electron in an excited Landau level. In a plasma with parameters typical of such sources, the low-energy X-ray emissivity of this mechanism far exceeds that of bremsstrahlung. This copious source of soft photons is quite adequate to provide the seed photons needed to explain the power-law hard X-ray spectrum by inverse Comptonization on the hot electrons at the base of the accretion column.
Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State
NASA Astrophysics Data System (ADS)
Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan
2017-01-01
I present the results of the numerical simulations of the mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by a modifying Hachisu’s self-consistent field technique. Both the stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes, Flow-ER with a fixed cylindrical grid, and Octo-tiger with an AMR capable cartesian grid. The detailed comparison of the simulations suggests an agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laycock, Silas; Cappallo, Rigel; Williams, Benjamin F.
We have monitored the Cassiopeia dwarf galaxy (IC 10) in a series of 10 Chandra ACIS-S observations to capture its variable and transient X-ray source population, which is expected to be dominated by High Mass X-ray Binaries (HMXBs). We present a sample of 21 X-ray sources that are variable between observations at the 3 σ level, from a catalog of 110 unique point sources. We find four transients (flux variability ratio greater than 10) and a further eight objects with ratios >5. The observations span the years 2003–2010 and reach a limiting luminosity of >10{sup 35} erg s{sup −1}, providingmore » sensitivity to X-ray binaries in IC 10 as well as flare stars in the foreground Milky Way. The nature of the variable sources is investigated from light curves, X-ray spectra, energy quantiles, and optical counterparts. The purpose of this study is to discover the composition of the X-ray binary population in a young starburst environment. IC 10 provides a sharp contrast in stellar population age (<10 My) when compared to the Magellanic Clouds (40–200 My) where most of the known HMXBs reside. We find 10 strong HMXB candidates, 2 probable background Active Galactic Nuclei, 4 foreground flare-stars or active binaries, and 5 not yet classifiable sources. Complete classification of the sample requires optical spectroscopy for radial velocity analysis and deeper X-ray observations to obtain higher S/N spectra and search for pulsations. A catalog and supporting data set are provided.« less
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomom outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft- decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
NASA Astrophysics Data System (ADS)
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-03-01
Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries’ orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 {M}ȯ –30 {M}ȯ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M tot (10 M tot).
Visualising interacting binaries in 3D
NASA Astrophysics Data System (ADS)
Hynes, R. I.
2002-01-01
I have developed a code which allows images to be produced of a variety of interacting binaries for any system parameters. The resulting images are not only helpful in visualising the geometry of a given system but are also helpful in talks and educational work. I would like to acknowledge financial support from the Leverhulme Trust, and to thank Dan Rolfe for many discussions on how to represent interacting binaries and the users of BinSim who have provided valuable testing and feedback. BinSim would not have been possible without the efforts of Brian Paul and others responsible for the Mesa 3-D graphics library -- Mesa 3-D graphics .
Monte Carlo study of four dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2012-01-01
A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
Computer Forensics Education - the Open Source Approach
NASA Astrophysics Data System (ADS)
Huebner, Ewa; Bem, Derek; Cheung, Hon
In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.
Characterizing Black Hole Mergers
NASA Technical Reports Server (NTRS)
Baker, John; Boggs, William Darian; Kelly, Bernard
2010-01-01
Binary black hole mergers are a promising source of gravitational waves for interferometric gravitational wave detectors. Recent advances in numerical relativity have revealed the predictions of General Relativity for the strong burst of radiation generated in the final moments of binary coalescence. We explore features in the merger radiation which characterize the final moments of merger and ringdown. Interpreting the waveforms in terms of an rotating implicit radiation source allows a unified phenomenological description of the system from inspiral through ringdown. Common features in the waveforms allow quantitative description of the merger signal which may provide insights for observations large-mass black hole binaries.
BamTools: a C++ API and toolkit for analyzing and managing BAM files.
Barnett, Derek W; Garrison, Erik K; Quinlan, Aaron R; Strömberg, Michael P; Marth, Gabor T
2011-06-15
Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools.
Predicting Binary Black Hole Collisions Using Numerical Methods in Collaboration with LIGO
NASA Astrophysics Data System (ADS)
Afshari, Nousha; Lovelace, Geoffrey
2015-04-01
Detecting astronomical gravitational waves will soon open a new window on the universe. The effects of gravitational waves have already been seen indirectly, but a direct observation of these waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, will soon begin searching for gravitational waves, and the first direct detections are likely in the next few years. To help LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this presentation, I will discuss new supercomputer simulations of merging black holes--some of the brightest sources of gravitational waves--that I have completed using the Spectral Einstein Code (http://www.black-holes.org/SpEC.html).
Black Hole Mergers, Gravitational Waves, and Multi-Messenger Astronomy
NASA Technical Reports Server (NTRS)
Centrella, Joan M.
2010-01-01
The final merger of two black holes is expected to be the strongest source of gravitational waves for both ground-based detectors such as LIGO and VIRGO, as well as the space-based LISA. Since the merger takes place in the regime of strong dynamical gravity, computing the resulting gravitational waveforms requires solving the full Einstein equations of general relativity on a computer. Although numerical codes designed to simulate black hole mergers were plagued for many years by a host of instabilities, recent breakthroughs have conquered these problems and opened up this field dramatically. This talk will focus on the resulting gold rush of new results that is revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, astrophysics, and testing general relativity.
EXODUS II: A finite element data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoof, L.A.; Yarberry, V.R.
1994-09-01
EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).
NASA Technical Reports Server (NTRS)
1972-01-01
Here, the 7400 line of transistor to transistor logic (TTL) devices is emphasized almost exclusively where hardware is concerned. However, it should be pointed out that the logic theory contained herein applies to all hardware. Binary numbers, simplification of logic circuits, code conversion circuits, basic flip-flop theory, details about series 54/7400, and asynchronous circuits are discussed.
Self-Consistent Sources for Integrable Equations Via Deformations of Binary Darboux Transformations
NASA Astrophysics Data System (ADS)
Chvartatskyi, Oleksandr; Dimakis, Aristophanes; Müller-Hoissen, Folkert
2016-08-01
We reveal the origin and structure of self-consistent source extensions of integrable equations from the perspective of binary Darboux transformations. They arise via a deformation of the potential that is central in this method. As examples, we obtain in particular matrix versions of self-consistent source extensions of the KdV, Boussinesq, sine-Gordon, nonlinear Schrödinger, KP, Davey-Stewartson, two-dimensional Toda lattice and discrete KP equation. We also recover a (2+1)-dimensional version of the Yajima-Oikawa system from a deformation of the pKP hierarchy. By construction, these systems are accompanied by a hetero binary Darboux transformation, which generates solutions of such a system from a solution of the source-free system and additionally solutions of an associated linear system and its adjoint. The essence of all this is encoded in universal equations in the framework of bidifferential calculus.
On the formation of runaway stars BN and x in the Orion Nebula Cluster
NASA Astrophysics Data System (ADS)
Farias, J. P.; Tan, J. C.
2018-05-01
We explore scenarios for the dynamical ejection of stars BN and x from source I in the Kleinmann-Low nebula of the Orion Nebula Cluster (ONC), which is important because it is the closest region of massive star formation. This ejection would cause source I to become a close binary or a merger product of two stars. We thus consider binary-binary encounters as the mechanism to produce this event. By running a large suite of N-body simulations, we find that it is nearly impossible to match the observations when using the commonly adopted masses for the participants, especially a source I mass of 7 M⊙. The only way to recreate the event is if source I is more massive, that is, 20 M⊙. However, even in this case, the likelihood of reproducing the observed system is low. We discuss the implications of these results for understanding this important star-forming region.
Glynn, P.D.
1991-01-01
The computer code MBSSAS uses two-parameter Margules-type excess-free-energy of mixing equations to calculate thermodynamic equilibrium, pure-phase saturation, and stoichiometric saturation states in binary solid-solution aqueous-solution (SSAS) systems. Lippmann phase diagrams, Roozeboom diagrams, and distribution-coefficient diagrams can be constructed from the output data files, and also can be displayed by MBSSAS (on IBM-PC compatible computers). MBSSAS also will calculate accessory information, such as the location of miscibility gaps, spinodal gaps, critical-mixing points, alyotropic extrema, Henry's law solid-phase activity coefficients, and limiting distribution coefficients. Alternatively, MBSSAS can use such information (instead of the Margules, Guggenheim, or Thompson and Waldbaum excess-free-energy parameters) to calculate the appropriate excess-free-energy of mixing equation for any given SSAS system. ?? 1991.
Extra Solar Planet Science With a Non Redundant Mask
NASA Astrophysics Data System (ADS)
Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi
2017-01-01
To detect faint planetary companions near a much brighter star, at the Resolution Limit of the James Webb Space Telescope (JWST) the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high contrast imaging. I simulated NIRISS data of stars with and without planets, and run these through the code that measures interferometric image properties to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize this data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.
Analysis of the possibility of using G.729 codec for steganographic transmission
NASA Astrophysics Data System (ADS)
Piotrowski, Zbigniew; Ciołek, Michał; Dołowski, Jerzy; Wojtuń, Jarosław
2017-04-01
Network steganography is dedicated in particular for those communication services for which there are no bridges or nodes carrying out unintentional attacks on steganographic sequence. In order to set up a hidden communication channel the method of data encoding and decoding was implemented using code books of codec G.729. G.729 codec includes, in its construction, linear prediction vocoder CS-ACELP (Conjugate Structure Algebraic Code Excited Linear Prediction), and by modifying the binary content of the codebook, it is easy to change a binary output stream. The article describes the results of research on the selection of these bits of the codebook codec G.729 which the negation of the least have influence to the loss of quality and fidelity of the output signal. The study was performed with the use of subjective and objective listening tests.
NASA Technical Reports Server (NTRS)
Talcott, N. A., Jr.
1977-01-01
Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl2F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCL2F-CCl2F, CCLF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons(CCl3F, CCL2F2, CBrF3, CF4, CF3-CF3, and C4F8) and pressure-enthalpy diagrams are presented for CBrF3.
A Chemical Alphabet for Macromolecular Communications.
Giannoukos, Stamatios; McGuiness, Daniel Tunç; Marshall, Alan; Smith, Jeremy; Taylor, Stephen
2018-06-08
Molecular communications in macroscale environments is an emerging field of study driven by the intriguing prospect of sending coded information over olfactory networks. For the first time, this article reports two signal modulation techniques (on-off keying-OOK, and concentration shift keying-CSK) which have been used to encode and transmit digital information using odors over distances of 1-4 m. Molecular transmission of digital data was experimentally investigated for the letter "r" with a binary value of 01110010 (ASCII) for a gas stream network channel (up to 4 m) using mass spectrometry (MS) as the main detection-decoding system. The generation and modulation of the chemical signals was achieved using an automated odor emitter (OE) which is based on the controlled evaporation of a chemical analyte and its diffusion into a carrier gas stream. The chemical signals produced propagate within a confined channel to reach the demodulator-MS. Experiments were undertaken for a range of volatile organic compounds (VOCs) with different diffusion coefficient values in air at ambient conditions. Representative compounds investigated include acetone, cyclopentane, and n-hexane. For the first time, the binary code ASCII (American Standard Code for Information Interchange) is combined with chemical signaling to generate a molecular representation of the English alphabet. Transmission experiments of fixed-width molecular signals corresponding to letters of the alphabet over varying distances are shown. A binary message corresponding to the word "ion" was synthesized using chemical signals and transmitted within a physical channel over a distance of 2 m.
The modelling of heat, mass and solute transport in solidification systems
NASA Technical Reports Server (NTRS)
Voller, V. R.; Brent, A. D.; Prakash, C.
1989-01-01
The aim of this paper is to explore the range of possible one-phase models of binary alloy solidification. Starting from a general two-phase description, based on the two-fluid model, three limiting cases are identified which result in one-phase models of binary systems. Each of these models can be readily implemented in standard single phase flow numerical codes. Differences between predictions from these models are examined. In particular, the effects of the models on the predicted macro-segregation patterns are evaluated.
Binary CFG Rebuilt of Self-Modifying Codes
2016-10-03
ABOVE ORGANIZATION. 1. REPORT DATE (DD-MM-YYYY) 04-10-2016 2. REPORT TYPE Final 3. DATES COVERED (From - To) 12 May 2014 to 11 May 2016 4. TITLE ...industry to analyze malware is a dynamic analysis in a sand- box . Alternatively, we apply a hybrid method combining concolic testing (dynamic symbolic...virus software based on binary signatures. A popular method in industry to analyze malware is a dynamic analysis in a sand- box . Alternatively, we
Eclipsing Stellar Binaries in the Galactic Center
NASA Astrophysics Data System (ADS)
Li, Gongjie; Ginsburg, Idan; Naoz, Smadar; Loeb, Abraham
2017-12-01
Compact stellar binaries are expected to survive in the dense environment of the Galactic center. The stable binaries may undergo Kozai–Lidov oscillations due to perturbations from the central supermassive black hole (Sgr A*), yet the general relativistic precession can suppress the Kozai–Lidov oscillations and keep the stellar binaries from merging. However, it is challenging to resolve the binary sources and distinguish them from single stars. The close separations of the stable binaries allow higher eclipse probabilities. Here, we consider the massive star SO-2 as an example and calculate the probability of detecting eclipses, assuming it is a binary. We find that the eclipse probability is ∼30%–50%, reaching higher values when the stellar binary is more eccentric or highly inclined relative to its orbit around Sgr A*.
Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
It is shown that (n sub 0, k sub 0) convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate k sub 0/n sub 0 and same number 2MK sub 0 of encoder states, where M is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with k sub 0 and n sub 0 relatively prime, for all Mk sub 0 less than or equal to 6 and for R = 1/2, 1/3, 1/4, 2/3. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems.
Coding and decoding in a point-to-point communication using the polarization of the light beam.
Kavehvash, Z; Massoumian, F
2008-05-10
A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by the detection of the received beam's polarization. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication where coding and decoding is done through polarization will be discussed.
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. The net effective coding gain of 10 dB is obtained at BER of 10(-6). With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10(-2). Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
Dark jets in the soft X-ray state of black hole binaries?
NASA Astrophysics Data System (ADS)
Drappeau, S.; Malzac, J.; Coriat, M.; Rodriguez, J.; Belloni, T. M.; Belmont, R.; Clavel, M.; Chakravorty, S.; Corbel, S.; Ferreira, J.; Gandhi, P.; Henri, G.; Petrucci, P.-O.
2017-04-01
X-ray binary observations led to the interpretation that powerful compact jets, produced in the hard state, are quenched when the source transitions to its soft state. The aim of this paper is to discuss the possibility that a powerful dark jet is still present in the soft state. Using the black hole X-ray binaries GX339-4 and H1743-322 as test cases, we feed observed X-ray power density spectra in the soft state of these two sources to an internal shock jet model. Remarkably, the predicted radio emission is consistent with current upper limits. Our results show that for these two sources, a compact dark jet could persist in the soft state with no major modification of its kinetic power compared to the hard state.
Formation and Evolution of X-ray Binaries
NASA Astrophysics Data System (ADS)
Shao, Y.
2017-07-01
X-ray binaries are a class of binary systems, in which the accretor is a compact star (i.e., black hole, neutron star, or white dwarf). They are one of the most important objects in the universe, which can be used to study not only binary evolution but also accretion disks and compact stars. Statistical investigations of these binaries help to understand the formation and evolution of galaxies, and sometimes provide useful constraints on the cosmological models. The goal of this thesis is to investigate the formation and evolution processes of X-ray binaries including Be/X-ray binaries, low-mass X-ray binaries (LMXBs), ultraluminous X-ray sources (ULXs), and cataclysmic variables. In Chapter 1 we give a brief review on the basic knowledge of the binary evolution. In Chapter 2 we discuss the formation of Be stars through binary interaction. In this chapter we investigate the formation of Be stars resulting from mass transfer in binaries in the Galaxy. Using binary evolution and population synthesis calculations, we find that in Be/neutron star binaries the Be stars have a lower limit of mass ˜ 8 M⊙ if they are formed by a stable (i.e., without the occurrence of common envelope evolution) and nonconservative mass transfer. We demonstrate that the isolated Be stars may originate from both mergers of two main-sequence stars and disrupted Be binaries during the supernova explosions of the primary stars, but mergers seem to play a much more important role. Finally the fraction of Be stars produced by binary interactions in all B type stars can be as high as ˜ 13%-30% , implying that most of Be stars may result from binary interaction. In Chapter 3 we show the evolution of intermediate- and low-mass X-ray binaries (I/LMXBs) and the formation of millisecond pulsars. Comparing the calculated results with the observations of binary radio pulsars, we report the following results: (1) The allowed parameter space for forming binary pulsars in the initial orbital period-donor mass plane increases with the increasing neutron star mass. This may help to explain why some millisecond pulsars with orbital periods longer than ˜ 60 d seem to have less massive white dwarfs than expected. Alternatively, some of these wide binary pulsars may be formed through mass transfer driven by planet/brown dwarf-involved common envelope evolution; (2) Some of the pulsars in compact binaries might have evolved from intermediate-mass X-ray binaries with an anomalous magnetic braking; (3) The equilibrium spin periods of neutron stars in low-mass X-ray binaries are in general shorter than the observed spin periods of binary pulsars by more than one order of magnitude, suggesting that either the simple equilibrium spin model does not apply, or there are other mechanisms/processes spinning down the neutron stars. In Chapter 4, angular momentum loss mechanisms in the cataclysmic variables below the period gap are presented. By considering several kinds of consequential angular momentum loss mechanisms, we find that neither isotropic wind from the white dwarf nor outflow from the L1 point can explain the extra angular momentum loss rate, while an ouflow from the L2 point or a circumbinary disk can effectively extract the angular momentum provided that ˜ 15%-45% of the transferred mass is lost from the binary. A more promising mechanism is a circumbinary disk exerting a gravitational torque on the binary. In this case the mass loss fraction can be as low as ≲ 10-3. In Chapter 5 we present a study on the population of ultraluminous X-ray sources with an accreting neutron star. Most ULXs are believed to be X-ray binary systems, but previous observational and theoretical studies tend to prefer a black hole rather than a neutron star accretor. The recent discovery of 1.37 s pulsations from the ULX M82 X-2 has established its nature as a magnetized neutron star. In this chapter we model the formation history of neutron star ULXs in an M82- or Milky Way-like galaxy, by use of both binary population synthesis and detailed binary evolution calculations. We find that the birthrate is around 10-4 yr-1 for the incipient X-ray binaries in both cases. We demonstrate the distribution of the ULX population in the donor mass - orbital period plane. Our results suggest that, compared with black hole X-ray binaries, neutron star X-ray binaries may significantly contribute to the ULX population, and high/intermediate-mass X-ray binaries dominate the neutron star ULX population in M82/Milky Way-like galaxies, respectively. In Chapter 6, the population of intermediate- and low-mass X-ray binaries in the Galaxy is explored. We investigate the formation and evolutionary sequences of Galactic intermediate- and low-mass X-ray binaries by combining binary population synthesis (BPS) and detailed stellar evolutionary calculations. Using an updated BPS code we compute the evolution of massive binaries that leads to the formation of incipient I/LMXBs, and present their distribution in the initial donor mass vs. initial orbital period diagram. We then follow the evolution of I/LMXBs until the formation of binary millisecond pulsars (BMSPs). We show that during the evolution of I/LMXBs they are likely to be observed as relatively compact binaries. The resultant BMSPs have orbital periods ranging from about 1 day to a few hundred days. These features are consistent with observations of LMXBs and BMSPs. We also confirm the discrepancies between theoretical predictions and observations mentioned in the literature, that is, the theoretical average mass transfer rates of LMXBs are considerably lower than observed, and the number of BMSPs with orbital periods ˜ 0.1-1 \\unit{d} is severely underestimated. Both imply that something is missing in the modeling of LMXBs, which is likely to be related to the mechanisms of the orbital angular momentum loss. Finally in Chapter 7 we summarize our results and give the prospects for the future work.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
P-Code-Enhanced Encryption-Mode Processing of GPS Signals
NASA Technical Reports Server (NTRS)
Young, Lawrence; Meehan, Thomas; Thomas, Jess B.
2003-01-01
A method of processing signals in a Global Positioning System (GPS) receiver has been invented to enable the receiver to recover some of the information that is otherwise lost when GPS signals are encrypted at the transmitters. The need for this method arises because, at the option of the military, precision GPS code (P-code) is sometimes encrypted by a secret binary code, denoted the A code. Authorized users can recover the full signal with knowledge of the A-code. However, even in the absence of knowledge of the A-code, one can track the encrypted signal by use of an estimate of the A-code. The present invention is a method of making and using such an estimate. In comparison with prior such methods, this method makes it possible to recover more of the lost information and obtain greater accuracy.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon <1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging form high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
Six New Millisecond Pulsars From Arecibo Searches Of Fermi Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cromartie, H. T.; Camilo, F.; Kerr, M.
2016-02-25
We have discovered six radio millisecond pulsars (MSPs) in a search with the Arecibo telescope of 34 unidentified gamma-ray sources from the Fermi Large Area Telescope (LAT) 4-year point source catalog. Among the 34 sources, we also detected two MSPs previously discovered elsewhere. Each source was observed at a center frequency of 327 MHz, typically at three epochs with individual integration times of 15 minutes. The new MSP spin periods range from 1.99 to 4.66 ms. Five of the six pulsars are in interacting compact binaries (period ≤ 8.1 hr), while the sixth is a more typical neutron star-white dwarfmore » binary with an 83-day orbital period. This is a higher proportion of interacting binaries than for equivalent Fermi-LAT searches elsewhere. The reason is that Arecibo’s large gain afforded us the opportunity to limit integration times to 15 minutes, which significantly increased our sensitivity to these highly accelerated systems. Seventeen of the remaining 26 gamma-ray sources are still categorized as strong MSP candidates, and will be re-searched.« less
SIX NEW MILLISECOND PULSARS FROM ARECIBO SEARCHES OF FERMI GAMMA-RAY SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cromartie, H. T.; Camilo, F.; Kerr, M.
2016-03-01
We have discovered six radio millisecond pulsars (MSPs) in a search with the Arecibo telescope of 34 unidentified gamma-ray sources from the Fermi Large Area Telescope (LAT) four year point source catalog. Among the 34 sources, we also detected two MSPs previously discovered elsewhere. Each source was observed at a center frequency of 327 MHz, typically at three epochs with individual integration times of 15 minutes. The new MSP spin periods range from 1.99 to 4.66 ms. Five of the six pulsars are in interacting compact binaries (period ≤ 8.1 hr), while the sixth is a more typical neutron star-whitemore » dwarf binary with an 83 day orbital period. This is a higher proportion of interacting binaries than for equivalent Fermi-LAT searches elsewhere. The reason is that Arecibo's large gain afforded us the opportunity to limit integration times to 15 minutes, which significantly increased our sensitivity to these highly accelerated systems. Seventeen of the remaining 26 gamma-ray sources are still categorized as strong MSP candidates, and will be re-searched.« less
State-change in the "transition" binary millisecond pulsar J1023+0038
NASA Astrophysics Data System (ADS)
Stappers, B. W.; Archibald, A.; Bassa, C.; Hessels, J.; Janssen, G.; Kaspi, V.; Lyne, A.; Patruno, A.; Hill, A. B.
2013-10-01
We report a change in the state of PSR J1023+0038, a source which is believed to be transitioning from an X-ray binary to an eclipsing binary radio millisecond pulsar (Archibald et al. 2009, Science, 324, 1411). The system was known to contain an accretion disk in 2001 but has shown no signs of it, or of accretion, since then, rather exhibiting all the properties of an eclipsing binary millisecond radio pulsar (MSP).
Compact X-ray Binary Re-creation in Core Collapse: NGC 6397
NASA Astrophysics Data System (ADS)
Grindlay, J. E.; Bogdanov, S.; van den Berg, M.; Heinke, C.
2005-12-01
We report new Chandra observations of the core collapsed globular cluster NGC 6397. In comparison with our original Chandra observations (Grindlay et al 2001, ApJ, 563, L53), we now detect some 30 sources (vs. 20) in the cluster. A new CV is confirmed, though new HST/ACS optical observations (see Cohn et al this meeting) show that one of the original CV candidates is a background AGN). The 9 CVs (optically identified) yet only one MSP and one qLMXB suggest either a factor of 7 reduction in NSs/WDs vs. what we find in 47Tuc (see Grindlay 2005, Proc. Cefalu Conf. on Interacting Binaries) or that CVs are produced in the core collapse. The possible second MSP with main sequence companion, source U18 (see Grindlay et al 2001) is similar in its X-ray and optical properties to MSP-W in 47Tuc, which must have swapped its binary companion. Together with the one confirmed (radio) MSP in NGC 6397, with an evolved main sequence secondary, the process of enhanced partner swapping in the high stellar density of core collapse is implicated. At the same time, main sequence - main sequence binaries (active binaries) are depleted in the cluster core, presumably by "binary burning" in core collapse. These binary re-creation and destruction mechanisms in core collapse have profound implications for binary evolution and mergers in globulars that have undergone core collapse.
NASA Astrophysics Data System (ADS)
Yunes, Nicolas; Yagi, Kent; Stein, Leo
2016-03-01
Stars can be hairy beasts, especially in theories that go beyond Einstein's. In the latter, a scalar field can be sourced and anchored to a neutron star, and if the later is in a binary system, the scalar field will emit dipole radiation. This radiation removes energy from the binary, forcing the orbit to adiabatically decay much more rapidly than due to the emission of gravitational waves as predicted in General Relativity. The detailed radio observation of binary pulsars has constrained the orbital decay of compact binaries stringently, so much so that theories that predict neutron stars with scalar hair are believed to be essentially ruled out. In this talk I will explain why this ``lore'' is actually incorrect, providing a counter-example in which scalar hair is sourced by neutron stars, yet dipole radiation is absent. I will then describe what binary systems need to be observed to constrain such theories with future astrophysical observations. I acknowledge support from NSF CAREER Grant PHY-1250636.
Thermodynamics Analysis of Binary Plant Generating Power from Low-Temperature Geothermal Resource
NASA Astrophysics Data System (ADS)
Maksuwan, A.
2018-05-01
The purpose in this research was to predict tendency of increase Carnot efficiency of the binary plant generating power from low-temperature geothermal resource. Low-temperature geothermal resources or less, are usually exploited by means of binary-type energy conversion systems. The maximum efficiency is analyzed for electricity production of the binary plant generating power from low-temperature geothermal resource becomes important. By using model of the heat exchanger equivalent to a power plant together with the calculation of the combined heat and power (CHP) generation. The CHP was solved in detail with appropriate boundary originating an idea from the effect of temperature of source fluid inlet-outlet and cooling fluid supply. The Carnot efficiency from the CHP calculation was compared between condition of increase temperature of source fluid inlet-outlet and decrease temperature of cooling fluid supply. Result in this research show that the Carnot efficiency for binary plant generating power from low-temperature geothermal resource has tendency increase by decrease temperature of cooling fluid supply.
The local nanohertz gravitational-wave landscape from supermassive black hole binaries
NASA Astrophysics Data System (ADS)
Mingarelli, Chiara M. F.; Lazio, T. Joseph W.; Sesana, Alberto; Greene, Jenny E.; Ellis, Justin A.; Ma, Chung-Pei; Croft, Steve; Burke-Spolaor, Sarah; Taylor, Stephen R.
2017-12-01
Supermassive black hole binary systems form in galaxy mergers and reside in galactic nuclei with large and poorly constrained concentrations of gas and stars. These systems emit nanohertz gravitational waves that will be detectable by pulsar timing arrays. Here we estimate the properties of the local nanohertz gravitational-wave landscape that includes individual supermassive black hole binaries emitting continuous gravitational waves and the gravitational-wave background that they generate. Using the 2 Micron All-Sky Survey, together with galaxy merger rates from the Illustris simulation project, we find that there are on average 91 ± 7 continuous nanohertz gravitational-wave sources, and 7 ± 2 binaries that will never merge, within 225 Mpc. These local unresolved gravitational-wave sources can generate a departure from an isotropic gravitational-wave background at a level of about 20 per cent, and if the cosmic gravitational-wave background can be successfully isolated, gravitational waves from at least one local supermassive black hole binary could be detected in 10 years with pulsar timing arrays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, Soumya D.; Nayak, Rajesh K.
The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10{sup -4} to 10{sup -3} Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude--the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population inmore » frequency and sky position can be reconstructed using a tomographic approach. The method is linear and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources, in the presence of white Gaussian noise.« less
Asymmetric distances for binary embeddings.
Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana
2014-01-01
In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
NASA Astrophysics Data System (ADS)
Hedlund, Anne; Sandquist, Eric L.; Arentoft, Torben; Brogaard, Karsten; Grundahl, Frank; Stello, Dennis; Bedin, Luigi R.; Libralato, Mattia; Malavolta, Luca; Nardiello, Domenico; Molenda-Zakowicz, Joanna; Vanderburg, Andrew
2018-06-01
V1178 Tau is a double-lined spectroscopic eclipsing binary in NGC1817, one of the more massive clusters observed in the K2 mission. We have determined the orbital period (P = 2.20 d) for the first time, and we model radial velocity measurements from the HARPS and ALFOSC spectrographs, light curves collected by Kepler, and ground based light curves using the Eclipsing Light Curve code (ELC, Orosz & Hauschildt 2000). We present masses and radii for the stars in the binary, allowing for a reddening-independent means of determining the cluster age. V1178 Tau is particularly useful for calculating the age of the cluster because the stars are close to the cluster turnoff, providing a more precise age determination. Furthermore, because one of the stars in the binary is a delta Scuti variable, the analysis provides improved insight into their pulsations.
Topology of black hole binary-single interactions
NASA Astrophysics Data System (ADS)
Samsing, Johan; Ilan, Teva
2018-05-01
We present a study on how the outcomes of binary-single interactions involving three black holes (BHs) distribute as a function of the initial conditions; a distribution we refer to as the topology. Using a N-body code that includes BH finite sizes and gravitational wave (GW) emission in the equation of motion (EOM), we perform more than a million binary-single interactions to explore the topology of both the Newtonian limit and the limit at which general relativistic (GR) effects start to become important. From these interactions, we are able to describe exactly under which conditions BH collisions and eccentric GW capture mergers form, as well as how GR in general modifies the Newtonian topology. This study is performed on both large- and microtopological scales. We further describe how the inclusion of GW emission in the EOM naturally leads to scenarios where the binary-single system undergoes two successive GW mergers.
Ffuzz: Towards full system high coverage fuzz testing on binary executables.
Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing
2018-01-01
Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, both have advantages and disadvantages when exercising the deeper code area in binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool-Ffuzz-on top of fuzz testing and selective symbolic execution. It targets full system software stack testing including both the user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck both in fuzz testing and symbolic execution. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.
The Effects of Single and Close Binary Evolution on the Stellar Mass Function
NASA Astrophysics Data System (ADS)
Schneider, R. N. F.; Izzard, G. R.; de Mink, S.; Langer, N., Stolte, A., de Koter, A.; Gvaramadze, V. V.; Hussmann, B.; Liermann, A.; Sana, H.
2013-06-01
Massive stars are almost exclusively born in star clusters, where stars in a cluster are expected to be born quasi-simultaneously and with the same chemical composition. The distribution of their birth masses favors lower over higher stellar masses, such that the most massive stars are rare, and the existence of an stellar upper mass limit is still debated. The majority of massive stars are born as members of close binary systems and most of them will exchange mass with a close companion during their lifetime. We explore the influence of single and binary star evolution on the high mass end of the stellar mass function using a rapid binary evolution code. We apply our results to two massive Galactic star clusters and show how the shape of their mass functions can be used to determine cluster ages and comment on the stellar upper mass limit in view of our new findings.
Discovery of the binary nature of SMC X-1 from Uhuru.
NASA Technical Reports Server (NTRS)
Schreier, E.; Giacconi, R.; Gursky, H.; Kellogg, E.; Tananbaum, H.
1972-01-01
The X-ray source in the Small Magellanic Cloud SMC X-1 was observed by Uhuru on numerous occasions from December 1970 through April 1972. As previously reported by Leong et al. (1971), the source was seen to be variable. It was found that SMC X-1 occults with a period of 3.8927 days. The energy spectrum is cut off at low energies and flat. There is no large-amplitude periodic pulsation. The luminosity observed makes the binary source SMC X-1 comparable in strength to both the stronger galactic sources and the discrete sources in the Large Magellanic Cloud.
Simulated Assessment of Interference Effects in Direct Sequence Spread Spectrum (DSSS) QPSK Receiver
2014-03-27
bit error rate BPSK binary phase shift keying CDMA code division multiple access CSI comb spectrum interference CW continuous wave DPSK differential... CDMA ) and GPS systems which is a Gold code. This code is generated by a modulo-2 operation between two different preferred m-sequences. The preferred m...10 SNR Sim (dB) S N R O ut ( dB ) SNR RF SNR DS Figure 3.26: Comparison of input S NRS im and S NROut of the band-pass RF filter (S NRRF) and
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m sub 0 binary memory cells and (k sub 0 m sub 0) inputs, a state diagram of 2(K) (sub 0) was for the transfer function bound. A reduced state diagram of (2 (m sub 0) +1) is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
NASA Astrophysics Data System (ADS)
Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.
2017-11-01
We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ˜ 0.1 {M}⊙ . Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope+HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data (“orbital parallax”), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from 10 to seven dimensions—including parallax—in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
Fundamental parameters of massive stars in multiple systems: The cases of HD 17505A and HD 206267A
NASA Astrophysics Data System (ADS)
Raucq, F.; Rauw, G.; Mahy, L.; Simón-Díaz, S.
2018-06-01
Context. Many massive stars are part of binary or higher multiplicity systems. The present work focusses on two higher multiplicity systems: HD 17505A and HD 206267A. Aims: Determining the fundamental parameters of the components of the inner binary of these systems is mandatory to quantify the impact of binary or triple interactions on their evolution. Methods: We analysed high-resolution optical spectra to determine new orbital solutions of the inner binary systems. After subtracting the spectrum of the tertiary component, a spectral disentangling code was applied to reconstruct the individual spectra of the primary and secondary. We then analysed these spectra with the non-LTE model atmosphere code CMFGEN to establish the stellar parameters and the CNO abundances of these stars. Results: The inner binaries of these systems have eccentric orbits with e 0.13 despite their relatively short orbital periods of 8.6 and 3.7 days for HD 17505Aa and HD 206267Aa, respectively. Slight modifications of the CNO abundances are found in both components of each system. The components of HD 17505Aa are both well inside their Roche lobe, whilst the primary of HD 206267Aa nearly fills its Roche lobe around periastron passage. Whilst the rotation of the primary of HD 206267Aa is in pseudo-synchronization with the orbital motion, the secondary displays a rotation rate that is higher. Conclusions: The CNO abundances and properties of HD 17505Aa can be explained by single star evolutionary models accounting for the effects of rotation, suggesting that this system has not yet experienced binary interaction. The properties of HD 206267Aa suggest that some intermittent binary interaction might have taken place during periastron passages, but is apparently not operating anymore. Based on observations collected with the TIGRE telescope (La Luz, Mexico), the 1.93 m telescope at Observatoire de Haute Provence (France), the Nordic Optical Telescope at the Observatorio del Roque de los Muchachos (La Palma, Spain), and the Canada-France-Hawaii telescope (Mauna Kea, Hawaii).
NASA Astrophysics Data System (ADS)
Hillen, M.; Menu, J.; Van Winckel, H.; Min, M.; Gielen, C.; Wevers, T.; Mulders, G. D.; Regibo, S.; Verhoelst, T.
2014-08-01
Context. The presence of stable disks around post-asymptotic giant branch (post-AGB) binaries is a widespread phenomenon. Also, the presence of (molecular) outflows is now commonly inferred in these systems. Aims: In the first paper of this series, a surprisingly large fraction of optical light was found to be resolved in the 89 Her post-AGB binary system. The data showed that this flux arises from close to the central binary. Scattering off the inner rim of the circumbinary disk, or scattering in a dusty outflow were suggested as two possible origins. With detailed dust radiative transfer models of the circumbinary disk, we aim to discriminate between the two proposed configurations. Methods: By including Herschel/SPIRE photometry, we extend the spectral energy distribution (SED) such that it now fully covers UV to sub-mm wavelengths. The MCMax Monte Carlo radiative transfer code is used to create a large grid of disk models. Our models include a self-consistent treatment of dust settling as well as of scattering. A Si-rich composition with two additional opacity sources, metallic Fe or amorphous C, are tested. The SED is fit together with archival mid-IR (MIDI) visibilities, and the optical and near-IR visibilities of Paper I. In this way we constrain the structure of the disk, with a focus on its inner rim. Results: The near-IR visibility data require a smooth inner rim, here obtained with a double power-law parameterization of the radial surface density distribution. A model can be found that fits all of the IR photometric and interferometric data well, with either of the two continuum opacity sources. Our best-fit passive models are characterized by a significant amount of ~mm-sized grains, which are settled to the midplane of the disk. Not a single disk model fits our data at optical wavelengths because of the opposing constraints imposed by the optical and near-IR interferometric data. Conclusions: A geometry in which a passive, dusty, and puffed-up circumbinary disk is present, can reproduce all of the IR, but not the optical observations of 89 Her. Another dusty component (an outflow or halo) therefore needs to be added to the system. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 077.D-0071.
Methodology for fast detection of false sharing in threaded scientific codes
Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang
2014-11-25
A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
NASA Astrophysics Data System (ADS)
Hossein Nouri, Fatemeh; Duez, Matthew D.; Foucart, Francois; Deaton, M. Brett; Haas, Roland; Haddadi, Milad; Kidder, Lawrence E.; Ott, Christian D.; Pfeiffer, Harald P.; Scheel, Mark A.; Szilagyi, Bela
2018-04-01
Black hole-torus systems from compact binary mergers are possible engines for gamma-ray bursts (GRBs). During the early evolution of the postmerger remnant, the state of the torus is determined by a combination of neutrino cooling and magnetically driven heating processes, so realistic models must include both effects. In this paper, we study the postmerger evolution of a magnetized black hole-neutron star binary system using the Spectral Einstein Code (SpEC) from an initial postmerger state provided by previous numerical relativity simulations. We use a finite-temperature nuclear equation of state and incorporate neutrino effects in a leakage approximation. To achieve the needed accuracy, we introduce improvements to SpEC's implementation of general-relativistic magnetohydrodynamics (MHD), including the use of cubed-sphere multipatch grids and an improved method for dealing with supersonic accretion flows where primitive variable recovery is difficult. We find that a seed magnetic field triggers a sustained source of heating, but its thermal effects are largely cancelled by the accretion and spreading of the torus from MHD-related angular momentum transport. The neutrino luminosity peaks at the start of the simulation, and then drops significantly over the first 20 ms but in roughly the same way for magnetized and nonmagnetized disks. The heating rate and disk's luminosity decrease much more slowly thereafter. These features of the evolution are insensitive to grid structure and resolution, formulation of the MHD equations, and seed field strength, although turbulent effects are not fully converged.
2010-01-01
Background The Maximal Pairing Problem (MPP) is the prototype of a class of combinatorial optimization problems that are of considerable interest in bioinformatics: Given an arbitrary phylogenetic tree T and weights ωxy for the paths between any two pairs of leaves (x, y), what is the collection of edge-disjoint paths between pairs of leaves that maximizes the total weight? Special cases of the MPP for binary trees and equal weights have been described previously; algorithms to solve the general MPP are still missing, however. Results We describe a relatively simple dynamic programming algorithm for the special case of binary trees. We then show that the general case of multifurcating trees can be treated by interleaving solutions to certain auxiliary Maximum Weighted Matching problems with an extension of this dynamic programming approach, resulting in an overall polynomial-time solution of complexity (n4 log n) w.r.t. the number n of leaves. The source code of a C implementation can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/Targeting. For binary trees, we furthermore discuss several constrained variants of the MPP as well as a partition function approach to the probabilistic version of the MPP. Conclusions The algorithms introduced here make it possible to solve the MPP also for large trees with high-degree vertices. This has practical relevance in the field of comparative phylogenetics and, for example, in the context of phylogenetic targeting, i.e., data collection with resource limitations. PMID:20525185
NuSTAR Observations of Two New Black Hole X-ray Binary Candidates within 1 pc of Sgr A*
NASA Astrophysics Data System (ADS)
Hord, Benjamin; Hailey, Charles; Mori, Kaya; Mandel, Shifra
2018-01-01
Remarkably, two new X-ray transients were discovered in outburst within ~1 pc of the Galactic Center by the Swift X-ray Telescope in the first half of 2016. A few weeks after each outburst began, NuSTAR ToO observations were triggered for both of the objects. These sources have no known counterparts at other energies. Both objects exhibit relativistically broadened Fe lines in their spectra and possible quasi-periodic oscillations (QPO) in their power spectra, which are features seen in many black hole X-ray binaries. Combined with the fact that there have been no previously observed large outbursts at these positions over the decade of the Swift X-ray Telescope galactic center monitoring campaign, these sources make for prime black hole binary candidates (BHC) rather than neutron star low-mass X-ray binaries (NS-LMXB), which have a known short (<~5 year) recurrence time. We will present 3-79 keV NuSTAR spectra and timing analysis of these sources that supports a black hole binary interpretation over a neutron star scenario. These new BHC, combined with at least one other previously discovered BHC near the Galactic Center, hint at a potentially substantive black hole population in the vicinity of the supermassive black hole at Sgr A*.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rate and bad channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher- order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV
NASA Astrophysics Data System (ADS)
Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.
2005-06-01
The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10 -5 eV up to 1 GeV. The nuclear optical model, the intranuclear cascade model combined with the pre-equilibrium and evaporation models were used for the calculations. The number of defects produced by recoil atoms nuclei in materials was calculated by the Norgett, Robinson, Torrens model and by the approach combining calculations using the binary collision approximation model and the results of the molecular dynamics simulation. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code and the IOTA code.
Assisted stellar suicide in V617 Sagittarii
NASA Astrophysics Data System (ADS)
Steiner, J. E.; Oliveira, A. S.; Cieslinski, D.; Ricci, T. V.
2006-02-01
Context: .V617 Sgr is a V Sagittae star - a group of binaries thought to be the galactic counterparts of the Compact Binary Supersoft X-ray Sources - CBSS. Aims: .To check this hypothesis, we measured the time derivative of its orbital period. Methods: .Observed timings of eclipse minima spanning over 30 000 orbital cycles are presented. Results: .We found that the orbital period evolves quite rapidly: P/dot{P} = 1.1×106 years. This is consistent with the idea that V617 Sgr is a wind driven accretion supersoft source. As the binary system evolves with a time-scale of about one million years, which is extremely short for a low mass evolved binary, it is likely that the system will soon end either by having its secondary completely evaporated or by the primary exploding as a supernova of type Ia. Conclusions: .
Inferences about binary stellar populations using gravitational wave observations
NASA Astrophysics Data System (ADS)
Wysocki, Daniel; Gerosa, Davide; O'Shaughnessy, Richard; Belczynski, Krzysztof; Gladysz, Wojciech; Berti, Emanuele; Kesden, Michael; Holz, Daniel
2018-01-01
With the dawn of gravitational wave astronomy, enabled by the LIGO and Virgo interferometers, we now have a new window into the Universe. In the short time these detectors have been in use, multiple confirmed detections of gravitational waves from compact binary coalescences have been made. Stellar binary systems are one of the likely progenitors of the observed compact binary sources. If this is indeed the case, then we can use measured properties of these binary systems to learn about their progenitors. We will discuss the Bayesian framework in which we make these inferences, and results which include mass and spin distributions.
QCA Gray Code Converter Circuits Using LTEx Methodology
NASA Astrophysics Data System (ADS)
Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan
2018-07-01
The Quantum-dot Cellular Automata (QCA) is the prominent paradigm of nanotechnology considered to continue the computation at deep sub-micron regime. The QCA realizations of several multilevel circuit of arithmetic logic unit have been introduced in the recent years. However, as high fan-in Binary to Gray (B2G) and Gray to Binary (G2B) Converters exist in the processor based architecture, no attention has been paid towards the QCA instantiation of the Gray Code Converters which are anticipated to be used in 8-bit, 16-bit, 32-bit or even more bit addressable machines of Gray Code Addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR Gate (namely LTEx module) as an elemental block. The "defect-tolerant analysis" of the two-input LTEx module has been analyzed to establish the scalability and reproducibility of the LTEx module in the complex circuits. The novel formulations exploiting the operability of the LTEx module have been proposed to instantiate area-delay efficient B2G and G2B Converters which can be exclusively used in Gray Code Addressing schemes. Moreover this work formulates the QCA design metrics such as O-Cost, Effective area, Delay and Cost α for the n-bit converter layouts.
QCA Gray Code Converter Circuits Using LTEx Methodology
NASA Astrophysics Data System (ADS)
Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan
2018-04-01
The Quantum-dot Cellular Automata (QCA) is the prominent paradigm of nanotechnology considered to continue the computation at deep sub-micron regime. The QCA realizations of several multilevel circuit of arithmetic logic unit have been introduced in the recent years. However, as high fan-in Binary to Gray (B2G) and Gray to Binary (G2B) Converters exist in the processor based architecture, no attention has been paid towards the QCA instantiation of the Gray Code Converters which are anticipated to be used in 8-bit, 16-bit, 32-bit or even more bit addressable machines of Gray Code Addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR Gate (namely LTEx module) as an elemental block. The "defect-tolerant analysis" of the two-input LTEx module has been analyzed to establish the scalability and reproducibility of the LTEx module in the complex circuits. The novel formulations exploiting the operability of the LTEx module have been proposed to instantiate area-delay efficient B2G and G2B Converters which can be exclusively used in Gray Code Addressing schemes. Moreover this work formulates the QCA design metrics such as O-Cost, Effective area, Delay and Cost α for the n-bit converter layouts.
Binary Lenses in OGLE-III EWS Database. Seasons 2002-2003
NASA Astrophysics Data System (ADS)
Jaroszynski, M.; Udalski, A.; Kubiak, M.; Szymanski, M.; Pietrzynski, G.; Soszynski, I.; Zebrun, K.; Szewczyk, O.; Wyrzykowski, L.
2004-06-01
We present 15 binary lens candidates from OGLE-III Early Warning System database for seasons 2002-2003. We also found 15 events interpreted as single mass lensing of double sources. The candidates were selected by visual light curves inspection. Examining the models of binary lenses of this and our previous study (10 caustic crossing events of OGLE-II seasons 1997--1999) we find one case of extreme mass ratio binary (q approx 0.005) and the rest in the range 0.1
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D
2014-10-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities ([Formula: see text]). This model can be used to validate the coverage probabilities of 95% confidence intervals (of [Formula: see text], [Formula: see text], or [Formula: see text] when [Formula: see text]), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes [Formula: see text]). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.
2014-01-01
Abstract. We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051
Photometric Solutions of Three Eclipsing Binary Stars Observed from Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Liu, N.; Fu, J. N.; Zong, W.; Wang, L. Z.; Uddin, S. A.; Zhang, X. B.; Zhang, Y. P.; Cang, T. Q.; Li, G.; Yang, Y.; Yang, G. C.; Mould, J.; Morrell, N.
2018-04-01
Based on spectroscopic observations for the eclipsing binaries CSTAR 036162 and CSTAR 055495 with the WiFeS/2.3 m telescope at SSO and CSTAR 057775 with the Mage/Magellan I at LCO in 2017, stellar parameters are derived. More than 100 nights of almost-continuous light curves reduced from the time-series photometric observations by CSTAR at Dome A of Antarctic in i in 2008 and in g and r in 2009, respectively, are applied to find photometric solutions for the three binaries with the Wilson–Devinney code. The results show that CSTAR 036162 is a detached configuration with the mass ratio q = 0.354 ± 0.0009, while CSTAR 055495 is a semi-detached binary system with the unusual q = 0.946 ± 0.0006, which indicates that CSTAR 055495 may be a rare binary system with mass ratio close to one and the secondary component filling its Roche Lobe. This implies that a mass-ratio reversal has just occurred and CSTAR 055495 is in a rapid mass-transfer stage. Finally, CSTAR 057775 is believed to be an A-type W UMa binary with q = 0.301 ± 0.0008 and a fill-out factor of f = 0.742(8).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.
We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M{sub ⊙} (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. Themore » resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M{sub ⊙} of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M{sub ⊙}, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R{sub ⊙}.« less
Binary Disassembly Block Coverage by Symbolic Execution vs. Recursive Descent
2012-03-01
explores the effectiveness of symbolic execution on packed or obfuscated samples of the same binaries to generate a model-based evaluation of success...24 2.3.4.1 Packing . . . . . . . . . . . . . . . . . . . . . . . . . . 25 2.3.4.2 Techniques...inner workings of UPX (Universal Packer for eXecutables), a common packing tool, on a Windows binary. Image source: GFC08 . . . . . . . . . . . 25 3.1
Chikkagoudar, Satish; Wang, Kai; Li, Mingyao
2011-05-26
Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
2011-01-01
Background Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Findings Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. Conclusions GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/. PMID:21615923
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
Low-mass X-ray binaries and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Lasota, J. P.; Frank, J.; King, A. R.
1992-01-01
More than twenty years after their discovery, the nature of gamma-ray burst sources (GRBs) remains mysterious. The results from BATSE experiment aboard the Compton Observatory show however that most of the sources of gamma-ray bursts cannot be distributed in the galactic disc. The possibility that a small fraction of sites of gamma-ray bursts is of galactic disc origin cannot however be excluded. We point out that large numbers of neutron-star binaries with orbital periods of 10 hr and M dwarf companions of mass 0.2-0.3 solar mass are a natural result of the evolution of low-mass X-ray binaries (LMXBs). The numbers and physical properties of these systems suggest that some gamma-ray burst sources may be identified with this endpoint of LMXB evolution. We suggest an observational test of this hypothesis.
Computer simulation of radiation damage in gallium arsenide
NASA Technical Reports Server (NTRS)
Stith, John J.; Davenport, James C.; Copeland, Randolph L.
1989-01-01
A version of the binary-collision simulation code MARLOWE was used to study the spatial characteristics of radiation damage in proton and electron irradiated gallium arsenide. Comparisons made with the experimental results proved to be encouraging.
APPLICATION OF GAS DYNAMICAL FRICTION FOR PLANETESIMALS. II. EVOLUTION OF BINARY PLANETESIMALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grishin, Evgeni; Perets, Hagai B.
2016-04-01
One of the first stages of planet formation is the growth of small planetesimals and their accumulation into large planetesimals and planetary embryos. This early stage occurs long before the dispersal of most of the gas from the protoplanetary disk. At this stage gas–planetesimal interactions play a key role in the dynamical evolution of single intermediate-mass planetesimals (m{sub p} ∼ 10{sup 21}–10{sup 25} g) through gas dynamical friction (GDF). A significant fraction of all solar system planetesimals (asteroids and Kuiper-belt objects) are known to be binary planetesimals (BPs). Here, we explore the effects of GDF on the evolution of BPs embedded inmore » a gaseous disk using an N-body code with a fiducial external force accounting for GDF. We find that GDF can induce binary mergers on timescales shorter than the disk lifetime for masses above m{sub p} ≳ 10{sup 22} g at 1 au, independent of the binary initial separation and eccentricity. Such mergers can affect the structure of merger-formed planetesimals, and the GDF-induced binary inspiral can play a role in the evolution of the planetesimal disk. In addition, binaries on eccentric orbits around the star may evolve in the supersonic regime, where the torque reverses and the binary expands, which would enhance the cross section for planetesimal encounters with the binary. Highly inclined binaries with small mass ratios, evolve due to the combined effects of Kozai–Lidov (KL) cycles with GDF which lead to chaotic evolution. Prograde binaries go through semi-regular KL evolution, while retrograde binaries frequently flip their inclination and ∼50% of them are destroyed.« less
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
The code analysis of the fringe image is playing a vital role in the data acquisition of structured light systems, which affects precision, computational speed and reliability of the measurement processing. According to the self-normalizing characteristic, a fringe image processing method based on structured light is proposed. In this method, a series of projective patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consist of a white light projector and a digital camera, the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Then the binary images with distinct white and black strips can be obtained and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied for profile measurement based on special binary code in a wide field.
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years Malaysia faced the drug addiction issues. The most serious case is relapse phenomenon among treated drug addict (drug addict who have under gone the rehabilitation programme at Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factor that contributes to relapse to happen. The binary logistic regression analysis was employed to model the relationship between independent variables (predictors) and dependent variable. The dependent variable is the status of the drug addict either relapse, (Yes coded as 1) or not, (No coded as 0). Meanwhile the predictors involved are age, age at first taking drug, family history, education level, family crisis, community support and self motivation. The total of the sample is 200 which the data are provided by AADK (National Antidrug Agency). The finding of the study revealed that age and self motivation are statistically significant towards the relapse cases..
Mapping the Milky Way Galaxy with LISA
NASA Technical Reports Server (NTRS)
McKinnon, Jose A.; Littenberg, Tyson
2012-01-01
Gravitational wave detectors in the mHz band (such as the Laser Interferometer Space Antenna, or LISA) will observe thousands of compact binaries in the galaxy which can be used to better understand the structure of the Milky Way. To test the effectiveness of LISA to measure the distribution of the galaxy, we simulated the Close White Dwarf Binary (CWDB) gravitational wave sky using different models for the Milky Way. To do so, we have developed a galaxy density distribution modeling code based on the Markov Chain Monte Carlo method. The code uses different distributions to construct realizations of the galaxy. We then use the Fisher Information Matrix to estimate the variance and covariance of the recovered parameters for each detected CWDB. This is the first step toward characterizing the capabilities of space-based gravitational wave detectors to constrain models for galactic structure, such as the size and orientation of the bar in the center of the Milky Way
NASA Astrophysics Data System (ADS)
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-06-01
Mergers of binary black holes on eccentric orbits are among the targets for second-generation ground-based gravitational-wave detectors. These sources may commonly form in galactic nuclei due to gravitational-wave emission during close flyby events of single objects. We determine the distributions of initial orbital parameters for a population of these gravitational-wave sources. Our results show that the initial dimensionless pericenter distance systematically decreases with the binary component masses and the mass of the central supermassive black hole, and its distribution depends sensitively on the highest possible black hole mass in the nuclear star cluster. For a multi-mass black hole population with masses between 5 {M}ȯ and 80 {M}ȯ , we find that between ∼43–69% (68–94%) of 30 {M}ȯ –30 {M}ȯ (10 M ⊙–10 M ⊙) sources have an eccentricity greater than 0.1 when the gravitational-wave signal reaches 10 Hz, but less than ∼10% of the sources with binary component masses less than 30 {M}ȯ remain eccentric at this level near the last stable orbit (LSO). The eccentricity at LSO is typically between 0.005–0.05 for the lower-mass BHs, and 0.1–0.2 for the highest-mass BHs. Thus, due to the limited low-frequency sensitivity, the six currently known quasicircular LIGO/Virgo sources could still be compatible with this originally highly eccentric source population. However, at the design sensitivity of these instruments, the measurement of the eccentricity and mass distribution of merger events may be a useful diagnostic to identify the fraction of GW sources formed in this channel.
NASA Astrophysics Data System (ADS)
Yagi, Kent; Yang, Huan
2018-05-01
The recent discovery of gravitational-wave events has offered us unique test beds of gravity in the strong and dynamical field regime. One possible modification to General Relativity is the gravitational parity violation that arises naturally from quantum gravity. Such parity violation gives rise to the so-called amplitude birefringence in gravitational waves, in which one of the circularly polarized modes is amplified while the other one is suppressed during their propagation. In this paper, we study how well one can measure gravitational parity violation via the amplitude birefringence effect of gravitational waves sourced by stellar-mass black hole binaries. We choose Chern-Simons gravity as an example and work within an effective field theory formalism to ensure that the approximate theory is well posed. We consider gravitational waves from both individual sources and stochastic gravitational-wave backgrounds. Regarding bounds from individual sources, we estimate such bounds using a Fisher analysis and carry out Monte Carlo simulations by randomly distributing sources over their sky location and binary orientation. We find that the bounds on the scalar field evolution in Chern-Simons gravity from the recently discovered gravitational-wave events are too weak to satisfy the weak Chern-Simons approximation, while aLIGO with its design sensitivity can place meaningful bounds. Regarding bounds from stochastic gravitational-wave backgrounds, we set the threshold signal-to-noise ratio for detection of the parity-violation mode as 5 and estimate projected bounds with future detectors assuming that signals are consistent with no parity violation. In an ideal situation in which all the source parameters and binary black hole merger-rate history are known a priori, we find that a network of two third-generation detectors is able to place bounds that are comparable to or slightly stronger than binary pulsar bounds. In a more realistic situation in which one does not have such information beforehand, approximate bounds can be derived if the regular parity-insensitive mode is detected and the peak redshift of the merger-rate history is known theoretically. Since gravitational-wave observations probe either the difference in parity violation between the source and the detector (with individual sources) or the line-of-sight cosmological integration of the scalar field (with gravitational-wave backgrounds), such bounds are complementary to local measurements from solar system experiments and binary pulsar observations.
NASA Astrophysics Data System (ADS)
Calderón Bustillo, Juan; Salemi, Francesco; Dal Canton, Tito; Jani, Karan P.
2018-01-01
The sensitivity of gravitational wave searches for binary black holes is estimated via the injection and posterior recovery of simulated gravitational wave signals in the detector data streams. When a search reports no detections, the estimated sensitivity is then used to place upper limits on the coalescence rate of the target source. In order to obtain correct sensitivity and rate estimates, the injected waveforms must be faithful representations of the real signals. Up to date, however, injected waveforms have neglected radiation modes of order higher than the quadrupole, potentially biasing sensitivity and coalescence rate estimates. In particular, higher-order modes are known to have a large impact in the gravitational waves emitted by intermediate-mass black holes binaries. In this work, we evaluate the impact of this approximation in the context of two search algorithms run by the LIGO Scientific Collaboration in their search for intermediate-mass black hole binaries in the O1 LIGO Science Run data: a matched filter-based pipeline and a coherent unmodeled one. To this end, we estimate the sensitivity of both searches to simulated signals for nonspinning binaries including and omitting higher-order modes. We find that omission of higher-order modes leads to biases in the sensitivity estimates which depend on the masses of the binary, the search algorithm, and the required level of significance for detection. In addition, we compare the sensitivity of the two search algorithms across the studied parameter space. We conclude that the most recent LIGO-Virgo upper limits on the rate of coalescence of intermediate-mass black hole binaries are conservative for the case of highly asymmetric binaries. However, the tightest upper limits, placed for nearly equal-mass sources, remain unchanged due to the small contribution of higher modes to the corresponding sources.
ChelomEx: Isotope-assisted discovery of metal chelates in complex media using high-resolution LC-MS.
Baars, Oliver; Morel, François M M; Perlman, David H
2014-11-18
Chelating agents can control the speciation and reactivity of trace metals in biological, environmental, and laboratory-derived media. A large number of trace metals (including Fe, Cu, Zn, Hg, and others) show characteristic isotopic fingerprints that can be exploited for the discovery of known and unknown organic metal complexes and related chelating ligands in very complex sample matrices using high-resolution liquid chromatography mass spectrometry (LC-MS). However, there is currently no free open-source software available for this purpose. We present a novel software tool, ChelomEx, which identifies isotope pattern-matched chromatographic features associated with metal complexes along with free ligands and other related adducts in high-resolution LC-MS data. High sensitivity and exclusion of false positives are achieved by evaluation of the chromatographic coherence of the isotope pattern within chromatographic features, which we demonstrate through the analysis of bacterial culture media. A built-in graphical user interface and compound library aid in identification and efficient evaluation of results. ChelomEx is implemented in MatLab. The source code, binaries for MS Windows and MAC OS X as well as test LC-MS data are available for download at SourceForge ( http://sourceforge.net/projects/chelomex ).
Making and Testing Hybrid Gravitational Waves from Colliding Black Holes and Neutron Stars
NASA Astrophysics Data System (ADS)
Garcia, Alyssa; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
The Laser Interferometer Gravitational-wave Observatory (LIGO) is a detector that is currently working to observe gravitational waves (GW) from astronomical sources, such as colliding black holes and neutron stars, which are among LIGO's most promising sources. Observing as many waves as possible requires accurate predictions of what the waves look like, which are only possible with numerical simulations. In this poster, I will present results from new simulations of colliding black holes made using the Spectral Einstein Code (SpEC). In particular, I will present results for extending new and existing waveforms and using an open-source library. To construct a waveform that spans the frequency range where LIGO is most sensitive, we combine inexpensive, post-Newtonian approximate waveforms (valid far from merger) and numerical relativity waveforms (valid near the time of merger, when all approximations fail), making a hybrid GW. This work is one part of a new prototype framework for Numerical INJection Analysis with Matter (Matter NINJA). The complete Matter NINJA prototype will test GW search pipelines' abilities to find hybrid waveforms, from simulations containing matter (such as black hole-neutron star binaries), hidden in simulated detector noise.
MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.
Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui
2015-12-12
Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To describe peptide and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform independent Java programing language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient enough to measure changes due to both biological and experimental perturbations. Computed measures include; limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source codes and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads respectively.
NASA Astrophysics Data System (ADS)
Heller, René
2018-03-01
The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
Flexible digital modulation and coding synthesis for satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John
1991-01-01
An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis-coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quatenary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q squared PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.
NASA Astrophysics Data System (ADS)
Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.
2017-01-01
Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address such weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a free hand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to study a wide range of tissue systems including fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system.
Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.
2017-01-01
Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address such weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a free hand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to study a wide range of tissue systems including fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system. PMID:28106129
The helium star donor channel for the progenitors of Type Ia supernovae
NASA Astrophysics Data System (ADS)
Wang, B.; Meng, X.; Chen, X.; Han, Z.
2009-05-01
Type Ia supernovae (SNe Ia) play an important role in astrophysics, especially in the study of cosmic evolution. Several progenitor models for SNe Ia have been proposed in the past. In this paper we carry out a detailed study of the He star donor channel, in which a carbon-oxygen white dwarf (CO WD) accretes material from a He main-sequence star or a He subgiant to increase its mass to the Chandrasekhar mass. Employing Eggleton's stellar evolution code with an optically thick wind assumption, and adopting the prescription of Kato & Hachisu for the mass accumulation efficiency of the He-shell flashes on to the WDs, we performed binary evolution calculations for about 2600 close WD binary systems. According to these calculations, we mapped out the initial parameters for SNe Ia in the orbital period-secondary mass (logPi - Mi2) plane for various WD masses from this channel. The study shows that the He star donor channel is noteworthy for producing SNe Ia (~1.2 × 10-3yr-1 in our Galaxy), and that the progenitors from this channel may appear as supersoft X-ray sources. Importantly, this channel can explain SNe Ia with short delay times (<~108yr), which is consistent with the recent observational implications of young populations of SN Ia progenitors.
Doughnut strikes sandwich: the geometry of hot medium in accreting black hole X-ray binaries
NASA Astrophysics Data System (ADS)
Poutanen, Juri; Veledina, Alexandra; Zdziarski, Andrzej A.
2018-06-01
We study the effects of the mutual interaction of hot plasma and cold medium in black hole binaries in their hard spectral state. We consider a number of different geometries. In contrast to previous theoretical studies, we use a modern energy-conserving code for reflection and reprocessing from cold media. We show that a static corona above an accretion disc extending to the innermost stable circular orbit produces spectra not compatible with those observed. They are either too soft or require a much higher disc ionization than that observed. This conclusion confirms a number of previous findings, but disproves a recent study claiming an agreement of that model with observations. We show that the cold disc has to be truncated in order to agree with the observed spectral hardness. However, a cold disc truncated at a large radius and replaced by a hot flow produces spectra which are too hard if the only source of seed photons for Comptonization is the accretion disc. Our favourable geometry is a truncated disc coexisting with a hot plasma either overlapping with the disc or containing some cold matter within it, also including seed photons arising from cyclo-synchrotron emission of hybrid electrons, i.e. containing both thermal and non-thermal parts.
Binary power multiplier for electromagnetic energy
Farkas, Zoltan D.
1988-01-01
A technique for converting electromagnetic pulses to higher power amplitude and shorter duration, in binary multiples, splits an input pulse into two channels, and subjects the pulses in the two channels to a number of binary pulse compression operations. Each pulse compression operation entails combining the pulses in both input channels and selectively steering the combined power to one output channel during the leading half of the pulses and to the other output channel during the trailing half of the pulses, and then delaying the pulse in the first output channel by an amount equal to half the initial pulse duration. Apparatus for carrying out each of the binary multiplication operation preferably includes a four-port coupler (such as a 3 dB hybrid), which operates on power inputs at a pair of input ports by directing the combined power to either of a pair of output ports, depending on the relative phase of the inputs. Therefore, by appropriately phase coding the pulses prior to any of the pulse compression stages, the entire pulse compression (with associated binary power multiplication) can be carried out solely with passive elements.
Searches for millisecond pulsations in low-mass X-ray binaries, 2
NASA Technical Reports Server (NTRS)
Vaughan, B. A.; Van Der Klis, M.; Wood, K. S.; Norris, J. P.; Hertz, P.; Michelson, P. F.; Paradijs, J. Van; Lewin, W. H. G.; Mitsuda, K.; Penninx, W.
1994-01-01
Coherent millisecond X-ray pulsations are expected from low-mass X-ray binaries (LMXBs), but remain undetected. Using the single-parameter Quadratic Coherence Recovery Technique (QCRT) to correct for unknown binary orbit motion, we have performed Fourier transform searches for coherent oscillations in all long, continuous segments of data obtained at 1 ms time resolution during Ginga observations of LMXB. We have searched the six known Z sources (GX 5-1, Cyg X-2, Sco X-1, GX 17+2, GX 340+0, and GX 349+2), seven of the 14 known atoll sources (GX 3+1. GX 9+1, GX 9+9, 1728-33. 1820-30, 1636-53 and 1608-52), the 'peculiar' source Cir X-1, and the high-mass binary Cyg X-3. We find no evidence for coherent pulsations in any of these sources, with 99% confidence limits on the pulsed fraction between 0.3% and 5.0% at frequencies below the Nyquist frequency of 512 Hz. A key assumption made in determining upper limits in previous searches is shown to be incorrect. We provide a recipe for correctly setting upper limits and detection thresholds. Finally we discuss and apply two strategies to improve sensitivity by utilizing multiple, independent, continuous segments of data with comparable count rates.
A Comparison Between Spectral Properties of ULXs and Luminous X-ray Binaries
NASA Astrophysics Data System (ADS)
Berghea, C. T.; Colbert, E. J. M.; Roberts, T. P.
2004-05-01
What is special about the 1039 erg s-1 limit that is used to define the ULX class? We investigate this question by analyzing Chandra X-ray spectra of 71 X-ray bright point sources from nearby galaxies. Fifty-one of these sources are ULXs (LX(0.3-8.0 keV) ≥ 1039 erg s-1), and 20 sources (our comparison sample) are less-luminous X-ray binaries with LX(0.3-8.0 keV) = 1038-39 erg s-1. Our sample objects were selected from the Chandra archive to have ≥1000 counts and thus represent the highest quality spectra in the Chandra archives for extragalactic X-ray binaries and ULXs. We fit the spectra with one-component models (e.g., cold absorption with power-law, or cold absorption with multi-colored disk blackbody) and two-component models (e.g. absorption with both a power-law and a multi colored disk blackbody). A crude measure of the spectral states of the sources are determined observationally by calibrating the strength of the disk (blackbody) and coronal (power-law) components. These results are then use to determine if spectral properties of the ULXs are statistically distinct from those of the comparison objects, which are assumed to be ``normal'' black-hole X-ray binaries.
The 105-Month Swift-BAT All-Sky Hard X-Ray Survey
NASA Technical Reports Server (NTRS)
Oh, Kyuseok; Koss, Michael; Markwardt, Craig B.; Schawinski, Kevin; Baumgartner, Wayne H.; Barthelmy, Scott D.; Cenko, S. Bradley; Gehrels, Neil; Mushotzky, Richard; Petulante, Abigail;
2018-01-01
We present a catalog of hard X-ray sources detected in the first 105 months of observations with the Burst Alert Telescope (BAT) coded-mask imager on board the Swift observatory. The 105-month Swift-BAT survey is a uniform hard X-ray all-sky survey with a sensitivity of 8.40 x 10(exp -12) erg s(exp -1) cm(exp -2) over 90% of the sky and 7.24 x 10(exp -12) erg s(exp -1) cm(exp -2) over 50% of the sky in the 14-195 keV band. The Swift-BAT 105-month catalog provides 1632 (422 new detections) hard X-ray sources in the 14-195 keV band above the 4.8 sigma significance level. Adding to the previously known hard X-ray sources, 34% (144/422) of the new detections are identified as Seyfert active galactic nuclei (AGNs) in nearby galaxies (z < 0.2). The majority of the remaining identified sources are X-ray binaries (7%, 31) and blazars/BL Lac objects (10%, 43). As part of this new edition of the Swift-BAT catalog, we release eight-channel spectra and monthly sampled light curves for each object in the online journal and at the Swift-BAT 105-month website.
The 105-Month Swift-BAT All-sky Hard X-Ray Survey
NASA Astrophysics Data System (ADS)
Oh, Kyuseok; Koss, Michael; Markwardt, Craig B.; Schawinski, Kevin; Baumgartner, Wayne H.; Barthelmy, Scott D.; Cenko, S. Bradley; Gehrels, Neil; Mushotzky, Richard; Petulante, Abigail; Ricci, Claudio; Lien, Amy; Trakhtenbrot, Benny
2018-03-01
We present a catalog of hard X-ray sources detected in the first 105 months of observations with the Burst Alert Telescope (BAT) coded-mask imager on board the Swift observatory. The 105-month Swift-BAT survey is a uniform hard X-ray all-sky survey with a sensitivity of 8.40× {10}-12 {erg} {{{s}}}-1 {cm}}-2 over 90% of the sky and 7.24× {10}-12 {erg} {{{s}}}-1 {cm}}-2 over 50% of the sky in the 14–195 keV band. The Swift-BAT 105-month catalog provides 1632 (422 new detections) hard X-ray sources in the 14–195 keV band above the 4.8σ significance level. Adding to the previously known hard X-ray sources, 34% (144/422) of the new detections are identified as Seyfert active galactic nuclei (AGNs) in nearby galaxies (z< 0.2). The majority of the remaining identified sources are X-ray binaries (7%, 31) and blazars/BL Lac objects (10%, 43). As part of this new edition of the Swift-BAT catalog, we release eight-channel spectra and monthly sampled light curves for each object in the online journal and at the Swift-BAT 105-month website.
Yu, Kui; Liu, Xiangyang; Zeng, Qun; Yang, Mingli; Ouyang, Jianying; Wang, Xinqin; Tao, Ye
2013-10-11
One thing in common: The formation of binary colloidal semiconductor nanocrystals from single- (M(EEPPh2 )n ) and dual-source precursors (metal carboxylates M(OOCR)n and phosphine chalcogenides such as E=PHPh2 ) is found to proceed through a common mechanism. For CdSe as a model system (31) P NMR spectroscopy and DFT calculations support a reaction mechanism which includes numerous metathesis equilibriums and Se exchange reactions. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The expected spins of gravitational wave sources with isolated field binary progenitors
NASA Astrophysics Data System (ADS)
Zaldarriaga, Matias; Kushnir, Doron; Kollmeier, Juna A.
2018-01-01
We explore the consequences of dynamical evolution of field binaries composed of a primary black hole (BH) and a Wolf-Rayet (WR) star in the context of gravitational wave (GW) source progenitors. We argue, from general considerations, that the spin of the WR-descendent BH will be maximal in a significant number of cases due to dynamical effects. In other cases, the spin should reflect the natal spin of the primary BH which is currently theoretically unconstrained. We argue that the three currently published LIGO systems (GW150914, GW151226, LVT151012) suggest that this spin is small. The resultant effective spin distribution of gravitational wave sources should thus be bi-model if this classic GW progenitor channel is indeed dominant. While this is consistent with the LIGO detections thus far, it is in contrast to the three best-measured high-mass X-ray binary (HMXB) systems. A comparison of the spin distribution of HMXBs and GW sources should ultimately reveal whether or not these systems arise from similar astrophysical channels.
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G
2015-10-01
Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
BamTools: a C++ API and toolkit for analyzing and managing BAM files
Barnett, Derek W.; Garrison, Erik K.; Quinlan, Aaron R.; Strömberg, Michael P.; Marth, Gabor T.
2011-01-01
Motivation: Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. Results: We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. Availability: BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools. Contact: barnetde@bc.edu PMID:21493652
The Maximum Mass of Rotating Strange Stars
NASA Astrophysics Data System (ADS)
Szkudlarek, M.; Gondek-Rosiń; ska, D.; Villain, L.; Ansorg, M.
2012-12-01
Strange quark stars are considered as a possible alternative to neutron stars as compact objects (e.g. Weber 2003). A hot compact star (a proto-neutron star or a strange star) born in a supernova explosion or a remnant of neutron stars binary merger are expected to rotate differentially and be important sources of gravitational waves. We present results of the first relativistic calculations of differentially rotating strange quark stars for broad ranges of degree of differential rotation and maximum densities. Using a highly accurate, relativistic code we show that rotation may cause a significant increase of maximum allowed mass of strange stars, much larger than in the case of neutron stars with the same degree of differential rotation. Depending on the maximum allowed mass a massive neutron star (strange star) can be temporarily stabilized by differential rotation or collapse to a black hole.
Automating tasks in protein structure determination with the clipper python module.
McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M; Hoh, Soon Wen; Jenkins, Huw T; Dodson, Eleanor; Cowtan, Kevin; Agirre, Jon
2018-01-01
Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
MICA: Multiple interval-based curve alignment
NASA Astrophysics Data System (ADS)
Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf
2018-01-01
MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analyses pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA
An object oriented fully 3D tomography visual toolkit.
Agostinelli, S; Paoli, G
2001-04-01
In this paper we present a modern object oriented component object model (COMM) C + + toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats but import/export of industry standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOMM 3 and NCSA HDF is available. At the time of writing built-in implemented data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame grabbing facility. A COMM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.
Tsokaros, Antonios; Ruiz, Milton; Paschalidis, Vasileios; Shapiro, Stuart L; Baiotti, Luca; Uryū, Kōji
2017-06-15
Targets for ground-based gravitational wave interferometers include continuous, quasiperiodic sources of gravitational radiation, such as isolated, spinning neutron stars. In this work, we perform evolution simulations of uniformly rotating, triaxially deformed stars, the compressible analogs in general relativity of incompressible, Newtonian Jacobi ellipsoids. We investigate their stability and gravitational wave emission. We employ five models, both normal and supramassive, and track their evolution with different grid setups and resolutions, as well as with two different evolution codes. We find that all models are dynamically stable and produce a strain that is approximately one-tenth the average value of a merging binary system. We track their secular evolution and find that all our stars evolve toward axisymmetry, maintaining their uniform rotation, rotational kinetic energy, and angular momentum profiles while losing their triaxiality.
On the Existence of t-Identifying Codes in Undirected De Bruijn Networks
2015-08-04
remaining cases remain open. Additionally, we show that the eccentricity of the undirected non-binary de Bruijn graph is n. 15. SUBJECT TERMS...Additionally, we show that the eccentricity of the undirected non-binary de Bruijn graph is n. 1 Introduction and Background Let x ∈ V (G), and...we must have d(y, x) = n + 2. In other words, Theorem 2.5 tells us the eccentricity of every node in the graph B(d, n) is n for d ≥ 3, and so the
Minimum Total-Squared-Correlation Quaternary Signature Sets: New Bounds and Optimal Designs
2009-12-01
50, pp. 2433-2440, Oct. 2004. [11] G. S. Rajappan and M. L. Honig, “Signature sequence adaptation for DS - CDMA with multipath," IEEE J. Sel. Areas...OPTIMAL DESIGNS 3671 [14] C. Ding, M. Golin, and T. Kløve, “Meeting the Welch and Karystinos- Pados bounds on DS - CDMA binary signature sets," Des., Codes...Cryp- togr., vol. 30, pp. 73-84, Aug. 2003. [15] V. P. Ipatov, “On the Karystinos-Pados bounds and optimal binary DS - CDMA signature ensembles," IEEE
NASA Technical Reports Server (NTRS)
1976-01-01
Satellite X-ray experiments and ground-based programs aimed at observation of X-ray binaries are discussed. Experiments aboard OAO-3, OSO-8, Ariel 5, Uhuru, and Skylab are included along with rocket and ground-based observations. Major topics covered are: Her X-1, Cyg X-3, Cen X-3, Cyg X-1, the transient source A0620-00, other possible X-ray binaries, and plans and prospects for future observational programs.
Characterizing X-ray Sources in the Rich Open Cluster NGC 7789 Using XMM-Newton
NASA Astrophysics Data System (ADS)
Farner, William; Pooley, David
2018-01-01
It is well established that globular clusters exhibit a correlation between their population of exotic binaries and their rate of stellar encounters, but little work has been done to characterize this relationship in rich open clusters. X-ray observations are the most efficient means to find various types of close binaries, and optical (and radio) identifications can provide secure source classifications. We report on an observation of the rich open cluster NGC 7789 using the XMM-Newton observatory. We present the X-ray and optical imaging data, source lists, and preliminary characterization of the sources based on their X-ray and multiwavelength properties.
HESS J1844-030: A New Gamma-Ray Binary?
NASA Astrophysics Data System (ADS)
McCall, Hannah; Errando, Manel
2018-01-01
Gamma-ray binaries are comprised of a massive, main-sequence star orbiting a neutron star or black hole that generates bright gamma-ray emission. Only six of these systems have been discovered. Here we report on a candidate stellar-binary system associated with the unidentified gamma-ray source HESS J1844-030, whose detection was revealed in the H.E.S.S. galactic plane survey. Analysis of 60 ks of archival Chandra data and over 100 ks of XMM-Newton data reveal a spatially associated X-ray counterpart to this TeV-emitting source (E>1012 eV), CXO J1845-031. The X-ray spectra derived from these exposures yields column density absorption in the range nH = (0.4 - 0.7) x 1022 cm-2, which is below the total galactic value for that part of the sky, indicating that the source is galactic. The flux from CXO J1845-031 increases with a factor of up to 2.5 in a 60 day timescale, providing solid evidence for flux variability at a confidence level exceeding 7 standard deviations. The point-like nature of the source, the flux variability of the nearby X-ray counterpart, and the low column density absorption are all indicative of a binary system. Once confirmed, HESS J1844-030 would represent only the seventh known gamma-ray binary, providing valuable data to advance our understanding of the physics of pulsars and stellar winds and testing high-energy astrophysical processes at timescales not present in other classes of objects.
17 CFR 232.11 - Definition of terms used in part 232.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., PDF, and static graphic files. Such code may be in binary (machine language) or in script form... Act means the Trust Indenture Act of 1939. Unofficial PDF copy. The term unofficial PDF copy means an...
NASA Astrophysics Data System (ADS)
Xia, Bing
Ultrafast optical signal processing, which shares the same fundamental principles of electrical signal processing, can realize numerous important functionalities required in both academic research and industry. Due to the extremely fast processing speed, all-optical signal processing and pulse shaping have been widely used in ultrafast telecommunication networks, photonically-assisted RFlmicro-meter waveform generation, microscopy, biophotonics, and studies on transient and nonlinear properties of atoms and molecules. In this thesis, we investigate two types of optical spectrally-periodic (SP) filters that can be fabricated on planar lightwave circuits (PLC) to perform pulse repetition rate multiplication (PRRM) and arbitrary optical waveform generation (AOWG). First, we present a direct temporal domain approach for PRRM using SP filters. We show that the repetition rate of an input pulse train can be multiplied by a factor N using an optical filter with a free spectral range that does not need to be constrained to an integer multiple of N. Furthermore, the amplitude of each individual output pulse can be manipulated separately to form an arbitrary envelope at the output by optimizing the impulse response of the filter. Next, we use lattice-form Mach-Zehnder interferometers (LF-MZI) to implement the temporal domain approach for PRRM. The simulation results show that PRRM with uniform profiles, binary-code profiles and triangular profiles can be achieved. Three silica based LF-MZIs are designed and fabricated, which incorporate multi-mode interference (MMI) couplers and phase shifters. The experimental results show that 40 GHz pulse trains with a uniform envelope pattern, a binary code pattern "1011" and a binary code pattern "1101" are generated from a 10 GHz input pulse train. Finally, we investigate 2D ring resonator arrays (RRA) for ultraf ast optical signal processing. We design 2D RRAs to generate a pair of pulse trains with different binary-code patterns simultaneously from a single pulse train at a low repetition rate. We also design 2D RRAs for AOWG using the modified direct temporal domain approach. To demonstrate the approach, we provide numerical examples to illustrate the generation of two very different waveforms (square waveform and triangular waveform) from the same hyperbolic secant input pulse train. This powerful technique based on SP filters can be very useful for ultrafast optical signal processing and pulse shaping.
Accretion states in X-ray binaries and their connection to GeV emission
NASA Astrophysics Data System (ADS)
Koerding, Elmar
Accretion onto compact objects is intrinsically a multi-wavelength phenomenon: it shows emis-sion components visible from the radio to GeV bands. In X-ray binaries one can well observe the evolution of a single source under changes of the accretion rate and thus study the interplay between the different emission components.I will introduce the phenomenology of X-ray bina-ries and their accretion states and present our current understanding of the interplay between the optically thin and optically thick part of the accretion flow and the jet.The recent detection of the Fermi Large Area Telescope of a variable high-energy source coinciding with the position of the x-ray binary Cygnus X-3 will be presented. Its identification with Cygnus X-3 has been secured by the detection of its orbital period in gamma rays, as well as the correlation of the LAT flux with radio emission from the relativistic jets of Cygnus X-3. This will be interpreted in the context of the accretion states of the X-ray binary.
Acceleration by pulsar winds in binary systems
NASA Technical Reports Server (NTRS)
Harding, Alice K.; Gaisser, T. K.
1990-01-01
In the absence of accretion torques, a pulsar in a binary system will spin down due to electromagnetic dipole radiation and the spin-down power will drive a wind of relativistic electron-positron pairs. Winds from pulsars with short periods will prevent any subsequent accretion but may be confined by the companion star atmosphere, wind, or magnetosphere to form a standing shock. The authors investigate the possibility of particle acceleration at such a pulsar wind shock and the production of very high energy (VHE) and ultra high energy (UHE) gamma rays from interactions of accelerated protons in the companion star's wind or atmosphere. They find that in close binaries containing active pulsars, protons will be shock accelerated to a maximum energy dependent on the pulsar spin-down luminosity. If a significant fraction of the spin-down power goes into particle acceleration, these systems should be sources of VHE and possibly UHE gamma rays. The authors discuss the application of the pulsar wind model to binary sources such as Cygnus X-3, as well as the possibility of observing VHE gamma-rays from known binary radio pulsar systems.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a bit-error rate is analyzed. In this scheme, the inner code C sub 1 is an (n sub 1, m sub 1l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C sub 2 is a linear block code with symbols from the Galois field GF (2 sup l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m sub 1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates say 10 to the -1 to 10 to the -2 power.
Optical and X-ray studies of Compact X-ray Binaries in NGC 5904
NASA Astrophysics Data System (ADS)
Bhalotia, Vanshree; Beck-Winchatz, Bernhard
2018-06-01
Due to their high stellar densities, globular cluster systems trigger various dynamical interactions, such as the formation of compact X-ray binaries. Stellar collisional frequencies have been correlated to the number of X-ray sources detected in various clusters and we hope to measure this correlation for NGC 5904. Optical fluxes of sources from archival HST images of NGC 5904 have been measured using a DOLPHOT PSF photometry in the UV, optical and near-infrared. We developed a data analysis pipeline to process the fluxes of tens of thousands of objects using awk, python and DOLPHOT. We plot color magnitude diagrams in different photometric bands in order to identify outliers that could be X-ray binaries, since they do not evolve the same way as singular stars. Aligning previously measured astrometric data for X-ray sources in NGC 5904 from Chandra with archival astrometric data from HST will filter out the outlier objects that are not X-ray producing, and provide a sample of compact binary systems that are responsible for X-ray emission in NGC 5904. Furthermore, previously measured X-ray fluxes of NGC 5904 from Chandra have also been used to measure the X-ray to optical flux ratio and identify the types of compact X-ray binaries responsible for the X-ray emissions in NGC 5904. We gratefully acknowledge the support from the Illinois Space Grant Consortium.
NASA Astrophysics Data System (ADS)
Bilke, Lars; Watanabe, Norihiro; Naumov, Dmitri; Kolditz, Olaf
2016-04-01
A complex software project in general with high standards regarding code quality requires automated tools to help developers in doing repetitive and tedious tasks such as compilation on different platforms and configurations, doing unit testing as well as end-to-end tests and generating distributable binaries and documentation. This is known as continuous integration (CI). A community-driven FOSS-project within the Earth Sciences benefits even more from CI as time and resources regarding software development are often limited. Therefore testing developed code on more than the developers PC is a task which is often neglected and where CI can be the solution. We developed an integrated workflow based on GitHub, Travis and Jenkins for the community project OpenGeoSys - a coupled multiphysics modeling and simulation package - allowing developers to concentrate on implementing new features in a tight feedback loop. Every interested developer/user can create a pull request containing source code modifications on the online collaboration platform GitHub. The modifications are checked (compilation, compiler warnings, memory leaks, undefined behaviors, unit tests, end-to-end tests, analyzing differences in simulation run results between changes etc.) from the CI system which automatically responds to the pull request or by email on success or failure with detailed reports eventually requesting to improve the modifications. Core team developers review the modifications and merge them into the main development line once they satisfy agreed standards. We aim for efficient data structures and algorithms, self-explaining code, comprehensive documentation and high test code coverage. This workflow keeps entry barriers to get involved into the project low and permits an agile development process concentrating on feature additions rather than software maintenance procedures.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.