Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Because sensors in wireless sensor networks are energy-limited and operate over harsh wireless channels, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews progress in distributed joint source-channel coding, which can address this need. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced in turn. We also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
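The allocation idea described above, choosing per layer a source rate and a channel-code rate so that total distortion is minimized under a fixed rate budget, can be sketched as a brute-force search. All rates, RCPC code rates, and distortion numbers below are invented placeholders standing in for the experimentally measured rate-distortion characteristics, not values from the paper:

```python
from itertools import product

# Hypothetical operational distortion tables: for each scalable layer,
# distortion (MSE) as a function of (source_rate_kbps, rcpc_rate).
# The numbers are illustrative, not the paper's measured characteristics.
DIST = {
    "base": {(32, "1/2"): 80.0, (32, "2/3"): 95.0, (48, "1/2"): 60.0, (48, "2/3"): 72.0},
    "enh":  {(16, "1/2"): 40.0, (16, "2/3"): 48.0, (32, "1/2"): 25.0, (32, "2/3"): 30.0},
}
# Channel-coding overhead factor for each RCPC rate.
OVERHEAD = {"1/2": 2.0, "2/3": 1.5}

def allocate(total_budget_kbps):
    """Brute-force search over all per-layer operating points that fit the
    budget, minimizing summed distortion (additive-distortion assumption)."""
    best = None
    for combo in product(DIST["base"].items(), DIST["enh"].items()):
        cost = sum(rate * OVERHEAD[code] for (rate, code), _ in combo)
        if cost > total_budget_kbps:
            continue
        d = sum(dist for _, dist in combo)
        if best is None or d < best[0]:
            best = (d, [pt for pt, _ in combo])
    return best

print(allocate(160))
```

With a generous budget both layers get their strongest operating points; as the budget shrinks, the search trades source rate against channel protection, which is the tradeoff the paper's algorithm optimizes.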
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper surveys recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations on real image and video decoding systems.
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
Maximum a posteriori joint source/channel coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Gibson, Jerry D.
1991-01-01
A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
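The core idea, a decoder that exploits residual redundancy in the source-coder output via MAP arguments, can be illustrated with a toy Viterbi-style MAP sequence detector. The Markov model and channel parameters below are illustrative assumptions, not the paper's:

```python
import math

# Toy MAP sequence detector in the spirit of the approach above: the source
# coder output retains residual redundancy, modeled here as a first-order
# Markov chain, and the decoder exploits it to correct errors from a binary
# symmetric channel. All probabilities below are illustrative assumptions.
TRANS = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # P(next bit | current bit)
PRIOR = {0: 0.5, 1: 0.5}
EPS = 0.05  # BSC crossover probability

def map_decode(received):
    """Viterbi search for argmax_x P(x) P(received | x), in the log domain."""
    def ch(bit, obs):  # channel likelihood P(obs | bit)
        return 1 - EPS if bit == obs else EPS
    metric = {b: math.log(PRIOR[b]) + math.log(ch(b, received[0])) for b in (0, 1)}
    path = {b: [b] for b in (0, 1)}
    for obs in received[1:]:
        new_metric, new_path = {}, {}
        for b in (0, 1):
            # Best predecessor under the combined source-memory + channel metric.
            prev = max((0, 1), key=lambda p: metric[p] + math.log(TRANS[p][b]))
            new_metric[b] = metric[prev] + math.log(TRANS[prev][b]) + math.log(ch(b, obs))
            new_path[b] = path[prev] + [b]
        metric, path = new_metric, new_path
    return path[max((0, 1), key=metric.get)]

# An isolated flip inside a run of zeros is corrected: under this Markov
# prior, a lone 1 is less likely than a single channel error.
print(map_decode([0, 0, 0, 1, 0, 0, 0]))
```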
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, of both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
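The girth-4 cancellation mentioned above can be checked mechanically on an exponent matrix. The sketch below uses the standard circulant-shift condition for length-4 cycles; the example matrices are illustrative and are not the jointly designed codes of the paper:

```python
from itertools import combinations

def has_girth4(exponent, P):
    """Check the classical girth-4 condition for a QC-LDPC exponent matrix
    with circulant size P: a length-4 cycle exists iff
        p[i1][j1] - p[i1][j2] + p[i2][j2] - p[i2][j1] == 0 (mod P)
    for some pair of rows i1 != i2 and columns j1 != j2.
    Entries of -1 denote all-zero blocks and never lie on a cycle."""
    rows, cols = len(exponent), len(exponent[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            e = [exponent[i1][j1], exponent[i1][j2], exponent[i2][j2], exponent[i2][j1]]
            if -1 in e:
                continue  # a zero block breaks the candidate cycle
            if (e[0] - e[1] + e[2] - e[3]) % P == 0:
                return True
    return False

# Illustrative exponent matrices (circulant size P = 5), not from the paper.
bad = [[0, 0], [0, 0]]           # identical shifts -> obvious 4-cycles
good = [[0, 0, 0], [0, 1, 2]]    # shift differences distinct mod 5
print(has_girth4(bad, 5), has_girth4(good, 5))
```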
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
NASA Astrophysics Data System (ADS)
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Quality-of-service (QoS) guarantees in real-time communication are critically important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding of multimedia data. However, the existing frame-by-frame approach, which includes the Moving Pictures Expert Group (MPEG) codecs, cannot be neglected because it is a standard. In this paper, we first design an MPEG transcoder which converts an MPEG coded stream into variable-rate packet sequences for use in our joint source/channel coding (JSCC) scheme. Second, we design a classification scheme to partition the packet stream into multiple substreams, each with its own QoS requirements. Finally, we design a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as bounded end-to-end jitter. We show by simulation and by real video experiments in a TCP/IP environment that our JSCC scheme outperforms two other popular techniques.
Prioritized packet video transmission over time-varying wireless channel using proactive FEC
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-12-01
Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered: one that minimizes the average video distortion of the nodes, and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate, and channel coding rate for the nodes of the visual sensor network.
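A minimal sketch of the mixed-integer Particle Swarm Optimization idea follows: powers stay continuous while coding rates are snapped to a discrete set before each fitness evaluation. The objective is a toy convex surrogate, not the paper's distortion model, and the parameter values are illustrative:

```python
import random

random.seed(1)

# Mixed-integer PSO sketch: continuous transmission power, discrete coding
# rate snapped to the nearest allowed value before each fitness evaluation.
# The objective is a toy stand-in for the average-distortion criterion.
POWERS = (0.0, 2.0)             # continuous power bounds (illustrative)
RATES = [0.25, 0.5, 0.75, 1.0]  # allowed discrete channel-coding rates

def snap(rate):
    return min(RATES, key=lambda r: abs(r - rate))

def distortion(power, rate):
    # Toy convex surrogate: penalizes low power, extreme rates, and power use.
    return (power - 1.3) ** 2 + (rate - 0.5) ** 2 + 0.1 * power

def pso(iters=60, swarm=20):
    parts = [[random.uniform(*POWERS), random.choice(RATES)] for _ in range(swarm)]
    vels = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in parts]
    gbest = min(pbest, key=lambda p: distortion(p[0], snap(p[1])))[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):
                # Standard inertia + cognitive + social velocity update.
                vels[i][d] = (0.7 * vels[i][d]
                              + 1.4 * random.random() * (pbest[i][d] - p[d])
                              + 1.4 * random.random() * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            p[0] = min(max(p[0], POWERS[0]), POWERS[1])  # clamp power to bounds
            if distortion(p[0], snap(p[1])) < distortion(pbest[i][0], snap(pbest[i][1])):
                pbest[i] = p[:]
                if distortion(p[0], snap(p[1])) < distortion(gbest[0], snap(gbest[1])):
                    gbest = p[:]
    return gbest[0], snap(gbest[1])

power, rate = pso()
print(round(power, 2), rate)
```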
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention in recent years, and the need for high-quality multichannel medical-signal compression in personal medical products is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference-channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference-channel selection is designed to select a channel for the entropy coding stage of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel lossless biosignal data compressor. PMID:25237900
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error-resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error-resilience modes, some bits are known to be either correct or in error. The positions of these bits are fed back to the channel decoder, and the log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby considerably reduce the decoding delay. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
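The feedback step described above, scaling the LLRs of bits flagged by the JPEG2000 error-resilience checks, might look like the following sketch. The weighting schedule is an illustrative assumption, not the empirically designed factor from the paper:

```python
# Sketch of the LLR update step: after a tentative LDPC decoding pass, the
# source decoder's error-resilience checks flag some bits as known-correct
# or known-erroneous, and their log-likelihood ratios are scaled before the
# next iteration. The schedule below (larger factor at lower channel SNR)
# is an illustrative assumption, not the paper's empirically derived factor.
def weight_factor(channel_snr_db):
    return 3.0 if channel_snr_db < 1.0 else 1.5

def update_llrs(llrs, feedback, channel_snr_db):
    """feedback[i] is +1 (source decoder says bit i is correct),
    -1 (known to be in error), or 0 (no information)."""
    w = weight_factor(channel_snr_db)
    out = []
    for llr, f in zip(llrs, feedback):
        if f == +1:
            out.append(llr * w)       # reinforce the current decision
        elif f == -1:
            out.append(-llr * w)      # flip the decision and reinforce
        else:
            out.append(llr)           # no side information: leave untouched
    return out

print(update_llrs([2.0, -0.5, 1.0], [+1, -1, 0], channel_snr_db=0.5))
```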
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method centers on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform-coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
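The bit-assignment step can be illustrated with the standard greedy marginal-return allocation, which under the convexity conditions mentioned above reaches the same optimum as a steepest-descent search. The variances below are illustrative, and the distortion model is the usual high-rate quantizer approximation rather than the paper's channel-optimized one:

```python
# Greedy marginal-return bit allocation across transform coefficients, a
# standard stand-in for a steepest-descent allocation (under convexity both
# reach the same optimum). Per-coefficient distortion is modeled with the
# high-rate approximation variance * 2^(-2b); the variances are illustrative.
def allocate_bits(variances, total_bits):
    bits = [0] * len(variances)
    dist = list(variances)  # current distortion per coefficient
    for _ in range(total_bits):
        # Give the next bit where it buys the largest distortion drop;
        # adding one bit divides that coefficient's distortion by 4.
        gains = [d - d / 4 for d in dist]
        k = gains.index(max(gains))
        bits[k] += 1
        dist[k] /= 4
    return bits

# Low-frequency coefficients (larger variance) receive more bits.
print(allocate_bits([16.0, 4.0, 1.0, 0.25], 6))
```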
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2012-01-01
Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model, resulting in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.
On the optimum signal constellation design for high-speed optical transport networks.
Liu, Tao; Djordjevic, Ivan B
2012-08-27
In this paper, we first describe an optimum signal constellation design algorithm, called MMSE-OSCD, which is optimum in the MMSE sense for a channel-capacity-achieving source distribution. Second, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) to further improve the performance of MMSE-OSCD, inspired by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The resulting coded-modulation scheme, in combination with polarization-multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Koval, Oleksiy; Deguillaume, Frederic; Pun, Thierry
2004-06-01
In this paper we address visual communications via printing channels from an information-theoretic point of view as communications with side information. The solution to this problem addresses important aspects of multimedia data processing, security and management, since printed documents are still the most common form of visual information representation. Two practical approaches to side information communications for printed documents are analyzed in the paper. The first approach represents a layered joint source-channel coding for printed documents. This approach is based on a self-embedding concept where information is first encoded assuming a Wyner-Ziv set-up and then embedded into the original data using a Gel'fand-Pinsker construction and taking into account properties of printing channels. The second approach is based on Wyner-Ziv and Berger-Flynn-Gray set-ups and assumes two separated communications channels where an appropriate distributed coding should be elaborated. The first printing channel is considered to be a direct visual channel for images ("analog" channel with degradations). The second "digital channel" with constrained capacity is considered to be an appropriate auxiliary channel. We demonstrate both theoretically and practically how one can benefit from this sort of "distributed paper communications".
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images into a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification.
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions—the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system offers graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms its digital counterparts over a wide signal-to-noise ratio (SNR) range.
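The chaotic analog coding idea above can be sketched in toy form. The snippet below is an illustrative stand-in, not the paper's exact construction: it uses the tent map (a symmetrized relative of the baker's map), encodes a source value as the map's iterates, and decodes by maximum-likelihood grid search; all parameters are assumptions.

```python
import numpy as np

def tent(x):
    # Tent map: a symmetrized relative of the baker's map stretch-and-fold action.
    return 1.0 - 2.0 * abs(x - 0.5)

def encode(x0, n):
    # Analog codeword: the source value followed by its first n-1 chaotic iterates.
    c, x = [], x0
    for _ in range(n):
        c.append(x)
        x = tent(x)
    return np.array(c)

def ml_decode(y, n, grid=4096):
    # ML decoding under AWGN: the seed whose codeword is closest to the observation.
    best, best_d = 0.0, np.inf
    for x0 in np.linspace(0.0, 1.0, grid, endpoint=False):
        d = np.sum((encode(x0, n) - y) ** 2)
        if d < best_d:
            best, best_d = x0, d
    return best

rng = np.random.default_rng(0)
x0 = 0.3141
y = encode(x0, 6) + 0.01 * rng.standard_normal(6)   # 6x bandwidth expansion, AWGN
xhat = ml_decode(y, 6)
```

Including the seed itself as the first codeword symbol resolves the tent map's x ↔ 1-x symmetry, which would otherwise produce the fold-error ambiguity the abstract mentions.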
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradations due to packet loss and to bit errors in the payload are quantitatively evaluated and their effects assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection against the channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
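As an aside on the RCPC component used above, rate compatibility means higher-rate codes are obtained by puncturing more bits from one mother code, with the surviving bits of a weaker code nested inside those of a stronger one. A minimal sketch with an assumed toy mother code (generators (7, 5) octal) and illustrative puncturing patterns, not the specific codes of the paper:

```python
def conv_encode(bits):
    # Rate-1/2 mother code: constraint length 3, generators (7, 5) in octal.
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)   # generator 111
        out.append(b ^ s2)        # generator 101
        s1, s2 = b, s1
    return out

def puncture(coded, pattern):
    # Transmit only the positions where the repeating pattern holds a 1.
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)] == 1]

info = [1, 0, 1, 1, 0, 0, 1, 0]
mother = conv_encode(info)                    # 16 coded bits -> rate 1/2
pattern_23 = [1, 1, 1, 0, 1, 1, 1, 0]         # keep 6 of 8 -> rate 2/3
pattern_45 = [1, 1, 1, 0, 1, 0, 1, 0]         # keep 5 of 8 -> rate 4/5
tx_23 = puncture(mother, pattern_23)
tx_45 = puncture(mother, pattern_45)
```

Rate compatibility here means every bit transmitted at rate 4/5 is also transmitted at rate 2/3, so protection can be increased incrementally by sending only the additional punctured bits.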
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error resilient video coding is jointly optimized with the 802.11a-like media access control and physical layers, which employ automatic repeat request and a rate-compatible punctured convolutional code over an additive white Gaussian noise channel, as well as with the channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, which is then solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system that uses the optimal number of slices with one that codes each picture as a single slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced by using the optimal number of slices per picture, especially at low signal-to-noise ratio.
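The slice-count trade-off described above can be illustrated with a brute-force search over a toy distortion model (all constants are hypothetical): more slices cost per-slice header overhead, but each confines the damage of a slice loss to a smaller picture region.

```python
def end_to_end_distortion(s, total_bits=50000, header_bits=200, p_slice_err=0.05):
    # Hypothetical model: per-slice headers shrink the payload (worse quantization),
    # while more slices confine a channel error to a smaller picture region.
    payload = total_bits - s * header_bits
    d_source = 1e9 / payload                  # toy rate-distortion curve D ~ c/R
    d_channel = p_slice_err * 1e5 / s         # expected loss ~ area of one slice
    return d_source + d_channel

best_s = min(range(1, 100), key=end_to_end_distortion)
one_slice = end_to_end_distortion(1)
optimal = end_to_end_distortion(best_s)
```

With these invented constants the optimum lands at a moderate slice count, and the one-slice-per-picture baseline is clearly worse, mirroring the paper's qualitative conclusion.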
Multi-channel feature dictionaries for RGB-D object recognition
NASA Astrophysics Data System (ADS)
Lan, Xiaodong; Li, Qiming; Chong, Mina; Song, Jian; Li, Jun
2018-04-01
Hierarchical matching pursuit (HMP) is a popular feature learning method for RGB-D object recognition. However, the feature representation with only one dictionary for the RGB channels in HMP does not capture sufficient visual information. In this paper, we propose a multi-channel feature dictionary based feature learning method for RGB-D object recognition. The feature extraction process in the proposed method consists of two layers, and the K-SVD algorithm is used to learn the dictionaries for sparse coding in both layers. In the first layer, we obtain features by max pooling the sparse codes of the pixels in a cell, and the features of the cells in a patch are concatenated to generate joint patch features. The joint patch features from the first layer are then used to learn the dictionary and sparse codes of the second layer. Finally, spatial pyramid pooling can be applied to the joint patch features of either layer to generate the final object features. Experimental results show that our method, with first- or second-layer features, achieves comparable or better performance than several published state-of-the-art methods.
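The first-layer pooling step described above (max pooling the sparse codes of the pixels within a cell, then concatenating the cell features into a joint patch feature) can be sketched with random stand-in sparse codes; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_pixels, n_atoms = 4, 9, 16        # 4 cells per patch, 9 pixels per cell
# Stand-in sparse codes of each pixel (roughly 80% of coefficients are zero).
codes = rng.random((n_cells, n_pixels, n_atoms))
codes *= rng.random((n_cells, n_pixels, n_atoms)) < 0.2

# Cell feature: max pooling over the absolute sparse codes of the cell's pixels.
cell_feats = np.abs(codes).max(axis=1)       # shape (n_cells, n_atoms)
# Joint patch feature: concatenation of the pooled cell features.
patch_feat = cell_feats.reshape(-1)          # shape (n_cells * n_atoms,)
```

In the actual method the codes would come from sparse coding against a K-SVD-learned joint dictionary rather than a random generator.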
Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems
NASA Astrophysics Data System (ADS)
Wu, Sau-Hsuan; Kuo, C.-C. Jay
2002-11-01
The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order statistics approaches, a sliding-window scheme using the expectation-maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with the system loading and the channel memory, and the situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using the soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when the channel gains attenuate close to zero.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed over discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
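Optimal allocation over discrete sets of source and channel coding rates, as described above, amounts to a search over a finite grid. A minimal sketch with a hypothetical rate-distortion curve and layer-loss probabilities (not the paper's measured characteristics):

```python
import itertools

source_rates = [100, 200, 300, 400]      # kbps options for the source coder
code_rates = [1/2, 2/3, 3/4, 8/9]        # channel code rate options
budget = 450                             # total transmitted kbps

loss = {1/2: 0.01, 2/3: 0.05, 3/4: 0.12, 8/9: 0.30}   # hypothetical loss rates

def expected_distortion(rs, rc):
    d_ok = 1e4 / rs                      # toy source rate-distortion curve
    d_lost = 50.0                        # distortion when the layer is lost
    return (1 - loss[rc]) * d_ok + loss[rc] * d_lost

feasible = [(rs, rc) for rs, rc in itertools.product(source_rates, code_rates)
            if rs / rc <= budget]        # transmitted rate = source rate / code rate
best_rs, best_rc = min(feasible, key=lambda p: expected_distortion(*p))
```

The search captures the essential tension: a stronger channel code lowers the loss probability but forces a lower source rate inside the same transmission budget.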
A power-efficient communication system between brain-implantable devices and external computers.
Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui
2007-01-01
In this paper, we propose a power efficient communication system for linking a brain-implantable device to an external system. For battery powered implantable devices, the processor and transmitter power should be reduced in order both to conserve battery power and to reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost of signal processing within the implantable device is greatly reduced by avoiding explicit source encoding: the raw data, which is highly correlated, is transmitted directly. At the receiver, a Markov chain source model is utilized to approximate and capture the correlation of the raw data. A turbo receiver algorithm is designed which iteratively couples the Markov chain source model with the LDGM decoder. Simulation results show that the proposed system can save 1 to 2.5 dB of transmission power.
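The receiver-side Markov chain correlation model mentioned above can be estimated directly from data. A toy sketch (simulated correlated bits; the resulting transition matrix is the kind of a priori information an iterative, turbo-style receiver could exploit):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p_stay = 20000, 0.9
# Simulated raw implant data: binary samples that stay in the same state w.p. 0.9.
bits = np.empty(n, dtype=int)
bits[0] = 0
flips = rng.random(n - 1) > p_stay
for i in range(1, n):
    bits[i] = bits[i - 1] ^ int(flips[i - 1])

# First-order Markov model of the source: empirical transition probabilities.
trans = np.zeros((2, 2))
for a, b in zip(bits[:-1], bits[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)
```

Because the data is highly correlated, the diagonal of the estimated transition matrix is close to the true stay probability, which is exactly the redundancy the joint decoder exploits in place of explicit source compression.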
Strategic and Tactical Decision-Making Under Uncertainty
2006-01-03
message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial... "Joint Decoding of LDPC Codes and Partial-Response Channels," IEEE Transactions on Communications, vol. 54, no. 7, pp. 1149-1153, 2006. P. Pakzad and V...
Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.
Majumder, Saikat; Verma, Shrish
2015-01-01
Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme in which cooperation is achieved using a common relay node that performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon/convolutional code. Simulation results show a significant improvement in performance compared to an existing scheme based on compound codes.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1978-01-01
Various communication systems were considered which are required to transmit both imaging data and a typically error-sensitive class of data called general science/engineering (gse) data over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. The system comparisons include an Advanced Imaging Communication System (AICS) that exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes, with a cyclic redundancy check (CRC) used to detect transmission errors. In the second scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and used to select the source and channel coding rates via Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions; both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
Cognitive Jointly Optimal Code-Division Channelization and Routing Over Cooperative Links
2014-04-01
List of Figures (excerpt): Fig. 1: Comparison between code-division channelization and FDM. Fig. 2: Secondary receiver SINR as a function of the iteration step... transmission percentage as a function of the number of active links under the cases rank(X′′) = 1 and > 1 (the study also includes the random code assignment... scheme); (b) Instantaneous output SINR of a primary signal against the primary SINR-QoS threshold SINRthPU (thick line) and instantaneous output SINR of
Mutual information of optical communication in phase-conjugating Gaussian channels
NASA Astrophysics Data System (ADS)
Schäfermeier, Clemens; Andersen, Ulrik L.
2018-03-01
In practical communication channels, the code words typically consist of Gaussian states, and the measurement strategy is often a Gaussian detector such as homodyning or heterodyning. We investigate the communication performance obtained using a phase-conjugated alphabet and joint Gaussian detection in a phase-insensitive amplifying channel. We find that a communication scheme consisting of a phase-conjugating alphabet of coherent states and a joint detection strategy significantly outperforms a standard coherent-state strategy based on individual detection. Moreover, we show that the performance can be further enhanced by using entanglement, and that the performance is completely independent of the gain of the phase-insensitively amplifying channel.
Joint Acoustic and Modulation Frequency
NASA Astrophysics Data System (ADS)
Atlas, Les; Shamma, Shihab A.
2003-12-01
There is considerable evidence that our perception of sound uses important features related to underlying signal modulations. This topic has been studied extensively via perceptual experiments, yet there are few, if any, well-developed signal processing methods that capitalize on or model these effects. We begin by summarizing evidence of the importance of modulation representations from psychophysical, physiological, and other sources. The concept of a two-dimensional joint acoustic and modulation frequency representation is proposed. A single sinusoidal amplitude modulator of a sinusoidal carrier is then used to illustrate the properties of an unconstrained and ideal joint representation. Additional constraints are required to remove or reduce undesired interference terms and to provide invertibility. It is then noted that these constraints would also apply to more general and complex cases of broader modulations and carriers. Applications in single-channel speaker separation and in audio coding illustrate the applicability of this joint representation. Other applications in signal analysis and filtering are suggested.
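One common way to approximate the two-dimensional representation proposed above is a second Fourier transform taken across time in each band of a magnitude spectrogram. A minimal numpy sketch using the paper's illustrative case of a single sinusoidal amplitude modulator on a sinusoidal carrier (frame sizes and frequencies are arbitrary choices, not the authors' exact construction):

```python
import numpy as np

fs = 8000
t = np.arange(2 * fs) / fs                         # two seconds of signal
# Single sinusoidal amplitude modulator (8 Hz) on a sinusoidal carrier (1000 Hz).
x = (1.0 + 0.8 * np.cos(2 * np.pi * 8 * t)) * np.cos(2 * np.pi * 1000 * t)

frame, hop = 256, 64
frames = np.stack([x[i:i + frame] * np.hanning(frame)
                   for i in range(0, len(x) - frame, hop)])
spec = np.abs(np.fft.rfft(frames, axis=1))         # acoustic-frequency axis
mod = np.abs(np.fft.rfft(spec - spec.mean(axis=0), axis=0))  # modulation axis

acoustic_bin = int(np.argmax(spec.mean(axis=0)))   # strongest acoustic band
mod_bin = int(np.argmax(mod[1:, acoustic_bin])) + 1
acoustic_hz = acoustic_bin * fs / frame
mod_hz = mod_bin * (fs / hop) / spec.shape[0]
```

The joint representation localizes the carrier on the acoustic-frequency axis and the 8 Hz envelope on the modulation-frequency axis; a magnitude-only analysis like this one is not invertible, which is exactly the constraint issue the abstract raises.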
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... source encoded using the MPEG-4 video codec. The source encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC... Clark, and J. M. Geist, "Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding," IEEE Transactions on
Study of information transfer optimization for communication satellites
NASA Technical Reports Server (NTRS)
Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.
1973-01-01
The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
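The assignment of channel code rates to the packets of an embedded bitstream, as performed above by a fast trellis-based algorithm, can be illustrated by exhaustive search on a toy problem: a packet is useful only if every earlier packet decoded, so protection should not increase along the stream. All rates and failure probabilities below are hypothetical:

```python
import itertools

rates = [1/2, 2/3, 3/4]                       # candidate channel code rates
p_fail = {1/2: 0.02, 2/3: 0.08, 3/4: 0.12}    # hypothetical packet failure rates
packet_bits, n_packets = 1000, 4              # fixed-length transmitted packets

def expected_useful_bits(assign):
    # Embedded bitstream: packet i contributes only if packets 1..i all decode.
    total, p_prefix = 0.0, 1.0
    for rc in assign:
        p_prefix *= 1 - p_fail[rc]
        total += p_prefix * rc * packet_bits  # rc * packet_bits source bits inside
    return total

best = max(itertools.product(rates, repeat=n_packets), key=expected_useful_bits)
```

With these numbers the optimizer protects the early packets more strongly, i.e., the selected code rates are non-decreasing along the stream, which is what makes a trellis/dynamic-programming search over the packet sequence effective.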
Telemetry advances in data compression and channel coding
NASA Technical Reports Server (NTRS)
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
Addressed in this paper is the dependence of telecommunication channel coding, forward error correction coding, and source data compression coding on integrated-circuit technology. Emphasis is placed on real-time high-speed Reed-Solomon (RS) decoding using full-custom VLSI technology. Performance curves for NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
NASA Astrophysics Data System (ADS)
Vu, Thang X.; Duhamel, Pierre; Chatzinotas, Symeon; Ottersten, Bjorn
2017-12-01
This work studies the performance of a cooperative network which consists of two channel-coded sources, multiple relays, and one destination. To achieve high spectral efficiency, we assume that a single time slot is dedicated to relaying. Conventional network-coding-based cooperation (NCC) selects the best relay, which uses network coding to serve the two sources simultaneously. The bit error rate (BER) performance of NCC with channel coding, however, is still unknown. In this paper, we first study the BER of NCC via a closed-form expression and analytically show that NCC only achieves a diversity order of two regardless of the number of available relays and the channel code. Second, we propose a novel partial relaying-based cooperation (PARC) scheme to improve the system diversity in the finite signal-to-noise ratio (SNR) regime. In particular, closed-form expressions for the system BER and diversity order of PARC are derived as a function of the operating SNR value and the minimum distance of the channel code. We analytically show that the proposed PARC achieves full (instantaneous) diversity order in the finite SNR regime, given that an appropriate channel code is used. Finally, numerical results verify our analysis and demonstrate a large SNR gain of PARC over NCC in the SNR region of interest.
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.
Han, Changcai; Yang, Jinsheng
2017-10-30
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. The one-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are firstly formed based on geographical locations of sensor source nodes, the impairment of inter-node wireless channels and moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related with the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance and the two-round cooperation exhibits better performance compared with the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for the wireless sensor networks with different moving trajectories and the variant data sizes.
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network
Han, Changcai; Yang, Jinsheng
2017-01-01
PMID:29084155
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length of the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In medical data transmission, it is vital to keep the distortion level under control, since in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
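The adaptive parity length described above (more protection near the RoI and for noisier channels) can be sketched as a simple schedule; the functional form and all constants here are invented for illustration, not taken from the paper:

```python
def parity_length(dist_to_roi, est_ber, max_parity=64):
    # Invented schedule: subblocks near the RoI and noisier channels get more
    # parity; distant background blocks get only a minimal amount.
    proximity = 1.0 / (1.0 + dist_to_roi)     # 1 at the RoI, ~0 far away
    noise = min(est_ber / 0.01, 1.0)          # saturate at an estimated BER of 1e-2
    return max(4, int(max_parity * proximity * (0.5 + 0.5 * noise)))
```

A blood-vessel subblock inside the RoI on a noisy channel would receive the full parity budget, while a distant background subblock on a clean channel would receive only the minimum.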
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access (TD-SCDMA) systems, increasing the system capacity by placing the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator. This article presents a novel low-complexity channel estimation method that relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: the least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results for the normalised mean square error show the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
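The benefit of truncated-SVD reduced-rank estimation can be sketched in a few lines: when the true joint channel matrix is rank deficient, discarding the weak singular components of a noisy full-rank estimate removes mostly noise. The sizes, rank, and noise level below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
rank, m = 2, 16
H = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, m))  # rank-2 channel

H_ls = H + 0.3 * rng.standard_normal((m, m))   # noisy full-rank (LS-style) estimate

# Reduced-rank estimate: truncated SVD keeps only the dominant components.
U, s, Vt = np.linalg.svd(H_ls, full_matrices=False)
H_rr = (U[:, :rank] * s[:rank]) @ Vt[:rank]

err_full = np.linalg.norm(H - H_ls)
err_rr = np.linalg.norm(H - H_rr)
```

The truncated estimate retains the two signal components while discarding the noise energy spread over the remaining dimensions, so its error is well below that of the full-rank estimate.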
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound on the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator, which quantifies the theoretical DoA estimation performance, is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
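The role of steering-vector coherence in the derived bound can be illustrated for a uniform linear array: closely spaced directions yield nearly coherent array responses and are therefore hard to separate by sparse recovery. A small sketch assuming a half-wavelength ULA (an illustration of the coherence quantity, not the paper's bound itself):

```python
import numpy as np

def steering(theta_deg, n=8, spacing=0.5):
    # Unit-norm far-field steering vector of a half-wavelength uniform linear array.
    k = np.arange(n)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg))) / np.sqrt(n)

def coherence(t1, t2):
    # |<a(t1), a(t2)>|: values near 1 mean the two directions are nearly
    # indistinguishable to a sparsity-based DoA estimator.
    return abs(np.vdot(steering(t1), steering(t2)))

close = coherence(10.0, 12.0)   # small angular separation: high coherence
far = coherence(10.0, 40.0)     # large separation: low coherence
```

This is the intuition behind the abstract's angular-separation constraint: sources must be far enough apart that the corresponding steering vectors are incoherent.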
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
NASA Astrophysics Data System (ADS)
Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan
2005-12-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design
2011-02-01
Coding: In this work, we use rate-compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's... [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389... source coding rate, a channel coding rate, and a power level to all nodes in the
Joint-layer encoder optimization for HEVC scalable extensions
NASA Astrophysics Data System (ADS)
Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong
2014-09-01
Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, built on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction, including texture and motion information generated from the base layer, is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve these problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate the bit resource more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) in lower layers that are referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bit count unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method improves coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have recently been widely applied in the field of image forensics. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering lost reference bits has remained open. This paper aims to show that, once the tampering location is known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate channel code design can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is divided among three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, the erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that the proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less of the bit-budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
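The key modeling step above, treating a located tamper as an erasure, can be sketched with the simplest possible erasure code: a single XOR parity block. This is a toy stand-in for the stronger channel code designed in the paper, and it can recover exactly one erased block:

```python
import numpy as np

rng = np.random.default_rng(5)
k, blk = 7, 32
reference = rng.integers(0, 2, size=(k, blk))   # reference bits, one row per block
parity = reference.sum(axis=0) % 2              # single XOR parity block (the "code")

# A tamper whose location the check bits have flagged acts as an erasure.
erased = 3
received = reference.copy()
received[erased] = -1                           # content unknown after tampering

# Erasure decoding: XOR the parity with every surviving block.
survivors = np.delete(received, erased, axis=0)
recovered = (parity + survivors.sum(axis=0)) % 2
```

Because the erasure location is known from the check bits, no error-locating capability is needed, which is precisely why erasure-oriented channel codes fit this problem.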
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A new class of convolutional codes, the unit-memory codes, was discovered; these are well suited to inner-system use because of their byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
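To make the inner-coding discussion concrete, here is a compact hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). This is a textbook example for illustration, not one of the specific codes found in the study.

```python
# Hard-decision Viterbi decoding of the classic rate-1/2, K=3
# convolutional code (generators 7 and 5 octal). Textbook example.

G = [0b111, 0b101]  # generator polynomials

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    INF = float("inf")
    metric = {0: 0}                      # best path metric per state
    paths = {0: []}
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                branch = sum(x != y for x, y in zip(expect, r))
                nxt = reg >> 1
                if m + branch < new_metric.get(nxt, INF):
                    new_metric[nxt] = m + branch
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)   # survivor with smallest metric
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
coded[3] ^= 1                            # one channel bit error
assert viterbi_decode(coded, len(msg)) == msg
```

The free distance of this code is 5, so single channel errors such as the one injected above are always corrected.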
Use of color-coded sleeve shutters accelerates oscillograph channel selection
NASA Technical Reports Server (NTRS)
Bouchlas, T.; Bowden, F. W.
1967-01-01
Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.
Multiple-access relaying with network coding: iterative network/channel decoding with imperfect CSI
NASA Astrophysics Data System (ADS)
Vu, Xuan-Thang; Renzo, Marco Di; Duhamel, Pierre
2013-12-01
In this paper, we study the performance of the four-node multiple-access relay channel with binary Network Coding (NC) in various Rayleigh fading scenarios. In particular, two relay protocols, decode-and-forward (DF) and demodulate-and-forward (DMF), are considered. In the first case, channel decoding is performed at the relay before NC and forwarding. In the second case, only demodulation is performed at the relay. The contributions of the paper are as follows: (1) two joint network/channel decoding (JNCD) algorithms, which take into account possible decoding errors at the relay, are developed for both DF and DMF relay protocols; (2) both perfect channel state information (CSI) and imperfect CSI at the receivers are studied. In addition, we propose a practical method to forward the relay's error characterization to the destination (quantization of the BER), which results in a fully practical scheme. (3) We show by simulation that the number of pilot symbols only affects the coding gain but not the diversity order, and that the quantization accuracy affects both coding gain and diversity order. Moreover, when compared with recent results using the DMF protocol, our proposed DF protocol algorithm shows an improvement of 4 dB in fully interleaved Rayleigh fading channels and 0.7 dB in block Rayleigh fading channels.
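At the packet level, binary network coding at the relay is simply a bitwise XOR of the two source packets; the destination recovers either packet from the other plus the relayed combination. A minimal sketch of that core operation (decoding errors, CSI, and modulation are all omitted):

```python
# Binary network coding at the relay: the relay forwards the XOR of the
# two source packets; a destination that heard one packet directly can
# recover the other from the XOR. (Error handling and CSI omitted.)

def nc_combine(pkt_a, pkt_b):
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

pkt_a = b"\x0f\xaa"
pkt_b = b"\xf0\x55"
relayed = nc_combine(pkt_a, pkt_b)       # transmitted by the relay

# Destination heard pkt_a directly and the XOR from the relay:
assert nc_combine(relayed, pkt_a) == pkt_b
```

The JNCD algorithms in the paper address what happens when the relay's own decision (and hence the XOR) may be wrong, which this sketch does not model.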
Toward Wireless Health Monitoring via an Analog Signal Compression-Based Biosensing Platform.
Zhao, Xueyuan; Sadhu, Vidyasagar; Le, Tuan; Pompili, Dario; Javanmard, Mehdi
2018-06-01
Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this paper. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by analog joint source-channel coding (AJSCC). Two circuit designs are discussed: the stacked voltage-controlled voltage source (VCVS) design with a fixed number of levels, and an improved design which supports a flexible number of AJSCC levels. Experimental results are presented on the wireless biosensor prototype, composed of printed-circuit-board realizations of the stacked-VCVS design. Furthermore, circuit simulation and wireless link simulation results are presented for the improved design. Results indicate that the proposed wireless biosensor is well suited to sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power, low-cost wireless continuous health monitoring applications.
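The AJSCC idea of folding two analog sources into one can be sketched with the rectangular (Shannon-style) mapping: one source is quantized to L levels, the other stays analog, and the two are combined into a single bounded value. The parameters below are illustrative, not the prototype's circuit values.

```python
# Sketch of the rectangular mapping behind AJSCC: x2 is quantized to L
# levels, x1 stays analog, and both are folded into one analog value.
# L and the mid-point reconstruction are illustrative assumptions.

L = 8          # number of AJSCC levels (the circuit designs vary this)

def ajscc_encode(x1, x2):
    """x1, x2 in [0, 1); returns one analog value in [0, L)."""
    level = min(int(x2 * L), L - 1)
    # reverse direction on odd levels so the mapping stays continuous
    frac = x1 if level % 2 == 0 else 1.0 - x1
    return level + frac

def ajscc_decode(y):
    level = min(int(y), L - 1)
    frac = y - level
    x1 = frac if level % 2 == 0 else 1.0 - frac
    x2 = (level + 0.5) / L      # mid-point reconstruction
    return x1, x2

x1_hat, x2_hat = ajscc_decode(ajscc_encode(0.3, 0.62))
assert abs(x1_hat - 0.3) < 1e-9          # analog source preserved
assert abs(x2_hat - 0.62) <= 0.5 / L     # quantized source within a bin
```

The continuous (zig-zag) fold is what makes the mapping realizable by stacked analog voltage sources rather than a digital codec.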
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
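The Gilbert-Elliott loss model referenced above is a two-state Markov chain with a different loss probability in each state; a minimal simulator follows, with arbitrary illustrative transition and loss probabilities rather than the paper's fitted values.

```python
# Minimal Gilbert-Elliott two-state packet-loss simulator. All
# probabilities here are illustrative, not the paper's fitted values.

import random

def gilbert_elliott(n, p_g2b=0.05, p_b2g=0.4,
                    loss_good=0.01, loss_bad=0.5, seed=1):
    rng = random.Random(seed)
    state = "good"
    losses = []
    for _ in range(n):
        loss_prob = loss_good if state == "good" else loss_bad
        losses.append(rng.random() < loss_prob)
        flip = p_g2b if state == "good" else p_b2g
        if rng.random() < flip:
            state = "bad" if state == "good" else "good"
    return losses

losses = gilbert_elliott(10000)
rate = sum(losses) / len(losses)
# Steady-state loss rate lies between the good- and bad-state rates.
assert 0.01 < rate < 0.5
```

Because losses cluster in the bad state, the model produces the bursty loss patterns that make channel-code selection interact with source-coding rate, which is exactly the coupling the optimization in the paper exploits.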
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method usually used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from multi-channel EEG signals, respectively. Multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively, while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
End-to-end imaging information rate advantages of various alternative communication systems
NASA Technical Reports Server (NTRS)
Rice, R. F.
1982-01-01
The efficiency of various deep space communication systems which are required to transmit both imaging and a typically error sensitive class of data called general science and engineering (gse) are compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS) which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as two orders of magnitude increase in imaging information rate compared to a single channel uncoded, uncompressed system while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts as well as efforts to apply them are provided in support of the system analysis.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Towards Holography via Quantum Source-Channel Codes.
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-14
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
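The RD-optimized protection assignment described above can be sketched as a greedy marginal-analysis loop: given a few candidate protection levels per coding unit, spend the channel-bit budget where it buys the largest expected-distortion reduction per bit. The candidate tables below are made-up numbers for illustration; the paper uses measured RD characteristics and RCPC rates.

```python
# Greedy sketch of RD-optimized unequal error protection: repeatedly
# upgrade the unit whose next protection level gives the best
# distortion-reduction-per-bit, until the budget is exhausted.
# The option tables are illustrative, not measured RD data.

def allocate_protection(units, budget):
    """units: per unit, a list of (extra_bits, expected_distortion)
    options ordered by increasing protection; returns the chosen
    option index for each unit."""
    choice = [0] * len(units)
    spent = sum(opts[0][0] for opts in units)
    while True:
        best = None
        for i, opts in enumerate(units):
            j = choice[i]
            if j + 1 < len(opts):
                db = opts[j + 1][0] - opts[j][0]      # extra parity bits
                dd = opts[j][1] - opts[j + 1][1]      # distortion saved
                if dd > 0 and spent + db <= budget:
                    gain = dd / db
                    if best is None or gain > best[0]:
                        best = (gain, i, db)
        if best is None:
            return choice
        _, i, db = best
        choice[i] += 1
        spent += db

# Two units: (parity bits, expected distortion) per protection level.
units = [[(0, 50.0), (100, 20.0), (200, 15.0)],
         [(0, 30.0), (100, 25.0)]]
assert allocate_protection(units, budget=200) == [2, 0]
```

Under convex RD option curves this greedy rule matches the Lagrangian solution; the paper additionally adapts the tables as channel conditions change over time.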
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Given that the space, time and frequency resources of an underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and adopting time-frequency coded cooperative transmission and the D-PSO algorithm. PMID:26343660
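The particle swarm optimization underlying the D-PSO detector can be shown with a minimal PSO loop minimizing a toy quadratic; the inertia and acceleration constants are common textbook choices, not the paper's detector settings.

```python
# Minimal particle swarm optimization (PSO) on a toy quadratic, to show
# the inertia + cognitive + social velocity update that D-PSO builds on.
# Constants are textbook choices, not the paper's settings.

import random

def pso(f, dim, n_particles=20, iters=200, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=4)
assert sphere(best) < 1e-2
```

In a multiuser detector the objective would instead be a likelihood metric over candidate bit vectors, with the swarm searching the discrete hypothesis space.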
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
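The sparse-coding computation at the heart of such models can be sketched with a few ISTA (iterative shrinkage-thresholding) steps on a toy real-valued dictionary; this is a generic sparse-coding sketch, not the paper's complex-valued two-layer model.

```python
# Generic sparse coding via ISTA on a toy dictionary: gradient step on
# the reconstruction error followed by soft thresholding. Not the
# paper's complex-valued hierarchical model; illustration only.

import numpy as np

def ista(D, y, lam=0.05, iters=500):
    Lc = np.linalg.norm(D, 2) ** 2          # Lipschitz const. of gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - y)
        z = a - grad / Lc
        a = np.sign(z) * np.maximum(np.abs(z) - lam / Lc, 0.0)  # soft thr.
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
a_true = np.zeros(64)
a_true[[3, 17, 40]] = [1.0, -0.8, 0.6]      # 3-sparse ground truth
y = D @ a_true
a_hat = ista(D, y)
# The recovered code is sparse: few coefficients are far from zero.
assert np.sum(np.abs(a_hat) > 1e-3) < 32
```

Learning the dictionary itself (the model's "basis functions") alternates such coding steps with dictionary updates driven by natural-stimulus statistics.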
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for the excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by imposing significant structure on the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
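Multistage (residual) vector quantization, the structure studied above, quantizes each stage's residual with its own small codebook, so storage and search cost grow linearly in the number of stages rather than exponentially in the total rate. A toy sketch with random fixed codebooks (real designs train the stages jointly, as the thesis does):

```python
# Multistage (residual) VQ: each stage quantizes the residual left by
# the previous stage. Codebooks here are random toys; real designs
# train them jointly.

import numpy as np

def msvq_encode(x, codebooks):
    indices, residual = [], np.asarray(x, dtype=float)
    for cb in codebooks:
        idx = int(np.argmin(np.sum((cb - residual) ** 2, axis=1)))
        indices.append(idx)
        residual = residual - cb[idx]
    return indices

def msvq_decode(indices, codebooks):
    return sum(cb[i] for cb, i in zip(codebooks, indices))

rng = np.random.default_rng(2)
stage1 = rng.standard_normal((16, 4))          # coarse codebook
stage2 = 0.25 * rng.standard_normal((16, 4))   # refines the residual
x = rng.standard_normal(4)
idx = msvq_encode(x, [stage1, stage2])
x_hat = msvq_decode(idx, [stage1, stage2])
# Two stages: 2*16 distance computations vs 256 for one flat codebook
# at the same total rate (8 bits).
assert np.allclose(x_hat, stage1[idx[0]] + stage2[idx[1]])
```

The tree-search and joint-design procedures in the thesis improve on this greedy stage-by-stage search, which is only locally optimal.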
Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm
NASA Astrophysics Data System (ADS)
Sarika, G.; Unnithan, Harikuttan; Peter, Smitha
2011-10-01
When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values, but does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section. Using local statistics based on the image, it is then recovered. Here the decoder gets only a lower-resolution version of the image. In addition, this method provides partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and less computational complexity.
NASA Astrophysics Data System (ADS)
Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.
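The one-tap MMSE-FDE step the paper builds on weights each frequency bin by W_k = H_k* / (|H_k|² + 1/SNR). A toy single-user numpy sketch (no spreading or despreading, which is the paper's actual contribution):

```python
# One-tap MMSE frequency-domain equalization over a cyclic (CP) channel:
# W_k = conj(H_k) / (|H_k|^2 + 1/SNR). Single-user toy; no spreading.

import numpy as np

rng = np.random.default_rng(4)
N = 64
symbols = rng.choice([-1.0, 1.0], size=N)          # BPSK block
h = np.array([0.9, 0.4, 0.2])                      # channel impulse resp.
H = np.fft.fft(h, N)

# With a cyclic prefix, the linear channel acts as circular convolution:
received = np.fft.ifft(np.fft.fft(symbols) * H)

snr = 1e6                                          # near-noiseless case
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)      # MMSE-FDE weights
equalized = np.fft.ifft(np.fft.fft(received) * W).real

assert np.all(np.sign(equalized) == symbols)       # all bits recovered
```

The 1/SNR term keeps the weights bounded in deep fades; the residual inter-chip interference this leaves behind in spread systems is what the paper's joint FDE-and-despreading suppresses.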
Direct simulation Monte Carlo method for gas flows in micro-channels with bends with added curvature
NASA Astrophysics Data System (ADS)
Tisovský, Tomáš; Vít, Tomáš
Gas flows in micro-channels are simulated using dsmcFOAM, an open-source Direct Simulation Monte Carlo (DSMC) code for general rarefied gas flow applications written within the framework of the open-source C++ toolbox OpenFOAM. The aim of this paper is to investigate the flow in a micro-channel with a bend with added curvature. Results are compared with flows in a channel without added curvature and in an equivalent straight channel. The effects of micro-channel bends were already thoroughly investigated by White et al. The geometry proposed by White is also used here for reference.
NASA Astrophysics Data System (ADS)
Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.
2018-05-01
A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the degrading probability of predicting the source of the qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design as well as ease of modifying the number of users are further exceptional qualities presented by the code, in contrast to the Optical Orthogonal Codes (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit-memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit-memory codes. The effect of phase-locked loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
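Syndrome source coding, mentioned above, compresses a sparse binary source by transmitting only its syndrome under a parity-check matrix. A miniature example with the Hamming(7,4) code, whose 3-bit syndrome losslessly represents any 7-bit block of weight at most one; this is a textbook illustration, not the report's weight-and-error-locations scheme.

```python
# Syndrome source coding in miniature: the 3-bit Hamming(7,4) syndrome
# losslessly represents any 7-bit block with at most one "1",
# compressing 7 bits to 3. Textbook illustration of the technique.

import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # Hamming(7,4) parity-check matrix

def compress(block):                     # block: 7 bits, weight <= 1
    return H @ block % 2                 # 3-bit syndrome

def decompress(syndrome):
    # Columns of H spell 1..7 in binary, so the syndrome *is* the
    # position of the single "1" (0 means the all-zero block).
    pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])
    block = np.zeros(7, dtype=int)
    if pos:
        block[pos - 1] = 1
    return block

block = np.array([0, 0, 0, 0, 1, 0, 0])  # a single "1" at position 5
s = compress(block)
assert list(decompress(s)) == list(block)
```

The same duality (a channel decoder's error-pattern lookup reused as a source decompressor) is what lets error-correcting code machinery perform data compression.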
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
A Wideband Satcom Based Avionics Network with CDMA Uplink and TDM Downlink
NASA Technical Reports Server (NTRS)
Agrawal, D.; Johnson, B. S.; Madhow, U.; Ramchandran, K.; Chun, K. S.
2000-01-01
The purpose of this paper is to describe some key technical ideas behind our vision of a future satcom-based digital communication network for avionics applications. The key features of our design are as follows: (a) Packetized transmission to permit efficient use of system resources for multimedia traffic; (b) A time division multiplexed (TDM) satellite downlink whose physical layer is designed to operate the satellite link at maximum power efficiency. We show how powerful turbo codes (invented originally for linear modulation) can be used with nonlinear constant envelope modulation, thus permitting the satellite amplifier to operate in a power-efficient nonlinear regime; (c) A code division multiple access (CDMA) satellite uplink, which permits efficient access to the satellite from multiple asynchronous users. Closed-loop power control is difficult for bursty packetized traffic, especially given the large round-trip delay to the satellite. We show how adaptive interference suppression techniques can be used to deal with the ensuing near-far problem; (d) Joint source-channel coding techniques are required both at the physical and the data transport layer to optimize the end-to-end performance. We describe a novel approach to multiple description image encoding at the data transport layer in this paper.
Joint Publication 3-31. Command and Control for Joint Land Operations
2010-06-29
task force] FALCON." Admiral James Ellis, Commander, Joint Task Force NOBLE ANVIL during Operation ALLIED FORCE in letter correspondence to RAND...beneficial effect on the ground campaign." During the campaign, "Army and Marine artillery were used interchangeably." SOURCE: Lieutenant...consolidates, prioritizes, and forwards ultra-high frequency tactical satellite requirements to the JFC for channel allocation. k. Establishes, supervises
A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers
NASA Astrophysics Data System (ADS)
Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair
We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
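The nested scalar quantization at the heart of such Wyner-Ziv coders can be illustrated with a minimal sketch: the encoder transmits only a coset index, and the decoder resolves the ambiguity using the side information. The step size and number of cosets below are arbitrary illustrative choices, not the paper's design:

```python
import numpy as np

def nsq_encode(x, step=1.0, cosets=4):
    """Quantize x with step size `step`, but transmit only the coset index."""
    q = int(np.round(x / step))
    return q % cosets  # the coset index is all that goes over the channel

def nsq_decode(coset, y, step=1.0, cosets=4):
    """Pick the reconstruction level in the coset closest to side information y."""
    base = int(np.round(y / step))
    candidates = [q for q in range(base - cosets, base + cosets + 1)
                  if q % cosets == coset]
    q_hat = min(candidates, key=lambda q: abs(q * step - y))
    return q_hat * step

x, y = 3.2, 3.0            # source sample and correlated decoder side information
idx = nsq_encode(x)        # only 2 bits (one of 4 cosets) are transmitted
x_hat = nsq_decode(idx, y) # side information disambiguates the coset
```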
Schwartz, Mathew; Dixon, Philippe C
2018-01-01
The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation for the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross-platform, and high-performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high-performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower body joint angle difference of less than 10^-5 degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature.
The pyCGM code is provided in open source format and available at https://github.com/cadop/pyCGM.
PMID:29293565
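A systematic subject-measurement sensitivity scan of the kind pyCGM enables can be sketched as follows; the regression constants, nominal leg length, and function name are illustrative placeholders, not pyCGM's actual model or API:

```python
# Hedged sketch: perturb a subject measurement (leg length, mm) and watch a
# regression-based hip joint centre offset move, mimicking the paper's
# systematic sensitivity scan. The constants below are illustrative only.

def hip_joint_centre_offset(leg_length_mm):
    # Davis-style linear regression on leg length (illustrative coefficients)
    return 0.115 * leg_length_mm - 15.3

nominal = 900.0
errors = []
for pct in (-5, 0, 5):                      # simulate +/-5% measurement error
    measured = nominal * (1 + pct / 100.0)
    errors.append(hip_joint_centre_offset(measured)
                  - hip_joint_centre_offset(nominal))

# The offset error grows linearly and symmetrically with leg-length error.
```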
Distributed polar-coded OFDM based on Plotkin's construction for half duplex wireless communication
NASA Astrophysics Data System (ADS)
Umar, Rahim; Yang, Fengfan; Mughal, Shoaib; Xu, HongJun
2018-07-01
A Plotkin-based polar-coded orthogonal frequency division multiplexing (P-PC-OFDM) scheme is proposed and its bit error rate (BER) performance over additive white Gaussian noise (AWGN), frequency-selective Rayleigh, Rician, and Nakagami-m fading channels has been evaluated. The considered Plotkin construction possesses a parallel split in its structure, which motivated us to extend the proposed P-PC-OFDM scheme to a coded cooperative scenario. As the relay's effective collaboration has always been pivotal in the design of cooperative communication, an efficient selection criterion for choosing the information bits has been incorporated at the relay node. To assess the BER performance of the proposed cooperative scheme, we have also upgraded the conventional polar-coded cooperative scheme in the context of OFDM as an appropriate benchmark. Monte Carlo simulation results reveal that the proposed Plotkin-based polar-coded cooperative OFDM scheme convincingly outperforms the conventional polar-coded cooperative OFDM scheme by 0.5-0.6 dB over the AWGN channel. This prominent gain in BER performance is made possible by the bit-selection criterion and the joint successive cancellation decoding adopted at the relay and the destination nodes, respectively. Furthermore, the proposed coded cooperative schemes outperform their corresponding non-cooperative schemes by a gain of 1 dB under identical conditions.
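The Plotkin (u | u+v) construction referenced above, which also underlies the recursive definition of polar codes, can be stated in a few lines; this is the generic construction, not the paper's full P-PC-OFDM encoder:

```python
import numpy as np

def plotkin(u, v):
    """Plotkin's (u | u+v) construction: build a length-2N codeword from two
    length-N codewords by concatenating u with the bitwise sum u XOR v."""
    u, v = np.asarray(u) % 2, np.asarray(v) % 2
    return np.concatenate([u, (u + v) % 2])

u = [1, 0, 1, 1]
v = [0, 1, 1, 0]
c = plotkin(u, v)   # length doubles; the parallel split (u vs v) is what the
                    # paper exploits to divide work between source and relay
```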
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems
NASA Astrophysics Data System (ADS)
de Almeida, André LF; Favier, Gérard
2013-12-01
This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
Discrimination of correlated and entangling quantum channels with selective process tomography
Dumitrescu, Eugene; Humble, Travis S.
2016-10-10
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
Decoder synchronization for deep space missions
NASA Technical Reports Server (NTRS)
Statman, J. I.; Cheung, K.-M.; Chauvin, T. H.; Rabkin, J.; Belongie, M. L.
1994-01-01
The Consultative Committee for Space Data Systems (CCSDS) recommends that space communication links employ a concatenated, error-correcting, channel-coding system in which the inner code is a convolutional (7,1/2) code and the outer code is a (255,223) Reed-Solomon code. The traditional implementation is to perform the node synchronization for the Viterbi decoder and the frame synchronization for the Reed-Solomon decoder as separate, sequential operations. This article discusses a unified synchronization technique that is required for deep space missions that have data rates and signal-to-noise ratios (SNRs) that are extremely low. This technique combines frame synchronization in the bit and symbol domains with traditional accumulated-metric growth techniques to establish joint frame and node synchronization. A variation on this technique is used for the Galileo spacecraft on its Jupiter-bound mission.
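One ingredient of the synchronization problem above, bit-domain frame synchronization, amounts to sliding a known marker over the received stream and picking the best-matching offset. The sketch below uses the real 32-bit CCSDS attached sync marker (0x1ACFFC1D); the stream length, marker position, error location, and seed are illustrative:

```python
import numpy as np

# 32-bit CCSDS attached sync marker, expanded to a bit vector
ASM = np.array([int(b) for b in f"{0x1ACFFC1D:032b}"])

def find_frame(bits):
    """Return the offset where the sync marker disagrees with the stream in
    the fewest positions (robust to a few channel bit errors)."""
    bits = np.asarray(bits)
    costs = [np.sum(bits[i:i + len(ASM)] != ASM)
             for i in range(len(bits) - len(ASM) + 1)]
    return int(np.argmin(costs))

rng = np.random.default_rng(0)
stream = rng.integers(0, 2, 200)   # random channel bits
stream[50:82] = ASM                # embed the marker at offset 50
stream[60] ^= 1                    # inject one channel bit error
offset = find_frame(stream)        # recovers offset 50 despite the error
```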
Performance analysis of simultaneous dense coding protocol under decoherence
NASA Astrophysics Data System (ADS)
Huang, Zhiming; Zhang, Cai; Situ, Haozhen
2017-09-01
The simultaneous dense coding (SDC) protocol is useful in designing quantum protocols. We analyze the performance of the SDC protocol under the influence of noisy quantum channels. Six kinds of paradigmatic Markovian noise along with one kind of non-Markovian noise are considered. The joint success probability of both receivers and the success probabilities of one receiver are calculated for three different locking operators. Some interesting properties have been found, such as invariance and symmetry. Among the three locking operators we consider, the SWAP gate is most resistant to noise and results in the same success probabilities for both receivers.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
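One iteration of the LBG (generalized Lloyd) codebook design mentioned above can be sketched as follows, on a toy two-cluster data set; the data and initial codebook are illustrative:

```python
import numpy as np

def lbg_step(data, codebook):
    """One LBG iteration: assign each vector to its nearest codeword, then
    move every codeword to the centroid of the vectors assigned to it."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    return np.array([data[nearest == k].mean(axis=0)
                     if np.any(nearest == k) else codebook[k]
                     for k in range(len(codebook))])

data = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
cb = np.array([[0.0, 0.1], [1.0, 0.9]])   # initial codebook (assumed)
cb = lbg_step(data, cb)                   # codewords move to cluster centroids
```

Iterating this step to convergence yields the LBG codebook; the search-complexity and codebook-mismatch drawbacks noted in the abstract stem from the nearest-neighbor search and from training on a fixed input class.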
Motion-related resource allocation in dynamic wireless visual sensor network environments.
Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E
2014-01-01
This paper investigates quality-driven cross-layer optimization for resource allocation in direct-sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff between quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scene and re-allocate the resources whenever this is dictated by changes in the amount of motion, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed-Solomon/Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed-Solomon coder and interleaver, followed by a convolutional encoder. The received data are first decoded by a Viterbi decoder, followed by a Reed-Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data are processed to reconstruct an approximation of the original data-producing condition or images.
Design of joint source/channel coders
NASA Technical Reports Server (NTRS)
1991-01-01
The need to transmit large amounts of data over a band limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. The papers presented within this document attempt to deal with these problems from a number of different approaches.
Report of Research for the Joint Services Electronics Program
1989-11-01
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The coder used for any given image region is selected through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
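The threshold-driven coder selection in MBC can be sketched roughly: try a low-rate DCT coder first and escalate only if the block's distortion exceeds a threshold. The coefficient-truncation coders and the threshold value below are illustrative, not the paper's exact coder mixture:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows index frequency, columns index samples)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

def code_block(block, keep, C):
    """'Code' a block by keeping only a keep-by-keep low-frequency square."""
    coeffs = C @ block @ C.T          # 2-D DCT
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1
    return C.T @ (coeffs * mask) @ C  # inverse DCT of truncated coefficients

def mixture_code(block, threshold=25.0):
    """Escalate from low-rate to high-rate coders until distortion is acceptable."""
    C = dct_matrix(block.shape[0])
    for keep in (2, 4, 8):            # cheap coder first, expensive coder last
        rec = code_block(block, keep, C)
        if np.mean((block - rec) ** 2) <= threshold:
            return keep, rec
    return keep, rec

flat = np.full((8, 8), 100.0)
keep_flat, _ = mixture_code(flat)     # smooth block: the cheapest coder suffices
```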
NASA Astrophysics Data System (ADS)
Nazrul Islam, Mohammed; Karim, Mohammad A.; Vijayan Asari, K.
2013-09-01
Protecting and processing confidential information, such as personal identification and biometrics, remains a challenging task for further research and development. A new methodology to ensure enhanced security of information in images through the use of encryption and multiplexing is proposed in this paper. We use an orthogonal encoding scheme to encode multiple pieces of information independently and then combine them together to save storage space and transmission bandwidth. The encoded and multiplexed image is encrypted employing multiple reference-based joint transform correlation. The encryption key is fed into four channels which are relatively phase-shifted by different amounts. The input image is introduced to all the channels and then Fourier transformed to obtain joint power spectra (JPS) signals. The resultant JPS signals are again phase-shifted and then combined to form a modified JPS signal which yields the encrypted image after an inverse Fourier transformation. The proposed cryptographic system makes the confidential information absolutely inaccessible to any unauthorized intruder, while allowing retrieval of the information by the respective authorized recipient without any distortion. The proposed technique is investigated through computer simulations under different practical conditions in order to verify its overall robustness.
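The orthogonal encoding and multiplexing idea, independent encoding of multiple items that are later recovered by correlation, can be illustrated with Walsh codes; this is a generic discrete sketch, not the paper's optical joint-transform-correlator implementation:

```python
import numpy as np

# Two orthogonal Walsh codes: their inner product is zero, so streams spread
# by them can share one composite signal without interfering.
w1 = np.array([1, 1, 1, 1])
w2 = np.array([1, -1, 1, -1])

a, b = 3.0, -2.0                 # two independent information values (assumed)
composite = a * w1 + b * w2      # multiplexed signal, one channel/storage slot

# Recovery: correlate with each code and normalize by the code length
a_hat = composite @ w1 / 4
b_hat = composite @ w2 / 4       # orthogonality cancels the other stream exactly
```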
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula yields a new protocol family under the resource inequality framework, the private father protocol, which includes private classical communication without assisted secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
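The unequal error protection strategy described above can be sketched as a simple mapping from bit sensitivity to channel code rate; the segment names, sensitivity values, and RCPC-style rates below are illustrative assumptions, not the paper's allocation:

```python
# Hedged sketch of unequal error protection: earlier segments of an embedded
# bitstream (e.g., SPIHT output) carry more significant bits, so they receive
# lower-rate (stronger) channel codes. All numbers here are illustrative.

segments = ["header + sorting pass 1", "refinement pass 1", "refinement pass 2"]
sensitivity = [0.9, 0.5, 0.2]   # assumed relative PSNR impact of a bit error

def assign_rate(s):
    """Map a sensitivity score to an RCPC-style code rate."""
    if s > 0.7:
        return "1/2"   # strongest protection for the most sensitive bits
    if s > 0.3:
        return "2/3"
    return "4/5"       # weakest protection for refinement bits

rates = [assign_rate(s) for s in sensitivity]
```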
The Impact of Causality on Information-Theoretic Source and Channel Coding Problems
ERIC Educational Resources Information Center
Palaiyanur, Harikrishna R.
2011-01-01
This thesis studies several problems in information theory where the notion of causality comes into play. Causality in information theory refers to the timing of when information is available to parties in a coding system. The first part of the thesis studies the error exponent (or reliability function) for several communication problems over…
PHAZR: A phenomenological code for holeboring in air
NASA Astrophysics Data System (ADS)
Picone, J. M.; Boris, J. P.; Lampe, M.; Kailasanath, K.
1985-09-01
This report describes a new code for studying holeboring by a charged particle beam, laser, or electric discharge in a gas. The coordinates which parameterize the channel are radial displacement (r) from the channel axis and distance (z) along the channel axis from the energy source. The code is primarily phenomenological; that is, we use closed-form solutions of simple models in order to represent many of the effects which are important in holeboring. The numerical simplicity which we gain from the use of these solutions enables us to estimate the structure of the channel over long propagation distances while using a minimum of computer time. This feature makes PHAZR a useful code for those studying and designing future systems. Of particular interest is the design and implementation of the subgrid turbulence model required to compute the enhanced channel cooling caused by asymmetry-driven turbulence. The approximate equations of Boris and Picone form the basis of the model, which includes the effects of turbulent diffusion and fluid transport on the turbulent field itself as well as on the channel parameters. The primary emphasis here is on charged particle beams, and as an example, we present typical results for an ETA-like beam propagating in air. These calculations demonstrate how PHAZR may be used to investigate the accelerator parameter space and to isolate the important physical parameters which determine the holeboring properties of a given system. The comparison with two-dimensional calculations provides a calibration of the subgrid turbulence model.
NASA Astrophysics Data System (ADS)
Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.
2014-12-01
Recently, Zhang et al. (2014, Pure and Applied Geophysics) have developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in the good data coverage zone. Fang and Zhang (2014, Geophysical Journal International) showed the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers.
We will test the new joint inversion code at the SAFOD site to compare its performance over the previous code. We will also select another fault zone such as the San Jacinto Fault Zone to better image its structure.
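Sparsity regularization in the wavelet domain typically amounts to shrinking small coefficients toward zero at each iteration, keeping large-scale structure while discarding poorly constrained detail. A minimal soft-thresholding sketch (the coefficients and threshold are illustrative, not from the study):

```python
import numpy as np

def soft_threshold(w, lam):
    """Soft-thresholding operator: shrink toward zero by lam, zeroing small
    coefficients. This is the proximal operator of the L1 sparsity penalty."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([4.0, -0.3, 0.05, -2.5])   # model wavelet coefficients (assumed)
w_sparse = soft_threshold(w, 0.5)       # small coefficients are zeroed out
```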
DVB-S2 Experiment over NASA's Space Network
NASA Technical Reports Server (NTRS)
Downey, Joseph A.; Evans, Michael A.; Tollis, Nicholas S.
2017-01-01
The commercial DVB-S2 standard was successfully demonstrated over NASA's Space Network (SN) and the Tracking and Data Relay Satellite System (TDRSS) during testing conducted September 20-22, 2016. This test was a joint effort between NASA Glenn Research Center (GRC) and Goddard Space Flight Center (GSFC) to evaluate the performance of DVB-S2 as an alternative to traditional NASA SN waveforms. Two distinct sets of tests were conducted: one was sourced from the Space Communications and Navigation (SCaN) Testbed, an external payload on the International Space Station, and the other was sourced from GRC's S-band ground station to emulate a Space Network user through TDRSS. In both cases, a commercial off-the-shelf (COTS) receiver made by Newtec was used to receive the signal at the White Sands Complex. Using the SCaN Testbed, peak data rates of 5.7 Mbps were demonstrated. Peak data rates of 33 Mbps were demonstrated over the GRC S-band ground station through a 10 MHz channel over TDRSS, using 32-amplitude phase shift keying (APSK) and a rate 8/9 low-density parity-check (LDPC) code. Advanced features of the DVB-S2 standard were evaluated, including variable and adaptive coding and modulation (VCM/ACM), as well as an adaptive digital pre-distortion (DPD) algorithm. These features provided additional data throughput and increased link performance reliability. This testing has shown that commercial standards are a viable, low-cost alternative for future Space Network users.
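The reported 33 Mbps figure is roughly consistent with a back-of-the-envelope link calculation; the 0.2 roll-off and the neglect of DVB-S2 framing/pilot overhead below are assumptions, since neither is stated in the abstract:

```python
# Rough information-rate check for a 10 MHz channel with 32-APSK and a
# rate-8/9 LDPC code. Roll-off of 0.2 is an assumption (DVB-S2 allows
# several values); framing overhead is ignored.
bandwidth_hz = 10e6
rolloff = 0.2                                  # assumed
symbol_rate = bandwidth_hz / (1 + rolloff)     # ~8.33 Msym/s
bits_per_symbol = 5                            # 32-APSK
code_rate = 8 / 9                              # LDPC
raw_bps = symbol_rate * bits_per_symbol * code_rate
# raw_bps is ~37 Mbps before framing/pilot/BCH overhead, which is consistent
# with the ~33 Mbps measured throughput.
```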
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2013-12-01
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
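The independent-channels model with exponential arrival latencies yields closed-form psychometric functions. A minimal sketch for the probability that the first stimulus is registered first, derived from the exponential latency assumption (the rate parameters are illustrative, not fitted values from the paper):

```python
import math

def p_first_before_second(soa, lam1=1/30.0, lam2=1/30.0):
    """P(stimulus 1's internal signal arrives first) as a function of SOA (ms).
    Positive SOA means stimulus 2 is presented later. Latencies are modeled as
    exponentials with rates lam1, lam2 (per ms); the closed form follows from
    integrating over the second arrival."""
    if soa >= 0:
        return 1 - math.exp(-lam1 * soa) * lam2 / (lam1 + lam2)
    return math.exp(lam2 * soa) * lam1 / (lam1 + lam2)

# With symmetric channels, simultaneity (SOA = 0) makes each order equally likely
p0 = p_first_before_second(0.0)
```

Fitting such a function to TOJ responses gives parameters with direct process interpretations (channel latency rates, decision criteria), which is the advantage over arbitrary sigmoid fits noted in the abstract.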
Integrated source and channel encoded digital communication system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.; Trumpis, B. D.; Udalov, S.
1975-01-01
Various aspects of space shuttle communication systems were studied. The following major areas were investigated: burst error correction for shuttle command channels; performance optimization and design considerations for Costas receivers with and without bandpass limiting; experimental techniques for measuring low level spectral components of microwave signals; and potential modulation and coding techniques for the Ku-band return link. Results are presented.
Source terms, shielding calculations and soil activation for a medical cyclotron.
Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E
2016-12-01
Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the 18O(p,n)18F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 and FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) codes as well as from data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the 18O(p,n)18F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h^-1 and thus significantly below the reference value of 0.5 µSv h^-1 (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g^-1.
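The build-up-and-decay arithmetic behind such activation estimates can be sketched as follows (a generic saturation/decay formula; the 2-year half-life is an assumption for illustration, since the abstract does not name the activated nuclide):

```python
import math

def activity_fraction(half_life_d, t_irr_d, t_decay_d):
    """Fraction of the saturation activity present after constant irradiation
    for t_irr_d days followed by t_decay_d days of cool-down."""
    lam = math.log(2.0) / half_life_d            # decay constant, 1/day
    buildup = 1.0 - math.exp(-lam * t_irr_d)     # activation build-up toward saturation
    return buildup * math.exp(-lam * t_decay_d)  # decay during cool-down

# 5 years of beam, 30 d cool-down, assumed 2-year half-life
f = activity_fraction(half_life_d=2 * 365.25, t_irr_d=5 * 365.25, t_decay_d=30.0)
```

With these assumed numbers, roughly 80% of the saturation activity remains after the 30 d cool-down; the same two factors scale any measured or simulated saturation activity.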
Numerical simulation of the baking of porous anode carbon in a vertical flue ring furnace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobsen, M.; Melaaen, M.C.
The interaction of pitch pyrolysis in porous anode carbon during heating and volatiles combustion in the flue gas channel has been analyzed to gain insight into the anode baking process. A two-dimensional geometry of a flue gas channel adjacent to a porous flue gas wall, packing coke, and an anode was used for studying the effect of heating rate on temperature gradients and internal gas pressure in the anodes. The mathematical model included porous heat and mass transfer, pitch pyrolysis, combustion of volatiles, radiation, and turbulent channel flow. The mathematical model was developed through source code modification of the computational fluid dynamics code FLUENT. The model was useful for studying the effects of heating rate, geometry, and anode properties.
A DS-UWB Cognitive Radio System Based on Bridge Function Smart Codes
NASA Astrophysics Data System (ADS)
Xu, Yafei; Hong, Sheng; Zhao, Guodong; Zhang, Fengyuan; di, Jinshan; Zhang, Qishan
This paper proposes a direct-sequence UWB cognitive radio system based on a bridge function smart sequence matrix and the Gaussian pulse. Because the system uses bridge function smart code sequences as its spreading codes, the zero correlation zones (ZCZs) of their auto-correlation functions can reduce multipath interference on the pulse. The modulated signal was sent over the IEEE 802.15.3a UWB channel. We analyze how the ZCZs suppress multipath interference (MPI), one of the main sources of interference in the system. The simulation in SIMULINK/MATLAB is described in detail. The results show that the system outperforms one employing a Walsh sequence square matrix, as verified in principle by formula.
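The ZCZ property the scheme relies on can be checked numerically. The sketch below is illustrative only: it uses the length-4 binary perfect sequence, whose zero correlation zone covers every nonzero shift, rather than the paper's bridge-function smart codes.

```python
def periodic_autocorrelation(seq, shift):
    """Periodic (cyclic) autocorrelation of a bipolar sequence."""
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n] for i in range(n))

# Length-4 binary perfect sequence: every out-of-phase autocorrelation
# value is zero, so delayed multipath replicas decorrelate completely.
s = [1, 1, 1, -1]
side_lobes = [periodic_autocorrelation(s, t) for t in range(1, len(s))]
```

A zero side lobe at shift t means a multipath echo delayed by t chips contributes nothing to the despread output, which is exactly the MPI-suppression mechanism described above.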
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang; Hsu, Yi-Kai
2017-03-01
A three-arm dual-balanced detection scheme is studied in an optical code division multiple access system. As multiple-access interference (MAI) and beat noise are the main sources of system performance degradation, we utilize optical hard-limiters to alleviate such channel impairment. In addition, once the channel condition is improved effectively, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and decision circuitry are fixed, and they do not change with other system parameters. Our proposed scheme can accommodate a large number of users simultaneously and is suitable for burst traffic with asynchronous transmission. Therefore, it is highly recommended as a platform for broadband optical access networks.
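The hard-limiting operation that alleviates MAI and beat noise can be sketched with an idealized unit-threshold model (an illustration only; the paper's optimal threshold values are not reproduced here):

```python
def hard_limit(intensity, threshold=1.0):
    """Ideal optical hard-limiter: any chip intensity at or above the
    threshold is clipped to one unit; anything below it is blocked."""
    return 1.0 if intensity >= threshold else 0.0

# A desired pulse buried in multiple-access interference is restored to
# its unit level, while a weak interference-only chip is removed.
cleaned = [hard_limit(x) for x in (2.7, 0.4, 1.0)]
```

Clipping strong chips bounds the interference power any single interferer can contribute, which is why placing hard-limiters before the decision circuitry improves the channel condition seen by the error correction code.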
1988-05-01
Figure 2. Original limited-capacity channel model (From Broadbent, 1958) ... unlimited variety of human voices for digital recording sources. Synthesis by Analysis: analysis-synthesis methods electronically model the human voice.
Telemetry Standards, RCC Standard 106-17. Chapter 26. TmNSDataMessage Transfer Protocol
2017-07-01
Channel (RTSPDataChannel) ............................................ 26-13 26.4.3 Reliability Critical (RC) Delivery Protocol...error status code specified in RFC 2326 for "Request-URI Too Large" is "414". 26.4.1.5 Request Types RTSPDataSources shall return valid...to the following requirements. • Valid TmNSDataMessages shall be delivered containing the original Packages matching the requested
Depth encoded three-beam swept source Doppler optical coherence tomography
NASA Astrophysics Data System (ADS)
Wartak, Andreas; Haindl, Richard; Trasischker, Wolfgang; Baumann, Bernhard; Pircher, Michael; Hitzenberger, Christoph K.
2016-03-01
A novel approach for the investigation of human retinal and choroidal blood flow by means of a multi-channel swept source Doppler optical coherence tomography (SS-D-OCT) system is being developed. We present preliminary in vitro measurement results for quantification of the 3D velocity vector of scatterers in a flow phantom. The absolute flow velocity of moving scatterers can be obtained without prior knowledge of the flow orientation. In contrast to previous spectral domain (SD-) D-OCT investigations, which already proved the three-channel D-OCT approach suitable for in vivo retinal blood flow evaluation, the current work aims for a similar functional approach by means of a different technique. To the best of our knowledge, this is the first three-channel D-OCT setup featuring a wavelength-tunable laser source. Furthermore, we present a modification of our setup that reduces the former three active illumination channels to one active illumination channel and two passive channels, which only probe the illuminated sample. This joint aperture (JA) approach provides the advantage of not having to divide beam power among three beams to meet the corresponding laser safety limits. The in vitro measurement results on the flow phantom show good agreement between theoretically calculated and experimentally obtained flow velocity values.
Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro
2016-01-25
We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base station (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% from conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only a 2-dB signal-to-noise ratio (SNR) penalty.
Report on GMI Special Study #15: Radio Frequency Interference
NASA Technical Reports Server (NTRS)
Draper, David W.
2015-01-01
This report contains the results of GMI special study #15. An analysis is conducted to identify sources of radio frequency interference (RFI) to the Global Precipitation Measurement (GPM) Microwave Imager (GMI). The RFI impacts the 10 GHz and 18 GHz channels at both polarizations. The sources of RFI are identified for the following conditions: over water (including major inland water bodies) in the earth view, over land in the earth view, and in the cold sky view. A best effort is made to identify RFI sources in coastal regions, with noted degradation of flagging performance due to the highly variable earth scene over coastal regions. A database of such sources is developed, including latitude, longitude, country and city of earth emitters, and position in geosynchronous orbit for space emitters. A description of the recommended approach for identifying the sources and locations of RFI in the GMI channels is given in this paper. An algorithm to flag RFI-contaminated pixels, which can be incorporated into the GMI Level 1Base/1B algorithms, is defined, including Matlab code to perform the necessary flagging of RFI. A Matlab version of the code is delivered with this distribution.
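A generic spectral-difference test of the kind such flagging algorithms build on can be sketched as follows (a hypothetical Python illustration, not the delivered Matlab algorithm; the 5 K threshold and the brightness temperatures are assumptions):

```python
def flag_rfi(tb_low_ghz, tb_high_ghz, threshold_k=5.0):
    """Flag a pixel as RFI-contaminated when the lower-frequency channel is
    anomalously warm relative to the higher-frequency channel; natural
    emission usually varies smoothly between nearby microwave bands."""
    return (tb_low_ghz - tb_high_ghz) > threshold_k

# Hypothetical 10 GHz / 18 GHz brightness temperatures (kelvin):
# the first pair is a clean scene, the second contains an emitter at 10 GHz.
flags = [flag_rfi(lo, hi) for lo, hi in ((182.0, 180.0), (205.0, 181.0))]
```

Real detectors add scene-dependent thresholds (the coastal degradation noted above comes precisely from the background varying faster than such a threshold assumes).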
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
Multi-channel photon counting DOT system based on digital lock-in detection technique
NASA Astrophysics Data System (ADS)
Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng
2011-02-01
Relying on the deep penetration of light in tissue, Diffuse Optical Tomography (DOT) achieves organ-level tomographic diagnosis and can provide information on anatomical and physiological features. DOT has been widely used in imaging of the breast, neonatal cerebral oxygen status, and blood oxygen kinetics, owing to its non-invasiveness, safety, and other advantages. Continuous-wave DOT image reconstruction algorithms need measurements of the surface distribution of the output photon flow excited by more than one driving source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which utilizes an optical switch to direct light into optical fibers at different locations. However, when there are large numbers of source locations, or when multiple wavelengths are used, the measurement time with TDM and the measurement interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technique is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is extracted from the mixed output light using digital phase-locked detection at the detection end. A digital lock-in detection circuit for the photon counting measurement system is implemented on an FPGA development platform. A dual-channel DOT photon counting experimental system is preliminarily established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements validates the feasibility of the system. The method developed in this paper greatly accelerates DOT system measurement and can also obtain multiple measurements at different source-detector locations.
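The digital lock-in principle described above can be sketched in a few lines (a simplified software model, not the authors' FPGA implementation; the sampling rate and the 50 Hz / 80 Hz modulation frequencies are assumptions):

```python
import math

def lock_in_amplitude(samples, f_ref, fs):
    """Digital lock-in: correlate with quadrature references at f_ref
    and return the amplitude of that frequency component."""
    n = len(samples)
    i_sum = sum(x * math.sin(2 * math.pi * f_ref * k / fs) for k, x in enumerate(samples))
    q_sum = sum(x * math.cos(2 * math.pi * f_ref * k / fs) for k, x in enumerate(samples))
    return 2.0 * math.hypot(i_sum, q_sum) / n

# Two sources modulated at different frequencies arrive mixed at one detector.
fs, n = 1000, 1000
mixed = [2.0 * math.sin(2 * math.pi * 50 * k / fs)    # source 1, amplitude 2.0
         + 1.0 * math.sin(2 * math.pi * 80 * k / fs)  # source 2, amplitude 1.0
         for k in range(n)]
```

Correlating `mixed` against the 50 Hz reference recovers amplitude 2.0 and against the 80 Hz reference recovers 1.0, separating the simultaneously illuminating sources exactly as the frequency-division scheme requires.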
Methodology and Method and Apparatus for Signaling With Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2014-01-01
Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding, and the location of points within the geometrically shaped constellation changes as the code rate changes.
Channel coding in the space station data system network
NASA Technical Reports Server (NTRS)
Healy, T.
1982-01-01
A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.
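The error-correction role of channel coding discussed above can be illustrated with the classic Hamming(7,4) code, which corrects any single bit error in a 7-bit block (a textbook sketch, not a code attributed to the systems named here):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # bit positions 1..7

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # parity over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3            # syndrome = 1-based error position, 0 if clean
    if pos:
        c[pos - 1] ^= 1                   # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]       # recover d1..d4
```

Flipping any one of the seven transmitted bits still yields the original four data bits after decoding, which is the basic error-correction service a channel code provides.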
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.
NASA Astrophysics Data System (ADS)
Varseev, E.
2017-11-01
The present work is dedicated to the verification of the numerical model in the standard two-phase solver of the open-source CFD code OpenFOAM and to the determination of so-called "baseline" model parameters. An investigation of heterogeneous coolant flow parameters that lead to an abnormal increase of channel friction in two-phase adiabatic water-gas flows with low void fractions is presented.
Transcriptome Analysis of Scorpion Species Belonging to the Vaejovis Genus
Quintero-Hernández, Verónica; Ramírez-Carreto, Santos; Romero-Gutiérrez, María Teresa; Valdez-Velázquez, Laura L.; Becerril, Baltazar; Possani, Lourival D.; Ortiz, Ernesto
2015-01-01
Scorpions belonging to the Buthidae family have traditionally drawn much of the biochemist’s attention due to the strong toxicity of their venoms. Scorpions not toxic to mammals, however, also have complex venoms. They have been shown to be an important source of bioactive peptides, some of them identified as potential drug candidates for the treatment of several emerging diseases and conditions. It is therefore important to characterize the large diversity of components found in the non-Buthidae venoms. As a contribution to this goal, this manuscript reports the construction and characterization of cDNA libraries from four scorpion species belonging to the Vaejovis genus of the Vaejovidae family: Vaejovis mexicanus, V. intrepidus, V. subcristatus and V. punctatus. Some sequences coding for channel-acting toxins were found, as expected, but the main transcribed genes in the glands actively producing venom were those coding for non disulfide-bridged peptides. The ESTs coding for putative channel-acting toxins, corresponded to sodium channel β toxins, to members of the potassium channel-acting α or κ families, and to calcium channel-acting toxins of the calcin family. Transcripts for scorpine-like peptides of two different lengths were found, with some of the species coding for the two kinds. One sequence coding for La1-like peptides, of yet unknown function, was found for each species. Finally, the most abundant transcripts corresponded to peptides belonging to the long chain multifunctional NDBP-2 family and to the short antimicrobials of the NDBP-4 family. This apparent venom composition is in correspondence with the data obtained to date for other non-Buthidae species. Our study constitutes the first approach to the characterization of the venom gland transcriptome for scorpion species belonging to the Vaejovidae family. PMID:25659089
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that codes the source information into multiple descriptions and transmits them over different channels in a packet network or an error-prone wireless environment, achieving graceful degradation if some descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband-based image coding, it is possible to introduce correlation between the descriptions in each subband. We use this correlation, together with a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If only some descriptions are lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost descriptions by using the correlation information and the error-corrupted description as side information. Secondly, within each description, a single-bitstream wavelet zero-tree code is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple-wavelet-tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. This decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder.
Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet loss rate.
Integrated source and channel encoded digital communications system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1974-01-01
Studies were performed on the digital communication system for the direct communication links from the ground to the space shuttle and the links involving the Tracking and Data Relay Satellite (TDRS). Three main tasks were performed: (1) channel encoding/decoding parameter optimization for the forward and reverse TDRS links; (2) integration of command encoding/decoding and channel encoding/decoding; and (3) a modulation-coding interface study. The general communication environment is presented to provide the necessary background for the tasks and an understanding of the implications of the study results.
Real-time validation of receiver state information in optical space-time block code systems.
Alamia, John; Kurzweg, Timothy
2014-06-15
Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
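The moving-average validation idea can be sketched as follows (a simplified model with assumed window and threshold values, not the authors' exact decoder metric):

```python
from collections import deque

class CsiValidator:
    """Track a moving average of a maximum-likelihood decoder error metric;
    when the average drifts above a threshold, the stored channel state
    information (CSI) is likely stale and should be re-estimated."""

    def __init__(self, window=8, threshold=0.5):
        self.buf = deque(maxlen=window)   # most recent metric samples
        self.threshold = threshold

    def update(self, metric):
        """Add one metric sample; return True when CSI looks stale."""
        self.buf.append(metric)
        return sum(self.buf) / len(self.buf) > self.threshold

v = CsiValidator(window=4, threshold=0.5)
stale = [v.update(m) for m in (0.1, 0.1, 0.2, 0.1, 0.9, 0.9)]
```

The averaging suppresses isolated noisy samples, so the flag fires only after the metric stays elevated, mirroring the real-time BER-prediction behavior described above.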
Laser beam coupling with capillary discharge plasma for laser wakefield acceleration applications
NASA Astrophysics Data System (ADS)
Bagdasarov, G. A.; Sasorov, P. V.; Gasilov, V. A.; Boldarev, A. S.; Olkhovskaya, O. G.; Benedetti, C.; Bulanov, S. S.; Gonsalves, A.; Mao, H.-S.; Schroeder, C. B.; van Tilborg, J.; Esarey, E.; Leemans, W. P.; Levato, T.; Margarone, D.; Korn, G.
2017-08-01
One of the most robust methods demonstrated to date for accelerating electron beams with laser-plasma sources is the use of plasma channels generated by capillary discharges. Although the spatial structure of the installation is simple in principle, there may be important effects caused by the open ends of the capillary, by the supply channels, etc., which require detailed 3D modeling of the processes. In the present work, such simulations are performed using the code MARPLE. First, the filling of the capillary with cold hydrogen through the side supply channels, before the discharge is fired, is simulated. Second, the capillary discharge is simulated with the goal of obtaining a time-dependent spatial distribution of the electron density near the open ends of the capillary as well as inside it. Finally, to evaluate the effectiveness of the beam coupling with the channeling plasma waveguide and of the electron acceleration, the laser-plasma interaction was modeled with the code INF&RNO.
Polar codes for achieving the classical capacity of a quantum channel
NASA Astrophysics Data System (ADS)
Guha, Saikat; Wilde, Mark
2012-02-01
We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by ``channel combining'' and ``channel splitting,'' in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
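For the classical binary erasure channel BEC(eps), the polarization effect described above has a closed form: channel combining and splitting map an erasure probability z to 2z - z^2 (degraded) and z^2 (upgraded). The sketch below shows the standard classical recursion, not the quantum construction of the paper:

```python
def bec_polarize(eps, levels):
    """Erasure probabilities of the 2**levels synthesized channels obtained
    by recursively combining and splitting a BEC(eps)."""
    z = [eps]
    for _ in range(levels):
        # each channel splits into a degraded (2z - z^2) and an upgraded (z^2) one
        z = [v for p in z for v in (2 * p - p * p, p * p)]
    return z

z = bec_polarize(0.5, 10)                        # 1024 synthesized channels
good = sum(1 for p in z if p < 1e-3) / len(z)    # fraction that is nearly noiseless
```

The average erasure probability stays exactly at eps at every level, while the fraction of nearly noiseless channels approaches the capacity 1 - eps as the number of levels grows; transmitting information only on those good channels is what makes polar codes capacity-achieving.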
NASA Technical Reports Server (NTRS)
Shambayati, Shervin
2001-01-01
In order to evaluate the performance of strong channel codes in the presence of imperfect carrier phase tracking for residual-carrier BPSK modulation, this paper develops an approximate 'brick wall' model that is independent of the channel code type at high data rates. It is shown that this approximation is reasonably accurate (less than 0.7 dB at low FERs for the (1784,1/6) code and less than 0.35 dB at low FERs for the (5920,1/6) code). Based on the approximation's accuracy, it is concluded that the effects of imperfect carrier tracking are essentially independent of the channel code type for strong channel codes. Therefore, the advantage that one strong channel code has over another with perfect carrier tracking translates to nearly the same advantage under imperfect carrier tracking conditions. This allows link designers to incorporate the projected performance of strong channel codes into their design tables without worrying about their behavior in the face of imperfect carrier phase tracking.
The DDN (Defense Data Network) Course,
1986-04-01
devices will share the same node-to-node channels. * Simultaneous availability of source and destination is not required. * Speed and code conversion can...address multiple addresses simultaneously 3) Disadvantages of Message Switching Systems Not suited to real time or interactive use * Long and highly...transmission b) Unlike message switching, packet switching requires the simultaneous availability of source and destination.
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry
2014-05-29
its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models...low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the
Classification and simulation of stereoscopic artifacts in mobile 3DTV content
NASA Astrophysics Data System (ADS)
Boev, Atanas; Hollosi, Danilo; Gotchev, Atanas; Egiazarian, Karen
2009-02-01
We identify, categorize and simulate artifacts which might occur during delivery of stereoscopic video to mobile devices. We consider the stages of the 3D video delivery dataflow: content creation, conversion to the desired format (multiview or source-plus-depth), coding/decoding, transmission, and visualization on a 3D display. Human 3D vision works by assessing various depth cues: accommodation, binocular depth cues, pictorial cues and motion parallax. As a consequence, any artifact which modifies these cues impairs the quality of a 3D scene. The perceptibility of each artifact can be estimated through subjective tests. The material for such tests needs to contain various artifacts with different amounts of impairment. We present a system for simulation of these artifacts. The artifacts are organized in groups with similar origins, and each group is simulated by a block in a simulation channel. The channel introduces the following groups of artifacts: sensor limitations, geometric distortions caused by camera optics, spatial and temporal misalignments between video channels, spatial and temporal artifacts caused by coding, transmission losses, and visualization artifacts. For the case of source-plus-depth representation, artifacts caused by format conversion are added as well.
Reliable video transmission over fading channels via channel state estimation
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-04-01
Transmission of continuous media such as video over time-varying wireless communication channels can benefit from the use of adaptation techniques in both source and channel coding. An adaptive feedback-based wireless video transmission scheme is investigated in this research with special emphasis on feedback-based adaptation. To be more specific, an interactive adaptive transmission scheme is developed by letting the receiver estimate the channel state information and send it back to the transmitter. By utilizing the feedback information, the transmitter is capable of adapting the level of protection by changing the flexible RCPC (rate-compatible punctured convolutional) code ratio depending on the instantaneous channel condition. The wireless channel is modeled as a fading channel, where the long-term and short-term fading effects are modeled as log-normal fading and Rayleigh flat fading, respectively. Then, its state (mainly the long-term fading portion) is tracked and predicted by using an adaptive LMS (least mean squares) algorithm. By utilizing the delayed feedback on the channel condition, the adaptation performance of the proposed scheme is first evaluated in terms of the error probability and the throughput. It is then extended to incorporate variable size packets of ITU-T H.263+ video with the error resilience option. Finally, the end-to-end performance of wireless video transmission is compared against several non-adaptive protection schemes.
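As a rough sketch of the feedback loop above: a normalized-LMS one-step predictor tracks the slowly varying fading state, and the predicted SNR selects an RCPC code rate. The rate table and SNR thresholds below are illustrative assumptions, not values from the paper.

```python
import random

def lms_predict(history, order=4, mu=0.5, eps=1e-6):
    """One-step linear prediction of the next sample via normalized LMS."""
    w = [0.0] * order
    for t in range(order, len(history)):
        x = history[t - order:t]                      # regressor of past samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = history[t] - y_hat                        # prediction error
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return sum(wi * xi for wi, xi in zip(w, history[-order:]))

def pick_rcpc_rate(snr_db):
    """More protection (a lower code rate) when the predicted channel is bad."""
    for threshold, rate in [(15.0, "8/9"), (10.0, "2/3"), (5.0, "1/2")]:
        if snr_db >= threshold:
            return rate
    return "1/3"

random.seed(1)
snr_trace = [10.0 + 0.2 * t + random.gauss(0.0, 0.1) for t in range(50)]  # slow trend
predicted = lms_predict(snr_trace)
rate = pick_rcpc_rate(predicted)
```

The prediction step matters because the feedback is delayed: the transmitter must act on where the slow (log-normal) fading is heading, not where it was when the receiver measured it.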
Telidon Videotex presentation level protocol: Augmented picture description instructions
NASA Astrophysics Data System (ADS)
Obrien, C. D.; Brown, H. G.; Smirle, J. C.; Lum, Y. F.; Kukulka, J. Z.; Kwan, A.
1982-02-01
The Telidon Videotex system is a method by which graphic and textual information and transactional services can be accessed from information sources by the general public. In order to transmit information to a Telidon terminal at a minimum bandwidth, and in a manner independent of the type of communications channel, a coding scheme was devised which permits the encoding of a picture into the geometric drawing elements which compose it. These picture description instructions are an alpha geometric coding model and are based on the primitives of POINT, LINE, ARC, RECTANGLE, POLYGON, and INCREMENT. Text is encoded as (ASCII) characters along with a supplementary table of accents and special characters. A mosaic shape table is included for compatibility. A detailed specification of the coding scheme and a description of the principles which make it independent of communications channel and display hardware are provided.
Separable concatenated codes with iterative map decoding for Rician fading channels
NASA Technical Reports Server (NTRS)
Lodge, J. H.; Young, R. J.
1993-01-01
Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.
Quantum-capacity-approaching codes for the detected-jump channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng
2010-12-15
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
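The simulated classical channel mentioned above is a binary symmetric erasure channel, whose capacity has a standard closed form that can serve as the lower bound. A minimal sketch (the mapping from the jump probability to the erasure and flip probabilities follows the paper and is not reproduced here): with erasure probability e and flip probability p, the channel is an erasure stage followed by a BSC with crossover p/(1-e), so C = (1-e)(1 - h2(p/(1-e))).

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bsec_capacity(e, p):
    """Capacity of the binary symmetric erasure channel:
    erasure with probability e, bit flip with probability p."""
    if e >= 1.0:
        return 0.0
    return (1 - e) * (1 - h2(p / (1 - e)))
```

Sanity checks: with p = 0 this reduces to the pure erasure channel capacity 1 - e, and with e = 0 to the BSC capacity 1 - h2(p).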
D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things
Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B
2018-01-01
Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST). PMID:29538405
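As a toy illustration of the side-information principle underlying DSC (not the D-DSC codec itself): a cluster member can transmit only the k least-significant bits of its sample (a coset index), and the sink recovers the full value by picking the candidate with that residue closest to the clusterhead's correlated, uncompressed reading. Decoding succeeds whenever |sample - side_info| < 2**(k-1).

```python
def dsc_encode(sample, k):
    """Transmit only k bits (the coset index) instead of the full sample."""
    return sample % (1 << k)

def dsc_decode(coset, side_info, k):
    """Pick the value in the coset nearest to the side information."""
    m = 1 << k
    base = side_info - (side_info % m) + coset
    return min((base - m, base, base + m), key=lambda v: abs(v - side_info))
```

For example, with k = 4 a member holding the sample 203 sends only 203 % 16 = 11; a sink whose side information is 200 recovers 203 exactly. The stronger the spatial correlation, the smaller k can be, which is why the sink in D-DSC adapts the compression rates to the observed correlation.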
NASA Astrophysics Data System (ADS)
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2007-11-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with the analysis of the Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. The number of users and the type of codes used in an optical system directly determine the performance of the system. MAI can be restricted by efficiently designing optical codes and implementing them with a unique architecture to accommodate a larger number of users. Hence, it is necessary to design a technique such as the spectral direct detection (SDD) technique with a modified double weight code, which can provide better cardinality and good correlation properties.
Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes
NASA Astrophysics Data System (ADS)
Su, Hualing; He, Yucheng; Zhou, Lin
2017-08-01
In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system of rate-compatible low density parity check (RC-LDPC) codes combined with a multi-relay selection protocol is proposed. In traditional relay selection protocols, only the channel state information (CSI) of the source-relay and relay-destination links has been considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
Optimal superdense coding over memory channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadman, Z.; Kampermann, H.; Bruss, D.
2011-10-15
We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitrescu, Eugene; Humble, Travis S.
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
JDFTx: Software for joint density-functional theory
Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...
2017-11-14
Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.
26 CFR 1.883-0 - Outline of major topics.
Code of Federal Regulations, 2011 CFR
2011-04-01
... agreement, code-sharing arrangement or other joint venture. (3) Activities not considered operation of ships or aircraft. (4) Examples. (5) Definitions. (i) Bareboat charter. (ii) Code-sharing arrangement. (iii..., partnership, strategic alliance, joint operating agreement, code-sharing arrangement or other joint venture...
26 CFR 1.883-0 - Outline of major topics.
Code of Federal Regulations, 2013 CFR
2013-04-01
... agreement, code-sharing arrangement or other joint venture. (3) Activities not considered operation of ships or aircraft. (4) Examples. (5) Definitions. (i) Bareboat charter. (ii) Code-sharing arrangement. (iii..., partnership, strategic alliance, joint operating agreement, code-sharing arrangement or other joint venture...
Gely, P; Drouin, G; Thiry, P S; Tremblay, G R
1984-11-01
A new composite prosthesis was recently proposed for the anterior cruciate ligament. It is implanted in the femur and the tibia through two anchoring channels. Its intra-articular portion, composed of a fiber mesh sheath wrapped around a silicone rubber cylindrical core, satisfactorily reproduces the ligament response in tension. However, the prosthesis does not only undergo elongation. In addition, it is subjected to torsion in its intra-articular portion and bending at its ends. This paper presents a new method to evaluate these two types of deformations throughout a knee flexion by means of a geometric model of the implanted prosthesis. Input data originate from two sources: (i) a three-dimensional anatomic topology of the knee joint in full extension, providing the localization of the prosthesis anchoring channels, and (ii) a kinematic model of the knee describing the motion of these anchoring channels during a physiological flexion of the knee joint. The evaluation method is independent of the way input data are obtained. This method, applied to a right cadaveric knee, shows that the orientation of the anchoring channels has a large effect on the extent of torsion and bending applied to the implanted prosthesis throughout a knee flexion, especially on the femoral side. The study also suggests the best choice for the orientation of the anchoring channel axes.
Capacity, cutoff rate, and coding for a direct-detection optical channel
NASA Technical Reports Server (NTRS)
Massey, J. L.
1980-01-01
It is shown that Pierce's pulse position modulation scheme with 2^L pulse positions used on a self-noise-limited direct detection optical communication channel results in a 2^L-ary erasure channel that is equivalent to the parallel combination of L completely correlated binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint-length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.
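The capacity/cutoff-rate contrast above can be checked numerically using the standard formulas for a q-ary erasure channel with erasure probability eps: C = (1 - eps) log2 q and R0 = -log2((1 - eps)/q + eps). With q = 2^L, the capacities of the L binary sub-channels add up to exactly the full-channel capacity, while the component cutoff rates sum to far more than the full-channel R0.

```python
import math

def erasure_capacity(q, eps):
    """Capacity of a q-ary erasure channel, in bits per use."""
    return (1 - eps) * math.log2(q)

def erasure_cutoff_rate(q, eps):
    """Cutoff rate R0 of a q-ary erasure channel (uniform inputs)."""
    return -math.log2((1 - eps) / q + eps)

L, eps = 8, 0.1
q = 2 ** L
full_C = erasure_capacity(q, eps)                 # L * (1 - eps)
sum_C = L * erasure_capacity(2, eps)              # L binary erasure channels
full_R0 = erasure_cutoff_rate(q, eps)
sum_R0 = L * erasure_cutoff_rate(2, eps)          # L * (1 - log2(1 + eps))
```

For L = 8 and eps = 0.1 the capacities coincide at 7.2 bits, while the full-channel cutoff rate is roughly half the summed component cutoff rates, which is the complexity argument for coding separately on the components.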
Bidirectional tornado modes on the Joint European Torus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandquist, P.; Sharapov, S. E.; Lisak, M.
In discharges on the Joint European Torus [P. H. Rebut and B. E. Keen, Fusion Technol. 11, 13 (1987)] with safety factor q(0)<1 and high-power ion cyclotron resonance heating (ICRH), monster sawtooth crashes are preceded by frequency-sweeping 'tornado modes' in the toroidal Alfven eigenmode frequency range. A suite of equilibrium and spectral magnetohydrodynamical codes is used for explaining the observed evolution of the tornado mode frequency and for identifying the temporal evolution of the safety factor inside the q=1 radius just before sawtooth crashes. In some cases, the tornado modes are observed simultaneously with both positive and negative toroidal mode numbers. Hence, a free energy source other than the radial gradient of the energetic ion pressure exciting these modes is sought. The distribution function of the ICRH-accelerated ions is assessed with the SELFO code [J. Hedin et al., Nucl. Fusion 42, 527 (2002)] and energetic particle drive due to the velocity space anisotropy of ICRH-accelerated ions is considered analytically as the possible source for excitation of bidirectional tornado modes.
Integrated source and channel encoded digital communication system design study. [for space shuttles
NASA Technical Reports Server (NTRS)
Huth, G. K.
1976-01-01
The results of several studies of the Space Shuttle communication system are summarized. These tasks can be divided into the following categories: (1) phase multiplexing for two- and three-channel data transmission, (2) effects of phase noise on the performance of coherent communication links, (3) analysis of command system performance, (4) error correcting code tradeoffs, (5) signal detection and angular search procedure for the shuttle Ku-band communication system, and (6) false lock performance of Costas loop receivers.
Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.
2017-04-06
Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple “embedded” joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).
Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong
2010-12-20
In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades the system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. However, since channel coding alone cannot cope with burst errors caused by channel fading, interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths and determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
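The role of interleaving above can be illustrated with a plain block interleaver: symbols are written row-by-row into a depth x width array and read out column-by-column, so a fading burst of up to `depth` consecutive channel symbols is spread across `depth` different TPC codewords. The dimensions here are illustrative; the paper determines the optimum depth experimentally.

```python
def interleave(symbols, depth, width):
    """Write row-by-row into a depth x width array, read column-by-column."""
    assert len(symbols) == depth * width
    rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth, width):
    """The inverse permutation swaps the roles of depth and width."""
    return interleave(symbols, width, depth)

data = list(range(12))
tx = interleave(data, depth=3, width=4)   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```

A burst hitting, say, positions 3-5 of `tx` corrupts symbols 1, 5 and 9, one from each row, leaving each codeword with a single, correctable error instead of one codeword with three.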
NASA Astrophysics Data System (ADS)
Wang, Yupeng; Chang, Kyunghi
In this paper, we analyze the coexistence issues of M-WiMAX TDD and WCDMA FDD systems. Smart antenna techniques are applied to mitigate the performance loss induced by adjacent channel interference (ACI) in the scenarios where performance is heavily degraded. In addition, an ACI model is proposed to capture the effect of transmit beamforming at the M-WiMAX base station. Furthermore, an MCS-based throughput analysis is proposed to jointly consider the effects of ACI, the system packet error rate requirement, and the available modulation and coding schemes, which is not possible with conventional Shannon-equation-based analysis. From the results, we find that the proposed MCS-based analysis method is well suited to analyzing the theoretical throughput of the system in a practical manner.
Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee
2016-11-20
Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for the detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulence and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.
Klempova, Bibiana; Liepelt, Roman
2016-07-01
Recent findings suggest that a Simon effect (SE) can be induced in Individual go/nogo tasks when responding next to an event-producing object salient enough to provide a reference for the spatial coding of one's own action. However, there is skepticism against referential coding for the joint Simon effect (JSE) by proponents of task co-representation. In the present study, we tested assumptions of task co-representation and referential coding by introducing unexpected double response events in a joint go/nogo and a joint independent go/nogo task. In Experiment 1b, we tested if task representations are functionally similar in joint and standard Simon tasks. In Experiment 2, we tested sequential updating of task co-representation after unexpected single response events in the joint independent go/nogo task. Results showed increased JSEs following unexpected events in the joint go/nogo and joint independent go/nogo task (Experiment 1a). While the former finding is in line with the assumptions made by both accounts (task co-representation and referential coding), the latter finding supports referential coding. In contrast to Experiment 1a, we found a decreased SE after unexpected events in the standard Simon task (Experiment 1b), providing evidence against the functional equivalence assumption between joint and two-choice Simon tasks of the task co-representation account. Finally, we found an increased JSE also following unexpected single response events (Experiment 2), ruling out that the findings of the joint independent go/nogo task in Experiment 1a were due to a re-conceptualization of the task situation. In conclusion, our findings support referential coding also for the joint Simon effect.
Coupling hydrodynamic and wave propagation modeling for waveform modeling of SPE.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Steedman, D. W.; Rougier, E.; Delorey, A.; Bradley, C. R.
2015-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. This paper presents an effort to improve knowledge of the processes that affect seismic wave propagation from the hydrodynamic/plastic source region to the elastic/anelastic far field through numerical modeling. The challenge is to couple the prompt processes that take place in the near-source region to the ones taking place later in time due to wave propagation in complex 3D geologic environments. In this paper, we report on results of first-principles simulations coupling hydrodynamic simulation codes (Abaqus and CASH) with a 3D full waveform propagation code, SPECFEM3D. Abaqus and CASH model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. LANL has recently been employing a Coupled Euler-Lagrange (CEL) modeling capability. This has allowed the testing of a new phenomenological model for modeling stored shear energy in jointed material. This unique modeling capability has enabled high-fidelity modeling of the explosive, the weak grout-filled borehole, as well as the surrounding jointed rock. SPECFEM3D is based on the Spectral Element Method, a direct numerical method for full waveform modeling with mathematical accuracy (e.g. Komatitsch, 1998, 2002), thanks to its use of the weak formulation of the wave equation and of high-order polynomial functions. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. Displacement time series at these points are computed from the output of CASH or Abaqus (by interpolation if needed) and fed into the time marching scheme of SPECFEM3D. We will present validation tests and waveforms modeled for several SPE tests conducted so far, with a special focus on the effect of the local topography.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rate and bad channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels
NASA Technical Reports Server (NTRS)
Moher, Michael L.; Lodge, John H.
1990-01-01
A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a one-dimensional 8-state trellis code applied independently to both the in-phase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.
Telemetry: Summary of concept and rationale
NASA Astrophysics Data System (ADS)
1987-12-01
This report presents the concept and supporting rationale for the telemetry system developed by the Consultative Committee for Space Data Systems (CCSDS). The concepts, protocols and data formats developed for the telemetry system are designed for flight and ground data systems supporting conventional, contemporary free-flyer spacecraft. Data formats are designed with efficiency as a primary consideration, i.e., format overhead is minimized. The results reflect the consensus of experts from many space agencies. An overview of the CCSDS telemetry system introduces the notion of architectural layering to achieve transparent and reliable delivery of scientific and engineering sensor data (generated aboard space vehicles) to users located in space or on Earth. The system is broken down into two major conceptual categories: a packet telemetry concept and a telemetry channel coding concept. Packet telemetry facilitates data transmission from source to user in a standardized and highly automated manner. It provides a mechanism for implementing common data structures and protocols which can enhance the development and operation of space mission systems. Telemetry channel coding is a method by which data can be sent from a source to a destination by processing it in such a way that distinct messages are created which are easily distinguishable from one another. This allows reconstruction of the data with low error probability, thus improving performance of the channel.
NASA Astrophysics Data System (ADS)
Roa, Luis; Ladrón de Guevara, María L.; Soto-Moscoso, Matias; Catalán, Pamela
2018-05-01
In our work we consider the following problem in the context of teleportation: an unknown pure state has to be teleported, and there are two laboratories which can perform the task. One laboratory uses a pure non-maximally entangled channel but can perform the joint measurement only on bases with a constrained degree of entanglement; the other uses a mixed X-state channel but can perform a joint measurement on bases with higher entanglement degrees. We compare the average teleportation fidelity achieved in both cases, finding that the fidelity achieved with the X-state can surpass that obtained with the pure channel, even though the X-state is less entangled. We find the conditions under which this effect occurs. Our results show that the entanglement of the joint measurement plays a role as important as the entanglement of the channel in optimizing the teleportation process. We include an example showing that the average fidelity of teleportation obtained with a Werner-state channel can be greater than that obtained with a Bell-state channel.
A Comparative Study of Co-Channel Interference Suppression Techniques
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas
1997-01-01
We describe three methods of combating co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from the "signal switching" which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs, namely the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, quadratic residue, hard-decision Golay and soft-decision Golay), tested on three FORTRAN-programmed channel simulations (INMARSAT, Gaussian and constant burst width), and compared and analyzed (based on bit error rates and percentage of error-free super-frame runs) so that a best code can be recommended. Of the four codes under study, the soft-decision Golay (24,12) code is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
Feedback power control strategies in wireless sensor networks with joint channel decoding.
Abrardo, Andrea; Ferrari, Gianluigi; Martalò, Marco; Perna, Fabio
2009-01-01
In this paper, we derive feedback power control strategies for block-faded multiple access schemes with correlated sources and joint channel decoding (JCD). In particular, upon the derivation of the feasible signal-to-noise ratio (SNR) region for the considered multiple access schemes, i.e., the multidimensional SNR region where error-free communications are, in principle, possible, two feedback power control strategies are proposed: (i) a classical feedback power control strategy, which aims at equalizing all link SNRs at the access point (AP), and (ii) an innovative optimized feedback power control strategy, which tries to make the network operational point fall in the feasible SNR region at the lowest overall transmit energy consumption. These strategies will be referred to as "balanced SNR" and "unbalanced SNR," respectively. While they require, in principle, an unlimited power control range at the sources, we also propose practical versions with a limited power control range. We first consider a scenario with orthogonal links and ideal feedback. Then, we analyze the robustness of the proposed power control strategies to possible non-idealities, in terms of residual multiple access interference and noisy feedback channels. Finally, we successfully apply the proposed feedback power control strategies to a limiting case of the class of considered multiple access schemes, namely a central estimating officer (CEO) scenario, where the sensors observe noisy versions of a common binary information sequence and the AP's goal is to estimate this sequence by properly fusing the soft-output information produced by the JCD algorithm.
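The "balanced SNR" strategy above amounts to the AP instructing each sensor to transmit at the power that equalizes all received SNRs at a common target. A minimal sketch, assuming a simple flat-gain link model; the function name, target value, and gains are illustrative, not from the paper.

```python
def balanced_snr_powers(channel_gains, noise_power, target_snr):
    """Per-sensor transmit powers that equalize SNR at the AP.

    Link model: SNR_i = g_i * P_i / N, so P_i = target_snr * N / g_i.
    Weaker links (smaller gain) are assigned proportionally more power.
    """
    return [target_snr * noise_power / g for g in channel_gains]

powers = balanced_snr_powers([1.0, 0.5, 0.25],
                             noise_power=1e-3, target_snr=10.0)
```

A limited-power-control-range variant, as the abstract suggests, would simply clip each entry of `powers` to the sensor's maximum transmit power.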
2014-09-30
Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and... (title truncated; report-form fragment). Recoverable content: underwater acoustic communication technologies for autonomous distributed underwater networks, through innovative signal processing, coding, and networking; OFDM-modulated dynamic coded cooperation in underwater acoustic channels; localization, networking, and an on-demand testbed.
NASA Technical Reports Server (NTRS)
1975-01-01
Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System are described: (1) for the uplink, a low-rate monochrome data compressor, in which compression is achieved using a motion detection technique in the Hadamard domain; to transform the variable source rate into a fixed rate, an adaptive rate buffer is provided; (2) for the downlink, a color data compressor, in which compression is achieved first by an intra-color transformation of the original signal vector into a vector of lower information entropy, after which two-dimensional data compression techniques are applied to the Hadamard-transformed components of this vector. Mathematical models and data reliability analyses are also provided for these video data compression techniques transmitted over a channel-coded Gaussian channel. It is shown that substantial gains can be achieved by combining video source and channel coding.
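The Hadamard domain used by both compressors is a transform built entirely from +1/-1 entries, so it needs only additions. A small sketch of the separable 2D Hadamard transform of an image block (the Sylvester construction shown here is the standard one; the Shuttle system's block size and scanning order are not specified in the abstract).

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_2d(block):
    """Separable 2D Hadamard transform of a square image block."""
    n = block.shape[0]
    H = hadamard(n)
    return H @ block @ H.T / n

# A flat (motionless) block compacts all its energy into one coefficient,
# which is what makes motion detection in this domain cheap.
coeffs = hadamard_2d(np.ones((4, 4)))
```

Motion detection then reduces to checking which transform coefficients change between frames, rather than comparing pixels directly.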
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the required rate of preshared entanglement is zero.
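The classical phenomenon these quantum codes build on is channel polarization: Arıkan's transform turns two uses of a binary erasure channel with erasure probability z into one worse synthesized channel (Bhattacharyya parameter 2z - z²) and one better one (z²); recursing drives the parameters toward 0 or 1, and information bits are sent on the near-perfect channels. A small numerical illustration of that recursion (for the BEC only; the qubit Pauli and erasure channels of the paper need the quantum construction):

```python
def polarize(z, levels):
    """Bhattacharyya parameters of the 2**levels synthesized BEC channels."""
    params = [z]
    for _ in range(levels):
        # Each channel splits into a degraded and an upgraded copy.
        params = [p for z_i in params
                    for p in (2 * z_i - z_i * z_i, z_i * z_i)]
    return params

ps = polarize(0.5, 3)                     # 8 synthesized channels from BEC(0.5)
good = sum(1 for p in ps if p < 0.1)      # channels reliable enough for data
```

Note the recursion conserves the total: the sum of the parameters stays 2^levels times z, so polarization redistributes reliability rather than creating it.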
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, in some situations, some information symbols in a message are more significant than others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived, and based on these structural properties, two classes of UEP codes are constructed.
Joint Patch and Multi-label Learning for Facial Action Unit Detection
Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang
2016-01-01
The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art. PMID:27382243
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct-to-Earth return links are limited by the size and power of lander devices. A standard alternative is a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link together with an appropriate code for relay channels, one can obtain a more reliable signal. Although significant progress has been made on the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many do not offer easy encoding, and most lack a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel must address two issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that can be matched to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted to other conditions without extensive re-optimization. The code presented here combines structured design and easy encoding with rate compatibility, allowing adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel, whose high-rate members enjoy thresholds within 0.07 dB of capacity.
These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by brute-force optimization, but a cleverer method is the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding, which removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.
Forty Years of Research and Development at Griffiss Air Force Base, June 1951-June 1991
1991-06-01
Report-form fragment; recoverable text: ...joints to transfer a number of power sources from the stationary base to the rotating antenna in order to develop high-power, multi-beam, long-range... Approved for public release; distribution unlimited. ...This historical [study] did not lend itself to the use of footnotes and a formal bibliography, so a brief note on the primary sources is in order here. The bulk of the...
Trellis coding techniques for mobile communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Jedrey, T.
1988-01-01
A criterion for designing optimum trellis codes to be used over fading channels is given. A technique is shown for reducing certain multiple trellis codes, optimally designed for the fading channel, to conventional (i.e., multiplicity one) trellis codes. The computational cutoff rate R0 is evaluated for MPSK transmitted over fading channels. Examples of trellis codes optimally designed for the Rayleigh fading channel are given and compared with respect to R0. Two types of modulation/demodulation techniques are considered, namely coherent (using pilot tone-aided carrier recovery) and differentially coherent with Doppler frequency correction. Simulation results are given for end-to-end performance of two trellis-coded systems.
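The cutoff rate R0 used above as the comparison metric has a simple closed form in the textbook case of BPSK over the AWGN channel: R0 = 1 - log2(1 + exp(-Es/N0)). This is a simpler illustration than the fading-channel MPSK expression evaluated in the paper, included only to show how R0 behaves with SNR.

```python
import math

def r0_bpsk_awgn(es_n0_db):
    """Cutoff rate (bits/channel use) for BPSK on the AWGN channel."""
    es_n0 = 10 ** (es_n0_db / 10)          # dB -> linear
    return 1 - math.log2(1 + math.exp(-es_n0))

# R0 climbs monotonically toward 1 bit/use as the SNR improves.
rates = [r0_bpsk_awgn(db) for db in (-5, 0, 5, 10)]
```

Codes are then compared by how closely their operating point approaches R0 at the channel SNR of interest, which is the role R0 plays in the trellis-code comparison above.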
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density parity-check (LDPC)-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in turn obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high, and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project.
Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
78 FR 49242 - Relief From Joint and Several Liability
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-13
... Relief From Joint and Several Liability AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice... joint and several tax liability under section 6015 of the Internal Revenue Code (Code) and relief from... are husband and wife to file a joint Federal income tax return. Married individuals who choose to file...
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
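The Lagrangian formulation mentioned above boils down to an encoding rule: map each input to the codeword minimizing distortion + λ·rate, where a codeword's rate is its entropy-code length (approximated by -log2 of its probability). A toy single-stage sketch; the codebook and probabilities are invented for illustration, not taken from the paper's multistage design.

```python
import math

def ec_encode(x, codewords, probs, lam):
    """Index minimizing squared error + lam * entropy-code length."""
    costs = [(x - c) ** 2 - lam * math.log2(p)     # -log2(p) = code length
             for c, p in zip(codewords, probs)]
    return min(range(len(costs)), key=costs.__getitem__)

cw, pr = [0.0, 1.0, 2.0], [0.7, 0.2, 0.1]
i_dist = ec_encode(0.6, cw, pr, lam=0.0)   # pure distortion: nearest codeword
i_rate = ec_encode(0.6, cw, pr, lam=5.0)   # heavy rate penalty: cheap codeword
```

Sweeping λ from 0 upward traces out the operational rate-distortion curve; the iterative descent in the paper alternates this encoding rule with codebook and probability updates.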
An improved control mode for the ping-pong protocol operation in imperfect quantum channels
NASA Astrophysics Data System (ADS)
Zawadzki, Piotr
2015-07-01
Quantum direct communication (QDC) can bring confidentiality of sensitive information without any encryption. The ping-pong protocol, a well-known example of entanglement-based QDC, offers asymptotic security in a perfect quantum channel. However, it has been shown (Wójcik in Phys Rev Lett 90(15):157901, 2003. doi:10.1103/PhysRevLett.90.157901) that it is not secure in the presence of losses. Moreover, legitimate parties cannot rely on dense information coding due to possible undetectable eavesdropping even in the perfect setting (Pavičić in Phys Rev A 87(4):042326, 2013. doi:10.1103/PhysRevA.87.042326). We have identified the source of the above-mentioned weaknesses in the incomplete check of the EPR pair coherence. We propose an improved version of the control mode, and we discuss its relation to the already-known attacks that undermine QDC security. It follows that the new control mode detects these attacks with high probability, independently of the quantum channel type. As a result, asymptotic security of QDC communication can be maintained for imperfect quantum channels, also in the regime of dense information coding.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint-length convolutional codes are considered in conjunction with binary phase-shift-keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint-length codes, sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint-length codes, the bit error probability performance is investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint-length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms is examined. The effectiveness of simple block interleaving in combating the memory of the channel is explored, using both an analytic approach and digital computer simulation.
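As a concrete instance of the Viterbi maximum likelihood decoding discussed above, here is a minimal hard-decision decoder for the standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal) over a binary symmetric channel. This is a toy example: the paper's codes, modulation, and fading channel model are more elaborate.

```python
G = (0b111, 0b101)   # generator polynomials, K = 3

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state              # newest bit plus 2-bit state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    metrics, paths = {0: 0}, {0: []}        # survivor metrics, decoded bits
    for i in range(0, len(received), 2):
        r, new_m, new_p = received[i:i + 2], {}, {}
        for s, m in metrics.items():
            for b in (0, 1):                # extend each survivor both ways
                reg = (b << 2) | s
                branch = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                cost = m + sum(x != y for x, y in zip(branch, r))
                if ns not in new_m or cost < new_m[ns]:
                    new_m[ns], new_p[ns] = cost, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[min(metrics, key=metrics.get)]

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg + [0, 0])                # two tail bits terminate the trellis
coded[3] ^= 1                               # inject a single channel bit error
decoded = viterbi(coded)[:len(msg)]         # recovers msg (free distance is 5)
```

Because the code's free distance is 5, any single channel error leaves the true path strictly closer to the received sequence than every competitor, so the decoder corrects it.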
Fast QC-LDPC code for free space optical communication
NASA Astrophysics Data System (ADS)
Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong
2017-02-01
Free-space optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence leads to multiplicative noise tied to the signal intensity. To suppress the signal fading induced by this multiplicative noise, we propose a fast quasi-cyclic (QC) low-density parity-check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, QC-LDPC performs extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications focus mainly on the Gaussian and Rayleigh channels; in this study, the LDPC code is designed for the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the log-normal and K distributions, we design a special QC-LDPC code and derive the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes and offers high efficiency at low rates, stability at high rates, and a small number of decoding iterations. Belief-propagation (BP) decoding results show that the bit error rate (BER) decreases markedly as the signal-to-noise ratio (SNR) increases. The LDPC channel coding technique can therefore effectively improve FSO performance; moreover, the BER after decoding continues to fall as the SNR increases, without exhibiting an error floor.
Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI
Mangalathu-Arumana, Jain; Liebenthal, Einat; Beardsley, Scott A.
2018-01-01
Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected. PMID:29410611
A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.
Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel
2004-06-21
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell Gamma Knife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between rho = (x^2 + y^2)^(1/2) and their polar angle theta, on one side, and between tan^(-1)(y/x) and their azimuthal angle phi, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3% for the 18 and 14 mm helmets, and 10% for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. In contrast to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
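The Gamma-Gamma turbulence model referenced above describes the received irradiance as the product of two independent Gamma-distributed factors, attributed to large- and small-scale atmospheric eddies. A quick Monte Carlo sketch; the shape parameters alpha and beta below are illustrative, not values fitted to any turbulence regime in the paper.

```python
import random

def gamma_gamma_sample(alpha, beta, rng=random):
    """One irradiance sample: product of two unit-mean Gamma factors."""
    big = rng.gammavariate(alpha, 1.0 / alpha)     # large-scale eddies
    small = rng.gammavariate(beta, 1.0 / beta)     # small-scale eddies
    return big * small

rng = random.Random(1)
samples = [gamma_gamma_sample(4.0, 2.0, rng) for _ in range(20000)]
mean_irradiance = sum(samples) / len(samples)      # near 1 by construction
```

Smaller alpha and beta correspond to stronger turbulence: the factors become more dispersed, producing the deep fades that the LDPC coding in the paper is designed to ride out.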
Interfacing Computer Aided Parallelization and Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)
2003-01-01
When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.
A TDM link with channel coding and digital voice.
NASA Technical Reports Server (NTRS)
Jones, M. W.; Tu, K.; Harton, P. L.
1972-01-01
The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
Feedback Power Control Strategies in Wireless Sensor Networks with Joint Channel Decoding
Abrardo, Andrea; Ferrari, Gianluigi; Martalò, Marco; Perna, Fabio
2009-01-01
In this paper, we derive feedback power control strategies for block-faded multiple access schemes with correlated sources and joint channel decoding (JCD). In particular, upon the derivation of the feasible signal-to-noise ratio (SNR) region for the considered multiple access schemes, i.e., the multidimensional SNR region where error-free communications are, in principle, possible, two feedback power control strategies are proposed: (i) a classical feedback power control strategy, which aims at equalizing all link SNRs at the access point (AP), and (ii) an innovative optimized feedback power control strategy, which tries to make the network operational point fall in the feasible SNR region at the lowest overall transmit energy consumption. These strategies will be referred to as “balanced SNR” and “unbalanced SNR,” respectively. While they require, in principle, an unlimited power control range at the sources, we also propose practical versions with a limited power control range. We first consider a scenario with orthogonal links and ideal feedback. Then, we analyze the robustness of the proposed power control strategies to possible non-idealities, in terms of residual multiple access interference and noisy feedback channels. Finally, we successfully apply the proposed feedback power control strategies to a limiting case of the class of considered multiple access schemes, namely a central estimating officer (CEO) scenario, where the sensors observe noisy versions of a common binary information sequence and the AP's goal is to estimate this sequence by properly fusing the soft information output by the JCD algorithm. PMID:22291536
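The "balanced SNR" strategy described above amounts to channel inversion toward a common target SNR. A minimal sketch of that idea (the gains, noise power and target below are hypothetical, and this is not the paper's full algorithm, which also handles the limited-range and unbalanced cases):

```python
def balanced_snr_powers(gains, noise_power, target_snr_db):
    """Classical 'balanced SNR' feedback power control: each source inverts
    its own block-fading channel gain so all links hit the same SNR at the AP."""
    target = 10 ** (target_snr_db / 10)          # linear target SNR
    return [target * noise_power / g for g in gains]

gains = [1.0, 0.25, 0.5]                         # hypothetical channel power gains
noise_power = 1e-3
powers = balanced_snr_powers(gains, noise_power, target_snr_db=6.0)
snrs = [g * p / noise_power for g, p in zip(gains, powers)]
# All entries of snrs are equal: the weaker links simply spend more transmit power.
```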
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-29
... as ``flared joint.'' The flared joint, once made fast, permits brake fluid to flow through channels...), or push on ends (PO), or flanged ends and produced to the American Water Works Association (AWWA... flared joint, once made fast, permits brake fluid to flow through channels that never exceed 3.8...
Modeling Anisotropic Elastic Wave Propagation in Jointed Rock Masses
NASA Astrophysics Data System (ADS)
Hurley, R.; Vorobiev, O.; Ezzedine, S. M.; Antoun, T.
2016-12-01
We present a numerical approach for determining the anisotropic stiffness of materials with nonlinearly-compliant joints capable of sliding. The proposed method extends existing ones for upscaling the behavior of a medium with open cracks and inclusions to cases relevant to natural fractured and jointed rocks, where nonlinearly-compliant joints can undergo plastic slip. The method deviates from existing techniques by incorporating the friction and closure states of the joints, and recovers an anisotropic elastic form in the small-strain limit when joints are not sliding. We present the mathematical formulation of our method and use Representative Volume Element (RVE) simulations to evaluate its accuracy for joint sets with varying complexity. We then apply the formulation to determine anisotropic elastic constants of jointed granite found at the Nevada Nuclear Security Site (NNSS) where the Source Physics Experiments (SPE), a campaign of underground chemical explosions, are performed. Finally, we discuss the implementation of our numerical approach in a massively parallel Lagrangian code Geodyn-L and its use for studying wave propagation from underground explosions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Real-time detection of natural objects using AM-coded spectral matching imager
NASA Astrophysics Data System (ADS)
Kimachi, Akira
2004-12-01
This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable to monitoring dynamic behavior of natural objects in real-time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
Real-time detection of natural objects using AM-coded spectral matching imager
NASA Astrophysics Data System (ADS)
Kimachi, Akira
2005-01-01
This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable to monitoring dynamic behavior of natural objects in real-time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
Apparatus and method for routing a transmission line through a downhole tool
Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Briscoe, Michael; Reynolds, Jay
2006-07-04
A method for routing a transmission line through a tool joint having a primary and secondary shoulder, a central bore, and a longitudinal axis, includes drilling a straight channel, at a positive, nominal angle with respect to the longitudinal axis, through the tool joint from the secondary shoulder to a point proximate the inside wall of the central bore. The method further includes milling back, from within the central bore, a second channel to merge with the straight channel, thereby forming a continuous channel from the secondary shoulder to the central bore. In selected embodiments, drilling is accomplished by gun-drilling the straight channel. In other embodiments, the method includes tilting the tool joint before drilling to produce the positive, nominal angle. In selected embodiments, the positive, nominal angle is less than or equal to 15 degrees.
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.
2017-12-01
This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed, and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) using first-principles calculations from the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated against past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code SPECFEM3D (e.g. Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across complex 3D structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes occurring in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of the Rg waves by up to 20%.
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
Worldwide survey of direct-to-listener digital audio delivery systems development since WARC-1992
NASA Technical Reports Server (NTRS)
Messer, Dion D.
1993-01-01
Each country was allocated frequency band(s) for direct-to-listener digital audio broadcasting at WARC-92. These allocations were near 1500, 2300, and 2600 MHz. In addition, some countries are encouraging the development of digital audio broadcasting services for terrestrial delivery only in the VHF bands (at frequencies from roughly 50 to 300 MHz) and in the medium-wave (AM) broadcasting band (from roughly 0.5 to 1.7 MHz). Development activity has since increased explosively. This paper summarizes development as of February 1993, as known to the author. The information given includes the following characteristics, as appropriate, for each planned system: coverage areas, audio quality, number of audio channels, delivery via satellite, terrestrial, or both, carrier frequency bands, modulation methods, source coding, and channel coding. Most proponents claim that they will be operational in 3 or 4 years.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-08-08
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when detecting inefficient relay nodes. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
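The network-coding primitive underlying protocols of this kind can be illustrated in a few lines. This is a generic XOR-combining sketch, not the NCRP protocol itself: a relay linearly combines two equal-length packets, and any node that already holds one of them can recover the other from the coded packet:

```python
def xor_encode(p1: bytes, p2: bytes) -> bytes:
    """Combine two equal-length packets into one coded packet (bitwise XOR).
    XOR is its own inverse, so the same function also decodes."""
    assert len(p1) == len(p2), "packets must be padded to equal length"
    return bytes(a ^ b for a, b in zip(p1, p2))

pkt_a = b"HELLO"                      # hypothetical payloads
pkt_b = b"WORLD"
coded = xor_encode(pkt_a, pkt_b)      # the relay broadcasts this single packet
recovered_b = xor_encode(coded, pkt_a)  # a sink that already holds pkt_a recovers pkt_b
```

One broadcast of `coded` thus serves two receivers with complementary side information, which is where the energy and delay savings of network-coded routing come from.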
Code of Fair Testing Practices in Education (Revised)
ERIC Educational Resources Information Center
Educational Measurement: Issues and Practice, 2005
2005-01-01
A note from the Working Group of the Joint Committee on Testing Practices: The "Code of Fair Testing Practices in Education (Code)" prepared by the Joint Committee on Testing Practices (JCTP) has just been revised for the first time since its initial introduction in 1988. The revision of the Code was inspired primarily by the revision of…
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
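The code rate of such a sparse code follows directly from counting the words with exactly K "on" bits in a block of N slots. A small sketch (the specific N and K below are illustrative assumptions, not parameters from the program described):

```python
from math import comb, log2

def sparse_code_rate(N: int, K: int) -> float:
    """Rate (information bits per channel bit) of a code whose words are
    the binary length-N blocks containing exactly K ones: log2(C(N, K)) / N."""
    return log2(comb(N, K)) / N

rate = sparse_code_rate(N=32, K=4)   # a sparse block: 4 pulses in 32 slots
```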
75 FR 42719 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
...: Commander, Navy Expeditionary Combat Command, 1575 Gator Blvd, Joint Expeditionary Base Little Creek... Expeditionary Combat Command, Code (N8), 1575 Gator Blvd, Joint Expeditionary Base Little Creek, Virginia Beach... to the Commander, Navy Expeditionary Combat Command, Code (N8), 1575 Gator Blvd, Joint Expeditionary...
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
Moderate Deviation Analysis for Classical Communication over Quantum Channels
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
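For reference, the moderate-deviation tradeoff alluded to above can be stated informally as follows, where C is the capacity, V the channel dispersion, and (a_n) any sequence with a_n → 0 and n·a_n² → ∞; this is the classical-channel form, which the paper generalizes to classical-quantum channels:

```latex
R_n = C - a_n
\quad\Longrightarrow\quad
\log \varepsilon_n = -\frac{n\, a_n^{2}}{2V}\,\bigl(1 + o(1)\bigr)
```

That is, backing off from capacity slightly faster than the O(1/√n) channel-dispersion scale buys a subexponentially decaying error probability.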
Reliable quantum communication over a quantum relay channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyongyosi, Laszlo, E-mail: gyongyosi@hit.bme.hu; Imre, Sandor
2014-12-04
We show that reliable quantum communication over unreliable quantum relay channels is possible. The coding scheme combines results on the superadditivity of quantum channels with efficient quantum coding approaches.
Scan Line Difference Compression Algorithm Simulation Study.
1985-08-01
introduced during the signal transmission process. [Figure A-1, "Overall Data Compression Process": block diagram of the SLDC encoder (image source, conditioner, error-control encoder) feeding the channel, and the SLDC decoder (error-control decoder, reconstruction decoder).] ... of noise or an effective channel coding subsystem providing the necessary error control.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time correlated fading of channel; only modest amounts of interleaving are required to approach performance of memoryless channel; additional propagational results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
Joint digital signal processing for superchannel coherent optical communication systems.
Liu, Cheng; Pan, Jie; Detwiler, Thomas; Stark, Andrew; Hsueh, Yu-Ting; Chang, Gee-Kung; Ralph, Stephen E
2013-04-08
Ultra-high-speed optical communication systems which can support ≥ 1Tb/s per channel transmission will soon be required to meet the increasing capacity demand. However, 1Tb/s over a single carrier requires a high-level modulation format (e.g. 1024QAM), a high baud rate, or both. Alternatively, grouping a number of tightly spaced "sub-carriers" to form a terabit superchannel increases channel capacity while minimizing the need for high-level modulation formats and high baud rates, which may allow existing formats, baud rates and components to be exploited. In ideal Nyquist-WDM superchannel systems, optical subcarriers with rectangular spectra are tightly packed at a channel spacing equal to the baud rate, thus achieving the Nyquist bandwidth limit. However, in practical Nyquist-WDM systems, precise electrical or optical control of channel spectra is required to avoid strong inter-channel interference (ICI). Here, we propose and demonstrate a new "super receiver" architecture for practical Nyquist-WDM systems, which jointly detects and demodulates multiple channels simultaneously and mitigates the penalties associated with the limitations of generating ideal Nyquist-WDM spectra. Our receiver-side solution relaxes the filter requirements imposed on the transmitter. Two joint DSP algorithms are developed for linear ICI cancellation and joint carrier-phase recovery. Improved system performance is observed with both experimental and simulation data. Performance analysis under different system configurations is conducted to demonstrate the feasibility and robustness of the proposed joint DSP algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
Formal Models of Hardware and Their Applications to VLSI Design Automation.
1986-12-24
(University of Southern California; Office of Naval Research.) ... are classified as belonging to one of six different types. The dimensions of the routing channel are defined as functions of these random variables
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high data rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
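On an erasure channel, iterative LDPC decoding reduces to "peeling": repeatedly find a parity check with exactly one erased bit and solve for it. The sketch below is a generic peeling decoder on a toy (7,4) code with hypothetical parity checks, not the paper's protograph constructions:

```python
def peel_decode(H, received):
    """Iterative erasure ('peeling') decoder. H is a list of parity checks,
    each a list of bit indices; received uses None to mark erasures.
    Decoding stalls exactly when the remaining erasures contain a stopping set."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [i for i in check if bits[i] is None]
            if len(erased) == 1:
                # The erased bit equals the XOR of the known bits in this check.
                bits[erased[0]] = sum(bits[i] for i in check if bits[i] is not None) % 2
                progress = True
    return bits

# Toy (7,4) Hamming-style parity checks (hypothetical small example).
H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 1, 3, 6]]
codeword = [1, 0, 1, 1, 0, 0, 0]            # satisfies all three checks
received = [1, None, 1, None, 0, 0, 0]      # bits 1 and 3 erased by the channel
decoded = peel_decode(H, received)          # recovers the codeword
```

The "minimum stopping set size" objective mentioned in the abstract directly targets the failure mode of this decoder: a stopping set is a set of erasures in which every check touches at least two erased bits, so no check can be peeled.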
A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE
NASA Astrophysics Data System (ADS)
Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel
2004-06-01
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3% for the 18 and 14 mm helmets, and 10% for the 8 and 4 mm ones. In addition, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, infinity) have been investigated theoretically and computationally.
The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
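The (d, k) shorthand above translates directly into a run-length check on the transmitted binary sequence. A minimal sketch (the test sequences are illustrative; leading zeros before the first pulse are left unconstrained in this simplification):

```python
def satisfies_dk(sequence, d, k):
    """Check a binary sequence against (d, k) run-length constraints:
    at least d zeros must separate consecutive ones (minimum dead time),
    and no more than k zeros may elapse once the first one has been seen
    (slot-synchronization limit). k may be float('inf') for (d, infinity)."""
    run = None  # zeros counted since the last one; None before the first one
    for bit in sequence:
        if bit == 1:
            if run is not None and run < d:
                return False  # pulses spaced closer than the minimum dead time
            run = 0
        else:
            if run is not None:
                run += 1
                if run > k:
                    return False  # too long without a pulse to stay synchronized
    return True

ok = satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=3)   # zero-runs of 2 and 3: valid
bad = satisfies_dk([1, 1, 0, 0, 1], d=2, k=3)           # adjacent ones violate d
```

The constrained code's job is precisely to map arbitrary information bits onto sequences for which a check like this always passes.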
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A Matlab-based UWA/SCCDMA simulation system is designed. Simulation results show that the UWA/SCCDMA system based on RA, Turbo and LDPC coding performs well: the communication BER is below 10^-6 in an underwater acoustic channel with low signal-to-noise ratio (SNR) from -12 dB to -10 dB, about 2 orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
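For concreteness, the convolutional-coding baseline in schemes like this is often the classic rate-1/2, constraint-length-3 code with octal generators (7, 5). The abstract does not state the paper's exact code parameters, so the sketch below is a generic encoder for that textbook code (the Viterbi decoder would recover the input from its output):

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2, constraint-length-3 convolutional encoder with the classic
    (7, 5) octal generator polynomials. Emits two coded bits per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift the new bit into a 3-bit register
        out.append(bin(state & g1).count("1") % 2)  # parity against generator g1
        out.append(bin(state & g2).count("1") % 2)  # parity against generator g2
    return out

encoded = conv_encode([1, 0, 1, 1])   # 4 input bits -> 8 coded bits
```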
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
Rate-Compatible Punctured Convolutional (RCPC) codes for channel... vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding
BCM-2.0 - The new version of computer code "Basic Channeling with Mathematica©"
NASA Astrophysics Data System (ADS)
Abdrashitov, S. V.; Bogdanov, O. V.; Korotchenko, K. B.; Pivovarov, Yu. L.; Rozhkova, E. I.; Tukhfatullin, T. A.; Eikhorn, Yu. L.
2017-07-01
A new symbolic-numerical code devoted to the investigation of channeling phenomena in the periodic potential of a crystal has been developed. The code is written in the Wolfram Language, taking advantage of the analytical programming method. The newly developed packages were successfully applied to simulate scattering, radiation, electron-positron pair production and other effects connected with the channeling of relativistic particles in aligned crystals. The results of the simulation have been validated against data from channeling experiments carried out at SAGA LS.
Berben, Tom; Sorokin, Dimitry Y.; Ivanova, Natalia; ...
2015-10-26
Thioalkalivibrio thiocyanoxidans strain ARh 2 T is a sulfur-oxidizing bacterium isolated from haloalkaline soda lakes. It is a motile, Gram-negative member of the Gammaproteobacteria. Remarkable properties include the ability to grow on thiocyanate as the sole energy, sulfur and nitrogen source, and the capability of growth at salinities of up to 4.3 M total Na+. This draft genome sequence consists of 61 scaffolds comprising 2,765,337 bp, and contains 2616 protein-coding and 61 RNA-coding genes. This organism was sequenced as part of the Community Science Program of the DOE Joint Genome Institute.
Berben, Tom; Sorokin, Dimitry Y.; Ivanova, Natalia; ...
2015-11-19
Thioalkalivibrio paradoxus strain ARh 1 T is a chemolithoautotrophic, non-motile, Gram-negative bacterium belonging to the Gammaproteobacteria that was isolated from samples of haloalkaline soda lakes. It derives energy from the oxidation of reduced sulfur compounds and is notable for its ability to grow on thiocyanate as its sole source of electrons, sulfur and nitrogen. The full genome consists of 3,756,729 bp and comprises 3,500 protein-coding and 57 RNA-coding genes. Moreover, this organism was sequenced as part of the community science program at the DOE Joint Genome Institute.
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
5-31 Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder (pp. 167-168). ...Coding for AWGN channels has long been studied. There are well-known soft-decision codes, like turbo codes and LDPC codes, that can approach capacity to... 1. ...bits) low-density parity-check (LDPC) code. 2. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are
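The LDPC codes mentioned in these excerpts are defined entirely by parity checks: a binary vector is a codeword exactly when every row of the parity-check matrix H sums with it to zero modulo 2, which is also the stopping rule of iterative soft-decision decoders. A toy syndrome check (the matrix below is illustrative, not the code from the thesis):

```python
# Toy (6, 3) parity-check matrix; illustrative only.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(H, c):
    """Return H c^T mod 2; an all-zero syndrome means c satisfies
    every parity check, i.e. c is a codeword."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

print(syndrome(H, [1, 0, 1, 1, 1, 0]))  # [0, 0, 0] -> valid codeword
print(syndrome(H, [1, 0, 1, 1, 1, 1]))  # non-zero -> detected error
```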
MANTA--an open-source, high density electrophysiology recording suite for MATLAB.
Englitz, B; David, S V; Sorenson, M D; Shamma, S A
2013-01-01
The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.
PMID:23653593
Constrained coding for the deep-space optical channel
NASA Technical Reports Server (NTRS)
Moision, B. E.; Hamkins, J.
2002-01-01
We investigate methods of coding for a channel subject to a large dead-time constraint, i.e. a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.
1989-08-04
The undersigned, representing ... agree that as part of the joint Marketing Agreement between Rolm Mil-Spec and Data General for the Ada Development... the assembly without the pragma. Source code: in the following example, pragma INLINE applies to all the calls to SQUARE in WITH_INLINE. procedure WITH
OpenPET: A Flexible Electronics System for Radiotracer Imaging
NASA Astrophysics Data System (ADS)
Moses, W. W.; Buckley, S.; Vu, C.; Peng, Q.; Pavlov, N.; Choong, W.-S.; Wu, J.; Jackson, C.
2010-10-01
We present the design for OpenPET, an electronics readout system designed for prototype radiotracer imaging instruments. The critical requirements are that it has sufficient performance, channel count, channel density, and power consumption to service a complete camera, and yet be simple, flexible, and customizable enough to be used with almost any detector or camera design. An important feature of this system is that each analog input is processed independently. Each input can be configured to accept signals of either polarity as well as either differential or ground referenced signals. Each signal is digitized by a continuously sampled ADC, which is processed by an FPGA to extract pulse height information. A leading edge discriminator creates a timing edge that is “time stamped” by a TDC implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc. As all of this processing is controlled by firmware and software, it can be modified/customized easily. The system is open source, meaning that all technical data (specifications, schematics and board layout files, source code, and instructions) will be publicly available.
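The leading-edge discriminator time stamp described above can be sketched as a threshold crossing on the continuously sampled waveform, with sub-sample precision recovered by linear interpolation between the two bracketing samples; this is a generic illustration, not the OpenPET firmware:

```python
def leading_edge_time(samples, threshold):
    """Return the fractional sample index at which the waveform first
    crosses `threshold` on a rising edge, using linear interpolation
    between the bracketing samples; None if it never crosses."""
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < threshold <= hi:
            return (i - 1) + (threshold - lo) / (hi - lo)
    return None

pulse = [0.0, 0.1, 0.4, 1.0, 0.7, 0.2]
print(leading_edge_time(pulse, 0.25))  # 1.5: halfway between samples 1 and 2
```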
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet
1988-01-01
This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index, and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.
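The per-epoch statistics the program stores (means and standard deviations of each 15-sec data epoch) amount to a simple windowed reduction; a Python sketch of just that step (the original software is FORTRAN 77, and the names here are illustrative):

```python
from statistics import mean, pstdev

def epoch_stats(samples, rate_hz, epoch_s=15):
    """Split one channel of `samples` into consecutive epochs of
    `epoch_s` seconds and return (mean, population std) per epoch.
    Trailing samples that do not fill a whole epoch are dropped."""
    n = int(rate_hz * epoch_s)
    stats = []
    for start in range(0, len(samples) - n + 1, n):
        epoch = samples[start:start + n]
        stats.append((mean(epoch), pstdev(epoch)))
    return stats

# Two epochs at a toy 0.2 Hz rate: 3 samples per 15-s epoch.
print(epoch_stats([1, 2, 3, 4, 4, 4], rate_hz=0.2))
```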
Opportunistic quantum network coding based on quantum teleportation
NASA Astrophysics Data System (ADS)
Shang, Tao; Du, Gang; Liu, Jian-wei
2016-04-01
It seems impossible to endow a quantum network with opportunistic characteristics, since a quantum channel cannot be overheard without disturbance. In this paper, we propose an opportunistic quantum network coding scheme that takes full advantage of the channel characteristics of quantum teleportation. Concretely, it utilizes the quantum channel for secure transmission of quantum states and can detect eavesdroppers by means of quantum channel verification. Moreover, it utilizes the classical channel both for opportunistic listening to neighbor states and for opportunistic coding by broadcasting measurement outcomes. Analysis results show that our scheme can reduce the number of transmissions over classical channels for relay nodes and can effectively defend against classical passive attacks and quantum active attacks.
EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, Nan-Suey (Technical Monitor)
2004-01-01
EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for PDF (Probability Density Function) and it is developed mainly for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. The source code of EUPDF-II will be available with National Combustion Code (NCC) as a complete package.
High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual
NASA Technical Reports Server (NTRS)
Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.
2004-01-01
This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria N.; Salko, Robert K.
Coolant-Boiling in Rod Arrays - Two Fluids (COBRA-TF) is a thermal/hydraulic (T/H) simulation code designed for light water reactor (LWR) vessel analysis. It uses a two-fluid, three-field (i.e., fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of 9 conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and has been used and modified by several institutions over the last few decades. COBRA-TF also found use at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG) and has been improved, updated, and subsequently re-branded as CTF. As part of the improvement process, it was necessary to generate sufficient documentation for the open-source code, which had lacked such material upon being adopted by RDFMG. This document serves mainly as a theory manual for CTF, detailing the many two-phase heat transfer, drag, and important accident scenario models contained in the code, as well as the numerical solution process utilized. Coding of the models is also discussed, all with consideration for updates that have been made when transitioning from COBRA-TF to CTF. Further documentation, focusing on code input deck generation and source code global variable and module listings, is also available at RDFMG.
NASA Astrophysics Data System (ADS)
Gao, Qian
For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort has been made by academia to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges, such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling, need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into the categories of these challenges. Rigorous proofs and analyses are provided for each of our works to make a fair comparison with the corresponding peer works and to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, MSM-JDCM has many other merits, such as being capable of mitigating nonlinear distortion by including a peak-to-average-power ratio (PAPR) constraint, minimizing the inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reducing the bit error rate (BER) by combining with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with cross-talk.
Our novel constellation design scheme, termed CSK-Advanced, is compared with the conventional decoupled system with the same spectrum efficiency to demonstrate its power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average-power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve a lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. In addition, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 degrades the estimate of the source-to-relay (StR) channel in phase 2. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) of both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme, and pseudo-random codes are employed for different users. We consider a heavy-traffic scenario, in which each user always has packets to transmit in the scheduled time slots.
If the relay is scheduled for transmission together with users, then it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression of throughput is first derived and then used to develop a scheduling algorithm to maximize the throughput. Our full-duplex scheduling is compared with a half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to employment of both MIMO and CDMA are observed.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
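The benefit of erasing unreliable bits in Scheme 2 follows from the errors-and-erasures decoding condition for an (n, k) RS code with minimum distance n - k + 1: decoding succeeds whenever 2e + f <= n - k, so an erasure consumes half as much redundancy as an undetected error. A quick arithmetic sketch (RS(255, 223) is used only as a familiar example):

```python
def rs_decodable(n, k, errors, erasures):
    """An (n, k) Reed-Solomon code (minimum distance n - k + 1) can
    correct e errors and f erasures whenever 2e + f <= n - k."""
    return 2 * errors + erasures <= n - k

# RS(255, 223): n - k = 32 redundancy symbols.
print(rs_decodable(255, 223, errors=16, erasures=0))   # True:  32 <= 32
print(rs_decodable(255, 223, errors=10, erasures=12))  # True:  32 <= 32
print(rs_decodable(255, 223, errors=17, erasures=0))   # False: 34 > 32
```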
Modulation and coding for fast fading mobile satellite communication channels
NASA Technical Reports Server (NTRS)
Mclane, P. J.; Wittke, P. H.; Smith, W. S.; Lee, A.; Ho, P. K. M.; Loo, C.
1988-01-01
The performance of Gaussian baseband filtered minimum shift keying (GMSK) using differential detection in fast Rician fading is discussed, with a novel treatment of the inherent intersymbol interference (ISI) leading to an exact solution. Trellis-coded differentially coded phase shift keying (DPSK) with a convolutional interleaver is considered. The channel is the Rician channel with the line-of-sight component subject to a lognormal transformation.
Modular, security enclosure and method of assembly
Linker, Kevin L.; Moyer, John W.
1995-01-01
A transportable, reusable rapidly assembled and disassembled, resizable modular, security enclosure utilizes a stepped panel construction. Each panel has an inner portion and an outer portion which form joints. A plurality of channels can be affixed to selected joints of the panels. Panels can be affixed to a base member and then affixed to one another by the use of elongated pins extending through the channel joints. Alternatively, the base member can be omitted and the panels themselves can be used as the floor of the enclosure. The pins will extend generally parallel to the joint in which they are located. These elongated pins are readily inserted into and removable from the channels in a predetermined sequence to allow assembly and disassembly of the enclosure. A door constructed from panels is used to close the opening to the enclosure.
The performance of trellis coded multilevel DPSK on a fading mobile satellite channel
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1987-01-01
The performance of trellis coded multilevel differential phase-shift-keying (MDPSK) over Rician and Rayleigh fading channels is discussed. For operation at L-band, this signalling technique leads to a more robust system than the coherent system with dual pilot tone calibration previously proposed for UHF. The results are obtained using a combination of analysis and simulation. The analysis shows that the design criterion for trellis codes to be operated on fading channels with interleaving/deinterleaving is no longer free Euclidean distance. The correct design criterion for optimizing the bit error probability of trellis coded MDPSK over fading channels is presented, along with examples illustrating its application.
Banaszek, Konrad; Dragan, Andrzej; Wasilewski, Wojciech; Radzewicz, Czesław
2004-06-25
We present an experiment demonstrating the entanglement-enhanced capacity of a quantum channel with correlated noise, modeled by a fiber optic link exhibiting fluctuating birefringence. In this setting, introducing entanglement between two photons is required to maximize the amount of information that can be encoded into their joint polarization degree of freedom. We demonstrated this effect using a fiber-coupled source of entangled photon pairs based on spontaneous parametric down-conversion, and a linear-optics Bell state measurement. The obtained experimental classical capacity with entangled states is 0.82±0.04 per photon pair, exceeding by a factor of approximately 2.5 the theoretical upper limit when no quantum correlations are allowed.
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution, compared to direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered, based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS one, predicted the friction velocity within a 2.0% accuracy interval.
Analysis and Calculation of the Fluid Flow and the Temperature Field by Finite Element Modeling
NASA Astrophysics Data System (ADS)
Dhamodaran, M.; Jegadeesan, S.; Kumar, R. Praveen
2018-04-01
This paper presents a fundamental and accurate approach to study numerical analysis of fluid flow and heat transfer inside a channel. In this study, the Finite Element Method is used to analyze the channel, which is divided into small subsections. The small subsections are discretized using higher number of domain elements and the corresponding number of nodes. MATLAB codes are developed to be used in the analysis. Simulation results showed that the analyses of fluid flow and temperature are influenced significantly by the changing entrance velocity. Also, there is an apparent effect on the temperature fields due to the presence of an energy source in the middle of the domain. In this paper, the characteristics of flow analysis and heat analysis in a channel have been investigated.
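As a concrete miniature of the assemble-and-solve pattern the paper describes, here is a 1D analogue: linear finite elements for steady heat conduction -k u'' = q with fixed end temperatures, solved with the Thomas algorithm (a Python sketch for illustration only; the paper's own analysis uses MATLAB on a 2D channel):

```python
def fem_1d_heat(n_el, L=1.0, k=1.0, q=1.0):
    """Solve -k u'' = q on (0, L) with u(0) = u(L) = 0 using n_el
    linear elements: assemble the tridiagonal stiffness system and
    solve it by the Thomas algorithm. Returns nodal temperatures."""
    h = L / n_el
    n = n_el - 1                  # number of interior nodes
    a = [-k / h] * n              # constant sub/super-diagonal
    b = [2 * k / h] * n           # main diagonal
    f = [q * h] * n               # consistent load vector
    for i in range(1, n):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * a[i - 1]
        f[i] -= m * f[i - 1]
    u = [0.0] * n                 # back substitution
    u[-1] = f[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (f[i] - a[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]      # re-attach boundary nodes

u = fem_1d_heat(8)
# For this problem the nodal values are exact: u(L/2) = q L^2 / (8 k).
print(abs(u[4] - 1.0 / 8.0) < 1e-12)
```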
Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels
NASA Astrophysics Data System (ADS)
Fusco, Tilde; Petrella, Angelo; Tanda, Mario
2009-12-01
The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.
NASA Astrophysics Data System (ADS)
Hashizume, H.; Ito, S.; Yanagi, N.; Tamura, H.; Sagara, A.
2018-02-01
Segment fabrication is now a candidate for the design of superconducting helical magnets in the helical fusion reactor FFHR-d1, which adopts the joint winding of high-temperature superconducting (HTS) helical coils as a primary option and the ‘remountable’ HTS helical coil as an advanced option. This paper reports on recent progress in two key technologies: the mechanical joints (remountable joints) of the HTS conductors and the metal porous media inserted into the cooling channel for segment fabrication. Through our research activities it has been revealed that heat treatment during fabrication of the joint can reduce joint resistance and its dispersion, which can shorten the fabrication process and be applied to bent conductor joints. Also, heat transfer correlations of the cooling channel were established to evaluate heat transfer performance with various cryogenic coolants based on the correlations to analyze the thermal stability of the joint.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
* End-to-end MATLAB packet simulation platform.
* Low-density parity-check code (LDPCC).
* Field trials with Silvus DSP MIMO testbed.
* High mobility... incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...
System parameters: Channel coding: binary convolutional code or LDPC. Packet length: 0 to 2^16-1 bytes. Coding rate: 1/2, 2/3, 3/4, 5/6. MIMO channel training length: 0-4 symbols.
Properties of a certain stochastic dynamical system, channel polarization, and polar codes
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2010-06-01
A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
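For the binary erasure channel the polarization dynamics are fully explicit: one combine/split step maps a channel with erasure probability z into a degraded channel with 2z - z^2 and an upgraded one with z^2. A short sketch of this recursion (the BEC is the standard illustrative case, not the general construction):

```python
def polarize_bec(z0, levels):
    """Track the erasure probabilities of the 2**levels synthetic
    channels obtained by recursively combining/splitting a BEC(z0)."""
    zs = [z0]
    for _ in range(levels):
        zs = [f(z) for z in zs
                   for f in (lambda z: 2 * z - z * z,  # degraded channel
                             lambda z: z * z)]         # upgraded channel
    return zs

zs = polarize_bec(0.5, 10)
good = sum(z < 1e-3 for z in zs)      # nearly noiseless channels
bad = sum(z > 1 - 1e-3 for z in zs)   # nearly useless channels
print(good / len(zs), bad / len(zs))  # fractions polarize as levels grow
```

Note that each step preserves the average erasure probability, ((2z - z^2) + z^2)/2 = z, so polarization sorts the fixed total capacity into extreme channels rather than creating capacity.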
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Liu, Yang; Han, Guangjie; Shi, Sulong; Li, Zhengquan
2018-06-20
This study investigates the superiority of cooperative broadcast transmission over traditional orthogonal schemes when applied in a downlink relaying broadcast channel (RBC). Two proposed cooperative broadcast transmission protocols, one with an amplify-and-forward (AF) relay, and the other with a repetition-based decode-and-forward (DF) relay, are investigated. By utilizing superposition coding (SupC), the source and the relay transmit the private user messages simultaneously instead of sequentially as in traditional orthogonal schemes, which means the channel resources are reused and an increased channel degree of freedom is available to each user, hence the half-duplex penalty of relaying is alleviated. To facilitate a performance evaluation, theoretical outage probability expressions of the two broadcast transmission schemes are developed, based on which, we investigate the minimum total power consumption of each scheme for a given traffic requirement by numerical simulation. The results provide details on the overall system performance and fruitful insights on the essential characteristics of cooperative broadcast transmission in RBCs. It is observed that better overall outage performances and considerable power gains can be obtained by utilizing cooperative broadcast transmissions compared to traditional orthogonal schemes.
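As a hedged illustration of the kind of outage analysis the abstract develops (the paper derives expressions for the full AF/DF relaying protocols; the sketch below treats only the elementary single Rayleigh-fading link that such derivations build on):

```python
import numpy as np

def outage_closed_form(snr, rate):
    """P[ log2(1 + snr*|h|^2) < rate ] for a Rayleigh-fading link
    with E|h|^2 = 1, i.e. 1 - exp(-(2^rate - 1)/snr)."""
    return 1.0 - np.exp(-(2.0**rate - 1.0) / snr)

def outage_monte_carlo(snr, rate, n=200_000, seed=1):
    """Monte Carlo check: |h|^2 is exponential(1) under Rayleigh fading."""
    rng = np.random.default_rng(seed)
    g = rng.exponential(1.0, n)
    return float(np.mean(np.log2(1.0 + snr * g) < rate))
```

The closed form and the simulation agree closely, which is the same style of validation the paper performs for its (more involved) broadcast-transmission expressions.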
NASA Astrophysics Data System (ADS)
Zeng, Hai-Rong; Song, Hui-Zhen
1999-05-01
Based on a three-dimensional joint finite-element method, this paper discusses the theory and methodology of inverting geodetic data. The FEM formulation and inversion formulas are given in detail, and a related code is developed. Using the Green's function of the 3-D FEM, we invert geodetic measurements of coseismic deformation of the 1989 M S=7.1 Loma Prieta earthquake to determine its source mechanism. The result indicates that the slip on the fault plane is very heterogeneous. The maximum slip and shear stress are located about 10 km to the northwest of the earthquake source; the stress drop is more than 1 MPa.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
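For concreteness, the binomial codewords being compared can be written down directly in the Fock basis, following the standard (N, S) binomial-code definition from the cat/binomial-code literature; the function name and default truncation are my own.

```python
import numpy as np
from math import comb

def binomial_codewords(N, S, dim=None):
    """Logical |0>, |1> of an (N, S) binomial code in the Fock basis:
        |W_up/down> = 2^{-N/2} * sum_{p even/odd, 0<=p<=N+1}
                      sqrt(C(N+1, p)) |p*(S+1)>
    The two words occupy Fock states spaced by S+1 photons."""
    if dim is None:
        dim = (N + 1) * (S + 1) + 1  # smallest space holding both words
    up = np.zeros(dim)
    down = np.zeros(dim)
    for p in range(N + 2):
        amp = np.sqrt(comb(N + 1, p) / 2.0**N)
        (up if p % 2 == 0 else down)[p * (S + 1)] = amp
    return up, down
```

Because even-p and odd-p terms occupy disjoint Fock states, the two codewords are exactly orthogonal, and the binomial coefficients make each word normalized.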
Coherent state coding approaches the capacity of non-Gaussian bosonic channels
NASA Astrophysics Data System (ADS)
Huber, Stefan; König, Robert
2018-05-01
The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, DoJun; Lin, Shu
1997-01-01
In this paper, we will use the construction technique proposed in to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and the fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in the literature.
NASA Astrophysics Data System (ADS)
Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng
2018-01-01
In this paper, a multi-bit wavelength coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selection switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength coding PSK system is simulated to show the efficiency of the proposed method. The simulated results demonstrate that the stealth signal, after being encoded and modulated, is well hidden in both the time and spectral domains beneath the public channel and the noise existing in the system. Moreover, even if the principle of this scheme and the existence of the stealth channel are known to an eavesdropper, the probability of recovering the stealth data is less than 0.02 if the key is unknown. Thus it can protect the security of the stealth channel more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10-9 bit error rate, and the public channel has no influence on the reception of the stealth channel.
ERIC Educational Resources Information Center
Simpson, Timothy J.
Paivio's Dual Coding Theory has received widespread recognition for its connection between visual and aural channels of internal information processing. The use of only two channels, however, cannot satisfactorily explain the effects witnessed every day. This paper presents a study suggesting the presence of a third, kinesthetic channel, currently…
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Courturier, Servanne; Levy, Yannick; Mills, Diane G.; Perez, Lance C.; Wang, Fu-Quan
1993-01-01
In his seminal 1948 paper 'The Mathematical Theory of Communication,' Claude E. Shannon derived the 'channel coding theorem' which has an explicit upper bound, called the channel capacity, on the rate at which 'information' could be transmitted reliably on a given communication channel. Shannon's result was an existence theorem and did not give specific codes to achieve the bound. Some skeptics have claimed that the dramatic performance improvements predicted by Shannon are not achievable in practice. The advances made in the area of coded modulation in the past decade have made communications engineers optimistic about the possibility of achieving or at least coming close to channel capacity. Here we consider the possibility in the light of current research results.
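Shannon's bound itself is a one-line formula; a trivial sketch for the real-valued AWGN channel (standard textbook expression, not code from this survey):

```python
import numpy as np

def awgn_capacity(snr_db):
    """Shannon capacity of the real AWGN channel in bits per channel use:
        C = (1/2) * log2(1 + SNR)
    with SNR given here in decibels."""
    snr = 10.0**(snr_db / 10.0)
    return 0.5 * np.log2(1.0 + snr)
```

At 0 dB (SNR = 1) the capacity is exactly half a bit per channel use; the coded-modulation schemes the article discusses are attempts to operate reliably near this curve.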
Joint body and surface wave tomography applied to the Toba caldera complex (Indonesia)
NASA Astrophysics Data System (ADS)
Jaxybulatov, Kairly; Koulakov, Ivan; Shapiro, Nikolai
2016-04-01
We developed a new algorithm for a joint body and surface wave tomography. The algorithm is a modification of the existing LOTOS code (Koulakov, 2009) developed for local earthquake tomography. The input data for the new method are travel times of P and S waves and dispersion curves of Rayleigh and Love waves. The main idea is that the two data types have complementary sensitivities. The body-wave data have good resolution at depth, where we have enough crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution. The surface wave dispersion curves can be retrieved from the correlations of the ambient seismic noise and in this case the sampled path distribution does not depend on the earthquake sources. The contributions of the two data types to the inversion are controlled by the weighting of the respective equations. One of the clearest cases where such an approach may be useful is volcanic systems in subduction zones with their complex magmatic feeding systems that have deep roots in the mantle and intermediate magma chambers in the crust. In these areas, the joint inversion of different types of data helps us to build a comprehensive understanding of the entire system. We apply our algorithm to data collected in the region surrounding the Toba caldera complex (north Sumatra, Indonesia) during two temporary seismic experiments (IRIS, PASSCAL, 1995, GFZ, LAKE TOBA, 2008). We invert 6644 P and 5240 S wave arrivals and ~500 group velocity dispersion curves of Rayleigh and Love waves. We present a series of synthetic tests and real data inversions which show that the joint inversion approach gives more reliable results than separate inversions of the two data types. Koulakov, I., LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. seism. Soc. Am., 99(1), 194-214, 2009, doi:10.1785/0120080013
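The weighting of the two equation systems that the abstract mentions can be sketched as a stacked, weighted least-squares problem (an illustrative toy, not the LOTOS implementation; matrix names are mine):

```python
import numpy as np

def joint_inversion(G_body, d_body, G_surf, d_surf, w):
    """Least-squares joint inversion of two data sets, with a relative
    weight w applied to the surface-wave equations:
        minimize ||G_body m - d_body||^2 + w^2 ||G_surf m - d_surf||^2
    Stacking the weighted systems and solving once realizes the trade-off."""
    A = np.vstack([G_body, w * G_surf])
    b = np.concatenate([d_body, w * d_surf])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```

With noise-free synthetic data from a common model, the stacked system recovers the model exactly for any positive weight; with real data the weight controls how much each data type shapes the result.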
Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Rao, Xiongbin; Lau, Vincent K. N.
2014-06-01
To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
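A minimal sketch of simultaneous orthogonal matching pursuit (SOMP), the generic form of the joint-sparsity recovery idea described above (the paper's algorithm and analysis are more elaborate; the function name and toy setup are mine):

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: recover the jointly sparse columns of X from
    Y = Phi @ X, assuming every column of X shares one support of size k.
    Each column of Y plays the role of one user's compressed measurement."""
    m, n = Phi.shape
    support = []
    R = Y.copy()
    for _ in range(k):
        # pick the atom most correlated with the residual, summed over users
        scores = np.sum(np.abs(Phi.T @ R)**2, axis=1)
        scores[support] = -np.inf  # do not reselect chosen atoms
        support.append(int(np.argmax(scores)))
        sub = Phi[:, support]
        X_s, *_ = np.linalg.lstsq(sub, Y, rcond=None)
        R = Y - sub @ X_s
    X = np.zeros((n, Y.shape[1]))
    X[support, :] = X_s
    return X, sorted(support)
```

Pooling the correlation scores across users is what exploits the shared support: an atom that is weak for one user can still be selected because the other users reinforce it.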
Coherent-state constellations and polar codes for thermal Gaussian channels
NASA Astrophysics Data System (ADS)
Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.
2017-06-01
Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.
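The channel-polarization machinery underlying polar codes is easiest to see on the binary erasure channel; the sketch below is the standard textbook recursion, not code from this paper:

```python
def polarize_bec(eps, n_levels):
    """Track the erasure probabilities of the synthesized bit-channels of
    a BEC(eps) through n_levels of Arikan's polar transform: a channel
    with erasure probability z splits into a worse channel (2z - z^2)
    and a better channel (z^2)."""
    zs = [eps]
    for _ in range(n_levels):
        zs = [z2 for z in zs for z2 in (2 * z - z * z, z * z)]
    return zs
```

The average erasure probability is preserved at every level, while individual channels are driven toward 0 or 1; information is then sent only on the near-perfect channels, which is the mechanism the coherent-state-constellation construction inherits.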
NASA Astrophysics Data System (ADS)
Zhang, H.; Thurber, C. H.; Maceira, M.; Roux, P.
2013-12-01
The crust around the San Andreas Fault Observatory at depth (SAFOD) has been the subject of many geophysical studies aimed at characterizing in detail the fault zone structure and elucidating the lithologies and physical properties of the surrounding rocks. Seismic methods in particular have revealed the complex two-dimensional (2D) and three-dimensional (3D) structure of the crustal volume around SAFOD and the strong velocity reduction in the fault damage zone. In this study we conduct a joint inversion using body-wave arrival times and surface-wave dispersion data to image the P-and S-wave velocity structure of the upper crust surrounding SAFOD. The two data types have complementary strengths - the body-wave data have good resolution at depth, albeit only where there are crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution and are not dependent on the earthquake source distribution because they are derived from ambient noise. The body-wave data are from local earthquakes and explosions, comprising the dataset analyzed by Zhang et al. (2009). The surface-wave data are for Love waves from ambient noise correlations, and are from Roux et al. (2011). The joint inversion code is based on the regional-scale version of the double-difference (DD) tomography algorithm tomoDD. The surface-wave inversion code that is integrated into the joint inversion algorithm is from Maceira and Ammon (2009). The propagator matrix solver in the algorithm DISPER80 (Saito, 1988) is used for the forward calculation of dispersion curves from layered velocity models. We examined how the structural models vary as we vary the relative weighting of the fit to the two data sets and in comparison to the previous separate inversion results. The joint inversion with the 'optimal' weighting shows more clearly the U-shaped local structure from the Buzzard Canyon Fault on the west side of SAF to the Gold Hill Fault on the east side.
A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs
Liu, Wan-Yu; Chou, Chun-Hung
2014-01-01
This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology so that more concurrent transmissions and suppressed overlapping channel interference can be achieved. Although previous works have studied this joint problem, their linear programming models did not incorporate some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990
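The simulated-annealing search the paper proposes follows the generic SA loop; a sketch with an illustrative scalar problem (the paper's encoding over time-slot configurations is far richer than this toy):

```python
import math
import random

def simulated_annealing(init, neighbor, cost, t0=1.0, alpha=0.995,
                        iters=5000, seed=0):
    """Generic simulated annealing: accept downhill moves always and
    uphill moves with probability exp(-delta/T), cooling geometrically."""
    rng = random.Random(seed)
    x, cx = init, cost(init)
    best, cbest = x, cx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < cx or rng.random() < math.exp(-(cy - cx) / t):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
        t *= alpha
    return best, cbest
```

For the joint routing/scheduling problem, `neighbor` would perturb one slot's routing or channel-width assignment rather than a scalar, but the acceptance and cooling logic is the same.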
2008-12-01
The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in … of 1,800 bits per second. With the use of FEC coding, the channel data rate is 2,250 bits per second; however, the information data rate is still the … Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps.
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions are suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced; then the probability distribution is optimized according to the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of recovered images at the same overhead.
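Luby's robust soliton distribution, the baseline against which the proposed optimization is compared, can be generated directly; the sketch below follows the standard construction (parameters c and delta are the usual tuning constants, not values from this paper):

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code with k input
    symbols: the ideal soliton rho plus Luby's spike term tau,
    renormalized. Index d of the returned list is P(degree = d)."""
    s = c * math.log(k / delta) * math.sqrt(k)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * (k + 1)
    pivot = int(round(k / s))
    for d in range(1, k + 1):
        if d < pivot:
            tau[d] = s / (k * d)
        elif d == pivot:
            tau[d] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)  # normalization constant
    return [(rho[d] + tau[d]) / z for d in range(k + 1)]
```

Low degrees dominate (degree 2 carries most of the mass), which is exactly the property the paper's sparse-degree selection then refines for finite-length image transmission.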
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits-per-character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters-decoded-in-error and of characters-printed-in-error-per-bit-error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular, block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
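The error-propagation behavior of variable-length codes that the study measures can be reproduced with a toy Huffman coder (illustrative code, not from the report): a single flipped bit desynchronizes the decoder, so the decoded text differs beyond one character.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code from the character frequencies of `text`.
    Heap entries carry a tiebreaker index so characters and internal
    tree nodes are never compared directly."""
    heap = [(n, i, ch) for i, (ch, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, a = heapq.heappop(heap)
        n2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, i, (a, b)))
        i += 1
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            code[node] = prefix or "0"
    walk(heap[0][2], "")
    return code

def decode(bits, code):
    """Prefix-free decoding; any trailing partial codeword is dropped."""
    inv = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

Round-tripping clean bits recovers the text exactly; flipping even the first bit yields a different decoding, which is the propagation effect that made comma-free codes attractive in the study.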
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-01-01
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and jointly decodes packets with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when detecting inefficient relay nodes. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs. PMID:28786915
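The elementary network-coding operation that protocols like NCRP build on is bytewise XOR of packets; a minimal sketch (helper names are my own):

```python
def xor_encode(p1, p2):
    """Network-coded packet: bytewise XOR of two equal-length packets.
    A relay can broadcast this single coded packet instead of forwarding
    both originals."""
    return bytes(a ^ b for a, b in zip(p1, p2))

def xor_decode(coded, known):
    """A node that already holds one of the original packets recovers
    the other, since (p1 XOR p2) XOR p1 = p2."""
    return xor_encode(coded, known)
```

This is why multicast helps: one coded transmission serves two receivers that each hold a different original, halving the relay's airtime for that exchange.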
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
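The classical backbone of the scheme, secret sharing via Lagrange interpolation, can be sketched as plain Shamir sharing over a prime field (the paper's m-bonacci/OAM machinery is omitted; the prime choice and function names are mine):

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def make_shares(secret, threshold, n, seed=0):
    """Split `secret` into n shares, any `threshold` of which suffice:
    the secret is the constant term of a random degree-(threshold-1)
    polynomial, and share i is the point (i, f(i)) over GF(P)."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); modular inverses are
    computed with Fermat's little theorem since P is prime."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any subset of shares of at least threshold size interpolates the same polynomial and hence the same constant term, which is the threshold property the hybrid scheme adapts.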
Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor
Hu, Po; Wilson, Paul
2014-01-01
The new capability is added to the existing coupled code package PARCS/RELAP5 in order to analyze the SCWR design under supercritical pressure with separated water coolant and moderator channels. This expansion is carried out in both codes. In PARCS, modification is focused on extending the water property tables to supercritical pressure, modifying the variable mapping input file and the related code module for processing thermal-hydraulic information from the separated coolant/moderator channels, and modifying the neutronics feedback module to deal with the separated coolant/moderator channels. In RELAP5, modification is focused on incorporating more accurate water properties near SCWR operation/transient pressure and temperature in the code. Confirming tests of the modifications are presented, and the major analysis results from the extended code package are summarized.
A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)
NASA Technical Reports Server (NTRS)
Kelly, J. J.; Abu-Khajeel, H.
1997-01-01
This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:
1. Single-channel impedance calculation - linear version (SCIC)
2. Single-channel impedance calculation - nonlinear version (SCICNL)
3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)
Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files and output files.
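The transmission-matrix bookkeeping the guide describes reduces to multiplying 2×2 ABCD matrices and mapping a termination impedance through the cascade; a sketch of that standard technique (not the ZKTL code itself, and with normalized impedances):

```python
import numpy as np

def cascade(matrices):
    """Multiply the 2x2 transmission (ABCD) matrices of liner elements,
    in the order sound passes through them."""
    T = np.eye(2, dtype=complex)
    for M in matrices:
        T = T @ M
    return T

def input_impedance(T, z_term):
    """Impedance seen at the input of a cascaded section terminated in
    z_term:  Z_in = (A*z_term + B) / (C*z_term + D)."""
    (A, B), (C, D) = T
    return (A * z_term + B) / (C * z_term + D)
```

As a sanity check, an identity section leaves the impedance unchanged, and a quarter-wave-like section with A = D = 0, B = C = j inverts a normalized impedance (z becomes 1/z), the classic impedance-transformer behavior.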
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from the error propagation which is typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
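The fixed-rate property the abstract emphasizes is easy to see in a toy uniform scalar quantizer (illustrative only, not the SVQ construction): every sample maps to a fixed-length index, so a bit error corrupts one sample without desynchronizing the stream.

```python
import numpy as np

def uniform_quantize(x, rate_bits, lo=-4.0, hi=4.0):
    """Fixed-rate uniform scalar quantizer with 2**rate_bits levels on
    [lo, hi]. Returns the fixed-length indices and the midpoint
    reconstruction. Because indices have fixed length, one bit error
    affects one sample only (no error propagation, unlike
    variable-length entropy codes)."""
    levels = 2**rate_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1).astype(int)
    return idx, lo + (idx + 0.5) * step
```

Increasing the rate shrinks the step size and hence the mean-squared error, the usual rate-distortion trade-off that SVQ approaches near-optimally for memoryless sources.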
Characterization of the graphite pile as a source of thermal neutrons
NASA Astrophysics Data System (ADS)
Vykydal, Zdenek; Králík, Miloslav; Jančář, Aleš; Kopecký, Zdeněk; Dressler, Jan; Veškrna, Martin
2015-11-01
A new graphite pile designed to serve as a standard source of thermal neutrons has been built at the Czech Metrology Institute. The actual dimensions of the pile are 1.95 m (W)×1.95 m (L)×2.0 m (H). At its center, there is a measurement channel whose dimensions are 0.4 m×0.4 m×1.25 m (depth). The channel is equipped with a calibration bench, which allows reproducible placement of the tested/calibrated device. At a distance of 80 cm from the channel axis, six holes are symmetrically located, allowing the placement of radionuclide neutron sources of Pu-Be and/or Am-Be type. The spatial distribution of thermal neutron fluence in the cavity was calculated in detail with the MCNP neutron transport code. Experimentally, it was measured with two active detectors: a small 3He proportional detector by the French company LMT, type 0.5 NH 1/1 KF, and a silicon pixel detector Timepix with a 10B converter foil. The relative values of thermal neutron fluence rate obtained with the active detectors were converted to absolute ones using thermal neutron fluence rates measured by means of gold foil activation. The quality of the thermal neutron field was characterized by the cadmium ratio.
Measuring joint cartilage thickness using reflectance spectroscopy non-invasively and in real-time
NASA Astrophysics Data System (ADS)
Canpolat, Murat; Denkceken, Tuba; Karagol, Cosar; Aydin, Ahmet T.
2011-03-01
Joint cartilage thickness has been estimated using spatially resolved steady-state reflectance spectroscopy, non-invasively and in real time. The system consists of a miniature UV-VIS spectrometer, a halogen tungsten light source, and an optical fiber probe with six 400 μm diameter fibers. The first fiber was used to deliver the light to the cartilage and the other five were used to detect back-reflected diffused light. Distances from the detector fibers to the source fiber were 0.8 mm, 1.6 mm, 2.4 mm, 3.2 mm and 4 mm. Spectra of back-reflected diffused light were taken on 40 bovine patella cartilages. The samples were divided into four groups: the first group was the control group with undamaged cartilages; in the 2nd, 3rd and 4th groups cartilage thickness was reduced by approximately 25%, 50% and 100%, respectively. A correlation was found between cartilage thickness and hemoglobin absorption of light in the wavelength range of 500-600 nm for the source-detector pairs. The proposed system, with an optical fiber probe less than 4 mm in diameter, has the potential for cartilage thickness assessment through an arthroscopy channel in real time without damaging the cartilage.
Coherent UDWDM PON with joint subcarrier reception at OLT.
Kottke, Christoph; Fischer, Johannes Karl; Elschner, Robert; Frey, Felix; Hilt, Jonas; Schubert, Colja; Schmidt, Daniel; Wu, Zifeng; Lankl, Berthold
2014-07-14
In this contribution, we report on the experimental investigation of an ultra-dense wavelength-division multiplexing (UDWDM) upstream link with up to 700 × 2.488 Gb/s polarization-division multiplexing differential quadrature phase-shift keying parallel upstream user channels transmitted over 80 km of standard single-mode fiber. We discuss challenges of the digital signal processing in the optical line terminal arising from the joint reception of several upstream user channels. We present solutions for resource and cost-efficient realization of the required channel separation, matched filtering, down-conversion and decimation as well as realization of the clock recovery and polarization demultiplexing for each individual channel.
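The per-channel front end described above, separating one user channel, down-converting it, filtering and decimating, can be sketched in a few lines (a toy DSP chain with a simple windowed-sinc filter, not the paper's resource-optimized realization):

```python
import numpy as np

def down_convert(x, fs, f_center, decim, ntaps=101):
    """Shift one user channel at f_center to baseband, low-pass filter it,
    and decimate: the elementary per-channel front end of joint
    subcarrier reception."""
    n = np.arange(len(x))
    base = x * np.exp(-2j * np.pi * f_center * n / fs)  # mix to baseband
    # windowed-sinc low-pass, cutoff at the post-decimation Nyquist rate
    t = np.arange(ntaps) - (ntaps - 1) / 2
    cutoff = 0.5 / decim
    h = np.sinc(2 * cutoff * t) * np.hamming(ntaps) * 2 * cutoff
    filtered = np.convolve(base, h, mode="same")
    return filtered[::decim]
```

A complex tone placed exactly at the channel center comes out as (nearly) DC after this chain, confirming the frequency shift; in a full receiver, matched filtering, clock recovery and polarization demultiplexing would follow per channel.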
NASA Astrophysics Data System (ADS)
Chen, Xin; Zhao, Jianyi; Zhou, Ning; Huang, Xiaodong; Cao, Mingde; Wang, Lei; Liu, Wen
2015-01-01
The monolithic integration of a 1.5-μm four-channel phase-shifted distributed feedback laser array (DFB-LD array) with a 4×1 multi-mode interference (MMI) optical combiner is demonstrated. A home-developed process, mainly consisting of butt-joint regrowth (BJR) and simultaneous thermal and ultraviolet nanoimprint lithography (STU-NIL), is implemented to fabricate the gratings and integrated devices. The threshold currents of the lasers are less than 10 mA and the side-mode suppression ratios (SMSR) are better than 40 dB for all channels. Quasi-continuous tuning is realized over a 7.5 nm wavelength region with a 30 °C temperature variation. The results indicate that the proposed integrated device can be used in wavelength-division multiplexing passive optical networks (WDM-PON).
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible.
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols
2010-08-01
arXiv:1008.3196v1 [cs.IT] 19 Aug 2010. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols. Don... sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding... amplitude, phase, and the interference power spectral density (PSD) due to the combined interference and thermal noise is proposed for DS-CDMA systems
Comparison of Predicted and Measured Attenuation of Turbine Noise from a Static Engine Test
NASA Technical Reports Server (NTRS)
Chien, Eugene W.; Ruiz, Marta; Yu, Jia; Morin, Bruce L.; Cicon, Dennis; Schwieger, Paul S.; Nark, Douglas M.
2007-01-01
Aircraft noise has become an increasing concern for commercial airlines. Worldwide demand for quieter aircraft is increasing, making the prediction of engine noise suppression one of the most important fields of research. The Low-Pressure Turbine (LPT) can be an important noise source during the approach condition for commercial aircraft. The National Aeronautics and Space Administration (NASA), Pratt & Whitney (P&W), and Goodrich Aerostructures (Goodrich) conducted a joint program to validate a method for predicting turbine noise attenuation. The method includes noise-source estimation, acoustic treatment impedance prediction, and in-duct noise propagation analysis. Two noise propagation prediction codes, Eversman Finite Element Method (FEM) code [1] and the CDUCT-LaRC [2] code, were used in this study to compare the predicted and the measured turbine noise attenuation from a static engine test. In this paper, the test setup, test configurations and test results are detailed in Section II. A description of the input parameters, including estimated noise modal content (in terms of acoustic potential), and acoustic treatment impedance values are provided in Section III. The prediction-to-test correlation study results are illustrated and discussed in Section IV and V for the FEM and the CDUCT-LaRC codes, respectively, and a summary of the results is presented in Section VI.
NASA Astrophysics Data System (ADS)
Loevenbruck, Anne; Arpaia, Luca; Ata, Riadh; Gailler, Audrey; Hayashi, Yutaka; Hébert, Hélène; Heinrich, Philippe; Le Gal, Marine; Lemoine, Anne; Le Roy, Sylvestre; Marcer, Richard; Pedreros, Rodrigo; Pons, Kevin; Ricchiuto, Mario; Violeau, Damien
2017-04-01
This study is part of the joint actions carried out within TANDEM (Tsunamis in northern AtlaNtic: Definition of Effects by Modeling). This French project, mainly dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, was initiated after the catastrophic 2011 Tohoku-Oki tsunami. This event, which tragically struck Japan, drew the attention to the importance of tsunami risk assessment, in particular when nuclear facilities are involved. As a contribution to this challenging task, the TANDEM partners intend to provide guidance for the French Atlantic area based on numerical simulation. One of the identified objectives consists in designing, adapting and validating simulation codes for tsunami hazard assessment. Besides an integral benchmarking workpackage, the outstanding database of the 2011 event offers the TANDEM partners the opportunity to test their numerical tools with a real case. As a prerequisite, among the numerous published seismic source models arisen from the inversion of the various available records, a couple of coseismic slip distributions have been selected to provide common initial input parameters for the tsunami computations. After possible adaptations or specific developments, the different codes are employed to simulate the Tohoku-Oki tsunami from its source to the northeast Japanese coastline. The results are tested against the numerous tsunami measurements and, when relevant, comparisons of the different codes are carried out. First, the results related to the oceanic propagation phase are compared with the offshore records. Then, the modeled coastal impacts are tested against the onshore data. Flooding at a regional scale is considered, but high resolution simulations are also performed with some of the codes. They allow examining in detail the runup amplitudes and timing, as well as the complexity of the tsunami interaction with the coastal structures. 
The work is supported by the Tandem project in the frame of French PIA grant ANR-11-RSNR-00023.
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. 
Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
A preliminary study of muscular artifact cancellation in single-channel EEG.
Chen, Xun; Liu, Aiping; Peng, Hu; Ward, Rabab K
2014-10-01
Electroencephalogram (EEG) recordings are often contaminated with muscular artifacts that strongly obscure the EEG signals and complicate their analysis. For the conventional case, where the EEG recordings are obtained simultaneously over many EEG channels, there exists a considerable range of methods for removing muscular artifacts. In recent years, there has been an increasing trend to use EEG information in ambulatory healthcare and related physiological signal monitoring systems. For practical reasons, a single EEG channel system must be used in these situations. Unfortunately, few studies exist on muscular artifact cancellation in single-channel EEG recordings. To address this issue, in this preliminary study, we propose a simple, yet effective, method to achieve muscular artifact cancellation for the single-channel EEG case. This method is a combination of the ensemble empirical mode decomposition (EEMD) and the joint blind source separation (JBSS) techniques. We also conduct a study that compares and investigates all possible single-channel solutions and demonstrate the performance of these methods using numerical simulations and real-life applications. The proposed method is shown to significantly outperform all other methods. It can successfully remove muscular artifacts without altering the underlying EEG activity. It is thus a promising tool for use in ambulatory healthcare systems.
Joint Remote State Preparation Schemes for Two Different Quantum States Selectively
NASA Astrophysics Data System (ADS)
Shi, Jin
2018-05-01
The scheme for joint remote state preparation of two different one-qubit states according to requirement is proposed by using one four-dimensional spatial-mode-entangled KLM state as quantum channel. The scheme for joint remote state preparation of two different two-qubit states according to requirement is also proposed by using one four-dimensional spatial-mode-entangled KLM state and one three-dimensional spatial-mode-entangled GHZ state as quantum channels. Quantum non-demolition measurement, Hadamard gate operation, projective measurement and unitary transformation are included in the schemes.
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code system (MNCC) over the GSM network. The MNCC design is based on a simple concatenated channel code: a cascade of an inner code (GSM) and an extra outer code (convolutional code) that protects medical data more robustly against channel errors than other data carried over the existing GSM network. The MNCC system provides a Bit Error Rate (BER) suitable for medical tele-monitoring of physiological signals, namely 10^-5 or less. The performance of the MNCC has been investigated and verified using computer simulations under different channel conditions, such as Additive White Gaussian Noise (AWGN), Rayleigh noise and burst noise. Overall, the MNCC system provides better performance than plain GSM.
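As a sketch of the outer convolutional layer in such a concatenation (the generator polynomials and decoder below are illustrative textbook choices, not the paper's MNCC parameters), a rate-1/2, constraint-length-3 encoder with hard-decision Viterbi decoding looks like:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2, constraint-length-3 convolutional encoder (G = 7,5 octal)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111   # 3-bit register of recent inputs
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

def viterbi_decode(coded, g1=0b111, g2=0b101):
    """Hard-decision Viterbi decoder matching the encoder above."""
    n = len(coded) // 2
    INF = float("inf")
    metric = {0: 0.0}      # path metric per 2-bit trellis state
    paths = {0: []}
    for i in range(n):
        r = coded[2 * i:2 * i + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):
                nxt = ((state << 1) | b) & 0b11
                full = ((state << 1) | b) & 0b111
                e1 = bin(full & g1).count("1") % 2
                e2 = bin(full & g2).count("1") % 2
                cost = m + (e1 != r[0]) + (e2 != r[1])
                if cost < new_metric.get(nxt, INF):
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best]
```

The GSM inner code would sit between this outer code and the channel; here the outer code alone corrects a single flipped coded bit that the inner layer failed to clean up.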
Code-Time Diversity for Direct Sequence Spread Spectrum Systems
Hassan, A. Y.
2014-01-01
Time diversity is achieved in direct sequence spread spectrum by receiving different faded, delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems, called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order of the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency-selective fading channels. The probability of error of the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
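A minimal numerical sketch of the code-time idea, one symbol sent over N successive intervals with N different spreading codes, then despread per interval and MRC-combined. The random codes, fading gains and noise levels below are assumptions for illustration, not the paper's signal model:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_codes(n_codes, length):
    """Random bipolar spreading codes (stand-ins for designed sequences)."""
    return rng.choice([-1.0, 1.0], size=(n_codes, length))

def transmit(symbol, codes):
    """Send the same symbol over N successive intervals, one code each."""
    return symbol * codes          # shape (N, chip_length)

def receive(rx, codes, gains):
    """Despread each interval with its own code, then MRC-combine branches."""
    branch = (rx * codes).sum(axis=1) / codes.shape[1]
    return np.sum(np.conj(gains) * branch) / np.sum(np.abs(gains) ** 2)

N, L = 4, 64
codes = make_codes(N, L)
symbol = 1.0                                   # BPSK symbol
gains = rng.normal(0.7, 0.2, N)                # independent per-interval fading
noise = rng.normal(0, 0.05, (N, L))
rx = gains[:, None] * transmit(symbol, codes) + noise
est = receive(rx, codes, gains)
# est should be close to the transmitted symbol
```

Each row of `rx` plays the role of one symbol interval; in the full scheme each uncorrelated channel path would contribute a further branch, giving the N×L diversity order stated in the abstract.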
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
NASA Technical Reports Server (NTRS)
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.
Energy efficient rateless codes for high speed data transfer over free space optical channels
NASA Astrophysics Data System (ADS)
Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.
2015-03-01
Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we have investigated the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal overheads on the power can be used. We also employ a combination of Binary Phase Shift Keying (BPSK) with provision for modification of the threshold and optimized LT codes with belief propagation for decoding. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability. Performance of ARQ is limited by the number of retransmissions and the corresponding time delay. We prove through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy-efficient LT codes over ARQ for FSO links to be used in optical wireless sensor networks within the eye safety limits.
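A toy LT encoder with a peeling (belief-propagation) decoder illustrates the rateless principle: generate as many coded packets as the channel requires, and decode by repeatedly releasing degree-1 packets. The degree distribution here is a crude stand-in for the robust soliton distribution, not the paper's optimized codes:

```python
import random

def lt_encode(blocks, n_packets, seed=42):
    """LT-encode source blocks into n_packets XOR packets (toy degree dist)."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        d = rng.choice([1, 2, 2, 3, 3, 4])   # crude stand-in degree distribution
        idx = rng.sample(range(k), min(d, k))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((set(idx), val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: substitute known blocks, release degree-1 packets."""
    pkts = [[set(i), v] for i, v in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in pkts:
            idx, val = p
            for i in list(idx):              # remove already-recovered blocks
                if i in known:
                    idx.discard(i)
                    val ^= known[i]
            p[1] = val
            if len(idx) == 1:                # degree-1 packet reveals a block
                i = next(iter(idx))
                if i not in known:
                    known[i] = val
                    progress = True
                idx.discard(i)
    return known
```

With a modest surplus of packets over k, the peeling process almost always completes, which is exactly the property that makes fountain codes attractive when the FSO channel's erasure rate is unknown in advance.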
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real-time changes in number and type of constraints and in task objectives, and can adapt to changes in kinematics configurations (change of module, change of tool, joint failure adaptation, etc.).
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved at rate 1/2 as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected-graph, or protograph, representation that allows for high-speed decoder implementation.
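The encoder chain described above (accumulator as precoder, repetition, interleaving, inner accumulator) can be sketched in a few lines. The repetition factor and the random interleaver below are illustrative choices, not the code constructions analyzed in the paper:

```python
import random

def accumulate(bits):
    """Running XOR, i.e. a 1/(1+D) accumulator."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ara_encode(info, q=3, seed=7):
    """Toy Accumulate-Repeat-Accumulate encoder:
    accumulator precoder -> repeat-by-q -> interleave -> inner accumulator."""
    pre = accumulate(info)                       # outer accumulator (precoder)
    repeated = [b for b in pre for _ in range(q)]
    rng = random.Random(seed)
    perm = list(range(len(repeated)))
    rng.shuffle(perm)
    interleaved = [repeated[i] for i in perm]
    return accumulate(interleaved)               # inner accumulator

code = ara_encode([1, 0, 1, 1, 0], q=3)
# len(info) / len(code) = 1/3 for this non-punctured toy sketch
```

Puncturing the inner accumulator's output, as the abstract describes, is what raises the rate of such a chain toward 1 without redesigning the encoder.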
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
As with many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images, and error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.
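The block-independent predictive core of such a scheme can be sketched with the JPEG-LS median edge detector (MED) predictor: each block's residuals depend only on pixels inside that block, so a channel error cannot diffuse into neighbouring blocks. The RESET adaptation itself is omitted here, and the block size is arbitrary:

```python
import numpy as np

def med_predict(a, b, c):
    """JPEG-LS MED predictor (a = left, b = above, c = above-left)."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def block_residuals(img):
    """Prediction residuals of one independently coded block."""
    h, w = img.shape
    res = np.zeros_like(img, dtype=np.int32)
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
            res[y, x] = int(img[y, x]) - med_predict(a, b, c)
    return res

def block_reconstruct(res):
    """Invert the residuals: a lossless round trip within the block."""
    h, w = res.shape
    img = np.zeros_like(res)
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
            img[y, x] = res[y, x] + med_predict(a, b, c)
    return img
```

In full JPEG-LS the residuals would then be context-modeled and Golomb-coded, with RESET bounding the context counters; adapting RESET per image is the modification the abstract proposes.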
Dai, Shengfa; Wei, Qingguo
2017-01-01
The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve classification accuracy. In this paper, a novel method named the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with all channels.
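The binary-coded selection loop can be illustrated with a much simpler stand-in: a single-bit-flip hill climb instead of the full backtracking search optimization algorithm, and a synthetic error surface in place of a real classifier. Everything below (the penalty weight, the error function, the channel counts) is an assumption for demonstration:

```python
import random

def fitness(code, error_rate_fn, lam=0.2):
    """Objective: classification error plus a penalty on relative channel count."""
    n_sel = sum(code)
    if n_sel == 0:
        return float("inf")      # an empty channel set is not a valid filter
    return error_rate_fn(code) + lam * n_sel / len(code)

def select_channels(n_channels, error_rate_fn, iters=200, seed=1):
    """Toy stand-in for the evolutionary search: random single-bit-flip ascent."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_channels)]
    best_f = fitness(best, error_rate_fn)
    for _ in range(iters):
        cand = best[:]
        i = rng.randrange(n_channels)
        cand[i] ^= 1             # flip one channel in or out
        f = fitness(cand, error_rate_fn)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# Synthetic error surface: channels 0-3 are informative, the rest add noise
def toy_error(code):
    return 0.4 - 0.08 * sum(code[:4]) + 0.02 * sum(code[4:])

best, f = select_channels(16, toy_error)
```

In the real method, `error_rate_fn` would train common spatial pattern plus a classifier on the selected channels, and the population-based backtracking search would replace the single-candidate flip.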
Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.
Li, Yun; Ho, K C; Popescu, Mihail
2014-03-01
Falls have become a common health problem among older adults. In previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulties in detecting the fall signal in environments where interference comes from the fall direction, the number of interferences exceeds FADE's ability to handle or a fall is occluded. To address these issues, in this paper, we propose two blind source separation (BSS) methods for extracting the fall signal out of the interferences to improve the fall classification task. We first propose the single-channel BSS by using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then construct the interference free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Lin, Kai; Wang, Di; Hu, Long
2016-01-01
With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302
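The basic XOR network-coding step that CMNC builds on can be shown in a few lines; the packets and the two "classes" are made up for illustration, not drawn from the paper:

```python
def xor_bytes(a, b):
    """XOR two equal-length packets: the elementary network-coding operation."""
    return bytes(x ^ y for x, y in zip(a, b))

# A relay combines one packet from each content class into a single
# coded transmission, halving the number of channel uses for this hop.
pkt_a = b"\x10\x22\x33\x44"   # e.g. from one fusion-classified node group
pkt_b = b"\x0f\x00\xff\x01"   # e.g. from another group on a different channel
coded = xor_bytes(pkt_a, pkt_b)

# A sink that already overheard pkt_b recovers pkt_a (and vice versa):
assert xor_bytes(coded, pkt_b) == pkt_a
assert xor_bytes(coded, pkt_a) == pkt_b
```

CMNC's contribution sits around this primitive: the D-S evidence fusion decides which nodes' data are similar enough to combine, and the channel assignment decides on which of the multiple channels each coded packet is sent.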
Task representation in individual and joint settings
Prinz, Wolfgang
2015-01-01
This paper outlines a framework for task representation and discusses applications to interference tasks in individual and joint settings. The framework is derived from the Theory of Event Coding (TEC). This theory regards task sets as transient assemblies of event codes in which stimulus and response codes interact and shape each other in particular ways. On the one hand, stimulus and response codes compete with each other within their respective subsets (horizontal interactions). On the other hand, stimulus and response codes cooperate with each other (vertical interactions). Code interactions instantiating competition and cooperation apply to two time scales: on-line performance (i.e., doing the task) and off-line implementation (i.e., setting the task). Interference arises when stimulus and response codes overlap in features that are irrelevant for stimulus identification, but relevant for response selection. To resolve this dilemma, the feature profiles of event codes may become restructured in various ways. The framework is applied to three kinds of interference paradigms. Special emphasis is given to joint settings where tasks are shared between two participants. Major conclusions derived from these applications include: (1) Response competition is the chief driver of interference. Likewise, different modes of response competition give rise to different patterns of interference; (2) The type of features in which stimulus and response codes overlap is also a crucial factor. Different types of such features likewise give rise to different patterns of interference; and (3) Task sets for joint settings conflate intraindividual conflicts between responses (what) with interindividual conflicts between responding agents (whom). Features of response codes may, therefore, not only address responses, but also responding agents (both physically and socially). PMID:26029085
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
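The statistical-significance question raised here, what can be claimed about the error probability when only a handful of decoding errors (or none) are observed in simulation, can be answered with an exact binomial upper confidence bound. This is a generic sketch of that idea, not the report's specific method:

```python
import math

def upper_bound_error_prob(n_trials, n_errors, confidence=0.95):
    """Exact (Clopper-Pearson style) upper confidence bound on the error
    probability, found by bisection on the binomial CDF."""
    def binom_cdf(k, n, p):
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k + 1))
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        # At the bound, P(X <= n_errors | p) equals 1 - confidence
        if binom_cdf(n_errors, n_trials, mid) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return hi

# Zero errors in 10,000 decoded frames: the familiar "rule of three"
ub = upper_bound_error_prob(10_000, 0)
# ub is about 3 / 10_000, i.e. roughly 3e-4
```

With zero observed errors the bound reduces to approximately 3/n at 95% confidence, which makes explicit how many simulated frames are needed before a very small error probability can be claimed.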
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes are multiplications in finite fields (Galois Fields), where extension fields of prime fields are used. A lot of architectures for multiplications in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
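A bit-serial ("shift-and-reduce") Galois-field multiplier, the software analogue of the simplest of the hardware architectures such papers compare, can be sketched as follows. The field polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is the one commonly used for Reed-Solomon over GF(2^8); BCH codes over other extension fields would use a different polynomial and width:

```python
def gf256_mul(a, b, poly=0x11D):
    """Multiply two elements of GF(2^8), reducing modulo the field
    polynomial whenever the intermediate degree reaches 8."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # add (XOR) the current shifted copy of a
        a <<= 1
        if a & 0x100:          # degree reached 8: reduce modulo poly
            a ^= poly
        b >>= 1
    return result & 0xFF

# x * x^7 = x^8, which reduces to x^4 + x^3 + x^2 + 1 = 29
assert gf256_mul(2, 128) == 29
```

High-throughput hardware multipliers unroll or parallelize exactly this loop (e.g. as bit-parallel Mastrovito-style matrices), trading area for the eight-cycle latency of the serial form.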
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
ERIC Educational Resources Information Center
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
NASA Astrophysics Data System (ADS)
Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.
2015-08-01
The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10^-15. Our study shows that JIDD with a pilot rate ⩽ 5 % compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.
Kurt, Simone; Sausbier, Matthias; Rüttiger, Lukas; Brandt, Niels; Moeller, Christoph K.; Kindler, Jennifer; Sausbier, Ulrike; Zimmermann, Ulrike; van Straaten, Harald; Neuhuber, Winfried; Engel, Jutta; Knipper, Marlies; Ruth, Peter; Schulze, Holger
2012-01-01
Large conductance, voltage- and Ca2+-activated K+ (BK) channels in inner hair cells (IHCs) of the cochlea are essential for hearing. However, germline deletion of BKα, the pore-forming subunit KCNMA1 of the BK channel, surprisingly did not affect hearing thresholds in the first postnatal weeks, even though altered IHC membrane time constants, decreased IHC receptor potential alternating current/direct current ratio, and impaired spike timing of auditory fibers were reported in these mice. To investigate the role of IHC BK channels for central auditory processing, we generated a conditional mouse model with hair cell-specific deletion of BKα from postnatal day 10 onward. This had an unexpected effect on temporal coding in the central auditory system: neuronal single and multiunit responses in the inferior colliculus showed higher excitability and greater precision of temporal coding that may be linked to the improved discrimination of temporally modulated sounds observed in behavioral training. The higher precision of temporal coding, however, was restricted to slower modulations of sound and reduced stimulus-driven activity. This suggests a diminished dynamic range of stimulus coding that is expected to impair signal detection in noise. Thus, BK channels in IHCs are crucial for central coding of the temporal fine structure of sound and for detection of signals in a noisy environment.—Kurt, S., Sausbier, M., Rüttiger, L., Brandt, N., Moeller, C. K., Kindler, J., Sausbier, U., Zimmermann, U., van Straaten, H., Neuhuber, W., Engel, J., Knipper, M., Ruth, P., Schulze, H. Critical role for cochlear hair cell BK channels for coding the temporal structure and dynamic range of auditory information for central auditory processing. PMID:22691916
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with the construction of multilevel concatenated block modulation codes for the frequency non-selective Rayleigh fading channel. In this construction, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) can serve as the outer codes. In particular, we focus on the special case in which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with single-level concatenated block modulation codes using the same inner codes. A multilevel closest-coset decoding scheme for these codes is proposed.
A Real-Time Computer Music Synthesis System
NASA Astrophysics Data System (ADS)
Lent, Keith Henry
A real-time sound synthesis system has been developed at the Computer Music Center of The University of Texas at Austin. The system consists of several stand-alone processors constructed jointly with White Instruments in Austin. These processors can be programmed as general-purpose computers and are provided with a number of specialized interfaces, including MIDI, 8-bit parallel, high-speed serial, two channels of analog input (18-bit A/Ds, 48 kHz sample rate), and four channels of analog output (18-bit D/As). In addition, a basic music synthesis language (Music56000) has been written in assembly code. On top of this, a symbolic compiler (PatchWork) has been developed that enables algorithms running on these processors to be created graphically. Finally, a number of efficient time-domain numerical models have been developed to enable the construction, simulation, control, and synthesis of many musical acoustics systems in real time on these processors. Specifically, assembly language models for cylindrical and conical horn sections, dissipative losses, tone holes, bells, and a number of linear and nonlinear boundary conditions have been developed.
Coherent communication with continuous quantum variables
NASA Astrophysics Data System (ADS)
Wilde, Mark M.; Krovi, Hari; Brun, Todd A.
2007-06-01
The coherent bit (cobit) channel is a resource intermediate between classical and quantum communication. It produces coherent versions of teleportation and superdense coding. We extend the cobit channel to continuous variables by providing a definition of the coherent nat (conat) channel. We construct several coherent protocols that use both a position-quadrature and a momentum-quadrature conat channel with finite squeezing. Finally, we show that the quality of squeezing diminishes through successive compositions of coherent teleportation and superdense coding.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean-up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected.
This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
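As a quick illustration of the tier-1 pass bookkeeping described above, the (3M - 2) count can be reproduced with a short sketch (the function and names are ours, not part of the JPEG 2000 standard; a real encoder would also emit the embedded bit stream per coding block):

```python
def tier1_passes(num_bit_planes):
    """Enumerate (plane, pass_name) pairs from MSB to LSB.

    The MSB plane uses only the clean-up pass; every other plane uses
    significance, refinement, then clean-up, giving 3M - 2 passes total.
    """
    passes = []
    for plane in range(num_bit_planes - 1, -1, -1):
        if plane == num_bit_planes - 1:
            passes.append((plane, "cleanup"))
        else:
            passes.extend([(plane, "significance"),
                           (plane, "refinement"),
                           (plane, "cleanup")])
    return passes

M = 8
assert len(tier1_passes(M)) == 3 * M - 2  # 22 coding passes for 8 bit planes
```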
TOWARD THE DEVELOPMENT OF A CONSENSUS MATERIALS DATABASE FOR PRESSURE TECHNOLOGY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, Robert W; Ren, Weiju
The ASME construction code books specify materials and fabrication procedures that are acceptable for pressure technology applications. However, with few exceptions, the materials properties provided in the ASME code books include no statistics or other information pertaining to material variability. Such information is central to the prediction and prevention of failure events. Many sources of materials data exist that provide variability information, but such sources do not necessarily represent a consensus of experts with respect to the reported trends. This need has been identified by ASME Standards Technology, LLC, and initial steps have been taken to address it; however, these steps are limited to project-specific applications, such as the joint DOE-ASME project on materials for Generation IV nuclear reactors. In contrast to light-water reactor technology, the experience base for Generation IV nuclear reactors is somewhat lacking, and heavy reliance must be placed on model development and predictive capability. The database for model development is being assembled and includes existing code alloys such as alloy 800H and 9Cr-1Mo-V steel. Ownership and use rights are potential barriers that must be addressed.
Multiple Trellis Coded Modulation (MTCM): An MSAT-X report
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1986-01-01
Conventional trellis coding outputs one channel symbol per trellis branch. The notion of multiple trellis coding is introduced, wherein more than one channel symbol per trellis branch is transmitted. It is shown that the combination of multiple trellis coding with M-ary modulation yields, with a symmetric signal set, a performance gain comparable to that previously achieved only with signal constellation asymmetry. The advantage of multiple trellis coding over the conventional trellis coded asymmetric modulation technique is that the potential for code catastrophe associated with the latter is eliminated at no additional cost in complexity (as measured by the number of states in the trellis diagram).
BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.
Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong
2018-04-19
ChIP-seq experiments that are aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling. Currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of the false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.
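BinQuasi's actual machinery is a GLM with a one-sided quasi-likelihood ratio test; as a rough illustration of why pooling replicates in one model beats testing them separately, here is a toy Poisson score test for a single genomic bin (the function, the background-rate model, and all numbers are our illustrative assumptions, not the package's):

```python
import math

def pooled_score_test(counts, background_rate):
    """One-sided normal-approximation score test for a genomic bin.

    `counts` are read counts from each biological replicate; under the
    null hypothesis each is Poisson(background_rate). Pooling replicates
    before testing (joint modeling) is more powerful than testing each
    replicate alone and combining the verdicts afterwards.
    Returns (z, p) for the one-sided alternative mean > background_rate.
    """
    n = len(counts)
    total = sum(counts)
    expected = n * background_rate
    z = (total - expected) / math.sqrt(expected)
    p = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal p-value
    return z, p

# Three replicates over a candidate peak bin vs. a background of 5 reads.
z, p = pooled_score_test([12, 9, 14], background_rate=5.0)
```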
A video coding scheme based on joint spatiotemporal and adaptive prediction.
Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken
2009-05-01
We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and it is less computationally intensive. Because of the advantage of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed.
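The "image-dependent color space transformation (KLT)" amounts to projecting pixels onto the eigenvectors of their color covariance. A minimal pure-Python sketch of the leading KLT axis via power iteration follows (a toy of ours, not the authors' codec):

```python
def principal_color_axis(pixels, iters=200):
    """First KLT basis vector of a set of RGB pixels.

    The KLT (PCA on the 3x3 color covariance) gives the image-dependent
    color transform: projecting onto the leading eigenvector concentrates
    most of the signal energy in one channel. Pure-Python power
    iteration; illustrative only, not production code.
    """
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    # 3x3 covariance matrix of the centered pixels.
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pixels) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Gray-ish pixels vary only along (1,1,1): the KLT axis should align with it.
pixels = [(t, t + 1, t - 1) for t in range(10, 60)]
axis = principal_color_axis(pixels)
```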
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jansen, S.D.
1981-09-01
The report was prepared as part of the Ohio River Basin Energy Study (ORBES), a multidisciplinary policy research program. The ORBES region consists of all of Kentucky, most of West Virginia, substantial parts of Illinois, Indiana, and Ohio, and southwestern Pennsylvania. The inventory lists installed electrical generating capacity in commercial service as of December 1, 1976, and scheduled capacity additions and removals between 1977 and 1986 in the six ORBES states (Illinois, Indiana, Kentucky, Ohio, Pennsylvania, and West Virginia). The following information is included for each electrical generating unit: unit ID code, company index, whether joint or industrial ownership, plant name, whether inside or outside the ORBES region, FIPS county code, type of unit, size in megawatts, type of megawatt rating, status of unit, date of commercial operation (actual or scheduled), scheduled retirement date (if any), primary fuel, alternate fuel, type of cooling, source of cooling water, and source of information.
Image demosaicing: a systematic survey
NASA Astrophysics Data System (ADS)
Li, Xin; Gunturk, Bahadir; Zhang, Lei
2008-01-01
Image demosaicing is the problem of interpolating full-resolution color images from so-called color-filter-array (CFA) samples. Among various CFA patterns, the Bayer pattern has been the most popular choice, and demosaicing of the Bayer pattern has attracted renewed interest in recent years, partially due to the increased availability of source codes/executables in response to the principle of "reproducible research". In this article, we provide a systematic survey of over seventy published works in this field since 1999 (complementary to previous reviews [22, 67]). Our review attempts to address important issues in demosaicing and identify fundamental differences among competing approaches. Our findings suggest that most existing works belong to the class of sequential demosaicing - i.e., the luminance channel is interpolated first and then the chrominance channels are reconstructed based on the recovered luminance information. We report our comparative study results with a collection of eleven competing algorithms whose source codes or executables are provided by the authors. Our comparison is performed on two data sets: Kodak PhotoCD (a popular choice) and IMAX high-quality images (more challenging). While most existing demosaicing algorithms achieve good performance on the Kodak data set, their performance on the IMAX one (images with varying-hue and high-saturation edges) degrades significantly. This observation suggests the importance of properly addressing the mismatch between the assumed model and the observation data in demosaicing, which calls for further investigation of issues such as model validation, test data selection, and performance evaluation.
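The "luminance first" structure of sequential demosaicing can be illustrated with the simplest possible example: bilinear interpolation of the green channel on an RGGB Bayer mosaic (a toy of ours, not one of the seventy surveyed algorithms):

```python
def interpolate_green(mosaic):
    """Bilinear green-channel interpolation on an RGGB Bayer mosaic.

    `mosaic` is a 2D list of raw sensor values. Green is present at
    sites where (row + col) is odd (RGGB layout); at red/blue sites it
    is estimated as the mean of the 4-connected green neighbours. This
    is the "luminance first" step of sequential demosaicing; chrominance
    would then be reconstructed using the recovered green plane.
    """
    h, w = len(mosaic), len(mosaic[0])
    green = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if (r + c) % 2 == 1:          # green site
                green[r][c] = float(mosaic[r][c])
            else:                          # red or blue site
                neigh = [mosaic[rr][cc]
                         for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                         if 0 <= rr < h and 0 <= cc < w]
                green[r][c] = sum(neigh) / len(neigh)
    return green

# On a flat mosaic every interpolated green value equals the constant.
flat = [[7] * 4 for _ in range(4)]
green = interpolate_green(flat)
```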
Electric field-decoupled electroosmotic pump for microfluidic devices.
Liu, Shaorong; Pu, Qiaosheng; Lu, Joann J
2003-09-26
An electric field-free electroosmotic pump has been constructed and its pumping rate has been measured under various experimental conditions. The key component of the pump is an ion-exchange membrane grounding joint that serves two major functions: (i) to maintain fluid continuity between pump channels and microfluidic conduit and (ii) to ground the solution in the microfluidic channel at the joint through an external electrode, and hence to decouple the electric field applied to the pump channels from the rest of the microfluidic system. A theoretical model has been developed to calculate the pumping rates and its validity has been demonstrated.
Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted
2010-09-13
We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission rate over optical channels. The subcarrier-multiplexed four-dimensional LDPC coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10^-8.
Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih
2016-04-21
Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
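The decoding principle, correlating one summed electrical output against each channel's orthogonal code, can be sketched in miniature (the Walsh codes and names here are our illustrative stand-ins for the authors' CDMA code set):

```python
def walsh_codes(order):
    """Hadamard/Walsh codes of length 2**order as +/-1 sequences."""
    H = [[1]]
    for _ in range(order):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def decode(signal, codes):
    """Correlate a superposed sensor signal against each channel code.

    A positive correlation flags a particle in that channel, even when
    pulses from several channels overlap in time: the CDMA idea behind
    microfluidic CODES, in miniature.
    """
    n = len(signal)
    return [sum(s * c for s, c in zip(signal, code)) / n for code in codes]

codes = walsh_codes(2)                      # 4 orthogonal codes of length 4
# Particles pass channels 1 and 3 simultaneously; their pulses add up.
signal = [a + b for a, b in zip(codes[1], codes[3])]
corr = decode(signal, codes)                # nonzero only for channels 1, 3
```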
NASA Astrophysics Data System (ADS)
Xi, Songnan; Zoltowski, Michael D.
2008-04-01
Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, which is the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the original mathematically intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and with our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with a much smaller computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
Chopper-stabilized phase detector
NASA Technical Reports Server (NTRS)
Hopkins, P. M.
1978-01-01
Phase-detector circuit for binary-tracking loops and other binary-data acquisition systems minimizes effects of drift, gain imbalance, and voltage offset in detector circuitry. Input signal passes simultaneously through two channels where it is mixed with early and late codes that are alternately switched between channels. Code switching is synchronized with polarity switching of detector output of each channel so that each channel uses each detector half the time. Net result is that DC offset errors are canceled, and effect of gain imbalance is simply a change in sensitivity.
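A toy numerical model of the chopping idea shows how the synchronized code/polarity switching cancels per-channel DC offsets from the early-minus-late error signal (all names and values are illustrative assumptions, not the NTRS circuit's parameters):

```python
def chopped_output(true_early, true_late, offset_a, offset_b, cycles=2):
    """Toy model of a chopper-stabilized phase detector.

    Two detector channels carry DC offsets offset_a and offset_b. The
    early/late codes are swapped between channels every half-cycle, and
    each channel's output polarity is flipped in sync, so after a full
    cycle the offsets cancel from the early-minus-late error signal.
    """
    error = 0.0
    for _ in range(cycles):
        # First half: channel A sees the early code, channel B the late code.
        error += (true_early + offset_a) - (true_late + offset_b)
        # Second half: codes swapped, detector outputs inverted.
        error += -((true_late + offset_a) - (true_early + offset_b))
    return error / (2 * cycles)

# The recovered error equals true_early - true_late regardless of offsets.
err = chopped_output(true_early=0.8, true_late=0.3,
                     offset_a=0.05, offset_b=-0.11)
```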
Warriors from the Sky: US Army Airborne Operational Art in Normandy
2017-05-25
capabilities required for conducting a cross-Channel joint forcible entry operation. This included the identification of specific missions for the airborne forces. As a result, the airborne... Operation Market Garden, Holland 1944 (HQ, 82 Airborne Division: Feb 1946), 4. Market Garden, following the invasion in Normandy, was the first
García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2014-01-01
A novel bit-detect-and-forward (BDF) relaying scheme based on repetition coding with the relay is proposed, significantly improving the robustness to impairments proper to free-space optical (FSO) communications such as unsuitable alignment between transmitter and receiver as well as fluctuations in the irradiance of the transmitted optical beam due to the atmospheric turbulence. Closed-form asymptotic bit-error-rate (BER) expressions are derived for a 3-way FSO communication setup. Fully exploiting the potential time-diversity available in the relay turbulent channel, a relevant better performance is achieved, showing a greater robustness to the relay location since a high diversity gain is provided regardless of the source-destination link distance. PMID:24587711
Accumulate-Repeat-Accumulate-Accumulate-Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low-Density Parity-Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with a very low error floor even at moderate block sizes.
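The "very fast encoder structure" follows because every stage is a streaming operation. A toy sketch of the repeat-accumulate core (not the actual ARAA protograph; the interleaver argument is a placeholder of ours):

```python
def repeat(bits, q):
    """Repeat each information bit q times."""
    return [b for b in bits for _ in range(q)]

def accumulate(bits):
    """Running mod-2 sum: the rate-1 'accumulate' stage of RA/ARA/ARAA codes."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ra_encode(bits, q=3, interleaver=None):
    """Toy repeat-accumulate encoder: repeat, permute, accumulate.

    ARAA codes chain a precoding accumulator and a second accumulator
    around this core; the point here is only that every stage is a
    trivially fast streaming operation, hence the fast encoder.
    """
    v = repeat(bits, q)
    if interleaver is not None:
        v = [v[i] for i in interleaver]
    return accumulate(v)

cw = ra_encode([1, 0, 1], q=3)
```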
Design of FPGA ICA for hyperspectral imaging processing
NASA Astrophysics Data System (ADS)
Nordin, Anis; Hsu, Charles C.; Szu, Harold H.
2001-03-01
The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in the pixel. This can be further used to deduce areas which contain forest, water, or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis (ICA) algorithm. The ICA algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware or firmware in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high-resolution images and a large number of channels. Here, a pipelined firmware solution, realized using FPGAs, is drawn out and simulated using C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate the actual implementation in hardware. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs) and signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide target BER regardless of the data destination we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching to the OSNR range that current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM based modulation schemes, we employ the signal constellations obtained by optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform the simultaneous rate adaptation and signal constellation size selection so that the product of number of bits per symbol × code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using the 4D MAP detection, combined with LDPC coding, in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
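The rate-adaptation step, picking the strongest surviving code rate from the monitored OSNR while keeping the codeword length fixed, reduces to a table lookup. In this sketch the OSNR thresholds are invented placeholders, not measured values from the paper:

```python
# Hypothetical (made-up) OSNR thresholds in dB for a fixed-length codeword:
# (minimum OSNR, code rate). Lower OSNR -> stronger FEC -> lower code rate.
RATE_TABLE = [(18.0, 0.90), (15.0, 0.83), (12.0, 0.75), (9.0, 0.60)]

def select_rate(osnr_db, table=RATE_TABLE):
    """Pick the highest code rate whose OSNR threshold the channel meets.

    Mirrors the adaptive-FEC idea in the text: the codeword length stays
    fixed (no frame re-synchronization), and only the QC-LDPC code rate
    changes with the monitored OSNR. Returns None if even the strongest
    code cannot reach the target BER.
    """
    for min_osnr, rate in table:
        if osnr_db >= min_osnr:
            return rate
    return None

def spectral_efficiency(bits_per_symbol, rate):
    """Information bits per channel symbol for a constellation/rate pair."""
    return bits_per_symbol * rate

r = select_rate(13.4)          # falls in the 12-15 dB OSNR range
```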
PEBL: A Code for Penetrating and Blunt Trauma, Based on the H-ICDA index
1978-10-01
The encoding tables flag each anatomical finding with a binary indicator, e.g., acetabulum, ischium (1 = ischium, 0 = not ischium), ilium, sacrum, pubic separation (1 = separation, 0 = no separation), and sacroiliac joint separation (1 = separation, 0 = no separation). An injury is encoded as an H-ICDA root code followed by this digit string, e.g., 808,1230000110, where the root code 808. of the PEBL... numbers of combat casualties. Development of methodologies for making these estimates was requested of the Biophysics Branch by the Joint Technical
Building a Better Trojan Horse: Emerging Army Roles in Joint Urban Operations
2001-01-01
Building a Better Trojan Horse: Emerging Army Roles in Joint Urban Operations. A monograph by MAJ Christopher H. Beckert, Infantry, U.S. Army, School of Advanced Military Studies.
NASA Astrophysics Data System (ADS)
Little, Duncan A.; Tennyson, Jonathan; Plummer, Martin; Noble, Clifford J.; Sunderland, Andrew G.
2017-06-01
TIMEDELN implements the time-delay method of determining resonance parameters from the characteristic Lorentzian form displayed by the largest eigenvalues of the time-delay matrix. TIMEDELN constructs the time-delay matrix from input K-matrices and analyses its eigenvalues. This new version implements multi-resonance fitting and may be run serially or as a high-performance parallel code with three levels of parallelism. TIMEDELN takes K-matrices from a scattering calculation, either read from a file or calculated on a dynamically adjusted grid, and calculates the time-delay matrix. This is then diagonalized, with the largest eigenvalue representing the longest time-delay experienced by the scattering particle. A resonance shows up as a characteristic Lorentzian form in the time-delay: the programme searches the time-delay eigenvalues for maxima and traces resonances when they pass through different eigenvalues, separating overlapping resonances. It also performs the fitting of the calculated data to the Lorentzian form and outputs resonance positions and widths. Any remaining overlapping resonances can be fitted jointly. The branching ratios of decay into the open channels can also be found. The parallel code modules are abstracted from the main physics code and can be used independently.
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
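The near-optimality rests on a simple mechanism: transmitting row-wise codewords column-wise spreads any burst across many codewords, so each codeword sees at most one erasure. A toy sketch with an SPC code (all parameters illustrative):

```python
import numpy as np

def spc_encode(data_rows):
    """Append an XOR parity symbol to each row (single parity check code)."""
    parity = np.bitwise_xor.reduce(data_rows, axis=1, keepdims=True)
    return np.hstack([data_rows, parity])

def interleave(codewords):
    """Transmit the D x n block of codewords column-wise (block interleaving)."""
    return codewords.T.reshape(-1)

def recover(stream, erased, D, n):
    """De-interleave and repair: a burst of <= D erasures hits each
    codeword at most once, so XOR of the surviving symbols restores it."""
    cw = stream.reshape(n, D).T.astype(int)
    mask = erased.reshape(n, D).T
    for r in range(D):
        hit = np.flatnonzero(mask[r])
        if len(hit) == 1:                      # exactly one erasure: repairable
            others = [c for c in range(n) if c != hit[0]]
            cw[r, hit[0]] = np.bitwise_xor.reduce(cw[r, others])
    return cw[:, :-1]                          # strip the parity column

rng = np.random.default_rng(0)
D, k = 8, 4                                    # depth 8, 4 data symbols/codeword
data = rng.integers(0, 256, size=(D, k))
stream = interleave(spc_encode(data))
erased = np.zeros(stream.size, dtype=bool)
erased[10:10 + D] = True                       # a burst of D consecutive erasures
rx = stream.copy(); rx[erased] = 0             # erased symbols are lost
assert np.array_equal(recover(rx, erased, D, k + 1), data)
```

Replacing the SPC rows with an RS code extends the same construction to multiple bursts per interleaver block, as the article describes.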
Swept Impact Seismic Technique (SIST)
Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.
1996-01-01
A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time, providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with far fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data, especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
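The shift-and-stack decoding can be sketched in a few lines (a toy model: randomized intervals stand in for the coded impact sweep, windows do not overlap, and a single hypothetical reflector replaces the earth response):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic earth response: one reflection arrival at sample 30 (hypothetical).
n = 120                                  # samples per decoded window
earth = np.zeros(n); earth[30] = 1.0

# Impact sequence: each impact injects the earth response into the record.
impact_times = np.cumsum(rng.integers(150, 400, size=60))
trace = np.zeros(impact_times[-1] + n)
for t in impact_times:
    trace[t:t + n] += earth
trace += rng.normal(0, 0.5, trace.size)  # ambient noise

# Shift-and-stack decoding: align the record on the recorded impact times
# and sum; the reflection stacks coherently, the noise does not.
stack = np.zeros(n)
for t in impact_times:
    stack += trace[t:t + n]
stack /= len(impact_times)

assert np.argmax(stack) == 30
```

With M impacts the stacked noise amplitude drops roughly as 1/sqrt(M), which is the signal-to-noise gain the technique trades against recording time.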
NASA Astrophysics Data System (ADS)
Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki
We evaluate the write-margin performance of the low-density parity-check (LDPC) coding and iterative decoding system in a bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in an R/W channel using BPM at 3 Tbit/inch² has a write-margin of about 20%.
NASA Astrophysics Data System (ADS)
Miki, Nobuhiko; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao
In the Evolved UTRA (UMTS Terrestrial Radio Access) downlink, Orthogonal Frequency Division Multiplexing (OFDM) based radio access was adopted because of its inherent immunity to multipath interference and flexible accommodation of different spectrum arrangements. This paper presents the optimum adaptive modulation and channel coding (AMC) scheme when multiple resource blocks (RBs) are simultaneously assigned to the same user under frequency- and time-domain channel-dependent scheduling in downlink OFDMA radio access with single-antenna transmission. We start by presenting selection methods for the modulation and coding scheme (MCS) employing mutual information, both for RB-common and RB-dependent modulation schemes. Simulation results show that, irrespective of the application of power adaptation to RB-dependent modulation, the improvement in the achievable throughput of the RB-dependent modulation scheme compared to that of the RB-common modulation scheme is slight, i.e., 4 to 5%. In addition, the number of required control signaling bits in the RB-dependent modulation scheme is greater than that for the RB-common modulation scheme. Therefore, we conclude that the RB-common modulation and channel coding rate scheme is preferred when multiple RBs of the same coded stream are assigned to one user in the case of single-antenna transmission.
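The mutual-information-based RB-common MCS selection can be sketched as follows. Everything here is an assumption for illustration: the MCS table is invented, the Gaussian capacity log2(1+SNR) stands in for the modulation-constrained mutual information, and `backoff` is an invented knob standing in for link-adaptation margin:

```python
import math

# Hypothetical MCS table: (name, spectral efficiency in bit/s/Hz).
MCS = [("QPSK 1/2", 1.0), ("QPSK 3/4", 1.5), ("16QAM 1/2", 2.0),
       ("16QAM 3/4", 3.0), ("64QAM 3/4", 4.5)]

def mutual_information(snr_db):
    """Gaussian-input capacity as a stand-in for the modulation-
    constrained mutual information used in the paper."""
    return math.log2(1.0 + 10 ** (snr_db / 10.0))

def select_mcs(rb_snrs_db, backoff=0.75):
    """RB-common selection: average the per-RB mutual information over
    all assigned RBs, then pick the highest-rate MCS it supports."""
    mi = sum(mutual_information(s) for s in rb_snrs_db) / len(rb_snrs_db)
    usable = [m for m in MCS if m[1] <= backoff * mi]
    return usable[-1] if usable else MCS[0]

print(select_mcs([3.0, 8.0, 12.0, 6.0]))  # one MCS for all four RBs
```

Averaging mutual information rather than SNR is what makes a single, RB-common MCS a good fit across RBs with very different channel quality.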
Performance of coded MFSK in a Rician fading channel. [Multiple Frequency Shift Keyed modulation
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1975-01-01
The performance of convolutional codes in conjunction with noncoherent multiple frequency shift-keyed (MFSK) modulation and Viterbi maximum likelihood decoding on a Rician fading channel is examined in detail. While the primary motivation underlying this work has been concerned with system performance on the planetary entry channel, it is expected that the results are of considerably wider interest. Particular attention is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with the results of theoretical propagation studies. Fairly general upper bounds on bit error probability performance in the presence of fading are derived and compared with simulation results using both unquantized and quantized receiver outputs. The effects of receiver quantization and channel memory are investigated and it is concluded that the coded noncoherent MFSK system offers an attractive alternative to coherent BPSK in providing reliable low data rate communications in fading channels typical of planetary entry missions.
Analysis of automatic repeat request methods for deep-space downlinks
NASA Technical Reports Server (NTRS)
Pollara, F.; Ekroot, L.
1995-01-01
Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
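The trade-off can be made concrete with two small formulas (textbook ARQ accounting, not the article's full concatenated-code analysis):

```python
def expected_transmissions(p_word_error, max_retx):
    """Expected transmissions per codeword when each attempt fails
    independently with probability p and at most max_retx
    retransmissions are allowed (truncated geometric distribution)."""
    p = p_word_error
    return sum((k + 1) * (1 - p) * p ** k for k in range(max_retx)) \
        + (max_retx + 1) * p ** max_retx

def residual_word_error(p_word_error, max_retx):
    """A word is still in error only if every attempt fails."""
    return p_word_error ** (max_retx + 1)

# On a power-limited link, average energy per delivered word scales with
# the expected transmission count, so a higher raw word-error rate can
# be tolerated in exchange for occasional retransmissions.
print(expected_transmissions(0.5, 1), residual_word_error(0.5, 1))
```

With unlimited retransmissions the expected count tends to 1/(1-p) while the residual word-error probability tends to zero, which is the "essentially error-free" regime the article refers to.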
NASA Technical Reports Server (NTRS)
Divsalar, D.; Naderi, F.
1982-01-01
The nature of the optical/microwave interface aboard the relay satellite is considered. To allow for maximum system flexibility, without overburdening either the optical or RF channel, demodulating the optical signal on board the relay satellite but leaving the optical channel decoding to be performed at the ground station is examined. The occurrence of erasures in the optical channel is treated. A hard decision on the erasure (i.e., the relay selecting a symbol at random in case of erasure occurrence) seriously degrades the performance of the overall system. Coding the erasure occurrences at the relay and transmitting this information via an extra bit to the ground station where it can be used by the decoder is suggested. Many examples with varying bit/photon energy efficiency and for the noisy and noiseless optical channel are considered. It is shown that coding the erasure occurrences dramatically improves the performance of the cascaded channel relative to the case of hard decision on the erasure by the relay.
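The benefit of flagging erasures rather than guessing can be sketched in capacity terms (a simplified illustrative model, not the paper's cascaded optical/RF analysis):

```python
import math

def capacity_flagged(M, e):
    """M-ary erasure channel: flagging erasures preserves (1-e)*log2(M)
    bits per symbol."""
    return (1 - e) * math.log2(M)

def capacity_guessed(M, e):
    """Relay guesses a random symbol on erasure: the channel becomes an
    M-ary symmetric channel with error probability p = e*(1 - 1/M)."""
    p = e * (1 - 1 / M)
    if p == 0:
        return math.log2(M)
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    # C = log2(M) - H(p) - p*log2(M-1) for the M-ary symmetric channel
    return math.log2(M) - h - p * math.log2(M - 1)

for e in (0.05, 0.2):
    print(e, capacity_flagged(256, e), capacity_guessed(256, e))
```

The flagged channel always has the larger capacity: converting a known erasure into an unknown error destroys the side information the ground decoder could otherwise exploit, which is the quantitative reason the extra erasure bit pays off.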
Keys and seats: Spatial response coding underlying the joint spatial compatibility effect.
Dittrich, Kerstin; Dolk, Thomas; Rothe-Wulf, Annelie; Klauer, Karl Christoph; Prinz, Wolfgang
2013-11-01
Spatial compatibility effects (SCEs) are typically observed when participants have to execute spatially defined responses to nonspatial stimulus features (e.g., the color red or green) that randomly appear to the left and the right. Whereas a spatial correspondence of stimulus and response features facilitates response execution, a noncorrespondence impairs task performance. Interestingly, the SCE is drastically reduced when a single participant responds to one stimulus feature (e.g., green) by operating only one response key (individual go/no-go task), whereas a full-blown SCE is observed when the task is distributed between two participants (joint go/no-go task). This joint SCE (a.k.a. the social Simon effect) has previously been explained by action/task co-representation, whereas alternative accounts ascribe joint SCEs to spatial components inherent in joint go/no-go tasks that allow participants to code their responses spatially. Although increasing evidence supports the idea that spatial rather than social aspects are responsible for the emergence of joint SCEs, it is still unclear to which component(s) the spatial coding refers: the spatial orientation of response keys, the spatial orientation of responding agents, or both. By varying the spatial orientation of the responding agents (Exp. 1) and of the response keys (Exp. 2), independent of the spatial orientation of the stimuli, in the present study we found joint SCEs only when both the seating and the response key alignment matched the stimulus alignment. These results provide evidence that spatial response coding refers not only to the response key arrangement, but also to the often-neglected spatial orientation of the responding agents.
Coding for Efficient Image Transmission
NASA Technical Reports Server (NTRS)
Rice, R. F.; Lee, J. J.
1986-01-01
NASA publication second in series on data-coding techniques for noiseless channels. Techniques used even in noisy channels, provided data further processed with Reed-Solomon or other error-correcting code. Techniques discussed in context of transmission of monochrome imagery from Voyager II spacecraft but applicable to other streams of data. Objective of this type of coding to "compress" data; that is, to transmit using as few bits as possible by omitting as much as possible of the information repeated in subsequent samples (or picture elements).
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
NASA Astrophysics Data System (ADS)
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. 
Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.
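The data-balancing idea can be sketched as follows (a hypothetical weighting; MARE2DEM's actual normalization may differ in detail):

```python
import numpy as np

def balanced_chi2(residuals_by_set):
    """Normalize each data set's misfit by its own size so that a large
    data set (e.g. dense CSEM) cannot drown out a small one (e.g. MT)
    in the joint objective function."""
    total = 0.0
    for r in residuals_by_set:                 # r = (data - model) / sigma
        r = np.asarray(r, dtype=float)
        total += np.sum(r ** 2) / r.size       # per-set mean-square misfit
    return total / len(residuals_by_set)

# Two data sets with identical per-point misfit but very different sizes
# contribute equally to the balanced objective.
print(balanced_chi2([np.ones(1000), np.ones(10)]))
```

Without the per-set normalization, the 1000-point set would dominate the misfit by two orders of magnitude and the inversion would effectively ignore the smaller data type.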
Advanced techniques and technology for efficient data storage, access, and transfer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Miller, Warner
1991-01-01
Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
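A minimal sketch of the two-stage structure described above: a predictive preprocessor producing nonnegative "standard form" residuals, followed by an adaptive coder that picks the best Rice parameter per block. The delta predictor, zigzag mapping and parameter set here are illustrative simplifications of the actual Rice algorithm:

```python
def preprocess(samples):
    """Predictive first stage: delta-predict, then zigzag-map signed
    residuals to the nonnegative 'standard form' the coder expects."""
    out, prev = [], 0
    for s in samples:
        d = s - prev
        out.append(2 * d if d >= 0 else -2 * d - 1)   # zigzag mapping
        prev = s
    return out

def rice_encode(values, k):
    """Golomb-Rice coding: unary quotient, then k-bit binary remainder."""
    bits = []
    for v in values:
        bits.append("1" * (v >> k) + "0")
        if k:
            bits.append(format(v & ((1 << k) - 1), f"0{k}b"))
    return "".join(bits)

def adaptive_encode(values, ks=(0, 1, 2, 3, 4)):
    """Adaptive second stage: per block, pick the Rice parameter k that
    yields the shortest bit stream (a crude stand-in for the option
    selection of the Rice algorithm)."""
    return min((rice_encode(values, k) for k in ks), key=len)

smooth = [100, 101, 103, 102, 104, 104, 105]
coded = adaptive_encode(preprocess(smooth))
print(len(coded), "bits vs", 8 * len(smooth), "raw")
```

The adaptation to local statistics happens entirely in the choice of k per block, which is why the scheme stays efficient as source statistics drift.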
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
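The figure of merit can be sketched for the simplest case of a binary symmetric channel per barcode module (an illustrative stand-in for the paper's definition, which models the speckle-noise statistics of the optical cryptosystem):

```python
import math

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

def average_channel_capacity(n_modules, flip_probs):
    """Hypothetical figure of merit in the spirit of the paper: average
    the per-module capacity when nonlocal speckle noise makes the flip
    probability vary across the barcode, scaled by module count."""
    return sum(bsc_capacity(p) for p in flip_probs) / len(flip_probs) * n_modules

print(average_channel_capacity(100, [0.01, 0.05, 0.2, 0.02]))
```

Comparing this quantity across encoding schemes captures the joint effect of storage density and noise resistance, which is the comparison the paper makes between QR and BCH-based barcodes.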
Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations
NASA Astrophysics Data System (ADS)
Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; JET Contributors
2017-09-01
The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.
Joint diseases: from connexins to gap junctions.
Donahue, Henry J; Qu, Roy W; Genetos, Damian C
2017-12-19
Connexons form the basis of hemichannels and gap junctions. They are composed of six tetraspan proteins called connexins. Connexons can function as individual hemichannels, releasing cytosolic factors (such as ATP) into the pericellular environment. Alternatively, two hemichannel connexons from neighbouring cells can come together to form gap junctions, membrane-spanning channels that facilitate cell-cell communication by enabling signalling molecules of approximately 1 kDa to pass from one cell to an adjacent cell. Connexins are expressed in joint tissues including bone, cartilage, skeletal muscle and the synovium. Indicative of their importance as gap junction components, connexins are also known as gap junction proteins, but individual connexin proteins are gaining recognition for their channel-independent roles, which include scaffolding and signalling functions. Considerable evidence indicates that connexons contribute to the function of bone and muscle, but less is known about the function of connexons in other joint tissues. However, the implication that connexins and gap junctional channels might be involved in joint disease, including age-related bone loss, osteoarthritis and rheumatoid arthritis, emphasizes the need for further research into these areas and highlights the therapeutic potential of connexins.
Elasto-Plastic Analysis of Tee Joints Using HOT-SMAC
NASA Technical Reports Server (NTRS)
Arnold, Steve M. (Technical Monitor); Bednarcyk, Brett A.; Yarrington, Phillip W.
2004-01-01
The Higher Order Theory - Structural/Micro Analysis Code (HOT-SMAC) software package is applied to analyze the linearly elastic and elasto-plastic response of adhesively bonded tee joints. Joints of this type are finding an increasing number of applications with the increased use of composite materials within advanced aerospace vehicles, and improved tools for the design and analysis of these joints are needed. The linearly elastic results of the code are validated against finite element analysis results from the literature under different loading and boundary conditions, and new results are generated to investigate the inelastic behavior of the tee joint. The comparison with the finite element results indicates that HOT-SMAC is an efficient and accurate alternative to the finite element method and has a great deal of potential as an analysis tool for a wide range of bonded joints.
Matching relations for optimal entanglement concentration and purification
Kong, Fan-Zhen; Xia, Hui-Zhi; Yang, Ming; Yang, Qing; Cao, Zhuo-Liang
2016-01-01
The bilateral controlled NOT (CNOT) operation plays a key role in the standard entanglement purification process, but the CNOT operation may not be the optimal joint operation in the sense that the output entanglement is maximized. In this paper, the CNOT operations in both the Schmidt-projection based entanglement concentration and the entanglement purification schemes are replaced with a general joint unitary operation, and the optimal matching relations between the entangling power of the joint unitary operation and the non-maximally entangled channel are found for optimizing the entanglement increment or the output entanglement. The result is somewhat counter-intuitive for entanglement concentration. The output entanglement is maximized when the entangling power of the joint unitary operation and the quantum channel satisfy a certain relation. There exists a variety of joint operations with non-maximal entangling power that can induce a maximal output entanglement, which greatly broadens the set of potential joint operations in entanglement concentration. In addition, the entanglement increment in the purification process is maximized only by joint unitary operations (including CNOT) with maximal entangling power. PMID:27189800
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.
2016-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first principles calculations with the challenge to capture both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near source region, we employ a fully-coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside of the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time marching scheme of SPECFEM3D. We will present validation tests with Sharpe's model and comparisons of waveforms modeled with Rg waves (2-8 Hz) that were recorded up to 2 km for SPE. We especially show effects of the local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to pure elastic models such as Denny & Johnson (1991).
NASA Astrophysics Data System (ADS)
Lv, Shu-Xin; Zhao, Zheng-Wei; Zhou, Ping
2018-01-01
We present a scheme for joint remote implementation of an arbitrary single-qubit operation following some ideas in one-way quantum computation. All the senders share the information of implemented quantum operation and perform corresponding single-qubit measurements according to their information of implemented operation. An arbitrary single-qubit operation can be implemented upon the remote receiver's quantum system if the receiver cooperates with all the senders. Moreover, we study the protocol of multiparty joint remote implementation of an arbitrary single-qubit operation with many senders by using a multiparticle entangled state as the quantum channel.
Cognitive CDMA Channelization
Kanke
2010-03-01
The proposed scheme for power and code allocation for the secondary user is outlined in Fig. 2 of the report, and the simulation studies consider a primary DS-CDMA system. (Report period covered: January 2008 – June 2009.)
MrLavaLoba: A new probabilistic model for the simulation of lava flows as a settling process
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, Mattia; Tarquini, Simone
2018-01-01
A new code to simulate lava flow spread, MrLavaLoba, is presented. In the code, erupted lava is itemized in parcels having an elliptical shape and prescribed volume. New parcels bud from existing ones according to a probabilistic law influenced by the local steepest slope direction and by tunable input settings. MrLavaLoba belongs among the probabilistic codes for the simulation of lava flows: it is not intended to mimic the actual flow process or to provide the time progression of the flow field directly, but rather to estimate the most probable inundated area and the final thickness of the lava deposit. The code's flexibility allows it to produce variable lava flow spread and emplacement according to different dynamics (e.g. pahoehoe or channelized-'a'ā). For a given scenario, it is shown that model outputs converge, in probabilistic terms, towards a single solution. The code is applied to real cases in Hawaii and at Mt. Etna, and the resulting maps are shown. The model is written in Python and the source code is available at http://demichie.github.io/MrLavaLoba/.
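The budding rule can be caricatured in a few lines (purely illustrative: a planar slope stands in for the DEM, and the real code's elliptical parcels, volume bookkeeping and input settings are omitted):

```python
import math, random

def steepest_descent(x, y):
    """Downhill unit direction of a hypothetical planar slope dipping
    toward +x (a real run would query the DEM here)."""
    return (1.0, 0.0)

def simulate(n_parcels=2000, sigma=0.6, step=1.0, seed=7):
    """Parcels bud from randomly chosen ancestors in a direction drawn
    around the local steepest descent: a caricature of MrLavaLoba's
    probabilistic budding rule."""
    rng = random.Random(seed)
    parcels = [(0.0, 0.0)]                    # the vent
    for _ in range(n_parcels):
        x, y = rng.choice(parcels)            # ancestor parcel
        dx, dy = steepest_descent(x, y)
        theta = math.atan2(dy, dx) + rng.gauss(0.0, sigma)
        parcels.append((x + step * math.cos(theta),
                        y + step * math.sin(theta)))
    return parcels

flow = simulate()
mean_x = sum(p[0] for p in flow) / len(flow)
assert mean_x > 0      # the parcel cloud spreads downslope on average
```

Tightening `sigma` produces narrow, channel-like deposits while widening it produces broad pahoehoe-like spread, which mirrors the tunable-dynamics behaviour described in the abstract.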
Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.
Klempova, Bibiana; Liepelt, Roman
2017-07-08
Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information and information about the just-relevant control state active in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by a larger smJSE in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation impacted neither event-file processing nor referential coding, but generally slowed response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times.
Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and more generally when dealing with a more complex task in isolation.
Neutronic calculation of fast reactors by the EUCLID/V1 integrated code
NASA Astrophysics Data System (ADS)
Koltashev, D. A.; Stakhanova, A. A.
2017-01-01
This article considers the neutronic calculation of a fast-neutron lead-cooled reactor, BREST-OD-300, by the EUCLID/V1 integrated code. The main goal of the development and application of integrated codes is nuclear power plant safety justification. EUCLID/V1 is an integrated code designed for coupled neutronic, thermomechanical and thermohydraulic fast reactor calculations under normal and abnormal operating conditions. The EUCLID/V1 code is being developed in the Nuclear Safety Institute of the Russian Academy of Sciences. The integrated code has a modular structure and consists of three main modules: the thermohydraulic module HYDRA-IBRAE/LM/V1, the thermomechanical module BERKUT and the neutronic module DN3D. In addition, the integrated code includes databases with fuel, coolant and structural material properties. The neutronic module DN3D provides full-scale simulation of neutronic processes in fast reactors. Heat source distributions, control rod movements, reactivity changes and other processes can be simulated. The neutron transport equation is solved in the multigroup diffusion approximation. This paper contains calculations implemented as part of EUCLID/V1 code validation: transient simulations of BREST-OD-300 (fuel assembly floating, decompression of a passive feedback system channel) and cross-validation against MCU-FR code results. The calculations demonstrate the application of the EUCLID/V1 code to BREST-OD-300 simulation and safety justification.
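The diffusion-approximation solve at the heart of such a neutronic module can be caricatured in one energy group and one dimension (illustrative only; DN3D is multigroup and three-dimensional, with cross sections from its material databases):

```python
import numpy as np

def diffusion_1d(n=200, length=1.0, D=1.0, sigma_a=5.0, source=1.0):
    """Finite-difference solve of -D u'' + sigma_a u = S on (0, L) with
    u = 0 at both boundaries: a one-group, one-dimensional caricature
    of a multigroup diffusion solver."""
    h = length / (n + 1)
    main = np.full(n, 2 * D / h**2 + sigma_a)   # tridiagonal operator
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.full(n, source))

flux = diffusion_1d()
assert flux.min() > 0                            # physical (positive) flux
```

A multigroup solver repeats this structure per energy group and couples the groups through scattering and fission source terms.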
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), K = 7 code on a computer using 20 channels having various error statistics, ranging from purely random errors to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder produced one fewer bit error.
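The Viterbi component can be illustrated with a minimal hard-decision decoder for a short rate-1/2 code (K = 3 with the classic (7,5) generators for brevity; the study used the standard (2,1), K = 7 code):

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder with zero-tail termination."""
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g).count("1") & 1 for g in gens]
    return out

def viterbi_decode(rx, n_bits, K=3, gens=(0b111, 0b101)):
    """Hard-decision Viterbi: minimum Hamming-distance path through the
    code trellis."""
    n_states, INF = 1 << (K - 1), 10**9
    metric = [0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits + K - 1):
        new_m = [INF] * n_states
        new_p = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)
                ns = full & (n_states - 1)
                sym = [bin(full & g).count("1") & 1 for g in gens]
                d = metric[s] + sum(x != y for x, y in zip(sym, rx[2*t:2*t+2]))
                if d < new_m[ns]:
                    new_m[ns], new_p[ns] = d, paths[s] + [b]
        metric, paths = new_m, new_p
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:n_bits]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = conv_encode(msg)
tx[3] ^= 1; tx[11] ^= 1                     # two isolated channel errors
assert viterbi_decode(tx, len(msg)) == msg  # free distance 5 corrects them
```

Isolated errors like these are exactly where Viterbi decoding shines; dense bursts exceed the free-distance budget, which is the regime the hybrid algebraic decoder was built to handle.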
Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, G.T.; Craven, R.E.
X-ray energy spectra of the Maxwell Laboratories MBS and Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate channel response functions, and the UFO code to unfold the spectrum.
Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen
2018-05-25
Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, all of which complicate the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt a long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low-SNR and high-dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST.
Moreover, with the advantage of higher detection probability and lower false alarm probability, it has a lower mean acquisition time than traditional XFAST, DF-XFAST and zero-padding.
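The folding-and-correlate idea behind XFAST-style acquisition can be sketched in a few lines. The sizes, seed, and noise-free block below are illustrative assumptions, not the paper's parameters; the point is that circular FFT correlation of one incoming block against a folded local replica still reveals the code phase modulo the block length.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "long" PN code and folding parameters (illustrative sizes only).
L, N, F = 4096, 512, 8          # full code length, block length, fold factor (F*N = L)
code = rng.choice([-1.0, 1.0], size=L)

true_phase = 1337               # unknown code phase of the incoming signal
incoming = np.roll(code, -true_phase)[:N]   # one received block (noise-free here)

# Fold the local code: sum F consecutive length-N segments into one replica.
folded = code.reshape(F, N).sum(axis=0)

# Circular cross-correlation via FFT between the incoming block and the replica.
corr = np.fft.ifft(np.fft.fft(incoming) * np.conj(np.fft.fft(folded))).real

est = int(np.argmax(corr))      # estimated code phase, modulo the block length N
```

The non-matching folds behave like extra noise at the correlation peak, which is the SNR-loss price of folding that the abstract mentions.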
Sümeyra Tosun: Psi Chi/APA Edwin B. Newman Graduate Research Award.
2014-11-01
The Edwin B. Newman Graduate Research Award is given jointly by Psi Chi and APA. The award was established to recognize young researchers at the beginning of their professional lives and to commemorate both the 50th anniversary of Psi Chi and the 100th anniversary of psychology as a science (dating from the founding of Wundt's laboratory). The 2014 recipient is Sümeyra Tosun. Tosun was chosen for "an outstanding research paper that examines the cognitive repercussions of obligatory versus optional marking of evidentiality, the linguistic coding of the source of information. In English, evidentiality is conveyed in the lexicon through the use of adverbs. In Turkish, evidentiality is coded in the grammar. In two experiments, it was found that English speakers were equally good at remembering and monitoring the source of firsthand information and the source of non-firsthand information. Turkish speakers were worse at remembering and monitoring non-firsthand information than firsthand information and were worse than English speakers at remembering and monitoring non-firsthand information." Tosun's award citation, biography, and a selected bibliography are presented here. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
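The puncturing construction behind RCPC codes is simple enough to sketch. The (7, 5) generator pair and the keep/delete pattern below are illustrative choices, not the codes of the paper: a rate-1/2 mother code is encoded normally, then bits are deleted according to a periodic pattern to raise the rate to 2/3.

```python
# Rate-1/2 feedforward convolutional encoder, generators (7, 5) in octal,
# followed by puncturing to rate 2/3. Pattern choice is illustrative.
def conv_encode(bits, gens=(0b111, 0b101), k=3):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

def puncture(coded, pattern):
    # pattern: 0/1 flags cycled over the coded stream; 0 = delete that bit.
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)           # 16 coded bits, rate 1/2
p23 = [1, 1, 1, 0]                 # keep 3 of every 4 bits -> rate 2/3
sent = puncture(coded, p23)
```

Rate compatibility means the higher-rate patterns are subsets of the lower-rate ones, so an ARQ retransmission can simply send the previously punctured bits.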
Noncoherent Physical-Layer Network Coding with FSK Modulation: Relay Receiver Design Issues
2011-03-01
IEEE Transactions on Communications, Vol. 59, No. 9, September 2011, p. 2595. Index terms: noncoherent reception, channel estimation. From the introduction: "In the two-way relay channel (TWRC), a pair of source terminals exchange information..."
Rotary Joints With Electrical Connections
NASA Technical Reports Server (NTRS)
Osborn, F. W.
1986-01-01
Power and data transmitted on many channels. Two different rotary joints equipped with electrical connections between rotating and stationary parts. One joint transmits axial thrust and serves as interface between spinning and nonspinning parts of Galileo spacecraft. Other is scanning (limited-rotation) joint that aims scientific instruments from nonspinning part. Selected features of both useful to designers of robots, advanced production equipment, and remotely controlled instruments.
Fusimotor control of spindle sensitivity regulates central and peripheral coding of joint angles.
Lan, Ning; He, Xin
2012-01-01
Proprioceptive afferents from muscle spindles encode information about peripheral joint movements for the central nervous system (CNS). The sensitivity of the muscle spindle depends nonlinearly on the activation of gamma (γ) motoneurons in the spinal cord, which receive inputs from the motor cortex. How fusimotor control of spindle sensitivity affects proprioceptive coding of joint position is not clear. Furthermore, what information is carried in the fusimotor signal from the motor cortex to the muscle spindle is largely unknown. In this study, we addressed the issue of communication between the central and peripheral sensorimotor systems using a computational approach based on the virtual arm (VA) model. In simulation experiments within the operational range of joint movements, the gamma static commands (γ(s)) to the spindles of both mono-articular and bi-articular muscles were hypothesized (1) to remain constant, (2) to be modulated with joint angles linearly, and (3) to be modulated with joint angles nonlinearly. Simulation results revealed a nonlinear landscape of Ia afferents with respect to both γ(s) activation and joint angle. Among the three hypotheses, the constant and linear strategies did not yield Ia responses that matched the experimental data and were therefore rejected as plausible strategies of spindle sensitivity control. However, if γ(s) commands were quadratically modulated with joint angles, a robust linear relation between Ia afferents and joint angles could be obtained in both mono-articular and bi-articular muscles. With the quadratic strategy of spindle sensitivity control, γ(s) commands may serve as the CNS outputs that inform the periphery of central coding of joint angles. The results suggest that the information of joint angles may be communicated between the CNS and muscles via the descending γ(s) efferent and Ia afferent signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Scott Carlton; Roberts, Jesse D.
2014-03-01
This document describes the marine hydrokinetic (MHK) input file and subroutines for the Sandia National Laboratories Environmental Fluid Dynamics Code (SNL-EFDC), which is a combined hydrodynamic, sediment transport, and water quality model based on the Environmental Fluid Dynamics Code (EFDC) developed by John Hamrick [1], formerly sponsored by the U.S. Environmental Protection Agency, and now maintained by Tetra Tech, Inc. SNL-EFDC has been previously enhanced with the incorporation of the SEDZLJ sediment dynamics model developed by Ziegler, Lick, and Jones [2-4]. SNL-EFDC has also been upgraded to more accurately simulate algae growth with specific application to optimizing biomass in an open-channel raceway for biofuels production [5]. A detailed description of the input file containing data describing the MHK device/array is provided, along with a description of the MHK FORTRAN routine. Both a theoretical description of the MHK dynamics as incorporated into SNL-EFDC and an explanation of the source code are provided. This user manual is meant to be used in conjunction with the original EFDC [6] and sediment dynamics SNL-EFDC manuals [7]. Through this document, the authors provide information for users who wish to model the effects of an MHK device (or array of devices) on a flow system with EFDC and who also seek a clear understanding of the source code, which is available from staff in the Water Power Technologies Department at Sandia National Laboratories, Albuquerque, New Mexico.
Cracking Taste Codes by Tapping into Sensory Neuron Impulse Traffic
Frank, Marion E.; Lundy, Robert F.; Contreras, Robert J.
2008-01-01
Insights into the biological basis for mammalian taste quality coding began with electrophysiological recordings from “taste” nerves, and this technique continues to produce essential information today. Chorda tympani (geniculate ganglion) neurons, which are particularly involved in taste quality discrimination, are specialists or generalists. Specialists respond to stimuli characterized by a single taste quality as defined by behavioral cross-generalization in conditioned taste tests. Generalists respond to electrolytes that elicit multiple aversive qualities. Na+-salt (N) specialists in rodents and sweet-stimulus (S) specialists in multiple orders of mammals are well characterized. Specialists are associated with species’ nutritional needs, and their activation is known to be malleable by internal physiological conditions and contaminated external caloric sources. S specialists, associated with the heterodimeric G-protein-coupled receptor T1R, and N specialists, associated with the epithelial sodium channel ENaC, are consistent with labeled-line coding from taste bud to afferent neuron. Yet S-specialist neurons and behavior are less specific than T1R2-3 in encompassing glutamate, and E generalist neurons are much less specific than a candidate sour receptor, a PKD TRP channel, in encompassing salts and bitter stimuli. Specialist labeled lines for nutrients and generalist patterns for aversive electrolytes may be transmitting taste information to the brain side by side. However, specific roles of generalists in taste quality coding may be resolved by selecting stimuli and stimulus levels found in natural situations. T2Rs, participating in reflexes via the glossopharyngeal nerve, became highly diversified in mammalian phylogenesis as they evolved to deal with dangerous substances within specific environmental niches.
Establishing the information that afferent neurons traffic to the brain about natural taste stimuli embedded in dynamic complex mixtures will ultimately “crack taste codes.” PMID:18824076
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. 
The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
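As a small companion to the perfect code mentioned above, the following sketch checks that the standard stabilizer generators of the five-qubit code pairwise commute, using the rule that tensor products of Paulis commute iff they disagree on an even number of non-identity positions.

```python
from itertools import combinations

# Standard stabilizer generators of the five-qubit "perfect" code.
stabs = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def commute(p, q):
    # Tensor products of Paulis commute iff the number of positions where
    # both factors are non-identity and different is even.
    clashes = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return clashes % 2 == 0

all_commute = all(commute(p, q) for p, q in combinations(stabs, 2))
```

The same symplectic-parity rule underlies the CSS commutation condition referred to in the abstract.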
Performance of DBS-Radio using concatenated coding and equalization
NASA Technical Reports Server (NTRS)
Gevargiz, J.; Bell, D.; Truong, L.; Vaisnys, A.; Suwitra, K.; Henson, P.
1995-01-01
The Direct Broadcast Satellite-Radio (DBS-R) receiver is being developed for operation in a multipath Rayleigh channel. This receiver uses equalization and concatenated coding, in addition to open-loop and closed-loop architectures for carrier demodulation and symbol synchronization. Performance test results of this receiver are presented in both AWGN and multipath Rayleigh channels. Simulation results show that the performance of the receiver operating in a multipath Rayleigh channel is significantly improved by using equalization. These results show that fractional-symbol equalization offers a performance advantage over full-symbol equalization. Also presented is the baseline performance of the DBS-R receiver using concatenated coding and interleaving.
Capacity of a direct detection optical communication channel
NASA Technical Reports Server (NTRS)
Tan, H. H.
1980-01-01
The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.
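The background-free PPM result can be illustrated numerically. Assuming an ideal photon-counting receiver with no background light (the parameters below are arbitrary), a PPM symbol fails only when the signal slot registers zero photons, an event of probability exp(-Ks) for mean signal count Ks.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
M, Ks, trials = 16, 3.0, 200_000   # PPM order, mean signal photon count (illustrative)

# With no background light, only the pulsed slot can register photons, so an
# M-ary PPM symbol is decoded correctly unless zero photons arrive there.
counts = rng.poisson(Ks, size=trials)
p_erasure = (counts == 0).mean()   # Monte Carlo estimate of exp(-Ks)
```

Each symbol either decodes perfectly or is erased, which is why coded PPM behaves like coding for an erasure channel in this regime.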
Wall-resolved spectral cascade-transport turbulence model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Shaver, D. R.; Lahey, R. T.; ...
2017-07-08
A spectral cascade-transport model has been developed and applied to turbulent channel flows (Reτ = 550, 950, and 2000 based on friction velocity, uτ; or ReδM = 8,500, 14,800 and 31,000, based on the mean velocity and channel half-width). This model is an extension of a spectral model previously developed for homogeneous single- and two-phase decay of isotropic turbulence and uniform shear flows, and of a spectral turbulence model for wall-bounded flows that does not resolve the boundary layer. Data from direct numerical simulation (DNS) of turbulent channel flow were used to help develop this model and to assess its performance in the 1D direction across the channel width. The resultant spectral model is capable of predicting the mean velocity, turbulent kinetic energy and energy spectrum distributions for single-phase wall-bounded flows all the way to the wall, where the model source terms have been developed to account for the wall influence. We implemented the model into the 3D multiphase CFD code NPHASE-CMFD, and the latest results are within reasonable error of the 1D predictions.
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.; Pitarka, A.; Vorobiev, O.; Glenn, L.; Antoun, T.
2017-12-01
We have performed three-dimensional high-resolution simulations of underground chemical explosions conducted recently in a jointed rock outcrop as part of the Source Physics Experiments (SPE) being conducted at the Nevada National Security Site (NNSS). The main goal of the current study is to investigate the effects of the structural and geomechanical properties on the spall phenomena due to underground chemical explosions and their subsequent effect on the seismo-acoustic signature at far distances. Two parametric studies have been undertaken to assess the impact of 1) different conceptual geological models, including single-layer and two-layer models, with and without joints and with and without varying geomechanical properties, and 2) the depth of burst and yield of the chemical explosions. Through these investigations we have explored not only the near-field response of the chemical explosions but also the far-field seismic and acoustic signatures. The near-field simulations were conducted using the Eulerian and Lagrangian codes GEODYN and GEODYN-L, respectively, while the far-field seismic simulations were conducted using the elastic wave propagation code WPP, and the acoustic response using the Kirchhoff-Helmholtz-Rayleigh time-dependent approximation code KHR. Through a series of simulations we have recorded the velocity field histories 1) at the ground surface on an acoustic-source-patch for the acoustic simulations, and 2) on a seismic-source-box for the seismic simulations. We first analyzed the SPE3 experimental data and simulated results, then simulated SPE4-prime, SPE5, and SPE6 to anticipate their seismo-acoustic responses under uncertain conditions. SPE experiments were conducted in a granitic formation; we have extended the parametric study to include other geological settings such as dolomite and alluvial formations.
These parametric studies enabled us 1) to investigate the geotechnical and geophysical key parameters that affect the seismo-acoustic responses of underground chemical explosions and 2) to decipher and rank, through a global sensitivity analysis, the most important parameters to be characterized on site to minimize uncertainties in prediction and discrimination.
The helium star donor channel for the progenitors of Type Ia supernovae
NASA Astrophysics Data System (ADS)
Wang, B.; Meng, X.; Chen, X.; Han, Z.
2009-05-01
Type Ia supernovae (SNe Ia) play an important role in astrophysics, especially in the study of cosmic evolution. Several progenitor models for SNe Ia have been proposed in the past. In this paper we carry out a detailed study of the He star donor channel, in which a carbon-oxygen white dwarf (CO WD) accretes material from a He main-sequence star or a He subgiant to increase its mass to the Chandrasekhar mass. Employing Eggleton's stellar evolution code with an optically thick wind assumption, and adopting the prescription of Kato & Hachisu for the mass accumulation efficiency of the He-shell flashes on to the WDs, we performed binary evolution calculations for about 2600 close WD binary systems. According to these calculations, we mapped out the initial parameters for SNe Ia in the orbital period-secondary mass (log P^i, M^i_2) plane for various WD masses from this channel. The study shows that the He star donor channel is noteworthy for producing SNe Ia (~1.2 × 10^-3 yr^-1 in our Galaxy), and that the progenitors from this channel may appear as supersoft X-ray sources. Importantly, this channel can explain SNe Ia with short delay times (≲ 10^8 yr), which is consistent with the recent observational implications of young populations of SN Ia progenitors.
Information trade-offs for optical quantum communication.
Wilde, Mark M; Hayden, Patrick; Guha, Saikat
2012-04-06
Recent work has precisely characterized the achievable trade-offs between three key information processing tasks: classical communication (generation or consumption), quantum communication (generation or consumption), and shared entanglement (distribution or consumption), measured in bits, qubits, and ebits per channel use, respectively. Slices and corner points of this three-dimensional region reduce to well-known protocols for quantum channels. A trade-off coding technique can attain any point in the region and can outperform time sharing between the best-known protocols for accomplishing each information processing task by itself. Previously, the benefits of trade-off coding that had been found were too small to be of practical value (viz., for the dephasing and the universal cloning machine channels). In this Letter, we demonstrate that the associated performance gains are in fact remarkably high for several physically relevant bosonic channels that model free-space or fiber-optic links, thermal-noise channels, and amplifiers. We show that significant performance gains from trade-off coding also apply when trading photon-number resources between transmitting public and private classical information simultaneously over secret-key-assisted bosonic channels. © 2012 American Physical Society
Statistical mechanics of broadcast channels using low-density parity-check codes.
Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David
2003-03-01
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
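The time-sharing comparison above can be made concrete with the textbook rate region of a degraded binary symmetric broadcast channel. The crossover probabilities and superposition parameter below are illustrative assumptions; the computation shows a superposition-coding corner point strictly outside the time-sharing line, which is the gap the LDPC construction tries to approach.

```python
import math

def h(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, p):
    # Binary convolution a * p: effective crossover of cascaded BSCs.
    return a * (1 - p) + p * (1 - a)

p1, p2, a = 0.05, 0.15, 0.2   # good/bad user crossover probs, superposition parameter

# Superposition-coding corner point for the degraded BSC broadcast channel.
R1 = h(conv(a, p1)) - h(p1)   # rate to the better receiver
R2 = 1 - h(conv(a, p2))       # rate to the worse receiver

# Time sharing that delivers the same R2, for comparison.
C1, C2 = 1 - h(p1), 1 - h(p2)
R1_ts = (1 - R2 / C2) * C1
```

At the same R2, superposition coding leaves a strictly larger R1 than splitting the channel uses between the two users.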
On transform coding tools under development for VP10
NASA Astrophysics Data System (ADS)
Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao
2016-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
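The role of transform choice is easiest to see on a toy residue. The sketch below, with an illustrative length-8 smooth residue, applies an orthonormal DCT-II written out directly and measures how much energy the first two coefficients capture; residues with different structure favor different transforms, which motivates the larger transform sets described above.

```python
import numpy as np

def dct2(x):
    # Orthonormal DCT-II, written out directly for clarity.
    n = len(x)
    k = np.arange(n)[:, None]
    out = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n)) @ x
    out[0] *= np.sqrt(1 / n)
    out[1:] *= np.sqrt(2 / n)
    return out

residue = np.linspace(1.0, 2.0, 8)   # a smooth toy prediction residue

coeffs = dct2(residue)

# Orthonormality preserves total energy, and for a smooth input nearly all of
# it lands in the first two coefficients (energy compaction).
energy_kept = coeffs[:2] @ coeffs[:2] / (residue @ residue)
```

Quantizing only a few leading coefficients then costs little distortion, which is the basic payoff transform coding buys.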
Schuelert, N; Zhang, C; Mogg, A J; Broad, L M; Hepburn, D L; Nisenbaum, E S; Johnson, M P; McDougall, J J
2010-11-01
The present study examined whether local administration of the cannabinoid-2 (CB(2)) receptor agonist GW405833 could modulate joint nociception in control rat knee joints and in an animal model of osteoarthritis (OA). OA was induced in male Wistar rats by intra-articular injection of sodium mono-iodoacetate (MIA) with a recovery period of 14 days. Immunohistochemistry was used to evaluate the expression of CB(2) and transient receptor potential vanilloid channel-1 (TRPV1) receptors in the dorsal root ganglion (DRG) and synovial membrane of sham- and MIA-treated animals. Electrophysiological recordings were made from knee joint primary afferents in response to rotation of the joint both before and following close intra-arterial injection of different doses of GW405833. The effect of intra-articular GW405833 on joint pain perception was determined by hindlimb incapacitance. An in vitro neuronal release assay was used to see if GW405833 caused release of an inflammatory neuropeptide (calcitonin gene-related peptide, CGRP). CB(2) and TRPV1 receptors were co-localized in DRG neurons and synoviocytes in both sham- and MIA-treated animals. Local application of GW405833 significantly reduced joint afferent firing rate by up to 31% in control knees. In OA knee joints, however, GW405833 had a pronounced sensitising effect on joint mechanoreceptors. Co-administration of GW405833 with the CB(2) receptor antagonist AM630, or pre-administration of the TRPV1 ion channel antagonist SB366791, attenuated the sensitising effect of GW405833. In the pain studies, intra-articular injection of GW405833 into OA knees augmented hindlimb incapacitance, but had no effect on pain behaviour in saline-injected control joints. GW405833 evoked increased CGRP release via a TRPV1 channel-dependent mechanism. These data indicate that GW405833 reduces the mechanosensitivity of afferent nerve fibres in control joints but causes nociceptive responses in OA joints.
The observed pro-nociceptive effect of GW405833 appears to involve TRPV1 receptors. Copyright © 2010 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Wang, X; Xiao, H; Dai, X; Liu, X; Yu, X; Wu, J
2000-05-01
To study the joint neurotoxic effects of phoxim (Pho) and fenvalerate (Fen) on tetrodotoxin-sensitive (TTX-S) and tetrodotoxin-resistant (TTX-R) Na(+) currents in dorsal root ganglion (DRG) neurons of adult rat, the whole cell patch clamp technique was used to test the effects of Pho and Fen on TTX-S and TTX-R sodium currents in DRG neurons. The inactivation of the TTX-R sodium channel was markedly slowed by Fen: the tau(Na) of the peak currents at doses of 10, 50 and 100 micromol/L Fen and in the control group were (8.10 +/- 2.41) ms, (11.78 +/- 2.76) ms (P < 0.01), (8.76 +/- 1.94) ms (P < 0.05) and (6.41 +/- 1.32) ms, respectively. The inactivation of the TTX-R sodium channel tail currents was also significantly delayed by Fen: the tau(Na) of the tail currents at doses of 10, 50 and 100 micromol/L Fen and in the control group were (6.11 +/- 0.52) ms (P < 0.05), (7.82 +/- 0.82) ms (P < 0.05), (7.23 +/- 1.09) ms (P < 0.05) and (4.91 +/- 0.97) ms, respectively. Compared with the TTX-R sodium channel, the TTX-S sodium channel was less responsive to Fen exposure, which only led to slowly decaying TTX-S sodium tail currents. Pho had no effect on the TTX-S or TTX-R sodium channels, and the mixed treatment of Pho and Fen did not show a joint effect on the sodium currents. Both the peak and tail currents are changed by Fen; however, Fen has more remarkable effects on the TTX-R than on the TTX-S sodium channel. The combined exposure to Pho and Fen shows no joint effect on the sodium channel.
Modeling activities on the negative-ion-based Neutral Beam Injectors of the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostinetti, P.; Antoni, V.; Chitarin, G.
2011-09-26
At the National Institute for Fusion Science (NIFS), large-scale negative ion sources have been widely used for the Neutral Beam Injectors (NBIs) mounted on the Large Helical Device (LHD), which is the world's largest superconducting helical system. These injectors have achieved outstanding performances in terms of beam energy, negative-ion current and optics, and represent a reference for the development of heating and current drive NBIs for ITER. In the framework of the support activities for the ITER NBIs, the PRIMA test facility, which includes an RF-driven ion source with a 100 keV accelerator (SPIDER) and a complete 1 MeV Neutral Beam system (MITICA), is under construction at Consorzio RFX in Padova. An experimental validation of the codes has been undertaken in order to prove the accuracy of the simulations and the soundness of the SPIDER and MITICA design. To this purpose, the whole set of codes has been applied to the LHD NBIs in a joint activity between Consorzio RFX and NIFS, with the goal of comparing and benchmarking the codes with the experimental data. A description of these modeling activities and a discussion of the main results obtained are reported in this paper.
Wang, Qin; Wang, Xiang-Bin
2014-01-01
We present a model for the simulation of measurement-device-independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of an MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and also the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000
Shim, Kyusung; Do, Nhu Tri; An, Beongku
2017-01-01
In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, the maximal ratio combining (MRC) technique and the selection combining (SC) technique are each considered at the eavesdropper. To investigate the system performance in terms of secrecy outage probability (SOP), closed-form expressions of the SOP are derived. The developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy performance of the system compared to that of the random source relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimations compared to that required by the OJSRS scheme, especially in dense cooperative networks. PMID:28212286
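The relay-selection step of such schemes can be sketched with a toy Monte Carlo. The fading model, mean SNRs, and target secrecy rate below are illustrative assumptions, not the paper's system model: picking the relay with the largest instantaneous secrecy capacity lowers the secrecy outage probability relative to a random pick.

```python
import numpy as np

rng = np.random.default_rng(2)
K, trials = 4, 100_000                 # number of relays, Monte Carlo runs
snr_m, snr_e, Rs = 10.0, 5.0, 1.0      # mean SNRs (linear) and target secrecy rate

# Rayleigh fading -> exponentially distributed instantaneous SNRs.
g_main = rng.exponential(snr_m, size=(trials, K))
g_eve = rng.exponential(snr_e, size=(trials, K))

# Instantaneous secrecy capacity of each relay link, clipped at zero.
cs = np.maximum(np.log2(1 + g_main) - np.log2(1 + g_eve), 0.0)

sop_best = (cs.max(axis=1) < Rs).mean()  # select the relay maximizing Cs
sop_rand = (cs[:, 0] < Rs).mean()        # a fixed pick, same as random selection
```

The gain comes purely from selection diversity; the price, as the abstract notes, is the CSI needed to rank the candidate links.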
Self-calibrating threshold detector
NASA Technical Reports Server (NTRS)
Barnes, J. R.; Huang, M. Y. (Inventor)
1980-01-01
A self-calibrating threshold detector comprises a single demodulating channel which includes a mixer having one input receiving the incoming signal and another input receiving a local replica code. During a short time interval, an incorrect local code is applied to the mixer to incorrectly demodulate the incoming signal and to provide a reference level that calibrates the noise propagating through the channel. A sample and hold circuit is coupled to the channel for storing a sample of the reference level. During a relatively long time interval, the correct replica code provides an output level which ranges between the reference level and a maximum level that represents incoming signal presence and synchronism with the replica code. A summer subtracts the stored reference sample from the output level to provide a resultant difference signal indicative of the acquisition of the expected signal.
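The calibrate-then-subtract logic this abstract describes can be sketched numerically. The following is a minimal illustration with assumed signal amplitudes, code lengths, and threshold, not the patented circuit's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlate(signal, code):
    """Mixer plus integrator: correlate the incoming signal with a local replica code."""
    return float(np.mean(signal * code))

# Incoming signal: a known spreading code buried in noise (amplitudes assumed).
true_code = rng.choice([-1.0, 1.0], size=1000)
wrong_code = rng.choice([-1.0, 1.0], size=1000)
signal = 0.5 * true_code + rng.normal(0.0, 1.0, size=1000)

# Short calibration interval: mix with an incorrect code so that only noise
# reaches the output; the sample-and-hold stores this reference level.
reference = correlate(signal, wrong_code)

# Long detection interval: mix with the correct replica code.
output = correlate(signal, true_code)

# The summer subtracts the stored reference from the output level; a large
# positive difference indicates signal presence and code synchronism.
difference = output - reference
acquired = difference > 0.25   # illustrative threshold
```

Because the reference is measured through the same channel, the threshold self-calibrates to the prevailing noise level rather than being fixed in advance.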
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
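The reliability claim can be checked with a small binomial calculation over a binary symmetric channel. The (15,7) inner and (255,223) Reed-Solomon-like outer parameters below are illustrative stand-ins, not the specific codes evaluated in the report:

```python
from math import comb

def block_error_prob(n, t, p):
    """P(more than t errors among n independent uses of a channel with error rate p)."""
    return 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

eps = 0.01                                 # raw channel bit error rate
p_inner = block_error_prob(15, 2, eps)     # word error rate of a t=2 inner code
# Treat each decoded inner word as one outer-code symbol; a (255,223) RS code
# corrects t = 16 symbol errors.  Clamp tiny negative floating-point round-off.
p_outer = max(block_error_prob(255, 16, p_inner), 0.0)
print(f"inner word error rate: {p_inner:.3e}")
print(f"outer block error rate: {p_outer:.3e}")
```

Even though the inner code alone leaves a residual word error rate near 4e-4, the cascade drives the overall block error probability to an astronomically small value, which is the "extremely high reliability" the abstract refers to.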
NASA Astrophysics Data System (ADS)
Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.
2011-06-01
The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.
High-speed asynchronous data multiplexer/demultiplexer for high-density digital recorders
NASA Astrophysics Data System (ADS)
Berdugo, Albert; Small, Martin B.
1996-11-01
Modern High Density Digital Recorders are ideal devices for the storage of large amounts of digital and/or wideband analog data. Ruggedized versions of these recorders are currently available and are supporting many military and commercial flight test applications. However, in certain cases, the storage format becomes very critical, e.g., when a large number of data types are involved, or when channel-to-channel correlation is critical, or when the original data source must be accurately recreated during post mission analysis. A properly designed storage format will not only preserve data quality, but will yield the maximum storage capacity and record time for any given recorder family or data type. This paper describes a multiplex/demultiplex technique that formats multiple high speed data sources into a single, common format for recording. The method is compatible with many popular commercial recorder standards such as DCRsi, VLDS, and DLT. Types of input data typically include PCM, wideband analog data, video, aircraft data buses, avionics, voice, time code, and many others. The described method preserves tight data correlation with minimal data overhead. The described technique supports full reconstruction of the original input signals during data playback. Output data correlation across channels is preserved for all types of data inputs. Simultaneous real-time data recording and reconstruction are also supported.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
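A toy encoder for the double serial concatenation described above might look as follows. The repetition factor, random interleaver, and puncturing pattern are arbitrary illustrative choices, not the parameters of the paper's code designs:

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulate(bits):
    """Rate-1 accumulator: running XOR of the input (a 1/(1+D) precoder)."""
    return np.bitwise_xor.accumulate(bits)

def ara_encode(info, q=3, puncture=2):
    """Toy Accumulate-Repeat-Accumulate encoder (illustrative parameters)."""
    outer = accumulate(info)                  # outer code: rate-1 accumulator
    repeated = np.repeat(outer, q)            # middle code: rate-1/q repetition
    interleaved = rng.permutation(repeated)   # interleaver between stages
    inner = accumulate(interleaved)           # inner code: another accumulator...
    return inner[::puncture]                  # ...punctured to raise the rate

info = rng.integers(0, 2, size=8)
codeword = ara_encode(info)
print(len(info), len(codeword))
```

With q = 3 and every second inner bit punctured, 8 information bits become 12 coded bits, an overall rate of 2/3; in practice the decoder would run iterative message passing over this structure.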
Improving the Teleportation Scheme of Three-Qubit State with a Four-Qubit Quantum Channel
NASA Astrophysics Data System (ADS)
Cai, Tao; Jiang, Min
2018-01-01
Recently, Zhao-Hui Wei et al. (Int. J. Theor. Phys. 55, 4687, 2016) proposed an improved quantum teleportation scheme for one three-qubit unknown state with a four-qubit quantum channel based on the original one proposed by Binayak S. Choudhury and Arpan Dhara (Int. J. Theor. Phys. 55, 3393, 2016). According to their schemes, the three-qubit entangled state could be teleported with one four-qubit cluster state and five-qubit joint measurements or four-qubit joint measurements. In this paper, we present an improved protocol only with single-qubit measurements and the same four-qubit quantum channel, lessening the difficulty and intensity of necessary operations.
Simulation of current-filament dynamics and relaxation in the Pegasus Spherical Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Bryan, J. B.; Sovinec, C. R.; Bird, T. M.
Nonlinear numerical computation is used to investigate the relaxation of non-axisymmetric current-channels from washer-gun plasma sources into 'tokamak-like' plasmas in the Pegasus toroidal experiment [Eidietis et al. J. Fusion Energy 26, 43 (2007)]. Resistive MHD simulations with the NIMROD code [Sovinec et al. Phys. Plasmas 10(5), 1727-1732 (2003)] utilize ohmic heating, temperature-dependent resistivity, and anisotropic, temperature-dependent thermal conduction corrected for regions of low magnetization to reproduce critical transport effects. Adjacent passes of the simulated current-channel attract and generate strong reversed current sheets that suggest magnetic reconnection. With sufficient injected current, adjacent passes merge periodically, releasing axisymmetric current rings from the driven channel. The current rings have not been previously observed in helicity injection for spherical tokamaks, and as such, provide a new phenomenological understanding of filament relaxation in Pegasus. After large-scale poloidal-field reversal, a hollow current profile and significant poloidal flux amplification accumulate over many reconnection cycles.
Trellis phase codes for power-bandwidth efficient satellite communications
NASA Technical Reports Server (NTRS)
Wilson, S. G.; Highfill, J. H.; Hsu, C. D.; Harkness, R.
1981-01-01
Support work on improved power and spectrum utilization on digital satellite channels was performed. Specific attention is given to the class of signalling schemes known as continuous phase modulation (CPM). The specific work described in this report addresses: analytical bounds on error probability for multi-h phase codes, power and bandwidth characterization of 4-ary multi-h codes, and initial results of channel simulation to assess the impact of band limiting filters and nonlinear amplifiers on CPM performance.
Joint Experimentation on Scalable Parallel Processors (JESPP)
2006-04-01
made use of local embedded relational databases, implemented using sqlite on each node of an SPP to execute queries and return results via an ad hoc ... The Joint Experimentation Directorate (J9) required expansion of its joint semi-automated forces (JSAF) code capabilities; including number of entities, behavior complexity
Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Zou, Li; Gong, Longyan; Cheng, Weiwen; Zheng, Baoyu; Chen, Hanwu
2016-10-01
A free-space optical (FSO) communication link with multiplexed orbital angular momentum (OAM) modes has been demonstrated to largely enhance the system capacity without a corresponding increase in spectral bandwidth, but the performance of the link is unavoidably degraded by atmospheric turbulence (AT). In this paper, we propose a turbulence mitigation scheme to improve AT tolerance of the OAM-multiplexed FSO communication link using both channel coding and wavefront correction. In the scheme, we utilize a wavefront correction method to mitigate the phase distortion first, and then we use a channel code to further correct the errors in each OAM mode. The improvement of AT tolerance is discussed over the performance of the link with or without channel coding/wavefront correction. The results show that the bit error rate performance has been improved greatly. The detrimental effect of AT on the OAM-multiplexed FSO communication link could be removed by the proposed scheme even in the relatively strong turbulence regime, such as Cn2 = 3.6 × 10^-14 m^(-2/3).
Lu, Bangrong; He, Qinghua; He, Yonghong; Chen, Xuejing; Feng, Guangxia; Liu, Siyu; Ji, Yanhong
2018-09-18
To achieve dual-channel (analog and digital) encoding, microbeads assembled with quantum dots (QDs) and element coding nanoparticles (ECNPs) have been prepared. Dual spectra, including the fluorescence generated from the QDs and the laser-induced breakdown spectrum obtained from the plasma of the ECNPs (AgO, MgO and ZnO nanoparticles), have been adopted to provide a larger encoding capacity and more accurate dual recognition of encoded microbeads in multiplexed utilization. The experimental results demonstrate that a single microbead can be decoded in two optical channels. Multiplexed analysis and a contrast adsorption experiment with anti-IgG verified the availability and specificity of the dual-channel-coded microbeads in bioanalysis. In gradient detection of anti-IgG, we obtained a linear concentration response to target biomolecules from 3.125 × 10^-10 M to 1 × 10^-8 M, and the limit of detection was calculated to be 2.91 × 10^-11 M. Copyright © 2018 Elsevier B.V. All rights reserved.
Freeing Worldview's development process: Open source everything!
NASA Astrophysics Data System (ADS)
Gunnoe, T.
2016-12-01
Freeing your code and your project are important steps for creating an inviting environment for collaboration, with the added side effect of keeping a good relationship with your users. NASA Worldview's codebase was released with the open source NOSA (NASA Open Source Agreement) license in 2014, but this is only the first step. We also have to free our ideas, empower our users by involving them in the development process, and open channels that lead to the creation of a community project. There are many highly successful examples of Free and Open Source Software (FOSS) projects of which we can take note: the Linux kernel, Debian, GNOME, etc. These projects owe much of their success to having a passionate mix of developers/users with a great community and a common goal in mind. This presentation will describe the scope of this openness and how Worldview plans to move forward with a more community-inclusive approach.
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
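The burst-spreading effect of interleaving FEC codewords can be sketched with a simple block interleaver; the codeword count, length, and burst size below are toy values, not the paper's actual FEC parameters:

```python
import numpy as np

def interleave(codewords):
    """Block interleaver: write codewords as rows, read out column-by-column."""
    return np.asarray(codewords).T.ravel()

def deinterleave(stream, n_codewords):
    """Inverse operation: reassemble the original codewords."""
    return stream.reshape(-1, n_codewords).T

# Four hypothetical 8-bit FEC codewords (all zeros, so errors are visible as 1s).
cw = np.zeros((4, 8), dtype=int)
tx = interleave(cw)

# A burst of 4 consecutive channel errors flips tx[8:12].
tx[8:12] ^= 1

rx = deinterleave(tx, n_codewords=4)
errors_per_codeword = rx.sum(axis=1)
print(errors_per_codeword)   # [1 1 1 1]: the burst is spread evenly
```

After deinterleaving, each codeword sees only one error instead of one codeword absorbing the whole burst, which is exactly the regime in which a modest FEC code can correct everything.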
Protograph LDPC Codes for the Erasure Channel
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
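The "copy-and-permute" operation can be sketched directly: each edge of the protograph is replaced by a permutation block, yielding a derived parity-check matrix whose node degrees match the protograph's. The base matrix and lifting factor below are hypothetical:

```python
import numpy as np

def lift(base, Z, rng):
    """Copy-and-permute lifting: replace each 1 in the protograph base matrix
    with a random Z x Z permutation block, and each 0 with a zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                perm = np.eye(Z, dtype=int)[rng.permutation(Z)]
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = perm
    return H

rng = np.random.default_rng(0)
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])   # hypothetical 2 x 4 protograph
H = lift(base, Z=4, rng=rng)
print(H.shape)   # (8, 16): the derived graph is Z times larger
```

The derived graph inherits the protograph's regularity: every lifted check node has the same degree as its protograph check node, which is why a small, carefully designed protograph controls the whole code family.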
Experimental realization of the analogy of quantum dense coding in classical optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhenwei; Sun, Yifan; Li, Pengyun
2016-06-15
We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding that uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than that of the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
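The capacity figures quoted in the abstract follow from a short calculation:

```python
from math import log2

# Usual photon-pair dense coding distinguishes only 3 of the 4 Bell states,
# so its capacity is log2(3); the classical-optics analogue distinguishes
# all 4 symbols, giving the full 2 bits per channel use.
quantum_capacity = log2(3)     # ~1.585 bits
classical_capacity = log2(4)   # 2.0 bits

# 24 bits of ASCII payload therefore fit in 24 / 2 = 12 quaternary digits.
digits_needed = 24 / classical_capacity
print(round(quantum_capacity, 3), classical_capacity, digits_needed)
```
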
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes. In this paper, an n-bit Gray code appended with its n-bit inverse Gray code to construct 2n-length binary user codes is discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and its even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work, the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over the AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
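A sketch of the construction and a zero-lag correlation check, under the assumption that "inverse Gray code" here means the bitwise complement of the Gray codeword (the paper's exact definition may differ):

```python
import numpy as np

def gray(k):
    """Standard binary-reflected Gray code of integer k."""
    return k ^ (k >> 1)

def bits(x, n):
    """MSB-first list of the n low-order bits of x."""
    return [(x >> i) & 1 for i in reversed(range(n))]

# Construction sketch: an n-bit Gray codeword followed by its bitwise
# complement (assumed reading of "inverse Gray code"), mapped 0 -> +1, 1 -> -1.
n = 3
codes = []
for k in range(2 ** n):
    g = bits(gray(k), n)
    word = g + [1 - b for b in g]
    codes.append([1 if b == 0 else -1 for b in word])

codes = np.array(codes)            # 8 user codes of length 2n = 6
R = codes @ codes.T                # zero-lag correlation matrix
print(codes.shape, R[0, 0])        # autocorrelation peak equals the length, 6
```

Note that n = 3 yields exactly the length-6 code sets the abstract mentions, a size unavailable with Walsh codes, whose lengths are restricted to powers of two.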
NASA Astrophysics Data System (ADS)
Giordano, V.; Chisari, C.; Rizzano, G.; Latour, M.
2017-10-01
The main aim of this work is to understand how the prediction of the seismic performance of moment-resisting (MR) steel frames depends on the modelling of their dissipative zones when the structure geometry (number of stories and bays) and seismic excitation source vary. In particular, a parametric analysis involving 4 frames was carried out, and, for each one, the full-strength beam-to-column connections were modelled according to 4 numerical approaches with different degrees of sophistication (Smooth Hysteretic Model, Bouc-Wen, Hysteretic and simple Elastic-Plastic models). Subsequently, Incremental Dynamic Analyses (IDA) were performed by considering two different earthquakes (Spitak and Kobe). The preliminary results collected so far pointed out that the influence of the joint modelling on the overall frame response is negligible up to interstorey drift ratio values equal to those conservatively assumed by the codes to define conventional collapse (0.03 rad). Conversely, if more realistic ultimate interstorey drift values are considered for the q-factor evaluation, the influence of joint modelling can be significant, and thus may require accurate modelling of its cyclic behavior.
Proposed scheme for parallel 10Gb/s VSR system and its Verilog HDL realization
NASA Astrophysics Data System (ADS)
Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin
2005-02-01
This paper proposes a novel scheme for a 10Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the position of the error detecting and error correction bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme has a code process module. The SDH/SONET frames in the transmit direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels; all channels are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the code process module. (3) The code process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmit process. By applying this scheme to a 10Gb/s VSR system, the frame size in the VSR system is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is reduced significantly.
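The byte-wise striping of step (2) and its inverse at the receiver can be sketched as follows; the 12-channel count is from the abstract, while the 48-byte frame is a toy stand-in for an STM-64 frame:

```python
def stripe(frame: bytes, n_channels: int = 12):
    """Byte-wise striping: byte i of the frame goes to channel i mod n_channels."""
    return [frame[c::n_channels] for c in range(n_channels)]

def merge(channels):
    """Receive-side inverse: re-interleave one byte from each channel in turn."""
    return bytes(b for group in zip(*channels) for b in group)

frame = bytes(range(48))                  # toy stand-in for an STM-64 frame
channels = stripe(frame)
print(len(channels), len(channels[0]))    # 12 channels, 4 bytes each
```

Round-tripping through `merge(stripe(frame))` reproduces the original frame, mirroring the abstract's statement that the receive process is approximately the reverse of the transmit process.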
Coherently coupled high-power fiber arrays
NASA Astrophysics Data System (ADS)
Anderegg, Jesse; Brosnan, Stephen; Cheung, Eric; Epp, Paul; Hammons, Dennis; Komine, Hiroshi; Weber, Mark; Wickham, Michael
2006-02-01
A four-element fiber array has demonstrated 470 watts of coherently phased, linearly polarized light energy in a single far-field spot. Each element consists of a single-mode fiber-amplifier chain. Phase control of each element is achieved with a Lithium-Niobate phase modulator. A master laser provides a linearly polarized, narrow linewidth signal that is split into five channels. Four channels are individually amplified using polarization maintaining fiber power amplifiers. The fifth channel is used as a reference arm. It is frequency shifted and then combined interferometrically with a portion of each channel's signal. Detectors sense the heterodyne modulation signal, and an electronics circuit measures the relative phase for each channel. Compensating adjustments are then made to each channel's phase modulator. This effort represents the results of a multi-year effort to achieve high power from a single element fiber amplifier and to understand the important issues involved in coherently combining many individual elements to obtain sufficient optical power for directed energy weapons. Northrop Grumman Corporation and the High Energy Laser Joint Technology Office jointly sponsored this work.
PPLN-waveguide-based polarization entangled QKD simulator
NASA Astrophysics Data System (ADS)
Gariano, John; Djordjevic, Ivan B.
2017-08-01
We have developed a comprehensive simulator to study the polarization entangled quantum key distribution (QKD) system, which takes various imperfections into account. We assume that a type-II SPDC source using a PPLN-based nonlinear optical waveguide is used to generate entangled photon pairs and implements the BB84 protocol, using two mutually unbiased bases with two orthogonal polarizations in each basis. The entangled photon pairs are then simulated to be transmitted to both parties, Alice and Bob, through the optical channel, imperfect optical elements and onto the imperfect detector. It is assumed that Eve has no control over the detectors, and can only gain information from the public channel and the intercept-resend attack. The secure key rate (SKR) is calculated using an upper bound and by using actual code rates of LDPC codes implementable in FPGA hardware. After verifying the simulation results for the ideal scenario available in the literature, such as the pair generation rate and the number of errors due to multiple pairs, we then introduce various imperfections. The results are then compared to previously reported experimental results where a BBO nonlinear crystal is used, and the improvements in SKRs are determined when a PPLN waveguide is used instead.
Hyaluronan modulates TRPV1 channel opening, reducing peripheral nociceptor activity and pain
Caires, Rebeca; Luis, Enoch; Taberner, Francisco J.; Fernandez-Ballester, Gregorio; Ferrer-Montiel, Antonio; Balazs, Endre A.; Gomis, Ana; Belmonte, Carlos; de la Peña, Elvira
2015-01-01
Hyaluronan (HA) is present in the extracellular matrix of all body tissues, including synovial fluid in joints, in which it behaves as a filter that buffers transmission of mechanical forces to nociceptor nerve endings thereby reducing pain. Using recombinant systems, mouse-cultured dorsal root ganglia (DRG) neurons and in vivo experiments, we found that HA also modulates polymodal transient receptor potential vanilloid subtype 1 (TRPV1) channels. HA diminishes heat, pH and capsaicin (CAP) responses, thus reducing the opening probability of the channel by stabilizing its closed state. Accordingly, in DRG neurons, HA decreases TRPV1-mediated impulse firing and channel sensitization by bradykinin. Moreover, subcutaneous HA injection in mice reduces heat and capsaicin nocifensive responses, whereas the intra-articular injection of HA in rats decreases capsaicin joint nociceptor fibres discharge. Collectively, these results indicate that extracellular HA reduces the excitability of the ubiquitous TRPV1 channel, thereby lowering impulse activity in the peripheral nociceptor endings underlying pain. PMID:26311398
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strassner, II, Bernd H.; Liedtke, Richard; McDonald, Jacob Jeremiah
The various technologies presented herein relate to utilizing a sealing layer of malleable material to seal gaps, etc., at a joint between the edges of a waveguide channel formed in a first plate and the surface of a clamping plate. A compression pad is included in the surface of the clamping plate and is dimensioned such that the upper surface of the pad is less than the area of the waveguide channel opening on the first plate. The sealing layer is placed between the waveguide plate and the clamping plate, and during assembly of the waveguide module, the compression pad deforms a portion of the sealing layer such that it ingresses into the waveguide channel opening. Deformation of the sealing layer causes the gaps, etc., to be filled, improving the operational integrity of the joint.
Nguyen, Quoc-Thang; Matute, Carlos; Miledi, Ricardo
1998-01-01
It has been postulated that, in the adult visual cortex, visual inputs modulate levels of mRNAs coding for neurotransmitter receptors in an activity-dependent manner. To investigate this possibility, we performed a monocular enucleation in adult rabbits and, 15 days later, collected their left and right visual cortices. Levels of mRNAs coding for voltage-activated sodium channels, and for receptors for kainate/α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA), N-methyl-d-aspartate (NMDA), γ-aminobutyric acid (GABA), and glycine were semiquantitatively estimated in the visual cortices ipsilateral and contralateral to the lesion by the Xenopus oocyte/voltage-clamp expression system. This technique also allowed us to study some of the pharmacological and physiological properties of the channels and receptors expressed in the oocytes. In cells injected with mRNA from left or right cortices of monocularly enucleated and control animals, the amplitudes of currents elicited by kainate or AMPA, which reflect the abundance of mRNAs coding for kainate and AMPA receptors, were similar. There was no difference in the sensitivity to kainate and in the voltage dependence of the kainate response. Responses mediated by NMDA, GABA, and glycine were unaffected by monocular enucleation. Sodium channel peak currents, activation, steady-state inactivation, and sensitivity to tetrodotoxin also remained unchanged after the enucleation. Our data show that mRNAs for major neurotransmitter receptors and ion channels in the adult rabbit visual cortex are not obviously modified by monocular deafferentiation. Thus, our results do not support the idea of a widespread dynamic modulation of mRNAs coding for receptors and ion channels by visual activity in the rabbit visual system. PMID:9501250
Quantum coding with finite resources.
Tomamichel, Marco; Berta, Mario; Renes, Joseph M
2016-05-09
The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.
Quantum coding with finite resources
Tomamichel, Marco; Berta, Mario; Renes, Joseph M.
2016-01-01
The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
ERIC Educational Resources Information Center
Bates, A. W.
The JANUS (Joint Academic Network Using Satellite) satellite network is being planned to link European institutions wishing to jointly produce distance teaching materials. Earth stations with capabilities for transmit/receive functions, voice/data functions, two 64 kbit/s channels, and connection to local telephone exchange and computer networks will…
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery are required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI.
Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
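A minimal LMS-trained linear equalizer, sketched here as a far simpler stand-in for the neural-network and MAPSD receivers of the dissertation, illustrates the training idea: tap weights are learned from a known symbol sequence. The channel taps, decision delay, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.4, 0.2])              # hypothetical channel impulse response
symbols = rng.choice([-1.0, 1.0], 2000)    # bipolar training sequence
received = np.convolve(symbols, h)[:len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

n_taps, delay, mu = 7, 2, 0.01
w = np.zeros(n_taps)
for k in range(n_taps, len(symbols)):
    x = received[k - n_taps:k][::-1]       # most recent sample first
    err = symbols[k - delay] - w @ x       # train toward a delayed symbol
    w += mu * err * x                      # LMS weight update

# With the trained taps, hard decisions recover the symbols almost perfectly.
hits = sum(np.sign(w @ received[k - n_taps:k][::-1]) == symbols[k - delay]
           for k in range(n_taps, len(symbols)))
acc = hits / (len(symbols) - n_taps)
print(acc)
```

Blind schemes such as the Bussgang and MAPSD algorithms above replace the known training sequence with statistical assumptions about the data.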
Fenton, Susan H; Benigni, Mary Sue
2014-01-01
The transition from ICD-9-CM to ICD-10-CM/PCS is expected to result in longitudinal data discontinuities, as occurred with cause-of-death in 1999. The General Equivalence Maps (GEMs), while useful for suggesting potential maps do not provide guidance regarding the frequency of any matches. Longitudinal data comparisons can only be reliable if they use comparability ratios or factors which have been calculated using records coded in both classification systems. This study utilized 3,969 de-identified dually coded records to examine raw comparability ratios, as well as the comparability ratios between the Joint Commission Core Measures. The raw comparability factor results range from 16.216 for Nicotine dependence, unspecified, uncomplicated to 118.009 for Chronic obstructive pulmonary disease, unspecified. The Joint Commission Core Measure comparability factor results range from 27.15 for Acute Respiratory Failure to 130.16 for Acute Myocardial Infarction. These results indicate significant differences in comparability between ICD-9-CM and ICD-10-CM code assignment, including when the codes are used for external reporting such as the Joint Commission Core Measures. To prevent errors in decision-making and reporting, all stakeholders relying on longitudinal data for measure reporting and other purposes should investigate the impact of the conversion on their data.
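A hypothetical illustration of the comparability-factor idea described above: the count of cases assigned a diagnosis under ICD-10-CM/PCS relative to the count assigned the corresponding ICD-9-CM code in dually coded records, scaled by 100. The counts and the times-100 convention are assumptions for illustration, not study data.

```python
def comparability_factor(icd10_count, icd9_count):
    """Ratio of ICD-10 to ICD-9 case counts for a matched code, scaled by 100."""
    return 100.0 * icd10_count / icd9_count

# A factor far from 100 signals a longitudinal data discontinuity.
print(comparability_factor(45, 60))    # 75.0
```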
Position-based coding and convex splitting for private communication over quantum channels
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2017-10-01
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ɛ-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ɛ ∈ (0,1). The present paper provides a lower bound on the ɛ-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
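In the abstract's notation, the lower bound described above has the schematic form below; the smoothing and error parameters shown are indicative placeholders, not the paper's exact choices.

```latex
% Schematic one-shot lower bound: hypothesis-testing mutual information to
% the legitimate receiver minus alternate smooth max-information to Eve.
P^{\varepsilon}(\mathcal{N}) \;\gtrsim\;
  I_{H}^{\varepsilon_{1}}(X;B)_{\rho} \;-\; \widetilde{I}_{\max}^{\varepsilon_{2}}(X;E)_{\rho}
```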
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Ezzedine, S. M.; Vorobiev, O.; Antoun, T.; Woods, M. T.
2017-12-01
The focus of this study is to investigate the effect of the non-linear material properties on synthetic waveforms at receivers located within the elastic region near the non-linear zone around energetic chemical explosions. The primary goal is to characterize the effect of porosity and joint properties. The joint sizes are typically small compared with the wavelength represented by the computational grid, so the calculations become time consuming to properly represent the fidelity of the calculations. In this study, we use GEODYN-L Lagrangian code, where the joints are included explicitly. We simulate a suite of synthetics for chemical explosions in granite, and varying the porosity and joint orientation. Using the generated synthetic waveforms in the elastic region, we calculate displacement spectra and compare them with homogenous medium solutions (i.e., free of porosity and joints). We are attempting to develop a set of correction factors necessary to apply in various field (emplacement) conditions so that the spectral characteristics can be compared to those predicted by the Mueller-Murphy (MM, 1971; Saikia, 2017) and other source functions (Denny and Johnson, 1991; Ford and Walter, 2013) near the elastic radii. Future investigations will include similar analysis for the nuclear explosions. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
A Benchmark Dataset for SSVEP-Based Brain-Computer Interfaces.
Wang, Yijun; Chen, Xiaogang; Gao, Xiaorong; Gao, Shangkai
2017-10-01
This paper presents a benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain-computer interface (BCI) speller. The dataset consists of 64-channel electroencephalogram (EEG) data from 35 healthy subjects (8 experienced and 27 naïve) while they performed a cue-guided target-selection task. The virtual keyboard of the speller was composed of 40 visual flickers, which were coded using a joint frequency and phase modulation (JFPM) approach. The stimulation frequencies ranged from 8 Hz to 15.8 Hz with an interval of 0.2 Hz. The phase difference between two adjacent frequencies was 0.5 π. For each subject, the data included six blocks of 40 trials corresponding to all 40 flickers indicated by a visual cue in a random order. The stimulation duration in each trial was five seconds. The dataset can be used as a benchmark to compare methods for stimulus coding and target identification in SSVEP-based BCIs. Through offline simulation, the dataset can be used to design new system diagrams and evaluate their BCI performance without collecting any new data. The dataset also provides high-quality data for computational modeling of SSVEPs. The dataset is freely available from http://bci.med.tsinghua.edu.cn/download.html.
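A sketch of the joint frequency and phase modulation (JFPM) coding described above: target i flickers at 8 + 0.2·i Hz with a phase offset proportional to i. The 0.5π phase step and 250 Hz sample rate are assumptions for illustration, not parameters taken from the dataset description.

```python
import numpy as np

def jfpm_stimulus(i, t, f0=8.0, df=0.2, dphi=0.5 * np.pi):
    """Luminance waveform in [0, 1] for target i at time samples t."""
    return 0.5 * (1.0 + np.sin(2 * np.pi * (f0 + i * df) * t + i * dphi))

t = np.arange(0, 5.0, 1 / 250.0)   # one 5-second stimulation trial
s = jfpm_stimulus(39, t)           # last target: 15.8 Hz
print(s.shape)
```

Decoders then identify the attended target by matching the EEG to these frequency/phase templates.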
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
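A toy vector quantizer using an L1 (sum-of-absolute-differences) distortion, which, in the spirit of the low-complexity scheme above, needs only additions and subtractions in the encoder. The codebook is a made-up example, not the adaptive codebook of the paper.

```python
def vq_encode(vector, codebook):
    """Index of the nearest codeword under L1 distance."""
    best_i, best_d = 0, float("inf")
    for i, codeword in enumerate(codebook):
        d = sum(abs(a - b) for a, b in zip(vector, codeword))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def vq_decode(index, codebook):
    """Reproduce the codeword for a transmitted index."""
    return codebook[index]

codebook = [(0, 0), (0, 8), (8, 0), (8, 8)]
idx = vq_encode((7, 1), codebook)      # closest to (8, 0)
print(idx, vq_decode(idx, codebook))
```

Only the index is transmitted, which is where the compression comes from; an adaptive scheme would additionally update the codebook as data statistics drift.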
Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems
NASA Technical Reports Server (NTRS)
Hearn, Tristan A.
2015-01-01
This document is intended as an introduction to a set of common signal processing learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.
Joint source based morphometry identifies linked gray and white matter group differences.
Xu, Lai; Pearlson, Godfrey; Calhoun, Vince D
2009-02-01
We present a multivariate approach called joint source based morphometry (jSBM), to identify linked gray and white matter regions which differ between groups. In jSBM, joint independent component analysis (jICA) is used to decompose preprocessed gray and white matter images into joint sources and statistical analysis is used to determine the significant joint sources showing group differences and their relationship to other variables of interest (e.g. age or sex). The identified joint sources are groupings of linked gray and white matter regions with common covariation among subjects. In this study, we first provide a simulation to validate the jSBM approach. To illustrate our method on real data, jSBM is then applied to structural magnetic resonance imaging (sMRI) data obtained from 120 chronic schizophrenia patients and 120 healthy controls to identify group differences. JSBM identified four joint sources as significantly associated with schizophrenia. Linked gray-white matter regions identified in each of the joint sources included: 1) temporal--corpus callosum, 2) occipital/frontal--inferior fronto-occipital fasciculus, 3) frontal/parietal/occipital/temporal--superior longitudinal fasciculus and 4) parietal/frontal--thalamus. Age effects on all four joint sources were significant, but sex effects were significant only for the third joint source. Our findings demonstrate that jSBM can exploit the natural linkage between gray and white matter by incorporating them into a unified framework. This approach is applicable to a wide variety of problems to study linked gray and white matter group differences.
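A toy "joint decomposition" in the spirit of jSBM: gray- and white-matter feature matrices are concatenated along the feature axis and decomposed together, so each component carries linked loadings in both modalities. SVD is used here as a simple stand-in for the jICA of the paper, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj, n_gm, n_wm = 30, 50, 40
mixing = rng.standard_normal((n_subj, 3))        # 3 shared latent sources
gm = mixing @ rng.standard_normal((3, n_gm))     # gray-matter features
wm = mixing @ rng.standard_normal((3, n_wm))     # white-matter features

joint = np.hstack([gm, wm])                      # subjects x (gm + wm features)
u, s, vt = np.linalg.svd(joint - joint.mean(0), full_matrices=False)

explained = s[:3].sum() / s.sum()                # shared sources dominate
print(round(float(explained), 3))
```

Each row of `vt` spans both the gray- and white-matter columns, which is the "linked regions" property the method exploits; jICA additionally seeks statistically independent, rather than merely orthogonal, sources.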
González-López, Antonio; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen
2016-05-01
This note studies the statistical relationships between color channels in radiochromic film readings with flatbed scanners. The same relationships are studied for noise. Finally, their implications for multichannel film dosimetry are discussed. Radiochromic films exposed to wedged fields of 6 MV energy were read in a flatbed scanner. The joint histograms of pairs of color channels were used to obtain the joint and conditional probability density functions between channels. Then, the conditional expectations and variances of one channel given another channel were obtained. Noise was extracted from film readings by means of a multiresolution analysis. Two different dose ranges were analyzed, the first one ranging from 112 to 473 cGy and the second one from 52 to 1290 cGy. For the smallest dose range, the conditional expectations of one channel given another channel can be approximated by linear functions, while the conditional variances are fairly constant. The slopes of the linear relationships between channels can be used to simplify the expression that estimates the dose by means of the multichannel method. The slopes of the linear relationships between each channel and the red one can also be interpreted as weights in the final contribution to dose estimation. However, for the largest dose range, the conditional expectations of one channel given another channel are no longer linear functions. Finally, noises in different channels were found to correlate weakly. Signals present in different channels of radiochromic film readings show a strong statistical dependence. By contrast, noise correlates weakly between channels. For the smallest dose range analyzed, the linear behavior between the conditional expectation of one channel given another channel can be used to simplify calculations in multichannel film dosimetry.
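A synthetic illustration of the joint-histogram analysis described above: estimating the conditional expectation E[green | red] from two-channel readings. The linear red/green relation (slope 0.6) is an assumed model mimicking the small-dose-range behaviour, not measured film data.

```python
import numpy as np

rng = np.random.default_rng(1)
red = rng.uniform(100, 200, 50_000)
green = 0.6 * red + 20 + rng.normal(0, 2, red.size)   # assumed linear link

hist, red_edges, green_edges = np.histogram2d(red, green, bins=50)
red_mid = 0.5 * (red_edges[:-1] + red_edges[1:])
green_mid = 0.5 * (green_edges[:-1] + green_edges[1:])

counts = hist.sum(axis=1)                             # readings per red bin
cond_exp = (hist * green_mid).sum(axis=1) / np.maximum(counts, 1)

slope = np.polyfit(red_mid, cond_exp, 1)[0]           # recovers the linear link
print(round(float(slope), 2))
```

It is slopes of this kind that the note proposes as weights simplifying the multichannel dose estimate.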
Starn, J. Jeffrey; Stone, Janet Radway
2005-01-01
Generic ground-water-flow simulation models show that geohydrologic factors (fracture types, fracture geometry, and surficial materials) affect the size, shape, and location of source-water areas for bedrock wells. In this study, conducted by the U.S. Geological Survey in cooperation with the Connecticut Department of Public Health, ground-water flow was simulated to bedrock wells in three settings (on hilltops and hillsides with no surficial aquifer, in a narrow valley with a surficial aquifer, and in a broad valley with a surficial aquifer) to show how different combinations of geohydrologic factors in different topographic settings affect the dimensions and locations of source-water areas in Connecticut. Three principal types of fractures are present in bedrock in Connecticut: (1) layer-parallel fractures, which developed as partings along bedding in sedimentary rock and compositional layering or foliation in metamorphic rock (dips of these fractures can be gentle or steep); (2) unroofing joints, which developed as strain-release fractures parallel to the land surface as overlying rock was removed by erosion through geologic time; and (3) cross fractures and joints, which developed as a result of tectonically generated stresses that produced typically near-vertical or steeply dipping fractures. Fracture geometry is defined primarily by the presence or absence of layering in the rock unit, and, if layered, by the angle of dip in the layering. Where layered rocks dip steeply, layer-parallel fracturing generally is dominant; unroofing joints also are typically well developed. Where layered rocks dip gently, layer-parallel fracturing also is dominant, and connections among these fractures are provided only by the cross fractures. In gently dipping rocks, unroofing joints generally do not form as a separate fracture set; instead, strain release from unroofing has occurred along gently dipping layer-parallel fractures, enhancing their aperture.
In nonlayered and variably layered rocks, layer-parallel fracturing is absent or poorly developed; fracturing is dominated by well-developed subhorizontal unroofing joints and steeply dipping, tectonically generated fractures and (or) cooling joints. Cross fractures (or cooling joints) in nonlayered and variably layered rocks have more random orientations than in layered rocks. Overall, nonlayered or variably layered rocks do not have a strongly developed fracture direction. Generic ground-water-flow simulation models showed that fracture geometry and other geohydrologic factors affect the dimensions and locations of source-water areas for bedrock wells. In general, source-water areas to wells reflect the direction of ground-water flow, which mimics the land-surface topography. Source-water areas to wells in a hilltop setting were not affected greatly by simulated fracture zones, except for an extensive vertical fracture zone. Source-water areas to wells in a hillside setting were not affected greatly by simulated fracture zones, except for the combination of a subhorizontal fracture zone and low bedrock vertical hydraulic conductivity, as might be the case where an extensive subhorizontal fracture zone is not connected or is poorly connected to the surface through vertical fractures. Source-water areas to wells in a narrow valley setting reflect complex ground-water-flow paths. The typical flow path originates in the uplands and passes through either till or bedrock into the surficial aquifer, although only a small area of the surficial aquifer actually contributes water to the well. Source-water areas in uplands can include substantial areas on both sides of a river. Source-water areas for wells in this setting are affected mainly by the rate of ground-water recharge and by the degree of anisotropy. Source-water areas to wells in a broad valley setting (bedrock with a low angle of dip) are affected greatly by fracture properties. 
The effect of a given fracture is to channel the
NASA Astrophysics Data System (ADS)
Kunieda, Satoshi
2017-09-01
We report the status of the R-matrix code AMUR toward consistent cross-section evaluation and covariance analysis for light-mass nuclei. The applicable limit of the code is extended by including computational capability for charged-particle elastic scattering cross-sections and neutron capture cross-sections; example results are shown in the main text. A simultaneous analysis is performed on the 17O compound system, including the 16O(n,tot) and 13C(α,n)16O reactions together with the 16O(n,n) and 13C(α,α) scattering cross-sections. It is found that a large theoretical background is required for each reaction process to obtain a simultaneous fit to all the experimental cross-sections we analyzed. Also, the hard-sphere radii should be assumed to be different from the channel radii. Although these are technical approaches, we could learn the roles and sources of the theoretical background in the standard R-matrix.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
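A toy block interleaver of depth 4 in the spirit described above: each codeword occupies one row, and symbols are transmitted column by column, so a burst of up to 4 consecutive channel errors hits each codeword at most once. Sizes are toy values, not the real (255,223) RS frame.

```python
def interleave(symbols, depth):
    """Write one codeword per row, transmit column by column."""
    n = len(symbols) // depth
    rows = [symbols[i * n:(i + 1) * n] for i in range(depth)]   # codewords
    return [rows[i][j] for j in range(n) for i in range(depth)]

def deinterleave(symbols, depth):
    """Invert interleave: reassemble the rows from the column stream."""
    n = len(symbols) // depth
    return [symbols[j * depth + i] for i in range(depth) for j in range(n)]

data = list(range(12))                 # three symbols per "codeword", depth 4
sent = interleave(data, 4)
print(sent)                            # [0, 3, 6, 9, 1, 4, 7, 10, 2, 5, 8, 11]
assert deinterleave(sent, 4) == data
```

In the transmitted order, any 4 consecutive symbols come from 4 different rows, which is exactly how bursts out of the inner decoder get spread over multiple RS codewords.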
Trellis coded modulation for 4800-9600 bps transmission over a fading mobile satellite channel
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1986-01-01
The combination of trellis coding and multiple phase-shift-keyed (MPSK) signalling with the addition of asymmetry to the signal set is discussed with regard to its suitability as a modulation/coding scheme for the fading mobile satellite channel. For MPSK, introducing nonuniformity (asymmetry) into the spacing between signal points in the constellation buys a further improvement in performance over that achievable with trellis coded symmetric MPSK, all this without increasing average or peak power, or changing the bandwidth constraints imposed on the system. Whereas previous contributions have considered the performance of trellis coded modulation transmitted over an additive white Gaussian noise (AWGN) channel, the emphasis in the paper is on the performance of trellis coded MPSK in the fading environment. The results will be obtained by using a combination of analysis and simulation. It will be assumed that the effect of the fading on the phase of the received signal is fully compensated for either by tracking it with some form of phase-locked loop or with pilot tone calibration techniques. Thus, results will reflect only the degradation due to the effect of the fading on the amplitude of the received signal. Also, we shall consider only the case where interleaving/deinterleaving is employed to further combat the fading. This allows for considerable simplification of the analysis and is of great practical interest. Finally, the impact of the availability of channel state information on average bit error probability performance is assessed.
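A sketch of the signal-set asymmetry discussed above for 8-PSK: alternate constellation points receive an extra rotation delta, which changes the spacing between neighbours without touching average or peak power. The value of delta is illustrative, not an optimized design point from the paper.

```python
import cmath
import math

def asymmetric_mpsk(m, delta):
    """m-PSK constellation with every other point rotated by delta."""
    return [cmath.exp(1j * (2 * math.pi * k / m + (delta if k % 2 else 0.0)))
            for k in range(m)]

const = asymmetric_mpsk(8, math.pi / 16)
print(all(abs(abs(p) - 1.0) < 1e-12 for p in const))   # unit power preserved
```

The trellis code then assigns branches so that the enlarged distances between certain point pairs are the ones that dominate error events.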
Joint Blind Source Separation by Multi-set Canonical Correlation Analysis
Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D
2009-01-01
In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
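A two-dataset sketch of the correlation maximization underlying M-CCA: classical CCA computed by whitening each dataset and taking the SVD of the cross-covariance. M-CCA extends this to many datasets; the shared source and noise levels below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
shared = rng.standard_normal(n)                       # latent source in both sets
x = np.column_stack([shared + 0.1 * rng.standard_normal(n),
                     rng.standard_normal(n)])
y = np.column_stack([rng.standard_normal(n),
                     -shared + 0.1 * rng.standard_normal(n)])

def whiten(a):
    a = a - a.mean(0)
    u, _, _ = np.linalg.svd(a, full_matrices=False)
    return u * np.sqrt(len(a))                        # unit sample covariance

wx, wy = whiten(x), whiten(y)
corrs = np.linalg.svd(wx.T @ wy / n, compute_uv=False)  # canonical correlations
print(round(float(corrs[0]), 2))                        # near 1: shared source found
```

The leading canonical correlation is close to 1 because the two datasets share one latent source, which is the property M-CCA maximizes jointly across all datasets.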
Distributed Channel Allocation and Time Slot Optimization for Green Internet of Things.
Ding, Kaiqi; Zhao, Haitao; Hu, Xiping; Wei, Jibo
2017-10-28
In sustainable smart cities, power saving is a severe challenge in the energy-constrained Internet of Things (IoT). Efficient utilization of limited multiple non-overlapping channels and time resources is a promising way to reduce network interference and save energy. In this paper, we propose a joint channel allocation and time slot optimization solution for IoT. First, we propose a channel ranking algorithm which enables each node to rank its available channels based on the channel properties. Then, we propose a distributed channel allocation algorithm so that each node can choose a proper channel based on the channel ranking and its own residual energy. Finally, the sleeping duration and spectrum sensing duration are jointly optimized to maximize the normalized throughput while satisfying energy consumption constraints. Unlike earlier approaches, the proposed solution requires no central coordination or global information: each node operates on its own local information in a fully distributed manner. Theoretical analysis and extensive simulations validate that, when applying our solution in an IoT network: (i) each node can be allocated a proper channel based on residual energy to balance network lifetime; (ii) the network rapidly converges to collision-free transmission through each node's learning ability during distributed channel allocation; and (iii) network throughput is further improved via dynamic time slot optimization.
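A toy version of the two local rules described above: each node ranks its available channels from its own measurements, then picks one using only its residual energy. The scoring formula and back-off rule are assumptions for illustration, not the paper's algorithms.

```python
def rank_channels(channels):
    """channels: {id: (interference, quality)}; best (lowest score) first."""
    return sorted(channels, key=lambda c: channels[c][0] - channels[c][1])

def choose_channel(channels, residual_energy, threshold=0.3):
    ranked = rank_channels(channels)
    # Energy-poor nodes back off to the lowest-ranked channel instead of
    # contending for the best one, balancing lifetime across the network.
    return ranked[0] if residual_energy > threshold else ranked[-1]

chans = {1: (0.8, 0.5), 2: (0.2, 0.9), 3: (0.5, 0.6)}
print(choose_channel(chans, residual_energy=0.9))   # healthy node -> channel 2
print(choose_channel(chans, residual_energy=0.1))   # depleted node -> channel 1
```

Because every decision uses only local inputs, the rule runs identically on every node with no coordinator, which is the "fully distributed" property the paper emphasizes.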
A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere
NASA Astrophysics Data System (ADS)
Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael
2017-05-01
We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
2004-02-26
Code R and Code D hosted NESC Principal Engineer Mike Kirsch who is Program Leader for Composite Crew Module (CCM). The purpose of the visit was to review/observe experiments that GRC is performing in support of the CCM program. The test object is the critical Low Impact Docking System/Tunnel interface joint that links the metal docking ring with the polymer composite tunnel element of the crew module pressure vessel. The rectangular specimens simulated the splice joint between the aluminum and the PMC sheets, including a PMC doubler sheet. GRC was selected for these tests due to our expertise in composite testing and our ability to perform 3D fullfield displacement and strain measurements of the complex bond geometry using digital image correlation. The specimens performed above their minimum load requirements and the full field strain measurements showed the strain levels at the critical bond line. This work is part of a joint Code D & R investigation.
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.
2015-10-01
We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (~90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
Zero Forcing Conditions for Nonlinear Channel Equalisation Using a Pre-coding Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arfa, Hichem; Belghith, Safya; El Asmi, Sadok
2009-03-05
This paper presents zero-forcing conditions for nonlinear channel equalisation. These conditions, based on the rank of the nonlinear system, are derived from an algebraic, module-theoretic approach in which the rank of the nonlinear channel is clearly defined. To improve the equalisation performance and reduce the complexity of the nonlinear systems used, we apply a pre-coding scheme. Theoretical results are given, and computer simulation is used to corroborate the theory.
Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes
NASA Astrophysics Data System (ADS)
Schreier, Franz; Milz, Mathias; Buehler, Stefan A.; von Clarmann, Thomas
2018-05-01
An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric radiative transfer and remote sensing - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the 19 HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. The mutual differences of the equivalent brightness temperatures are presented and possible causes of disagreement are discussed. In particular, the impact of path integration schemes and atmospheric layer discretization is assessed. When the continuum absorption contribution is ignored because of the different implementations, residuals are generally in the sub-Kelvin range and smaller than 0.1 K for some window channels (and all atmospheric models and lbl codes). None of the three codes turned out to be perfect for all channels and atmospheres. Remaining discrepancies are attributed to different lbl optimization techniques. Lbl codes seem to have reached such a maturity in the implementation of radiative transfer that the choice of the underlying physical models (line shape models, continua, etc.) becomes increasingly relevant.
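Since the codes are compared via equivalent brightness temperatures, the radiance-to-temperature conversion is just the inverse Planck function; a minimal sketch, with the frequency and temperature chosen arbitrarily rather than taken from a HIRS channel:

```python
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(nu, T):
    """Spectral radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def brightness_temp(nu, I):
    """Equivalent brightness temperature: inverse of the Planck function."""
    return h * nu / (k * np.log1p(2 * h * nu**3 / (c**2 * I)))

nu = 2e13   # thermal infrared frequency (arbitrary choice)
T = 288.0
print(brightness_temp(nu, planck(nu, T)))  # recovers T (exact roundtrip)
```

Comparing lbl codes in brightness-temperature space, as done above, puts radiance differences on a physically interpretable Kelvin scale.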
Joint source based morphometry identifies linked gray and white matter group differences
Xu, Lai; Pearlson, Godfrey; Calhoun, Vince D.
2009-01-01
We present a multivariate approach called joint source based morphometry (jSBM), to identify linked gray and white matter regions which differ between groups. In jSBM, joint independent component analysis (jICA) is used to decompose preprocessed gray and white matter images into joint sources and statistical analysis is used to determine the significant joint sources showing group differences and their relationship to other variables of interest (e.g. age or sex). The identified joint sources are groupings of linked gray and white matter regions with common covariation among subjects. In this study, we first provide a simulation to validate the jSBM approach. To illustrate our method on real data, jSBM is then applied to structural magnetic resonance imaging (sMRI) data obtained from 120 chronic schizophrenia patients and 120 healthy controls to identify group differences. JSBM identified four joint sources as significantly associated with schizophrenia. Linked gray–white matter regions identified in each of the joint sources included: 1) temporal — corpus callosum, 2) occipital/frontal — inferior fronto-occipital fasciculus, 3) frontal/parietal/occipital/temporal —superior longitudinal fasciculus and 4) parietal/frontal — thalamus. Age effects on all four joint sources were significant, but sex effects were significant only for the third joint source. Our findings demonstrate that jSBM can exploit the natural linkage between gray and white matter by incorporating them into a unified framework. This approach is applicable to a wide variety of problems to study linked gray and white matter group differences. PMID:18992825
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradler, Kamil; Hayden, Patrick; Touchette, Dave
Coding theorems in quantum Shannon theory express the ultimate rates at which a sender can transmit information over a noisy quantum channel. More often than not, the known formulas expressing these transmission rates are intractable, requiring an optimization over an infinite number of uses of the channel. Researchers have rarely found quantum channels with a tractable classical or quantum capacity, but when such a finding occurs, it demonstrates a complete understanding of that channel's capabilities for transmitting classical or quantum information. Here we show that the three-dimensional capacity region for entanglement-assisted transmission of classical and quantum information is tractable for the Hadamard class of channels. Examples of Hadamard channels include generalized dephasing channels, cloning channels, and the Unruh channel. The generalized dephasing channels and the cloning channels are natural processes that occur in quantum systems through the loss of quantum coherence or stimulated emission, respectively. The Unruh channel is a noisy process that occurs in relativistic quantum information theory as a result of the Unruh effect and bears a strong relationship to the cloning channels. We give exact formulas for the entanglement-assisted classical and quantum communication capacity regions of these channels. The coding strategy for each of these examples is superior to a naive time-sharing strategy, and we introduce a measure to determine this improvement.
Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1972-01-01
The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.
Han, Dahai; Gu, Yanjie; Zhang, Min
2017-08-10
An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM), without the use of a complex decoding algorithm, in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, as verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss, with a larger channel capacity, and that a higher diversity gain and coding gain can be achieved with a simple decoding algorithm by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.
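For context, the classic two-antenna Alamouti block that PSP-OSTBC generalizes can be written down directly; this sketches the standard code, not the paper's m-PPM variant:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Classic Alamouti space-time block: rows are time slots, columns
    are transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

X = alamouti_encode(1 + 1j, 1 - 1j)
# The columns are orthogonal: X^H X = (|s1|^2 + |s2|^2) I, which is what
# permits simple linear decoding at the receiver.
print(np.allclose(X.conj().T @ X, 4 * np.eye(2)))  # True
```

The orthogonality property shown in the final check is the feature that PSP-OSTBC preserves for high-order PPM symbols in the scattering channel.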
Wireless visual sensor network resource allocation using cross-layer optimization
NASA Astrophysics Data System (ADS)
Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.
2009-01-01
In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
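The flavor of the joint allocation can be illustrated with a brute-force search over a discrete option set; the (source rate, code rate, power) triples and distortion values here are invented for illustration and are not the paper's URDC data:

```python
import itertools

# Each option maps a (source_kbps, code_rate, power) triple to a resulting
# video distortion (MSE); values are made up for this sketch.
options = {
    (64, 0.5, 1): 40.0,
    (128, 0.5, 1): 28.0,
    (128, 0.75, 2): 22.0,
    (256, 0.75, 2): 15.0,
}
budget = 384  # total source-rate budget (kbps), assumed

# Minimize the maximum distortion over two nodes, subject to the budget
# (the paper's second criterion; the first would minimize the average).
best = None
for a, b in itertools.product(options, repeat=2):
    if a[0] + b[0] <= budget:
        worst = max(options[a], options[b])
        if best is None or worst < best[0]:
            best = (worst, a, b)
print(best)
```

Real deployments replace this exhaustive search with the URDC-driven optimization described above, but the objective structure is the same.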
Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.
Waheed, Khuram; Salem, Fathi M
2005-07-01
Code division multiple access (CDMA) is based on the spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a lot of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling the highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE -blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA) that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding the linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
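The syndrome-source-coding idea can be demonstrated with the (7,4) Hamming code: the sparse source block plays the role of an error pattern, and its 3-bit syndrome is the compressed representation. This is a toy sketch of the principle, not Ancheta's exact construction:

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code over GF(2).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(s):
    """Syndrome of the source block: 3 bits instead of 7."""
    return H @ s % 2

def decompress(syn):
    """Syndrome decoding: the minimum-weight 7-bit pattern with this
    syndrome (exhaustive search, fine for a length-7 toy)."""
    best = None
    for cand in product((0, 1), repeat=7):
        c = np.array(cand)
        if np.array_equal(H @ c % 2, syn):
            if best is None or c.sum() < best.sum():
                best = c
    return best

src = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse binary source block
print(compress(src))                     # 3-bit compressed form
print(decompress(compress(src)))         # recovers src (weight <= 1 patterns)
```

Because the Hamming code corrects any single error, every weight-0 or weight-1 source block is recovered exactly, which is the distortionless regime the abstract describes for low-entropy sources.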
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
Breaking Gaussian incompatibility on continuous variable quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi; Kiukas, Jukka, E-mail: jukka.kiukas@aber.ac.uk; Schultz, Jussi, E-mail: jussi.schultz@gmail.com
2015-08-15
We characterise Gaussian quantum channels that are Gaussian incompatibility breaking, that is, transform every set of Gaussian measurements into a set obtainable from a joint Gaussian observable via Gaussian postprocessing. Such channels represent local noise which renders measurements useless for Gaussian EPR-steering, providing the appropriate generalisation of entanglement breaking channels for this scenario. Understanding the structure of Gaussian incompatibility breaking channels contributes to the resource theory of noisy continuous variable quantum information protocols.
Cross-Layer Design for Space-Time coded MIMO Systems over Rice Fading Channel
NASA Astrophysics Data System (ADS)
Yu, Xiangbin; Zhou, Tingting; Liu, Xiaoshuai; Yin, Xin
A cross-layer design (CLD) scheme for space-time coded MIMO systems over the Rice fading channel is presented by combining adaptive modulation and automatic repeat request, and the corresponding system performance is investigated in detail. The fading gain switching thresholds subject to a target packet error rate (PER) and a fixed power constraint are derived. Based on these results, and using the generalized Marcum Q-function, formulae for the average spectrum efficiency (SE) and PER of the system with CLD are derived. As a result, closed-form expressions for average SE and PER are obtained. These expressions include some existing expressions for the Rayleigh channel as special cases. With these expressions, the system performance in the Rice fading channel is evaluated effectively. Numerical results verify the validity of the theoretical analysis. The results show that the system performance in the Rice channel is effectively improved as the Rice factor increases, and outperforms that in the Rayleigh channel.
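The generalized Marcum Q-function used in such PER and SE formulae is available numerically through the noncentral chi-square survival function. The outage expression below is the standard Rician-fading SNR CDF, with the K factor and thresholds chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.stats import ncx2

def marcum_q(M, a, b):
    """Generalized Marcum Q-function via the identity
    Q_M(a, b) = ncx2.sf(b**2, df=2*M, nc=a**2)."""
    return ncx2.sf(b**2, 2 * M, a**2)

def rice_outage(K, snr_th, snr_avg):
    """P(SNR < snr_th) for Rician fading with factor K (linear units):
    1 - Q_1(sqrt(2K), sqrt(2(K+1) snr_th / snr_avg))."""
    return 1 - marcum_q(1, np.sqrt(2 * K),
                        np.sqrt(2 * (K + 1) * snr_th / snr_avg))

# A larger K (stronger line-of-sight) lowers the outage probability,
# matching the observation that performance improves with the Rice factor.
print(rice_outage(K=1, snr_th=1.0, snr_avg=10.0))
print(rice_outage(K=5, snr_th=1.0, snr_avg=10.0))
```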
Hall, Aaron C.; Hosking, F. Michael ,; Reece, Mark
2003-06-24
A capillary test specimen, method, and system for visualizing and quantifying capillary flow of liquids under realistic conditions, including polymer underfilling, injection molding, soldering, brazing, and casting. The capillary test specimen simulates complex joint geometries and has an open cross-section to permit easy visual access from the side. A high-speed, high-magnification camera system records the location and shape of the moving liquid front in real-time, in-situ as it flows out of a source cavity, through an open capillary channel between two surfaces having a controlled capillary gap, and into an open fillet cavity, where it subsequently forms a fillet on free surfaces that have been configured to simulate realistic joint geometries. Electric resistance heating rapidly heats the test specimen, without using a furnace. Image-processing software analyzes the recorded images and calculates the velocity of the moving liquid front, fillet contact angles, and shape of the fillet's meniscus, among other parameters.
Design criteria for noncoherent Gaussian channels with MFSK signaling and coding
NASA Technical Reports Server (NTRS)
Butman, S. A.; Levitt, B. K.; Bar-David, I.; Lyon, R. F.; Klass, M. J.
1976-01-01
This paper presents data and criteria to assess and guide the design of modems for coded noncoherent communication systems subject to practical system constraints of power, bandwidth, noise spectral density, coherence time, and number of orthogonal signals M. Three basic receiver types are analyzed for the noncoherent multifrequency-shift keying (MFSK) additive white Gaussian noise channel: hard decision, unquantized (optimum), and quantized (soft decision). Channel capacity and computational cutoff rate are computed for each type and presented as functions of the predetection signal-to-noise ratio and the number of orthogonal signals. This relates the channel constraints of power, bandwidth, coherence time, and noise power to the optimum choice of signal duration and signal number.
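For the hard-decision receiver, the channel reduces to an M-ary symmetric channel, whose capacity has a closed form; a small sketch, where the symbol error probability is a placeholder rather than being derived from the predetection SNR as in the paper:

```python
import numpy as np

def msc_capacity(M, p):
    """Capacity (bits/symbol) of an M-ary symmetric channel with symbol
    error probability p, errors uniform over the M-1 wrong symbols."""
    if p == 0:
        return float(np.log2(M))
    return float(np.log2(M)
                 + (1 - p) * np.log2(1 - p)
                 + p * np.log2(p / (M - 1)))

# Capacity grows with the number of orthogonal MFSK tones at a fixed
# symbol error probability (assumed p, for illustration only).
for M in (2, 4, 8):
    print(M, msc_capacity(M, 0.1))
```

The unquantized and soft-decision receivers analyzed in the paper recover part of the capacity that this hard-decision model discards.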
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show single-channel transmission of a 428 Gb/s CO-OFDM signal over 960 km of standard single-mode fiber (SSMF) without Raman amplification.
Independent Validation and Verification of automated information systems in the Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunteman, W.J.; Caldwell, R.
1994-07-01
The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).
Flow of GE90 Turbofan Engine Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1999-01-01
The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited the use of parallel computations by using two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path, such as bleed, purge, and cooling flows, were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.
NASA Astrophysics Data System (ADS)
Zhou, Hongying; Yuan, Xuanjun; Zhang, Youyan; Dong, Wentong; Liu, Song
2016-11-01
It is of great importance for petroleum exploration to study the sedimentary features and the growth pattern of shoal-water deltas in lake basins. Taking spatio-temporal remote sensing images as the principal data source, combined with a field sedimentation survey, a quantitative study of the modern deposition of the Ganjiang delta in the Poyang Lake Basin is described in this paper. Using 76 multi-temporal and multi-type remote sensing images acquired from 1973 to 2015, remote sensing interpretation analysis was conducted on the sedimentary facies of the Ganjiang delta. It is found that the current Poyang Lake mainly contains three types of sand body deposits, including deltaic deposits, overflow channel deposits, and aeolian deposits, and that the distribution of sand bodies is jointly affected by these three types of deposition. The mid-branch channels of the Ganjiang delta increased with an exponential growth rhythm. The main growth patterns of the Ganjiang delta are dendritic and reticular; the distributary channels mostly arborize at the lake inlet and were reworked to be reticulate at a late stage.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
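The rate-compatibility restriction can be sketched as nested puncturing masks over one period of a rate-1/2 mother code; the tables below are illustrative, not Hagenauer's published ones:

```python
import numpy as np

# Puncturing tables for a rate-1/2 mother code (N=2), period P=4.
# Each 2x4 mask row corresponds to one encoder output stream; a '1' keeps
# the coded bit, a '0' punctures it. Rate-compatibility: every bit kept by
# a higher-rate table is also kept by all lower-rate tables (nested masks).
tables = {
    "rate 1/2": np.array([[1, 1, 1, 1], [1, 1, 1, 1]]),  # all 8 bits sent
    "rate 2/3": np.array([[1, 1, 1, 1], [1, 0, 1, 0]]),  # 6 bits per period
    "rate 4/5": np.array([[1, 1, 1, 1], [1, 0, 0, 0]]),  # 5 bits per period
}

def puncture(coded, mask):
    """coded: shape (2, P*k) output of the mother encoder. Returns the
    surviving bits (order simplified for illustration)."""
    full = np.tile(mask, (1, coded.shape[1] // mask.shape[1]))
    return coded[full.astype(bool)]

coded = np.arange(16).reshape(2, 8)      # stand-in coded bit stream
for rate, m in tables.items():
    print(rate, puncture(coded, m).size, "bits kept")
```

Because the masks are nested, an ARQ retransmission can send exactly the bits punctured at the higher rate, which is the incremental-redundancy property the abstract describes.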
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Oleg P.; Semin, Ilya A.; Potapov, Victor N.
Gamma-ray imaging is the most important way to identify unknown gamma-ray emitting objects in decommissioning, security, and accident response. Over the past two decades, systems for producing gamma images under these conditions became more or less portable devices, and in recent years they have become hand-held devices. This is very important, especially in emergency situations and in measurements made for safety reasons. We describe the first integrated hand-held instrument for emergency and security applications. The device is based on coded aperture image formation, the position-sensitive gamma-ray (X-ray) detector Medipix2 (detectors produced by X-ray Imaging Europe), and a tablet computer. The development was aimed at creating a very low weight system with high angular resolution. We present some sample gamma-ray images taken by the camera. The main estimated parameters of the system are the following. The field of view of the video channel is ∼ 490 deg. The field of view of the gamma channel is ∼ 300 deg. The sensitivity of the system with a hexagonal mask for a Cs-137 source (Eg = 662 keV), in units of dose, is D ∼ 100 mR. This is less than an order of magnitude worse than for heavy, non-hand-held systems (e.g., the gamma camera Cartogam, by Canberra). The angular resolution of the gamma channel for Cs-137 sources (Eg = 662 keV) is about 1.20 deg. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun
2004-05-01
Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.
Adaptive and reliably acknowledged FSO communications
NASA Astrophysics Data System (ADS)
Fitz, Michael P.; Halford, Thomas R.; Kose, Cenk; Cromwell, Jonathan; Gordon, Steven
2015-05-01
Atmospheric turbulence causes the receive signal intensity on free space optical (FSO) communication links to vary over time. Scintillation fades can stymie connectivity for milliseconds at a time. To approach the information-theoretic limits of communication in such time-varying channels, it is necessary either to code across extremely long blocks of data, thereby inducing unacceptable delays, or to vary the code rate according to the instantaneous channel conditions. We describe the design, laboratory testing, and over-the-air testing of an FSO modem that employs a protocol with adaptive coded modulation (ACM) and hybrid automatic repeat request. For links with fixed throughput, this protocol provides a 10 dB reduction in the required received signal-to-noise ratio (SNR); for links with fixed range, it provides greater than a 3x increase in throughput. Independent U.S. Government tests demonstrate that our protocol effectively adapts the code rate to match the instantaneous channel conditions. The modem is able to provide throughputs in excess of 850 Mbps on links with ranges greater than 15 kilometers.
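The rate-adaptation step of such a protocol amounts to choosing the highest-throughput mode the instantaneous SNR supports. The SNR thresholds and throughput values below are invented for illustration (only the 850 Mbps figure echoes the text):

```python
# Mode table: (minimum required SNR in dB, throughput in Mbps).
# Thresholds are assumed values, not the modem's actual operating points.
modes = [(2.0, 100), (6.0, 300), (11.0, 600), (16.0, 850)]

def select_mode(snr_db):
    """Pick the highest-throughput mode whose SNR threshold is met;
    None means the link is in a deep scintillation fade (wait/retransmit)."""
    feasible = [m for m in modes if m[0] <= snr_db]
    return max(feasible, key=lambda m: m[1]) if feasible else None

print(select_mode(12.5))  # -> (11.0, 600)
print(select_mode(1.0))   # -> None
```

In the actual system the hybrid ARQ layer covers the residual errors that remain after this per-frame rate selection.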
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong
2018-07-01
We propose a binary image encryption method in the joint transform correlator (JTC) with the aid of run-length encoding (RLE) and the Quick Response (QR) code, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then scrambled using a chaos-based method. The compressed and scrambled binary image is transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to the best of our knowledge, encodes a binary image into a QR code of identical size, and therefore may open a new way to extend the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling, and the QR code translation, append an additional security level to the JTC. We present digital results that confirm our approach.
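The RLE preprocessing step admits a compact sketch for a flattened binary image; this is a generic lossless run-length coder, not necessarily the exact variant used in the paper:

```python
import numpy as np

def rle_encode(bits):
    """Store the first pixel value and the lengths of consecutive runs."""
    bits = np.asarray(bits)
    edges = np.flatnonzero(np.diff(bits)) + 1
    runs = np.diff(np.concatenate(([0], edges, [bits.size])))
    return int(bits[0]), runs.tolist()

def rle_decode(first, runs):
    """Invert rle_encode exactly (lossless)."""
    out, val = [], first
    for r in runs:
        out.extend([val] * r)
        val ^= 1
    return out

img = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]     # toy flattened binary image
first, runs = rle_encode(img)
print(first, runs)                        # 0 [3, 2, 1, 4]
print(rle_decode(first, runs) == img)     # True
```

Because the roundtrip is exact, the subsequent chaos scrambling and QR translation can also be inverted, which is what makes the overall retrieval lossless.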
Osteoarthritis of the Foot and Ankle
... in or near the joint Difficulty walking or bending the joint Some patients with osteoarthritis also develop ...
Towards Efficient Wireless Body Area Network Using Two-Way Relay Cooperation.
Waheed, Maham; Ahmad, Rizwan; Ahmed, Waqas; Drieberg, Micheal; Alam, Muhammad Mahtab
2018-02-13
The fabrication of lightweight, ultra-thin, low power and intelligent body-borne sensors leads to novel advances in wireless body area networks (WBANs). Depending on the placement of the nodes, a WBAN is characterized as in-body or on-body; thus, the channel is largely affected by body posture, clothing, muscle movement, body temperature and climatic conditions. The energy resources are limited and it is not feasible to replace the sensor's battery frequently. In order to keep the sensor in working condition, the channel resources should be reserved. The lifetime of the sensor is crucial and depends strongly on transmission among sensor nodes and energy consumption. Reliability and energy efficiency play a vital role in WBAN applications. In this paper, analytical expressions for energy efficiency (EE) and packet error rate (PER) are formulated for two-way relay cooperative communication. The results depict better reliability and efficiency compared to direct and one-way relay communication. The effective performance ranges of direct vs. cooperative communication are separated by a threshold distance. Based on the EE calculations, an optimal packet size is observed that provides maximum efficiency over a certain link length. A smart and energy efficient system is articulated that utilizes all three communication modes, namely direct, one-way relay and two-way relay: the direct link performs better over a certain range, but cooperative communication gives better results at increased distance in terms of EE. The efficacy of the proposed hybrid scheme is also demonstrated over a practical quasi-static channel. Furthermore, link length extension and diversity are achieved by joint network-channel (JNC) coding over the cooperative link.
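The existence of an optimal packet size falls out of a simple trade-off: longer payloads amortize the fixed header, but raise the packet error rate. The header length and bit error rate below are assumed values, not the paper's WBAN parameters:

```python
import numpy as np

H = 40        # header bits (assumed)
ber = 1e-4    # channel bit error rate (assumed)

payload = np.arange(50, 4000, 10)              # candidate payload sizes (bits)
per = 1 - (1 - ber) ** (payload + H)           # packet error rate
ee = (payload / (payload + H)) * (1 - per)     # goodput-style efficiency

best = int(payload[np.argmax(ee)])
print(best)   # interior optimum: small packets waste header, large ones fail
```

The analytical EE expressions in the paper play the same role as this curve, with the channel model and relaying overheads filled in properly.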
Towards Efficient Wireless Body Area Network Using Two-Way Relay Cooperation
Waheed, Maham; Ahmad, Rizwan; Ahmed, Waqas
2018-01-01
The fabrication of lightweight, ultra-thin, low power and intelligent body-borne sensors leads to novel advances in wireless body area networks (WBANs). Depending on the placement of the nodes, it is characterized as in/on body WBAN; thus, the channel is largely affected by body posture, clothing, muscle movement, body temperature and climatic conditions. The energy resources are limited and it is not feasible to replace the sensor’s battery frequently. In order to keep the sensor in working condition, the channel resources should be reserved. The lifetime of the sensor is very crucial and it highly depends on transmission among sensor nodes and energy consumption. The reliability and energy efficiency in WBAN applications play a vital role. In this paper, the analytical expressions for energy efficiency (EE) and packet error rate (PER) are formulated for two-way relay cooperative communication. The results depict better reliability and efficiency compared to direct and one-way relay communication. The effective performance range of direct vs. cooperative communication is separated by a threshold distance. Based on EE calculations, an optimal packet size is observed that provides maximum efficiency over a certain link length. A smart and energy efficient system is articulated that utilizes all three communication modes, namely direct, one-way relay and two-way relay, as the direct link performs better for a certain range, but the cooperative communication gives better results for increased distance in terms of EE. The efficacy of the proposed hybrid scheme is also demonstrated over a practical quasi-static channel. Furthermore, link length extension and diversity is achieved by joint network-channel (JNC) coding the cooperative link. PMID:29438278
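The EE-versus-packet-size trade-off described above can be illustrated with a toy model (not the paper's analytical expressions): a longer packet amortizes the header better but is more likely to be corrupted, so an intermediate payload size maximizes efficiency. The 64-bit header, the i.i.d. bit-error channel and the candidate sizes below are illustrative assumptions.

```python
import math

def packet_efficiency(payload_bits, header_bits, ber):
    """Fraction of channel use that delivers payload successfully:
    (payload share of the packet) x (probability the whole packet survives)."""
    total = payload_bits + header_bits
    per = 1.0 - (1.0 - ber) ** total          # PER under i.i.d. bit errors
    return (payload_bits / total) * (1.0 - per)

def optimal_payload(header_bits, ber, candidates=range(8, 4097, 8)):
    """Payload size maximizing the efficiency proxy at a given link quality."""
    return max(candidates, key=lambda L: packet_efficiency(L, header_bits, ber))
```

Sweeping the bit error rate shows the familiar behavior: the worse the link, the shorter the optimal packet.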
Sadoghi, Patrick; Leithner, Andreas; Labek, Gerold
2013-09-01
Worldwide, joint arthroplasty registers are instrumental in screening for complications and implant failures. In order to achieve comparable results, a common classification dataset is essential. The authors therefore present the European Federation of National Associations of Orthopaedics and Traumatology (EFORT) European Arthroplasty Register (EAR) minimal dataset for primary and revision joint arthroplasty. The main parameters include the following: date of operation, country, hospital ID code, patient's surname and first name, birthday, identification code of the implant, gender, diagnosis, previous operations, type of prosthesis (partial, total), side, cementation technique, use of antibiotics in the cement, surgical approach, and others specifically related to the affected joint. The authors believe that using this minimal dataset will improve the chances of a worldwide comparison of arthroplasty registers and ask future countries to implement it. Copyright © 2013 Elsevier Inc. All rights reserved.
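The minimal dataset enumerated above is essentially a fixed record schema; a sketch of how it might be encoded follows. Field names and types are our own illustrative choices, not official EFORT/EAR definitions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ArthroplastyRecord:
    """One primary/revision arthroplasty register entry (field names illustrative)."""
    operation_date: date
    country: str
    hospital_id: str
    patient_name: str
    patient_first_name: str
    birthday: date
    implant_id: str
    gender: str
    diagnosis: str
    previous_operations: str
    prosthesis_type: str          # "partial" or "total"
    side: str                     # "left" or "right"
    cementation: str
    antibiotics_in_cement: bool
    surgical_approach: str
```

A fixed schema like this is what makes records from different national registers directly comparable.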
NASA Astrophysics Data System (ADS)
Rath, V.; Wolf, A.; Bücker, H. M.
2006-10-01
Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for appraisal of the models thus obtained. While neither the most general nor the most efficient approach, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme, and its behaviour in different configurations.
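The contrast drawn above between divided differences and automatic differentiation can be made concrete with forward-mode AD on dual numbers. The one-parameter "forward model" below is a made-up stand-in for the flow/heat-transport code, not the code described in the abstract:

```python
import math

class Dual:
    """Forward-mode AD value v + d*eps, carrying an exact derivative alongside."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
    __rmul__ = __mul__

def exp(x):
    # chain rule for the exponential
    return Dual(math.exp(x.v), math.exp(x.v) * x.d)

def forward(k):
    """Toy forward model: response T(k) = k * exp(k) for a conductivity-like k."""
    return k * exp(k)

k0 = 1.5
exact = forward(Dual(k0, 1.0)).d                                     # AD derivative
h = 1e-5
fd = (forward(Dual(k0 + h)).v - forward(Dual(k0 - h)).v) / (2 * h)   # divided difference
```

The AD derivative is exact to machine precision, while the divided difference carries an O(h^2) truncation error, which is exactly the issue the abstract raises.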
The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert
2007-01-01
CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well-characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of atmospheric and surface input as well as radiative fluxes come from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select an optimal set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
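The "gap to Gaussian capacity" mentioned above can be estimated numerically: a Monte Carlo evaluation of the uniform-input mutual information of a constellation over AWGN, compared with the Shannon limit at the same SNR. QPSK and the 5 dB operating point are illustrative choices, and this sketch computes the joint (symbol-wise) capacity rather than the parallel bit-wise decoding capacity used in the article:

```python
import math, random

def awgn_constellation_capacity(points, snr_db, n=20000, seed=1):
    """Monte Carlo estimate of uniform-input mutual information (bits/symbol):
    I = log2(M) - E[ log2( sum_x' exp(-|y-x'|^2/N0) / exp(-|y-x|^2/N0) ) ]."""
    rng = random.Random(seed)
    es = sum(abs(p) ** 2 for p in points) / len(points)   # mean symbol energy
    n0 = es / (10 ** (snr_db / 10))                       # noise density for Es/N0
    sigma = math.sqrt(n0 / 2)                             # per-dimension noise std
    acc = 0.0
    for _ in range(n):
        x = rng.choice(points)
        y = x + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        num = sum(math.exp(-abs(y - p) ** 2 / n0) for p in points)
        den = math.exp(-abs(y - x) ** 2 / n0)
        acc += math.log2(num / den)
    return math.log2(len(points)) - acc / n

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
gauss = math.log2(1 + 10 ** (5 / 10))   # Shannon (Gaussian-input) capacity at 5 dB
```

For any finite constellation the estimate sits below the Gaussian capacity, and shrinking that gap is precisely what the proposed constellations aim at.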
[Example of product development by industry and research solidarity].
Seki, Masayoshi
2014-01-01
When an industrial firm develops a product, making use of results from research institutions and reflecting users' ideas in the developed product are significant ways to improve it. Taking a jointly developed software product as an example, this paper describes the development technique adopted and its results, and considers industry-research collaboration and joint development from the company's perspective. Each software development method has merits and demerits, and it is necessary to choose the technique best suited to the system being developed. We jointly developed dose distribution browsing software, adopting the prototype model as the development method. In order to display the dose distribution information, four objects (CT Image, Structure Set, RT-Plan, and RT-Dose) must be loaded and displayed in a composite manner. The prototype model was particularly well suited to this joint development of dose distribution browsing software. In the prototype model, since the detailed design was created from the program source code after the program was completed, documentation time was shortened and design and implementation remained consistent. The software was eventually released as open source, and the release version of the dose distribution browsing software was developed from this prototype. Developing novel software of this type normally takes two to three years, but adopting joint development shortened the period to one year. The shortened development period kept the company's development cost to a minimum, which will be reflected in the product price. Requests on the product made by specialists from the user's point of view are important, and involving more specialists as professionals in product development will raise expectations of developing a product that meets users' demands.
Frequency Hopping, Multiple Frequency-Shift Keying, Coding, and Optimal Partial-Band Jamming.
1982-08-01
Receivers appropriate for these two strategies are described; each receiver is noncoherent (a coherent receiver is generally impractical) and implements hard... [Fragmentary citation residue: A. J. Viterbi, ed., "Advances in Coding and Modulation for Noncoherent Channels Affected by Fading, Partial Band, and Multiple-Access Interference."]
Irma 5.1 multisensor signature prediction model
NASA Astrophysics Data System (ADS)
Savage, James; Coker, Charles; Edwards, Dave; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles
2006-05-01
The Irma synthetic signature prediction code is being developed to facilitate the research and development of multi-sensor systems. Irma was one of the first high resolution, physics-based Infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel, and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Irma is currently used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. In 2005, Irma version 5.1 was released to the community. 
In addition to upgrading the Ladar channel code to an object-oriented language (C++) and providing a new graphical user interface to construct scenes, this new release significantly improves the modeling of the ladar channel and includes polarization effects, time jittering, speckle effect, and atmospheric turbulence. More importantly, the Munitions Directorate has funded three field tests to verify and validate the re-engineered ladar channel. Each of the field tests was comprehensive and included one month of sensor characterization and a week of data collection. After each field test, the analysis included comparisons of Irma predicted signatures with measured signatures, and if necessary, refining the model to produce realistic imagery. This paper will focus on two areas of the Irma 5.1 development effort: report on the analysis results of the validation and verification of the Irma 5.1 ladar channel, and the software development plan and validation efforts of the Irma passive channel. As scheduled, the Irma passive code is being re-engineered in an object-oriented language (C++), and field data collection is being conducted to validate the re-engineered passive code. This software upgrade will remove many constraints and limitations of the legacy code, including limits on image size and facet counts. The field test to validate the passive channel is expected to be complete in the second quarter of 2006.
Experimental demonstration of entanglement-assisted coding using a two-mode squeezed vacuum state
NASA Astrophysics Data System (ADS)
Mizuno, Jun; Wakui, Kentaro; Furusawa, Akira; Sasaki, Masahide
2005-01-01
We have experimentally realized the scheme initially proposed as quantum dense coding with continuous variables [
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second.
By use of density evolution (a computational-simulation technique for analyzing the performance of LDPC codes), it has been shown through some examples that, as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
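The block-circulant structure mentioned above can be sketched in a few lines: a protograph base matrix is "lifted" by replacing each entry with an n-by-n circulant permutation (or an all-zero block). The base matrix and shift values below are arbitrary illustrative choices, not the ARA constructions of the article:

```python
def circulant(n, shift):
    """n-by-n circulant permutation matrix (identity rows rotated by `shift`)."""
    return [[1 if (c - r) % n == shift else 0 for c in range(n)] for r in range(n)]

def lift(proto, n):
    """Lift a protograph base matrix: entry >= 0 is a circulant shift, -1 a zero block."""
    rows = []
    for prow in proto:
        blocks = [circulant(n, s) if s >= 0 else [[0] * n for _ in range(n)]
                  for s in prow]
        for r in range(n):
            rows.append([b[r][c] for b in blocks for c in range(n)])
    return rows

# 2x4 base matrix with arbitrary shifts, lifted by a factor of 4 -> an 8x16 H
H = lift([[0, 1, -1, 2],
          [2, -1, 1, 0]], n=4)
```

The lifted matrix keeps the sparsity of the protograph (here every check touches exactly three variables), which is what makes the belief-propagation decoders mentioned above fast.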
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.
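The "fix the encoders to linear functions, then find the optimal decoder" approach from the Signal Processing view can be sketched for two correlated Gaussian sources sent by amplify-and-forward over independent AWGN channels, with a hand-derived 2x2 LMMSE decoder at the sink. All parameters (unit-variance sources, powers, noise levels) are illustrative assumptions:

```python
import random

def simulate(power, noise_var, rho, n=40000, seed=7):
    """Monte Carlo distortion of source 1 when two sensors, observing
    unit-variance sources with correlation rho, amplify-and-forward over
    independent AWGN channels and the sink applies the LMMSE decoder."""
    g = power ** 0.5                     # linear (amplify-and-forward) encoder gain
    # LMMSE weights w = C_yy^{-1} c, solved by hand for the 2x2 case:
    a = g * g + noise_var                # var(y1) = var(y2)
    b = g * g * rho                      # cov(y1, y2)
    det = a * a - b * b
    w1 = (a * g - b * g * rho) / det
    w2 = (a * g * rho - b * g) / det
    rng = random.Random(seed)
    mse = 0.0
    for _ in range(n):
        z = rng.gauss(0, 1)
        x1 = z
        x2 = rho * z + (1 - rho * rho) ** 0.5 * rng.gauss(0, 1)
        y1 = g * x1 + rng.gauss(0, noise_var ** 0.5)
        y2 = g * x2 + rng.gauss(0, noise_var ** 0.5)
        mse += (w1 * y1 + w2 * y2 - x1) ** 2
    return mse / n
```

With correlated sources the second sensor's channel output helps estimate the first source, so the achievable distortion drops below the single-observation value, which is the essence of the multi-terminal gain surveyed in the paper.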
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
NASA Astrophysics Data System (ADS)
Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.
2006-12-01
This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
A note on the R sub 0-parameter for discrete memoryless channels
NASA Technical Reports Server (NTRS)
Mceliece, R. J.
1980-01-01
An explicit class of discrete memoryless channels (q-ary erasure channels) is exhibited. Practical and explicit coded systems of rate R with R/R sub 0 as large as desired can be designed for this class.
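For the q-ary erasure channel the cutoff rate R0 has a closed form under uniform inputs, which makes the note's point easy to check numerically: capacity grows like log2(q) while R0 stays bounded by -log2(eps), so the capacity-to-R0 ratio can be made as large as desired by increasing the alphabet size. A sketch using the standard DMC cutoff-rate expression (the uniform input distribution is the optimizer here by symmetry):

```python
import math

def cutoff_rate(q, eps):
    """R0 in bits/symbol of a q-ary erasure channel, uniform inputs:
    R0 = -log2( sum_y ( sum_x p(x) sqrt(P(y|x)) )^2 )."""
    # erasure output contributes eps; the q symbol outputs contribute (1-eps)/q
    return -math.log2(eps + (1 - eps) / q)

def capacity(q, eps):
    """Shannon capacity of the q-ary erasure channel in bits/symbol."""
    return (1 - eps) * math.log2(q)
```

At erasure probability 0.5, capacity/R0 is about 1.2 for q = 2 but exceeds 5 for q = 1024, illustrating why rates well above R0 are attainable on this class.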
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaurov, Alexander A., E-mail: kaurov@uchicago.edu
We explore a time-dependent energy dissipation of the energetic electrons in the inhomogeneous intergalactic medium (IGM) during the epoch of cosmic reionization. In addition to the atomic processes, we take into account the inverse Compton (IC) scattering of the electrons on the cosmic microwave background photons, which is the dominant channel of energy loss for electrons with energies above a few MeV. We show that: (1) the effect on the IGM has both local (atomic processes) and non-local (IC radiation) components; (2) the energy distribution between hydrogen and helium ionizations depends on the initial energy of an electron; (3) the local baryon overdensity significantly affects the fractions of energy distributed in each channel; and (4) the relativistic effect of the atomic cross-section becomes important during the epoch of cosmic reionization. We release our code as open source for further modification by the community.
NASA Astrophysics Data System (ADS)
Granade, Christopher; Combes, Joshua; Cory, D. G.
2016-03-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
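As a much-simplified illustration of the Bayesian point estimators discussed above, the sketch below computes a posterior mean for a single measurement probability on a grid; real quantum tomography works over density matrices with sequential Monte Carlo particle methods, which this does not attempt:

```python
import math

def bayes_update_grid(counts_up, counts_down, grid_size=201):
    """Posterior mean of p = Pr(outcome |0>) after repeated Z measurements,
    under a uniform prior, evaluated on a grid (a toy stand-in for the
    particle-based Bayesian state estimators in the paper)."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    # binomial log-likelihood at each grid point (small floor avoids log(0))
    logpost = [counts_up * math.log(p + 1e-12) +
               counts_down * math.log(1 - p + 1e-12) for p in grid]
    m = max(logpost)
    w = [math.exp(l - m) for l in logpost]        # normalized-up weights
    return sum(p * wi for p, wi in zip(grid, w)) / sum(w)
```

With a uniform prior the exact posterior is Beta(counts_up + 1, counts_down + 1), so the grid estimate can be checked against its known mean.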
NASA Astrophysics Data System (ADS)
Marek, Repka
2015-01-01
The original McEliece PKC proposal is interesting thanks to its resistance against all known attacks, even using quantum cryptanalysis, in an IND-CCA2 secure conversion. Here we present a generic implementation of the original McEliece PKC proposal, which provides test vectors (for all important intermediate results) and in which a measurement tool for side-channel analysis is employed. To the best of our knowledge, this is the first such implementation. This Calculator is valuable for implementation optimization, for further investigation of the properties of McEliece/Niederreiter-like PKCs, and also for teaching. Thanks to it, one can, for example, examine the side-channel vulnerability of a certain implementation, or find out and test particular parameters of the cryptosystem in order to make them appropriate for an efficient hardware implementation. This implementation is available [1] in executable binary format and as a static C++ library, as well as in the form of source code, for Linux and Windows operating systems.
NASA Astrophysics Data System (ADS)
van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro
2017-08-01
This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.
NASA Astrophysics Data System (ADS)
Eneman, Geert; De Keersgieter, An; Witters, Liesbeth; Mitard, Jerome; Vincent, Benjamin; Hikavyy, Andriy; Loo, Roger; Horiguchi, Naoto; Collaert, Nadine; Thean, Aaron
2013-04-01
The interaction between two stress techniques, strain-relaxed buffers (SRBs) and epitaxial source/drain stressors, is studied on short, Si1-xGex- and Ge-channel planar transistors. This work focuses on the longitudinal channel stress generated by these two techniques. Unlike for unstrained silicon-channel transistors, for strained channels on top of a strain-relaxed buffer a source/drain stressor without recess generates longitudinal channel stress similar to that of source/drain stressors with a deep recess. The least efficient stress transfer is obtained for source/drain stressors with a small recess that removes only the strained channel, not the substrate underneath. These trends are explained by a trade-off between elastic relaxation of the strained channel during source/drain recess and the increased stress generation of thicker source/drain stressors. For Ge-channel pFETs, GeSn source/drains and Si1-xGex strain-relaxed buffers are efficient stressors for mobility enhancement. The former is more efficient for gate-last schemes than for gate-first, while the stress generated by the SRB is found to be independent of the gate scheme.
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we analyze preamble-based joint estimation of the channel and the laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). To reduce the impact of noise on the estimation accuracy, we propose an estimation method based on inter-frame averaging, which averages the cross-correlation function of real-valued pilots over multiple FBMC frames. The laser-frequency offset is estimated from the phase of this average. After correcting the LFO, the final channel response is acquired by averaging the channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is carefully designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically for different fibers and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
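The phase-of-averaged-cross-correlation idea can be sketched with a generic repeated-pilot (Moose-style) frequency-offset estimator. The FBMC/OQAM specifics, the real-valued pilot design and the IMI suppression of the paper are not reproduced here, and all signal parameters below are made up:

```python
import cmath, math, random

def estimate_cfo(frames, block_len, spacing):
    """Average the cross-correlation of two repeated pilot blocks over several
    frames, then read the frequency offset (cycles/sample) from its phase."""
    corr = 0 + 0j
    for r in frames:
        corr += sum(r[k].conjugate() * r[k + spacing] for k in range(block_len))
    return cmath.phase(corr) / (2 * math.pi * spacing)

# synthetic check: a pilot block repeated after `spacing` samples, rotated by a CFO
rng = random.Random(3)
cfo = 0.004                                    # true offset, cycles per sample
pilot = [cmath.exp(2j * math.pi * rng.random()) for _ in range(32)]
frames = []
for _ in range(8):                             # inter-frame averaging over 8 frames
    clean = pilot + pilot
    frames.append([s * cmath.exp(2j * math.pi * cfo * n) +
                   complex(rng.gauss(0, 0.05), rng.gauss(0, 0.05))
                   for n, s in enumerate(clean)])
est = estimate_cfo(frames, block_len=32, spacing=32)
```

Summing the correlation over frames before taking the phase is what suppresses the noise, mirroring the inter-frame averaging argument of the abstract.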
Location and Navigation with Ultra-Wideband Signals
2012-06-07
[Table-of-contents residue: Coherent vs. Noncoherent Combination (26); F. Ranging with Multi-Band UWB Signals: Random Phase Rotation (29); F.1 MB-OFDM System Model.] ...adopted to combine the channel information from subbands: the coherent combining and the noncoherent combining. For the coherent combining, estimates of... channel frequency response coefficients for all subbands are jointly used to estimate the time-domain channel with Eq. (33). For the noncoherent...
A Video Transmission System for Severely Degraded Channels
2006-07-01
...rate-compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream... [Citation residue: [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on...] ...Farvardin [160] used rate-compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may...
Lemnos interoperable security project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halbgewachs, Ronald D.
2010-03-01
With the Lemnos framework, interoperability of control security equipment is straightforward. To obtain interoperability between proprietary security appliance units, one or both vendors must now write cumbersome 'translation code.' If one party changes something, the translation code 'breaks.' The Lemnos project is developing and testing a framework that uses widely available security functions and protocols like IPsec - to form a secure communications channel - and Syslog, to exchange security log messages. Using this model, security appliances from two or more different vendors can clearly and securely exchange information, helping to better protect the total system. Simplify regulatory compliance in a complicated security environment by leveraging the Lemnos framework. As an electric utility, are you struggling to implement the NERC CIP standards and other regulations? Are you weighing the misery of multiple management interfaces against committing to a ubiquitous single-vendor solution? When vendors build their security appliances to interoperate using the Lemnos framework, it becomes practical to match best-of-breed offerings from an assortment of vendors to your specific control systems needs. The Lemnos project is developing and testing a framework that uses widely available open-source security functions and protocols like IPsec and Syslog to create a secure communications channel between appliances in order to exchange security data.
The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results
NASA Technical Reports Server (NTRS)
Katz, J.
1982-01-01
The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. It is indicated that the throughput efficiency can be achieved not only with a Reed-Solomon code as originally predicted, but with a less complex code as well.
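The headline figure has a simple back-of-the-envelope reading: M-ary PPM carries log2(M) bits per pulse, so the efficiency in bits per detected photon is log2(M) divided by the mean number of detected photons per pulse (coding overhead ignored in this sketch). The 256-ary/3.2-photon operating point below is an illustrative combination that yields 2.5, not a figure taken from the report:

```python
import math

def bits_per_detected_photon(M, photons_per_pulse):
    """Throughput efficiency of M-ary PPM: each pulse position encodes
    log2(M) bits and costs `photons_per_pulse` detected photons on average."""
    return math.log2(M) / photons_per_pulse
```

The relation shows why large PPM alphabets are attractive at low photon flux: doubling M adds a bit per pulse without requiring more detected photons.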
NASA Technical Reports Server (NTRS)
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.
2011-09-30
...channel interference mitigation for underwater acoustic MIMO-OFDM. 3) Turbo equalization for OFDM-modulated physical-layer network coding. 4) Blind CFO... Underwater Acoustic MIMO-OFDM. MIMO-OFDM has been actively studied for high-data-rate communications over the bandwidth-limited underwater acoustic... with the co-channel interference (CCI) due to parallel transmissions in MIMO-OFDM. Our proposed receiver has the following components: 1...
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
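The side-information-dependent noise model at the heart of this paper can be illustrated with a toy estimator: model the correlation noise as Laplacian with a scale that depends on the side information, and estimate the scale separately in each |SI| magnitude bin (the maximum-likelihood Laplacian scale is simply the mean absolute residual). Bin edges, the synthetic noise law and all constants are our own illustrative choices; the paper's online, bit-plane-refined estimator is not reproduced:

```python
import random

def sid_laplacian_scales(side_info, residuals, edges):
    """ML Laplacian scale b = E|x - y| estimated per side-information magnitude
    bin -- a toy version of side-information-dependent (SID) noise modeling."""
    sums = [0.0] * (len(edges) + 1)
    cnts = [0] * (len(edges) + 1)
    for s, r in zip(side_info, residuals):
        i = sum(abs(s) >= e for e in edges)     # index of the |SI| bin
        sums[i] += abs(r)
        cnts[i] += 1
    return [sums[i] / cnts[i] if cnts[i] else None for i in range(len(sums))]

# synthetic correlation noise whose spread really grows with |SI|
rng = random.Random(0)
si, res = [], []
for _ in range(20000):
    s = rng.gauss(0, 1)
    b = 0.2 + 0.5 * abs(s)                      # SI-dependent Laplacian scale
    si.append(s)
    # difference of two iid exponentials is a Laplace(b) sample
    res.append(rng.expovariate(1 / b) - rng.expovariate(1 / b))
```

Recovering larger scales in the larger-|SI| bins is the SID effect the paper exploits; an SII model would fit a single scale to all bins and misstate the noise in both tails.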
NASA Astrophysics Data System (ADS)
Srebro, Haim
2018-05-01
International and cadastral boundaries are important for ensuring stable legal territorial matters. This article deals with the long-term location and management of boundaries in rivers and the depiction of the rivers on cartographic materials. A few countries have agreed that the boundary will not follow changes in the river (as in the Mongolia-China Border Treaty), whereas most agree that the boundary will follow slow, natural and gradual changes in the river (as stated in the Israel-Jordan Peace Treaty). The international boundary under the British Mandate between Palestine and Trans-Jordan in the Jordan and Yarmuk rivers was defined in 1922. The cadastral boundaries were defined in these rivers in the 1930s along the international boundary. For more than 70 years, until the 1994 Israel-Jordan Peace Treaty, the rivers changed their channels east- and westward by distances of up to hundreds of meters. During that period the mandatory boundaries in these rivers changed their political status to armistice lines, to cease-fire lines, and to international boundaries between sovereign states. These lines were usually delineated on topographic maps in the rivers, drawn by cartographers following contemporary map revision. During that entire period the cadastral boundaries were not changed in order to adapt them to the actual position of the rivers and to the delineated international boundaries. Owing to large water works on both rivers, including the construction of dams and diversion channels to meet the increasing needs of the population on both sides, the water flow of the rivers decreased dramatically to less than one tenth of the original natural flow. The population today is more than ten times what it was under the British Mandate. The changes in the water channels during the 20 years since the 1994 peace treaty are of the magnitude of 10 meters, versus hundreds of meters in the past.
In addition, intensive land cultivation adjacent to the river banks has stabilized them. In 2000, due to the construction of a dam on the Yarmuk River, both sides jointly fixed the coordinates of the relevant boundary line in the river according to the boundary delineation in the peace treaty. The accumulated artificial changes along both rivers have cancelled their natural behavior and have influenced the changes in the river channels. This may justify an initiative to fix the boundary lines in both rivers by coordinates according to the peace treaty delimitation, enabling the cadastral boundaries to be fixed according to the fixed international boundary line. The article analyzes boundary line management in changing rivers in light of the development of the legal approach and practice from the time of the Romans until today. It analyzes the special case of the boundary line in the Jordan and Yarmuk rivers, and introduces a proposal for stabilizing this boundary line. The research on the changes of these rivers is based on changes in the depiction of their channels on various kinds of maps and cartographic sources produced over the last century by many producers. They include British, German, ANZAC, Israeli and Jordanian maps and charts. The cartographic materials include large-scale field survey sheets and engineering charts from the 1920s, cadastral charts from the 1930s, topographic maps produced over the last century and orthophoto maps produced since the 1990s, including joint Israeli-Jordanian orthophotos and charts produced by the Joint Boundary Commission as part of the peace agreement and its implementation. The article includes a variety of cartographic examples.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
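The syndrome construction described above can be illustrated with a small linear code. A minimal sketch, assuming the (7,4) Hamming code rather than any code from the paper: a sparse 7-bit source block is treated as an error pattern, its 3-bit syndrome is the compressed data, and coset-leader decoding recovers any block of weight at most one exactly.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j holds the binary
# representation of j+1, so the syndrome of a weight-1 pattern names its position.
H = np.array([[int(b) for b in format(j, "03b")] for j in range(1, 8)]).T  # 3 x 7

def compress(block):
    """Treat the 7-bit source block as an error pattern; its 3-bit syndrome
    is the compressed representation."""
    return H @ np.asarray(block) % 2

def decompress(syndrome):
    """Coset-leader decoding: return the lightest pattern with this syndrome.
    For the Hamming code, a nonzero syndrome is the 1-indexed position of a
    single set bit."""
    pos = int("".join(str(int(b)) for b in syndrome), 2)
    block = np.zeros(7, dtype=int)
    if pos:
        block[pos - 1] = 1
    return block
```

Blocks with at most one set bit (a low-entropy source) compress losslessly from 7 bits to 3; denser blocks incur distortion, which is what the universal generalization in the abstract addresses.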
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a family of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and least squares distance are special cases corresponding to β = 0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β, including fractional values. It describes a majorization–minimization (MM) algorithm leading to a fast multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into a two-dimensional deconvolution of factor matrices representing the spectral dictionary and temporal codes. The deconvolution process is optimized to yield sparse temporal codes by maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient with the proposed algorithm, which subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
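The β-divergence family that the abstract parametrizes can be written down directly. A sketch (function name and interface are illustrative, not the paper's implementation) of the cost for an arbitrary, possibly fractional, β, assuming strictly positive entries:

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Sum of elementwise beta-divergences d_beta(x|y) over all entries.
    beta=0 -> Itakura-Saito, beta=1 -> Kullback-Leibler, beta=2 -> least squares;
    fractional values interpolate between these cost functions.
    Assumes strictly positive x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 0:
        return np.sum(x / y - np.log(x / y) - 1)
    if beta == 1:
        return np.sum(x * np.log(x / y) - x + y)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))
```

For β = 2 this reduces to half the squared Euclidean distance, and the divergence vanishes exactly when the two matrices coincide, for any β.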
Program ratings do not predict negative content in commercials on children's channels.
Dale, Lourdes P; Klein, Jordana; DiLoreto, James; Pidano, Anne E; Borto, Jolanta W; McDonald, Kathleen; Olson, Heather; Neace, William P
2011-01-01
The aim of this study was to determine the presence of negative content in commercials airing on 3 children's channels (Disney Channel, Nickelodeon, and Cartoon Network). The 1681 commercials were coded with a reliable coding system and content comparisons were made. Although the majority of the commercials were coded as neutral, negative content was present in 13.5% of commercials. This rate was significantly more than the predicted value of zero and more similar to the rates cited in previous research examining content during sporting events. The rate of negative content was less than, but not significantly different from, the rate of positive content. Thus, our findings did not support our hypothesis that there would be more commercials with positive content than with negative content. Logistic regression analysis indicated that channel, and not rating, was a better predictor of the presence of overall negative content and the presence of violent behaviors. Commercials airing on the Cartoon Network had significantly more negative content, and those airing on Disney Channel had significantly less negative content than the other channels. Within the individual channels, program ratings did not relate to the presence of negative content. Parents cannot assume the content of commercials will be consistent with the program rating or label. Pediatricians and psychologists should educate parents about the potential for negative content in commercials and advocate for a commercials rating system to ensure that there is greater parity between children's programs and the corresponding commercials.
The role of transient receptor potential channels in joint diseases.
Krupkova, O; Zvick, J; Wuertz-Kozak, K
2017-10-10
Transient receptor potential (TRP) channels are cation-selective transmembrane receptors with diverse structures, activation mechanisms and physiological functions. TRP channels act as cellular sensors for a plethora of stimuli, including temperature, membrane voltage, oxidative stress, mechanical stimuli, pH, and endogenous as well as exogenous ligands, illustrating their versatility. As such, TRP channels regulate various functions in both excitable and non-excitable cells, mainly by mediating Ca2+ homeostasis. Dysregulation of TRP channels is implicated in many pathologies, including cardiovascular diseases, muscular dystrophies and hyperalgesia. However, the importance of TRP channel expression, physiological function and regulation in chondrocytes and intervertebral disc (IVD) cells is largely unexplored. Osteoarthritis (OA) and degenerative disc disease (DDD) are chronic age-related disorders that significantly affect quality of life by causing pain, activity limitation and disability. Furthermore, currently available therapies cannot effectively slow down or stop the progression of these diseases. Both OA and DDD are characterised by reduced tissue cellularity, enhanced inflammatory responses and molecular, structural and mechanical alterations of the extracellular matrix, affecting load distribution and reducing joint flexibility. However, knowledge of how chondrocytes and IVD cells sense their microenvironment and respond to its changes is still limited. In this review, we introduce six families of mammalian TRP channels, their mechanisms of activation and the cellular consequences of activation. We summarise the current knowledge on TRP channel expression and activity in chondrocytes and IVD cells, as well as the significance of TRP channels as therapeutic targets for the treatment of OA and DDD.
Improving the Multi-Wavelength Capability of Chandra Large Programs
NASA Astrophysics Data System (ADS)
Pacucci, Fabio
2017-09-01
In order to fully exploit the joint Chandra/JWST/HST ventures to detect faint sources, we urgently need an advanced matching algorithm between optical/NIR and X-ray catalogs/images. This will be of paramount importance in bridging the gap between upcoming optical/NIR facilities (JWST) and later X-ray ones (Athena, Lynx). We propose to develop an advanced and automated tool to improve the identification of Chandra X-ray counterparts detected in deep optical/NIR fields based on T-PHOT, a software widely used in the community. The developed code will include more than 20 years in advancements of X-ray data analysis and will be released to the public. Finally, we will release an updated catalog of X-ray sources in the CANDELS regions: a leap forward in our endeavor of charting the Universe.
Binary zone-plate array for a parallel joint transform correlator applied to face recognition.
Kodate, K; Hashimoto, A; Thapliya, R
1999-05-10
Taking advantage of small aberrations, high efficiency, and compactness, we developed a new, to our knowledge, design procedure for a binary zone-plate array (BZPA) and applied it to a parallel joint transform correlator for the recognition of the human face. Pairs of reference and unknown images of faces are displayed on a liquid-crystal spatial light modulator (SLM), Fourier transformed by the BZPA, intensity recorded on an optically addressable SLM, and inversely Fourier transformed to obtain correlation signals. Consideration of the bandwidth allows the relations among the channel number, the numerical aperture of the zone plates, and the pattern size to be determined. Experimentally a five-channel parallel correlator was implemented and tested successfully with a 100-person database. The design and the fabrication of a 20-channel BZPA for phonetic character recognition are also included.
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and to the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from Earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by onboard imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better quality of service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial to an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit; a channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al.
can be adapted to work with the image compression algorithm recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the CCSDS-recommended algorithm must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities for the sets of bits, p1, ..., pM, leading to a total probability of failure p, which is an average of p1, ..., pM. In general, the parameters of an LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS-recommended algorithm, this work establishes a closed-form expression for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of criteria to select the parameters p1, ..., pM to optimize the performance of image transmission.
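The averaging of the per-set failure probabilities mentioned above can be made concrete. A minimal sketch, under the natural (but here assumed, not stated) convention that the average is weighted by the fraction of bits in each protection class:

```python
def average_failure_probability(set_sizes, failure_probs):
    """Overall bit-recovery failure probability p for an unequal-error-protection
    LT-code, taken as the average of the per-set failure probabilities
    p1, ..., pM weighted by the fraction of bits in each of the M disjoint sets.
    Names and the weighting convention are illustrative assumptions."""
    n = sum(set_sizes)
    return sum(size / n * p for size, p in zip(set_sizes, failure_probs))
```

Skewing protection toward the sets whose loss hurts PSNR most changes the individual p1, ..., pM while this weighted average constrains the overall code rate, which is exactly the trade-off the optimization in the abstract explores.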
Real-time software-based end-to-end wireless visual communications simulation platform
NASA Astrophysics Data System (ADS)
Chen, Ting-Chung; Chang, Li-Fung; Wong, Andria H.; Sun, Ming-Ting; Hsing, T. Russell
1995-04-01
Wireless channel impairments pose many challenges to real-time visual communications. In this paper, we describe a real-time software-based wireless visual communications simulation platform which can be used for performance evaluation in real time. This simulation platform consists of two personal computers serving as hosts. The major components of each PC host include a real-time programmable video codec, a wireless channel simulator, and a network interface for data transport between the two hosts. The three major components are interfaced in real time to show the interaction of various wireless channels and video coding algorithms. The programmable features of these components allow users to evaluate the performance of user-controlled wireless channel effects without physically carrying out experiments which are limited in scope, time-consuming, and costly. Using this simulation platform as a testbed, we have experimented with several wireless channel effects including Rayleigh fading, antenna diversity, channel filtering, symbol timing, modulation, and packet loss.
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
Monodisperse microdroplet generation and stopping without coalescence
Beer, Neil Reginald
2015-04-21
A system for monodispersed microdroplet generation and trapping including providing a flow channel in a microchip; producing microdroplets in the flow channel, the microdroplets movable in the flow channel; providing carrier fluid in the flow channel using a pump or pressure source; controlling movement of the microdroplets in the flow channel and trapping the microdroplets in a desired location in the flow channel. The system includes a microchip; a flow channel in the microchip; a droplet maker that generates microdroplets, the droplet maker connected to the flow channel; a carrier fluid in the flow channel, the carrier fluid introduced to the flow channel by a source of carrier fluid, the source of carrier fluid including a pump or pressure source; a valve connected to the carrier fluid that controls flow of the carrier fluid and enables trapping of the microdroplets.
2001-09-01
"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) are analyzed for both BPSK and DPSK modulation.
A Survey of Progress in Coding Theory in the Soviet Union. Final Report.
ERIC Educational Resources Information Center
Kautz, William H.; Levitt, Karl N.
The results of a comprehensive technical survey of all published Soviet literature in coding theory and its applications--over 400 papers and books appearing before March 1967--are described in this report. Noteworthy Soviet contributions are discussed, including codes for the noiseless channel, codes that correct asymmetric errors, decoding for…
Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †
Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar
2015-01-01
Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675
Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali
2013-02-01
Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.
NASA Technical Reports Server (NTRS)
Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.
2012-01-01
The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared and Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Cross-track Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer which measures upwelling infrared radiances at high spectral and spatial resolution. The ATMS is a 22-channel radiometer similar to the Advanced Microwave Sounding Units (AMSU) A and B. It measures top-of-atmosphere MW upwelling radiation and provides the capability of sounding below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture and pressure profiles (AVTP, AVMP and AVPP, respectively), with the lower-tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing the DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code and compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from NPP CrIS/ATMS data.
Yamashita, M; Yamashita, A; Ishii, T; Naruo, Y; Nagatomo, M
1998-11-01
A portable recording system was developed for the analysis of more than three analog signals collected in field work. A stereo audio recorder, available as a consumer product, was used as the core component of the system. Of the two recording tracks, one stores a multiplexed analog signal and the other a reference code. The reference code indicates the start of each multiplexing cycle and the switching point of each channel. The multiplexed signal is played back and decoded with reference to the code to reconstruct the original signal profiles. Since commercial stereo recorders cut off the DC component, a fixed reference voltage is inserted into the multiplexing sequence. The change of voltage at switching from the reference to the data channel is measured from the played-back signal to recover the original data with its DC component. Movements of vehicles and of the human head were analyzed with the system. It was verified to be capable of recording and analyzing multi-channel signals at sampling rates above 10 Hz.
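The multiplexing-with-reference-voltage scheme can be sketched in a few lines. This is an illustrative reconstruction, not the authors' software: a known reference sample inserted at the start of each cycle lets the decoder undo whatever DC offset the AC-coupled recorder removed.

```python
def multiplex(channels, ref_voltage=1.0):
    """Time-multiplex equal-length channels into one sample stream, inserting a
    fixed reference voltage at the start of each cycle so DC offsets lost by the
    AC-coupled recorder can be recovered on playback."""
    frames = []
    for samples in zip(*channels):
        frames.append(ref_voltage)
        frames.extend(samples)
    return frames

def demultiplex(stream, n_channels, ref_voltage=1.0):
    """Recover the channels: the difference between the known reference voltage
    and its played-back value gives the offset to restore each cycle's DC level."""
    cycle = n_channels + 1
    out = [[] for _ in range(n_channels)]
    for i in range(0, len(stream) - cycle + 1, cycle):
        offset = ref_voltage - stream[i]   # drift introduced by AC coupling
        for c in range(n_channels):
            out[c].append(stream[i + 1 + c] + offset)
    return out
```

A stream shifted by a constant offset (mimicking the lost DC component) demultiplexes back to the original channel values, which is the point of the inserted reference.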
Perceptually tuned low-bit-rate video codec for ATM networks
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien
1996-02-01
In order to maintain high visual quality in transmitting low-bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that the visual quality of the reconstructed CIF sequence is acceptable when the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.
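The base-layer selection described above can be sketched as follows. The strip representation (a set of rows) and the explicit sample budget are assumptions for illustration, not details from the paper:

```python
import numpy as np

def base_layer_selection(pred_error, jnd, strip_rows, budget):
    """Perceptually important (PI) samples are those whose prediction error
    exceeds the spatio-temporal JND threshold. PI samples inside a small strip
    go into the base layer first; PI samples outside the strip fill any
    leftover budget. Returns the (row, col) indices chosen for the base layer."""
    pi = np.abs(pred_error) > jnd                 # PI mask from the JND profile
    in_strip = np.zeros_like(pi)
    in_strip[strip_rows] = True
    first = np.argwhere(pi & in_strip)            # strip PI samples, highest priority
    rest = np.argwhere(pi & ~in_strip)            # remaining PI samples
    chosen = np.concatenate([first, rest])[:budget]
    return [tuple(ix) for ix in chosen]
```

Everything not selected here would be carried by the second-layer cells, whose loss the scheme is designed to tolerate.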
Comparison of Three Information Sources for Smoking Information in Electronic Health Records
Wang, Liwei; Ruan, Xiaoyang; Yang, Ping; Liu, Hongfang
2016-01-01
OBJECTIVE The primary aim was to compare independent and joint performance of retrieving smoking status through different sources, including narrative text processed by natural language processing (NLP), patient-provided information (PPI), and diagnosis codes (ie, International Classification of Diseases, Ninth Revision [ICD-9]). We also compared the performance of retrieving smoking strength information (ie, heavy/light smoker) from narrative text and PPI. MATERIALS AND METHODS Our study leveraged an existing lung cancer cohort for smoking status, amount, and strength information, which was manually chart-reviewed. On the NLP side, smoking-related electronic medical record (EMR) data were retrieved first. A pattern-based smoking information extraction module was then implemented to extract smoking-related information. After that, heuristic rules were used to obtain smoking status-related information. Smoking information was also obtained from structured data sources based on diagnosis codes and PPI. Sensitivity, specificity, and accuracy were measured using patients with coverage (ie, the proportion of patients whose smoking status/strength can be effectively determined). RESULTS NLP alone has the best overall performance for smoking status extraction (patient coverage: 0.88; sensitivity: 0.97; specificity: 0.70; accuracy: 0.88); combining PPI with NLP further improved patient coverage to 0.96. ICD-9 does not provide additional improvement to NLP and its combination with PPI. For smoking strength, combining NLP with PPI has slight improvement over NLP alone. CONCLUSION These findings suggest that narrative text could serve as a more reliable and comprehensive source for obtaining smoking-related information than structured data sources. PPI, the readily available structured data, could be used as a complementary source for more comprehensive patient coverage. PMID:27980387
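The pattern-based extraction plus heuristic rules can be caricatured in a few lines. These regexes are illustrative stand-ins, not the study's actual extraction module:

```python
import re

def smoking_status(note):
    """Toy pattern-based smoking-status classifier in the spirit of the
    abstract's NLP module. Rule order is the heuristic: explicit negations
    first, then former-smoker cues, then any remaining smoking mention."""
    text = note.lower()
    if re.search(r"\b(never smoked|non-?smoker|denies smoking)\b", text):
        return "never"
    if re.search(r"\b(former|ex-?|quit)\s*smok", text):
        return "former"
    if re.search(r"\bsmok(es|ing|er)\b", text):
        return "current"
    return "unknown"   # no mention: fall back to PPI or ICD-9 codes
```

The "unknown" branch is where the abstract's point about combining sources bites: structured PPI can cover patients whose notes never mention smoking at all.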
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Hongxing; Li, Min
2018-04-01
Vertical total electron content (VTEC) parameters estimated using global navigation satellite system (GNSS) data are of great interest for ionosphere sensing. Satellite differential code biases (SDCBs) account for one source of error which, if left uncorrected, can deteriorate the performance of positioning, timing and other applications. The customary approach to estimating VTEC along with SDCBs from dual-frequency GNSS data, hereinafter referred to as the DF approach, consists of two sequential steps. The first step seeks to retrieve ionospheric observables through the carrier-to-code leveling technique. This observable, related to the slant total electron content (STEC) along the satellite-receiver line of sight, is biased also by the SDCBs and the receiver differential code biases (RDCBs). By means of a thin-layer ionospheric model, in the second step one is able to isolate the VTEC, the SDCBs and the RDCBs from the ionospheric observables. In this work, we present a single-frequency (SF) approach enabling the joint estimation of VTEC and SDCBs using low-cost receivers; this approach is also based on two steps and differs from the DF approach only in the first step, where we turn to the precise point positioning technique to retrieve from the single-frequency GNSS data the ionospheric observables, interpreted as the combination of the STEC, the SDCBs and the biased receiver clocks at the pivot epoch. Our numerical analyses clarify how the SF approach performs when applied to GPS L1 data collected by a single receiver under both calm and disturbed ionospheric conditions. The daily time series of zenith VTEC estimates has an accuracy ranging from a few tenths of a TEC unit (TECU) to approximately 2 TECU. For 73-96% of GPS satellites in view, the daily estimates of SDCBs do not deviate, in absolute value, by more than 1 ns from their ground-truth values published by the Centre for Orbit Determination in Europe.
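The thin-layer model used in the second step maps slant to vertical TEC with the standard single-layer mapping function. A sketch with an assumed shell height; the article's specific model parameters are not given here:

```python
import math

def vtec_from_stec(stec, zenith_deg, shell_height_km=450.0, earth_radius_km=6371.0):
    """Single-layer (thin-shell) ionospheric mapping: convert slant TEC to
    vertical TEC at the ionospheric pierce point. Uses the standard mapping
    function M(z) = 1/cos(z'), with sin(z') = R/(R+H) * sin(z), where z is the
    satellite zenith angle at the receiver; the 450 km shell height is an
    assumed illustrative value."""
    z = math.radians(zenith_deg)
    sin_zp = earth_radius_km / (earth_radius_km + shell_height_km) * math.sin(z)
    mapping = 1.0 / math.cos(math.asin(sin_zp))
    return stec / mapping
```

At zenith the mapping function is 1, so VTEC equals STEC; at low elevations the slant path lengthens, the mapping function grows, and the inferred VTEC shrinks accordingly.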
NASA Astrophysics Data System (ADS)
Li, Husheng; Betz, Sharon M.; Poor, H. Vincent
2007-05-01
This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.
Hatch, R J; Jennings, E A; Ivanusic, J J
2013-08-01
Hyperpolarization-activated cyclic nucleotide-gated (HCN) channels conduct an inward cation current (Ih) that contributes to the maintenance of neuronal membrane potential, and they have been implicated in a number of animal models of neuropathic and inflammatory pain. In the current study, we investigated HCN channel involvement in inflammatory pain of the temporomandibular joint (TMJ). The contribution of HCN channels to inflammation (complete Freund's adjuvant; CFA)-induced mechanical hypersensitivity of the rat TMJ was tested with injections of the HCN channel blocker ZD7288. Retrograde labelling and immunohistochemistry were used to explore HCN channel expression in sensory neurons that innervate the TMJ. Injection of CFA into the TMJ (n = 7) resulted in significantly increased mechanical sensitivity relative to vehicle injection (n = 7) (p < 0.05). The mechanical hypersensitivity generated by CFA injection was blocked by co-injection of ZD7288 with the CFA (n = 7). Retrograde labelling and immunohistochemistry experiments revealed expression predominantly of HCN1 and HCN2 channel subunits in trigeminal ganglion neurons that innervate the TMJ (n = 3). No change in the proportion or intensity of HCN channel expression was found in inflamed (n = 6) versus control (n = 5) animals at the time point tested. Our findings suggest a role for peripheral HCN channels in inflammation-induced pain of the TMJ. Peripheral application of an HCN channel blocker could provide therapeutic benefit for inflammatory TMJ pain and avoid the side effects associated with activation of HCN channels in the central nervous system. © 2012 European Federation of International Association for the Study of Pain Chapters.
JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY
The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.
Reacting Flow in the Entrance to a Channel with Surface and Gas-Phase Kinetics
NASA Astrophysics Data System (ADS)
Mikolaitis, David; Griffen, Patrick
2006-11-01
In many catalytic reactors the conversion process is most intense at the very beginning of the channel, where the flow is not yet fully developed; hence there will be important interactions between the developing flow field and reaction. To study this problem we have written an object-oriented code for the analysis of reacting flow in the entrance of a channel where both surface reaction and gas-phase reaction are modeled with detailed kinetics. Fluid mechanical momentum and energy equations are modeled by parabolic ``boundary layer''-type equations in which streamwise gradient terms are small and the pressure is constant in the transverse direction. Transport properties are modeled with mixture averaging, and the chemical kinetic source terms are evaluated using Cantera. Numerical integration is done in Matlab using the function pdepe. Calculations were completed using mixtures of methane and air flowing through a channel with platinum walls held at a fixed temperature. GRI-Mech 3.0 was used to describe the gas-phase chemistry and Deutschmann's methane-air-platinum model was used for the surface chemistry. Ignition in the gas phase is predicted for sufficiently high wall temperatures. A hot spot forms away from the walls just before ignition that is fed by radicals produced at the surface.
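The parabolic marching structure of such entrance-region solvers can be sketched without any chemistry. The toy below (slug flow, constant properties, no reactions; all parameter values are assumptions) marches a transverse temperature profile downstream with an explicit scheme, mimicking how boundary-layer-type equations are integrated in the streamwise direction:

```python
import numpy as np

NY, H = 41, 1.0e-3                  # transverse grid points, channel height (m)
U, ALPHA = 1.0, 2.0e-5              # slug-flow velocity (m/s), diffusivity (m^2/s)
T_IN, T_WALL = 300.0, 1300.0        # inlet and wall temperatures (K)

y = np.linspace(0.0, H, NY)
dy = y[1] - y[0]
dx = 0.4 * U * dy**2 / ALPHA        # step inside the explicit stability limit (0.5)

T = np.full(NY, T_IN)
T[0] = T[-1] = T_WALL               # isothermal channel walls
x = 0.0
while x < 0.01:                     # march 1 cm downstream from the entrance
    # U dT/dx = ALPHA d2T/dy2, discretized with central differences in y.
    T[1:-1] += (ALPHA * dx / (U * dy**2)) * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    x += dx

centre = T[NY // 2]                 # centre-line temperature, still developing
```

Because the equations are parabolic in x, a single downstream sweep suffices; the real code does the same marching while also updating species and evaluating surface and gas-phase source terms at each step.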
NASA Astrophysics Data System (ADS)
Azadegan, B.
2013-03-01
The presented Mathematica code is an efficient tool for simulation of planar channeling radiation spectra of relativistic electrons channeled along major crystallographic planes of a diamond-structure single crystal. The program is based on the quantum theory of channeling radiation, which has been successfully applied to study planar channeling at electron energies between 10 and 100 MeV. Continuum potentials for different planes of diamond, silicon and germanium single crystals are calculated using the Doyle-Turner approximation to the atomic scattering factor and taking thermal vibrations of the crystal atoms into account. Numerical methods are applied to solve the one-dimensional Schrödinger equation. The code is designed to calculate the electron wave functions, transverse electron states in the planar continuum potential, transition energies, line widths of channeling radiation and depth dependencies of the population of quantum states. Finally the spectral distribution of spontaneously emitted channeling radiation is obtained. The simulation of radiation spectra considerably facilitates the interpretation of experimental data. Catalog identifier: AEOH_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 446 No. of bytes in distributed program, including test data, etc.: 209805 Distribution format: tar.gz Programming language: Mathematica. Computer: Platforms on which Mathematica is available. Operating system: Operating systems on which Mathematica is available. RAM: 1 MB Classification: 7.10. Nature of problem: Planar channeling radiation is emitted by relativistic charged particles while traversing a single crystal in a direction parallel to a crystallographic plane.
Channeling is modeled as the motion of charged particles in a continuous planar potential which is formed by the spatially and thermally averaged action of the individual electrostatic potentials of the crystal atoms of the corresponding plane. Classically, the motion of channeled particles through the crystal resembles transverse oscillations, which are the source of radiation emission. For electrons of energy less than 100 MeV considered here, planar channeling has to be treated quantum mechanically by a one-dimensional Schrödinger equation for the transverse motion. Hence, this motion of the channeled electrons is restricted to a number of discrete (bound) channeling states in the planar continuum potential, and the emission of channeling radiation is caused by spontaneous electron transitions between these eigenstates. Due to relativistic and Doppler effects, the energy of the emitted photons directed into a narrow forward cone is typically shifted up by about three to five orders of magnitude. Consequently, the observed energy spectrum of channeling radiation is characterized by a number of radiation lines in the energy domain of hard X-rays. Channeling radiation may, therefore, be applied as an intense, tunable, quasi-monochromatic X-ray source. Solution method: The problem consists in finding the electron wave functions for the planar continuum potential. Both the wave functions and corresponding energies of channeling states solve the Schrödinger equation of transverse electron motion. In the framework of the so-called many-beam formalism, solving the Schrödinger equation reduces to an eigenvector-eigenvalue problem for a Hermitian matrix. For this, the program employs the mathematical tools provided by the commercial computation software Mathematica. The electric field of the atomic planes in the crystal forces dipole oscillations of the channeled charged particles.
In the quantum mechanical approach, the dipole approximation is also valid for spontaneous transitions between bound states. The strength of a transition between given states depends on the magnitude of the corresponding dipole matrix element. The photon energy correlates with the particle energy, and the spectral width of the radiation lines is a function of the lifetimes of the channeling states. Running time: The program has been tested on a PC with an AMD Athlon X2 245 processor at 2.9 GHz and 2 GB RAM. Depending on electron energy and crystal thickness, the running time of the program amounts to 5-10 min.
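The core numerical task, finding the bound transverse states of a planar continuum potential and the transition energies between them, can be sketched in a few lines. The toy below uses a finite-difference Hamiltonian and a model Pöschl-Teller well as a stand-in for the many-beam formalism and Doyle-Turner potentials of the actual Mathematica code (all parameter values are illustrative):

```python
import numpy as np

HBAR2_2M = 3.81                     # hbar^2 / (2 m_e) in eV * Angstrom^2
N, L = 400, 40.0                    # grid points and box size (Angstrom)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V0, a = 20.0, 1.0                   # model well depth (eV) and width (Angstrom)
V = -V0 / np.cosh(x / a) ** 2       # Poschl-Teller stand-in for the planar potential

# Central-difference Hamiltonian: tridiagonal kinetic term plus diagonal potential.
main = 2.0 * HBAR2_2M / dx**2 + V
off = np.full(N - 1, -HBAR2_2M / dx**2)
Ham = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(Ham)        # transverse eigenstates, energies ascending
bound = E[E < 0.0]                  # bound channeling states
dE_10 = bound[1] - bound[0]         # 1 -> 0 transition energy (eV)
```

For this well depth the model supports exactly two bound states; in the channeling-radiation picture, `dE_10` is the transverse transition energy that the relativistic Doppler shift boosts into the hard X-ray domain.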
Movement coordination patterns between the foot joints during walking.
Arnold, John B; Caravaggi, Paolo; Fraysse, François; Thewlis, Dominic; Leardini, Alberto
2017-01-01
In 3D gait analysis, kinematics of the foot joints are usually reported via isolated time histories of joint rotations, and no information is provided on the relationship between rotations at different joints. The aim of this study was to identify movement coordination patterns in the foot during walking by expanding an existing vector coding technique according to an established multi-segment foot and ankle model. A graphical representation is also described to summarise the coordination patterns of joint rotations across multiple patients. Three-dimensional multi-segment foot kinematics were recorded in 13 adults during walking. A modified vector coding technique was used to identify coordination patterns between foot joints involving the calcaneus, midfoot, metatarsus and hallux segments. According to the type and direction of joint rotations, these were classified as in-phase (same direction), anti-phase (opposite directions), proximal-joint dominant or distal-joint dominant. In early stance, 51 to 75% of walking trials showed proximal-phase coordination between foot joints comprising the calcaneus, midfoot and metatarsus. In-phase coordination was more prominent in late stance, reflecting synergy in the simultaneous inversion occurring at multiple foot joints. Conversely, a distal-phase coordination pattern was identified for sagittal plane motion of the ankle relative to the midtarsal joint, highlighting the critical role of arch shortening to locomotor function in push-off. This study has identified coordination patterns between movement of the calcaneus, midfoot, metatarsus and hallux by expanding an existing vector coding technique for assessing and classifying coordination patterns of foot joint rotations during walking. This approach provides a different perspective in the analysis of multi-segment foot kinematics, and may be used for the objective quantification of alterations in foot joint coordination patterns due to lower limb pathologies or following injuries.
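The vector coding computation can be sketched as follows. The 45°-wide bin convention used here is a common choice and an assumption; the paper's modified technique may differ in detail:

```python
import numpy as np

def coupling_angles(prox, dist):
    """Vector-coding coupling angle (degrees, 0-360) between consecutive frames."""
    gamma = np.degrees(np.arctan2(np.diff(dist), np.diff(prox)))
    return np.mod(gamma, 360.0)

def classify(gamma):
    """Bin one coupling angle into a coordination pattern (45-degree-wide bins)."""
    g = np.mod(gamma + 22.5, 360.0)            # rotate so bins start at zero
    labels = ['proximal', 'in-phase', 'distal', 'anti-phase'] * 2
    return labels[int(g // 45.0)]

# Toy joint-angle traces: both joints rotate together in the same direction.
t = np.linspace(0.0, 1.0, 101)
prox = 10.0 * np.sin(2.0 * np.pi * t)
dist = 10.0 * np.sin(2.0 * np.pi * t)
patterns = [classify(g) for g in coupling_angles(prox, dist)]
```

Coupling angles near 0° or 180° indicate proximal-joint dominance, near 90° or 270° distal dominance, with in-phase and anti-phase motion falling on the diagonals; the identical toy traces above land on the 45°/225° diagonal throughout.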
NASA Technical Reports Server (NTRS)
Couvillon, L. A., Jr.; Carl, C.; Goldstein, R. M.; Posner, E. C.; Green, R. R. (Inventor)
1973-01-01
A method and apparatus are described for synchronizing a received PCM communications signal without requiring a separate synchronizing channel. The technique provides digital correlation of the received signal with a reference signal, first with its unmodulated subcarrier and then with a bit sync code modulated subcarrier, where the code sequence length is equal in duration to each data bit.
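A digital correlation search of this kind can be sketched as follows (code length, samples per chip, subcarrier shape and noise level are illustrative assumptions, not the patented apparatus's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

CHIPS, SPC = 31, 8                                   # sync-code length, samples/chip
code = np.sign(rng.standard_normal(CHIPS))           # stand-in PN sync code (+/-1)
subcarrier = np.tile([1.0, -1.0], CHIPS * SPC // 2)  # square-wave subcarrier
replica = np.repeat(code, SPC) * subcarrier          # local reference waveform

delay = 57                                           # unknown timing, in samples
rx = np.roll(replica, delay) + rng.normal(0.0, 1.0, replica.size)

# Digital correlation of the received samples against the code-modulated
# subcarrier replica at every candidate lag; the peak estimates the timing.
corr = [float(np.dot(rx, np.roll(replica, lag))) for lag in range(replica.size)]
delay_hat = int(np.argmax(corr))
```

Because the replica carries both the subcarrier and the sync code, a single correlation peak fixes bit timing without any separate synchronizing channel, which is the essence of the described method.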
Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.
2008-01-01
The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.
WWER-1000 core and reflector parameters investigation in the LR-0 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.
2006-07-01
Measurements and calculations carried out in the core and reflector of the WWER-1000 mock-up are discussed: - the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constants preparation) and RADAR - the fast neutron spectra measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in the P3S8 approximation of the discrete ordinates method with the code DORT and the BUGLE-96 library - the neutron spectra evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV based on the activation and solid state track detector measurements. (authors)
Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues
NASA Astrophysics Data System (ADS)
Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.
Next generation wireless communication networks are expected to achieve ever increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for obtaining the expected performance, because it combines the high capacity achievable with the MIMO channel and the benefits of space-division multiple access. In MU-MIMO systems, the base station transmits signals to two or more users over the same channel; as a result, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for the implementation of the analyzed technique in a possible FPGA-based software-defined radio.
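The zero-forcing pre-coder itself is compact. A minimal numpy sketch (4 base-station antennas serving 4 single-antenna users over a random Rayleigh channel, an assumed configuration rather than the paper's WiMAX parameters) shows how the right pseudo-inverse of the channel removes inter-user interference:

```python
import numpy as np

rng = np.random.default_rng(3)

NT, K = 4, 4                                         # BS antennas, single-antenna users
H = (rng.standard_normal((K, NT)) + 1j * rng.standard_normal((K, NT))) / np.sqrt(2)

# Zero-forcing pre-coder: right pseudo-inverse of H, with columns normalized
# so each user's beam has unit transmit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)

s = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # one QPSK symbol per user
y = H @ (W @ s)                                      # noiseless received samples
eff = H @ W                                          # effective channel: ~diagonal
```

With the effective channel forced diagonal, each user receives only a scaled copy of its own symbol; the price, which the paper's capacity analysis quantifies, is the power spent inverting ill-conditioned channels.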
Iliyasu, U; Ibrahim, Y V; Umar, Sadiq; Agbo, S A; Jibrin, Y
2017-05-01
The reactivity variation due to flooding of the irradiation channels of the Nigeria Research Reactor (NIRR-1), a low-power miniature neutron source reactor (MNSR) located at the Centre for Energy Research and Training, Ahmadu Bello University, Zaria, Nigeria, was simulated in the present study using the MCNP code for both the Highly Enriched Uranium (HEU) and Low Enriched Uranium (LEU) cores. For the HEU core, the excess reactivity worths of flooding 1, 2, 3, 4 and all inner irradiation channels are 0.318 mk, 0.577 mk, 0.318 mk, 1.204 mk and 1.503 mk, respectively, while those for the outer irradiation channels are 0.119 mk, 0.169 mk, 0.348 mk, 0.438 mk and 0.418 mk, respectively; the highest excess reactivity, 2.04 mk (±1.72×10⁻⁷), results from flooding both the inner and outer irradiation channels. For the LEU core, the excess reactivities are 0.299 mk, 0.568 mk, 0.896 mk, 1.195 mk and 1.524 mk for the inner irradiation channels and 0.129 mk, 0.189 mk, 0.219 mk, 0.269 mk and 0.548 mk for the outer irradiation channels; the highest excess reactivity, 1.942 mk (±1.64×10⁻⁷), results from flooding both the inner and outer irradiation channels. The reactivity induced by flooding the irradiation channels of NIRR-1 with water is within the design safety limit enshrined in the Safety Analysis Report of NIRR-1. The results also compare well with the literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
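The milli-k bookkeeping used above can be illustrated directly; the multiplication factors below are hypothetical, chosen only to produce a worth near the ~2 mk scale reported, and do not come from the study:

```python
def reactivity_mk(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff, expressed in milli-k (mk)."""
    return 1000.0 * (k_eff - 1.0) / k_eff

# Hypothetical effective multiplication factors before and after flooding.
k_dry, k_flooded = 1.00396, 1.00600
worth_mk = reactivity_mk(k_flooded) - reactivity_mk(k_dry)
```

In practice each MCNP criticality run yields a k_eff with its Monte Carlo uncertainty, and the flooding worth is the difference of the two reactivities, exactly as tabulated in the abstract.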
Carrier recovery techniques on satellite mobile channels
NASA Technical Reports Server (NTRS)
Vucetic, B.; Du, J.
1990-01-01
An analytical method and a stored channel model were used to evaluate error performance of uncoded quadrature phase shift keying (QPSK) and M-ary phase shift keying (MPSK) trellis coded modulation (TCM) over shadowed satellite mobile channels in the presence of phase jitter for various carrier recovery techniques.
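The effect of residual carrier-phase error on uncoded QPSK can be illustrated with a short Monte Carlo sketch. The Gaussian jitter model and the operating point below are assumptions for illustration, not the paper's stored channel model or shadowing statistics:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 20000
bits = rng.integers(0, 2, size=(N, 2))                # two bits per QPSK symbol
sym = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

ebn0_db, jitter_deg = 8.0, 10.0                       # assumed operating point
es_n0 = 2.0 * 10.0 ** (ebn0_db / 10.0)                # Es/N0 for 2 bits/symbol
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(0.5 / es_n0)
jitter = np.radians(rng.normal(0.0, jitter_deg, N))   # residual carrier phase error
rx = sym * np.exp(1j * jitter) + noise                # rotated, noisy symbols

# Gray-mapped hard decisions: sign of each rail recovers each bit.
bhat = np.column_stack([(rx.real < 0).astype(int), (rx.imag < 0).astype(int)])
ber = np.mean(bhat != bits)
```

Even a modest phase-jitter standard deviation visibly degrades the error rate relative to perfect carrier recovery, which is why the comparison of carrier recovery techniques in the paper matters.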
Robust multiparty quantum secret key sharing over two collective-noise channels
NASA Astrophysics Data System (ADS)
Zhang, Zhan-jun
2006-02-01
Based on a polarization-based quantum key distribution protocol over a collective-noise channel [Phys. Rev. Lett. 92 (2004) 017901], a robust (n,n)-threshold scheme of multiparty quantum secret sharing of a key over two collective-noise channels (i.e., the collective dephasing channel and the collective rotating channel) is proposed. In this scheme the sharers can establish a joint key with the message sender only if all of them collaborate. Since only Bell singlets are required and only single-photon polarization needs to be identified, this scheme is feasible with present-day techniques.
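The (n,n)-threshold property, in which all sharers are needed and any proper subset learns nothing, has a simple classical analogy in XOR secret sharing. This is an analogy only; the quantum scheme relies on Bell singlets and photon polarization, not XOR shares:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """(n, n)-threshold split: any n-1 shares reveal nothing; all n recover the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))   # last share closes the XOR
    return shares

def recover_key(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(16)
shares = split_key(key, 5)
```

Each of the first n-1 shares is uniformly random, so any n-1 of the n shares are jointly uniform and carry no information about the key; only the full collaboration reconstructs it.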
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheat, Robert; Marksteiner, Quinn; Quenzer, Jonathan
2012-03-26
This LabVIEW code is used to set the phases and amplitudes on the 72 antennas of the superluminal machine, and to map out the radiation pattern from the superluminal antenna. Each antenna radiates a modulated signal consisting of two separate frequencies in the range of 2 GHz to 2.8 GHz. The phases and amplitudes from each antenna are controlled by a pair of AD8349 vector modulators (VMs). These VMs set the phase and amplitude of a high frequency signal using a set of four DC inputs, which are controlled by Linear Technology LTC1990 digital-to-analog converters (DACs). The LabVIEW code controls these DACs through an 8051 microcontroller. This code also monitors the phases and amplitudes of the 72 channels. Near each antenna, there is a coupler that channels a portion of the power into a binary network. Through a LabVIEW-controlled switching array, any of the 72 coupled signals can be channeled into the Tektronix TDS 7404 digital oscilloscope. The LabVIEW code then takes an FFT of the signal and compares it to the FFT of a reference signal in the oscilloscope to determine the magnitude and phase of each sideband of the signal. The code compensates for phase and amplitude errors introduced by differences in cable lengths. The LabVIEW code sets each of the 72 elements to a user-determined phase and amplitude. For each element, the code runs an iterative procedure, where it adjusts the DACs until the correct phases and amplitudes have been reached.
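The two computations at the heart of such a control loop, turning a target phase and amplitude into baseband I/Q control levels, and reading a tone's amplitude and phase back from an FFT relative to a reference capture, can be sketched as follows. The signal parameters are illustrative, and the AD8349/LTC1990 voltage scaling is not modeled:

```python
import numpy as np

def iq_for(amplitude, phase_deg):
    """Baseband I/Q control levels that realize a target amplitude and phase."""
    phi = np.radians(phase_deg)
    return amplitude * np.cos(phi), amplitude * np.sin(phi)

def tone_amp_phase(signal, reference, fs, f0):
    """Amplitude of one tone and its phase relative to a reference capture,
    read from a single FFT bin (leakage-free here because the capture holds
    an integer number of tone cycles)."""
    n = len(signal)
    k = int(round(f0 * n / fs))                       # FFT bin of the tone
    s, r = np.fft.fft(signal)[k], np.fft.fft(reference)[k]
    return 2.0 * np.abs(s) / n, np.degrees(np.angle(s / r))

fs, f0, n = 10e9, 2.4e9, 5000                         # assumed sample rate and tone
t = np.arange(n) / fs
ref = np.cos(2.0 * np.pi * f0 * t)                    # reference-channel capture
i_lvl, q_lvl = iq_for(0.5, 60.0)                      # target: amplitude 0.5, +60 deg
tx = i_lvl * np.cos(2.0 * np.pi * f0 * t) - q_lvl * np.sin(2.0 * np.pi * f0 * t)
amp, ph = tone_amp_phase(tx, ref, fs, f0)
```

The identity I·cos(wt) - Q·sin(wt) = A·cos(wt + phi) is what lets four DC levels per modulator pair steer each element's phase and amplitude, and the FFT-bin comparison closes the iterative correction loop.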