Sample records for practical bit rate

  1. Long-distance entanglement-based quantum key distribution experiment using practical detectors.

    PubMed

    Takesue, Hiroki; Harada, Ken-Ichi; Tamaki, Kiyoshi; Fukuda, Hiroshi; Tsuchizawa, Tai; Watanabe, Toshifumi; Yamada, Koji; Itabashi, Sei-Ichi

    2010-08-02

    We report an entanglement-based quantum key distribution experiment that we performed over 100 km of optical fiber using a practical source and detectors. We used a silicon-based photon-pair source that generated high-purity time-bin entangled photons, and high-speed single photon detectors based on InGaAs/InP avalanche photodiodes with the sinusoidal gating technique. To calculate the secure key rate, we employed a security proof that validated the use of practical detectors. As a result, we confirmed the successful generation of sifted keys over 100 km of optical fiber with a key rate of 4.8 bit/s and an error rate of 9.1%, with which we can distill secure keys with a key rate of 0.15 bit/s.
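
    The abstract quotes a sifted key rate of 4.8 bit/s at a 9.1% error rate and a distilled secure key rate of 0.15 bit/s; the paper relies on a dedicated security proof for practical detectors. As a rough, generic illustration only (not the proof used in the paper), the sketch below applies the common asymptotic estimate r = 1 - 2*h(e) for the secure fraction; it deliberately ignores the practical-detector penalties, which is why it lands above the reported 0.15 bit/s.

    ```python
    from math import log2

    def h(p):
        """Binary entropy function."""
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    qber = 0.091        # 9.1% quantum bit error rate reported over 100 km
    sifted_rate = 4.8   # sifted key rate in bit/s reported over 100 km

    # Generic asymptotic estimate for one-way post-processing: r = 1 - 2*h(e).
    secure_fraction = max(0.0, 1.0 - 2.0 * h(qber))
    print(f"secure fraction ~ {secure_fraction:.3f}")
    print(f"secure key rate ~ {secure_fraction * sifted_rate:.2f} bit/s "
          f"(the paper's detector-aware proof gives 0.15 bit/s)")
    ```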

  2. Region-of-interest determination and bit-rate conversion for H.264 video transcoding

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan

    2013-12-01

    This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard, designed to fit the available channel bandwidth for the client when transmitting video bit-streams over communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate through re-quantization according to the proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding at low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.

  3. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, between CSI-based modified JPEG and standard JPEG, for a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm performs better at low bit rates and maintains the same performance at high bit rates.

  4. Conditions for the optical wireless links bit error ratio determination

    NASA Astrophysics Data System (ADS)

    Kvíčala, Radek

    2017-11-01

    To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be characterized by the bit error rate (BER) of the optical beam. The BER expresses the proportion of erroneously received bits. In practice, BER measurement runs into the problem of determining the integration (measuring) time. For measuring and recording BER on an OWL, a bit error ratio tester (BERT) has been developed. A 1 second integration time for 64 kbps radio links is mentioned in the accessible literature. However, this integration time cannot be used here because of the singular character of coherent beam propagation.
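
    The integration-time problem can be made concrete with a back-of-the-envelope calculation: to estimate a BER of order p with reasonable confidence one typically wants to observe on the order of a hundred errors, so the measuring time scales as n_err / (p * R_b). The figures below (the 155 Mbps link and the target BER) are illustrative assumptions, not values from the paper.

    ```python
    def required_integration_time(bit_rate_bps, expected_ber, target_errors=100):
        """Time needed to accumulate `target_errors` bit errors at the expected BER."""
        errors_per_second = bit_rate_bps * expected_ber
        return target_errors / errors_per_second

    # A 64 kbps radio link vs. a hypothetical 155 Mbps optical wireless link, both at BER 1e-6.
    for rate_bps in (64e3, 155e6):
        t = required_integration_time(rate_bps, expected_ber=1e-6)
        print(f"{rate_bps / 1e6:8.3f} Mbps -> integration time ~ {t:,.1f} s")
    ```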

  5. Efficient bit sifting scheme of post-processing in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong

    2015-10-01

    Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has an essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, whose core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme approaches the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means it can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement in the net secure key rate. Meanwhile, some recommendations on applying the proposed scheme to several representative practical QKD systems are also provided.
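
    The Shannon limit mentioned here can be pictured as follows: sifting amounts to telling the other party which time slots produced detections, and because detections are sparse, that indicator sequence has a per-slot entropy h(q) far below 1 bit. A minimal sketch, with an assumed detection probability (not a figure from the paper), compares that limit with what an off-the-shelf lossless coder achieves on a synthetic indicator map.

    ```python
    import random
    import zlib
    from math import log2

    random.seed(1)
    q = 0.02                       # assumed per-slot detection probability (illustrative)
    n_slots = 100_000
    flags = bytes(1 if random.random() < q else 0 for _ in range(n_slots))

    shannon = -q * log2(q) - (1 - q) * log2(1 - q)          # entropy, bit per slot
    deflate_bits = len(zlib.compress(flags, 9)) * 8

    print("raw indicator map : 1.0000 bit/slot")
    print(f"Shannon limit     : {shannon:.4f} bit/slot")
    print(f"DEFLATE (zlib -9) : {deflate_bits / n_slots:.4f} bit/slot")
    ```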

  6. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
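
    The report's "very simple formula" is not reproduced in the abstract; as a generic stand-in, the sketch below uses the standard Gaussian (Q-factor) approximation BER = 0.5*erfc(Q/sqrt(2)) to show the kind of error-function evaluation involved, and how sensitive the BER is to a 0.2 dB change in signal strength when Q is assumed proportional to the signal. All numbers are illustrative, not the report's.

    ```python
    from math import erfc, sqrt

    def ber_from_q(q):
        """Gaussian-approximation bit error rate for a binary threshold decision."""
        return 0.5 * erfc(q / sqrt(2.0))

    # Find the Q-factor needed for BER = 1e-7 (the "one in 10 million" figure).
    q = 4.0
    while ber_from_q(q) > 1e-7:
        q += 0.001
    print(f"Q for BER 1e-7       : {q:.2f}")

    # Assume Q scales linearly with received signal; lower the signal by 0.2 dB.
    q_low = q / 10 ** (0.2 / 20)
    print(f"BER at nominal signal: {ber_from_q(q):.2e}")
    print(f"BER 0.2 dB lower     : {ber_from_q(q_low):.2e}")
    ```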

  7. Low bit-rate image compression via adaptive down-sampling and constrained least squares upconversion.

    PubMed

    Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan

    2009-03-01

    Recently, many researchers have started to challenge a long-standing practice of digital photography, namely oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform downsampling in image space that is nevertheless made adaptive by spatially varying, directional low-pass prefiltering. The resulting downsampled prefiltered image remains a conventional square sample grid and, thus, can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.

  8. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.

  9. Adaptive 84.44-190 Mbit/s phosphor-LED wireless communication utilizing no blue filter at practical transmission distance.

    PubMed

    Yeh, C H; Chow, C W; Chen, H Y; Chen, J; Liu, Y L

    2014-04-21

    We propose and experimentally demonstrate a white-light phosphor-LED visible light communication (VLC) system carrying an adaptive 84.44 to 190 Mbit/s 16-QAM (quadrature-amplitude-modulation) orthogonal-frequency-division-multiplexing (OFDM) signal utilizing a bit-loading method. Here, an optimal analog pre-equalization design is applied at the LED transmitter (Tx) side and no blue filter is used at the receiver (Rx) side. Hence, the ~1 MHz modulation bandwidth of the phosphor-LED can be extended to 30 MHz. In addition, measured bit error rates (BERs) below the 3.8 × 10^-3 forward error correction (FEC) threshold are achieved at the various measured data rates over practical transmission distances of 0.75 to 2 m.
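
    The adaptive 84.44-190 Mbit/s range comes from bit-loading: each OFDM subcarrier is assigned a QAM order according to its measured SNR, so the aggregate rate follows the channel. The sketch below illustrates the idea with an assumed SNR profile, SNR gap, and carrier count; none of these numbers are from the paper, and cyclic-prefix and FEC overheads are ignored.

    ```python
    from math import floor, log2

    def bits_per_subcarrier(snr_db, gap_db=6.0, max_bits=6):
        """Bits supportable at a given SNR: Shannon rate minus an SNR gap, rounded down."""
        snr_linear = 10 ** ((snr_db - gap_db) / 10)
        return max(0, min(max_bits, floor(log2(1 + snr_linear))))

    n_carriers, bandwidth_hz = 64, 30e6                         # assumed values
    snr_profile_db = [28 - 0.3 * k for k in range(n_carriers)]  # assumed roll-off with frequency

    bits = [bits_per_subcarrier(s) for s in snr_profile_db]
    symbol_rate = bandwidth_hz / n_carriers                     # OFDM symbols per second (no CP)
    print("bits per OFDM symbol :", sum(bits))
    print("aggregate rate       : %.1f Mbit/s" % (sum(bits) * symbol_rate / 1e6))
    ```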

  10. 640-Gbit/s fast physical random number generation using a broadband chaotic semiconductor laser

    NASA Astrophysics Data System (ADS)

    Zhang, Limeng; Pan, Biwei; Chen, Guangcan; Guo, Lu; Lu, Dan; Zhao, Lingjuan; Wang, Wei

    2017-04-01

    An ultra-fast physical random number generator is demonstrated utilizing a broadband chaotic source based on a photonic integrated device, together with a simple post-processing method. The compact chaotic source is implemented using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, where a robust chaotic signal with RF frequency coverage above 50 GHz and flatness of ±3.6 dB is generated. By retaining the 4 least significant bits (LSBs) of the 8-bit digitization of the chaotic waveform, random sequences with a bit rate of up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits have passed all fifteen NIST statistical tests (NIST SP800-22), indicating their randomness for practical applications.
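
    The 640 Gbit/s figure is simply the sampling rate multiplied by the number of retained least significant bits. A minimal sketch of the 4-LSB retention step, applied to synthetic 8-bit samples rather than real chaotic-laser data:

    ```python
    import random

    random.seed(0)
    samples = [random.randrange(256) for _ in range(8)]   # stand-in for 8-bit ADC samples

    lsb_keep = 4
    mask = (1 << lsb_keep) - 1
    bit_stream = "".join(format(s & mask, f"0{lsb_keep}b") for s in samples)
    print("8-bit samples:", samples)
    print("4-LSB stream :", bit_stream)

    sample_rate = 160e9                                    # 160 GS/s, as quoted in the abstract
    print("bit rate     : %.0f Gbit/s" % (sample_rate * lsb_keep / 1e9))
    ```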

  11. Quantum and classical noise in practical quantum-cryptography systems based on polarization-entangled photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelletto, S.; Degiovanni, I.P.; Rastello, M.L.

    2003-02-01

    Quantum-cryptography key distribution (QCKD) experiments have been recently reported using polarization-entangled photons. However, in any practical realization, quantum systems suffer from either unwanted or induced interactions with the environment and the quantum measurement system, showing up as quantum and, ultimately, statistical noise. In this paper, we investigate how an ideal polarization entanglement in spontaneous parametric down-conversion (SPDC) suffers quantum noise in its practical implementation as a secure quantum system, yielding errors in the transmitted bit sequence. Since all SPDC-based QCKD schemes rely on the measurement of coincidence to assert the bit transmission between the two parties, we bundle up the overall quantum and statistical noise in an exhaustive model to calculate the accidental coincidences. This model predicts the quantum-bit error rate and the sifted key and allows comparisons between different security criteria of the hitherto proposed QCKD protocols, resulting in an objective assessment of performances and advantages of different systems.

  12. An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TerraTek

    2007-06-30

    A deep drilling research program titled 'An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration' was conducted at TerraTek's Drilling and Completions Laboratory. Drilling tests were run to simulate deep drilling by using high bore pressures and high confining and overburden stresses. The purpose of this testing was to gain insight into practices that would improve rates of penetration and mechanical specific energy while drilling under high pressure conditions. Thirty-seven test series were run utilizing a variety of drilling parameters which allowed analysis of the performance of drill bits and drilling fluids. Five different drill bit types or styles were tested: four-bladed polycrystalline diamond compact (PDC), 7-bladed PDC in regular and long profile, roller-cone, and impregnated. There were three different rock types used to simulate deep formations: Mancos shale, Carthage marble, and Crab Orchard sandstone. The testing also analyzed various drilling fluids and the extent to which they improved drilling. The PDC drill bits provided the best performance overall. The impregnated and tungsten carbide insert roller-cone drill bits performed poorly under the conditions chosen. The cesium formate drilling fluid outperformed all other drilling muds when drilling in the Carthage marble and Mancos shale with PDC drill bits. The oil base drilling fluid with manganese tetroxide weighting material provided the best performance when drilling the Crab Orchard sandstone.

  13. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality; it also supports perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We conclude that DCTune-optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.

  14. Next generation PET data acquisition architectures

    NASA Astrophysics Data System (ADS)

    Jones, W. F.; Reed, J. H.; Everman, J. L.; Young, J. W.; Seese, R. D.

    1997-06-01

    New architectures for higher performance data acquisition in PET are proposed. Improvements are demanded primarily by three areas of advancing PET state of the art. First, larger detector arrays such as the Hammersmith ECAT® EXACT HR++ exceed the addressing capacity of 32-bit coincidence event words. Second, better scintillators (LSO) make depth-of-interaction (DOI) and time-of-flight (TOF) operation more practical. Third, fully optimized single-photon attenuation correction requires higher rates of data collection. New technologies which enable the proposed third-generation Real Time Sorter (RTS III) include: (1) 80 Mbyte/s Fibre Channel RAID disk systems, (2) PowerPC on both VMEbus and PCI Local bus, and (3) quadruple interleaved DRAM controller designs. Data acquisition flexibility is enhanced through a wider 64-bit coincidence event word. PET methodology support includes DOI (6 bits), TOF (6 bits), multiple energy windows (6 bits), 512×512 sinogram indexes (18 bits), and 256 crystal rings (16 bits). Throughput of 10 M events/s is expected for list-mode data collection as well as both on-line and replay histogramming. Fully efficient list-mode storage for each PET application is provided by real-time bit packing of only the active event word bits. Real-time circuits provide DOI rebinning.
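
    The abstract gives the field widths of the proposed 64-bit coincidence event word (DOI 6 bits, TOF 6 bits, energy windows 6 bits, sinogram index 18 bits, crystal rings 16 bits). A sketch of packing and unpacking such a word follows; the field ordering and the treatment of the remaining spare bits are assumptions, only the widths come from the abstract.

    ```python
    # (field name, width in bits); widths from the abstract, ordering assumed.
    FIELDS = [("doi", 6), ("tof", 6), ("energy_window", 6),
              ("sinogram_index", 18), ("crystal_rings", 16)]   # 52 bits used, 12 spare

    def pack_event(**values):
        word, shift = 0, 0
        for name, width in FIELDS:
            v = values.get(name, 0)
            assert 0 <= v < (1 << width), f"{name} out of range"
            word |= v << shift
            shift += width
        return word                                            # fits in one 64-bit event word

    def unpack_event(word):
        fields, shift = {}, 0
        for name, width in FIELDS:
            fields[name] = (word >> shift) & ((1 << width) - 1)
            shift += width
        return fields

    event = pack_event(doi=3, tof=12, energy_window=1, sinogram_index=261887, crystal_rings=97)
    print(hex(event), unpack_event(event))
    ```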

  15. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  16. A Hierarchical Modulation Coherent Communication Scheme for Simultaneous Four-State Continuous-Variable Quantum Key Distribution and Classical Communication

    NASA Astrophysics Data System (ADS)

    Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang

    2018-06-01

    We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for the classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems and extend their functionality in the future.

  17. Counter-Rotating Tandem Motor Drilling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kent Perry

    2009-04-30

    Gas Technology Institute (GTI), in partnership with Dennis Tool Company (DTC), has worked to develop an advanced drill bit system to be used with microhole drilling assemblies. One of the main objectives of this project was to utilize new and existing coiled tubing and slimhole drilling technologies to develop Microhole Technology (MHT) so as to make significant reductions in the cost of E&P down to 5000 feet in wellbores as small as 3.5 inches in diameter. This new technology was developed to work toward the DOE's goal of enabling domestic shallow oil and gas wells to be drilled inexpensively compared to wells drilled utilizing conventional drilling practices. Overall drilling costs can be lowered by drilling a well as quickly as possible. For this reason, a high drilling rate of penetration is always desired. In general, high drilling rates of penetration (ROP) can be achieved by increasing the weight on bit and increasing the rotary speed of the bit. As the weight on bit is increased, the cutting inserts penetrate deeper into the rock, resulting in a deeper depth of cut. As the depth of cut increases, the amount of torque required to turn the bit also increases. The Counter-Rotating Tandem Motor Drilling System (CRTMDS) was planned to achieve a high rate of penetration (ROP) resulting in the reduction of drilling cost. The system includes two counter-rotating cutter systems to reduce or eliminate the reactive torque the drillpipe or coiled tubing must resist. This would allow the application of the maximum weight-on-bit and rotational velocities that a coiled tubing drilling unit is capable of delivering. Several variations of the CRTMDS were designed, manufactured and tested. The original tests failed, leading to design modifications. Two versions of the modified system were tested and showed that the concept is both positive and practical; however, the tests showed that for the system to be robust and durable, the borehole diameter should be substantially larger than that of slim holes. As a result, the research team decided to complete the project, document the tested designs and seek further support for the concept outside of the DOE.

  18. High-speed phosphor-LED wireless communication system utilizing no blue filter

    NASA Astrophysics Data System (ADS)

    Yeh, C. H.; Chow, C. W.; Chen, H. Y.; Chen, J.; Liu, Y. L.; Wu, Y. F.

    2014-09-01

    In this paper, we propose and investigate an adaptive 84.44 to 190 Mb/s phosphor-LED visible light communication (VLC) system at practical transmission distances. We utilize orthogonal-frequency-division-multiplexing quadrature-amplitude-modulation (OFDM-QAM) with a power/bit-loading algorithm in the proposed VLC system. In the experiment, an optimal analog pre-equalization design is also applied at the LED-Tx side and no blue filter is used at the Rx side, extending the modulation bandwidth from 1 MHz to 30 MHz. The corresponding free-space transmission lengths are between 75 cm and 2 m under the various data rates of the proposed VLC system, and measured bit error rates (BERs) below the 3.8 × 10^-3 forward error correction (FEC) limit are obtained at the different transmission lengths and data rates. Finally, we believe the proposed scheme could be an alternative VLC implementation at practical distances, supporting above 100 Mb/s with a commercially available LED and PD (without optical blue filtering) and a compact size.

  19. Practical quantum key distribution protocol without monitoring signal disturbance.

    PubMed

    Sasaki, Toshihiko; Yamamoto, Yoshihisa; Koashi, Masato

    2014-05-22

    Quantum cryptography exploits the fundamental laws of quantum mechanics to provide a secure way to exchange private information. Such an exchange requires a common random bit sequence, called a key, to be shared secretly between the sender and the receiver. The basic idea behind quantum key distribution (QKD) has widely been understood as the property that any attempt to distinguish encoded quantum states causes a disturbance in the signal. As a result, implementation of a QKD protocol involves an estimation of the experimental parameters influenced by the eavesdropper's intervention, which is achieved by randomly sampling the signal. If the estimation of many parameters with high precision is required, the portion of the signal that is sacrificed increases, thus decreasing the efficiency of the protocol. Here we propose a QKD protocol based on an entirely different principle. The sender encodes a bit sequence onto non-orthogonal quantum states and the receiver randomly dictates how a single bit should be calculated from the sequence. The eavesdropper, who is unable to learn the whole of the sequence, cannot guess the bit value correctly. An achievable rate of secure key distribution is calculated by considering complementary choices between quantum measurements of two conjugate observables. We found that a practical implementation using a laser pulse train achieves a key rate comparable to a decoy-state QKD protocol, an often-used technique for lasers. It also has a better tolerance of bit errors and of finite-sized-key effects. We anticipate that this finding will give new insight into how the probabilistic nature of quantum mechanics can be related to secure communication, and will facilitate the simple and efficient use of conventional lasers for QKD.

  20. A comparison of orthogonal transformations for digital speech processing.

    NASA Technical Reports Server (NTRS)

    Campanella, S. J.; Robinson, G. S.

    1971-01-01

    Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals. To rate their effectiveness in accomplishing this goal, the quantizing error (or noise) resulting from each transformation method at various bit rates is computed and compared with that for conventional companded PCM processing. Based on this comparison, it is found that the Karhunen-Loeve transform provides a reduction in bit rate of 13.5 kbits/s, the Fourier transform 10 kbits/s, and the Hadamard transform 7.5 kbits/s, compared with the bit rate required for companded PCM. These bit-rate reductions are shown to be somewhat independent of the transmission bit rate.

  1. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis]

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.
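
    The data detector described here is an integrate-and-dump filter matched to Manchester-encoded bits. A minimal sketch of that matched detection on noisy baseband samples; the oversampling factor, noise level, and Manchester polarity convention are assumptions, and the real synchronizer of course also tracks bit timing, which this toy omits.

    ```python
    import random

    random.seed(2)
    OSR = 8                                    # samples per bit (assumed)

    def manchester(bit):
        """Manchester encoding: a '1' is high-then-low, a '0' is low-then-high (one convention)."""
        half = OSR // 2
        return ([1.0] * half + [-1.0] * half) if bit else ([-1.0] * half + [1.0] * half)

    tx_bits = [random.randrange(2) for _ in range(16)]
    samples = [s + random.gauss(0.0, 0.8) for b in tx_bits for s in manchester(b)]

    # Integrate-and-dump matched filter: correlate each bit interval with the '1' template.
    template = manchester(1)
    rx_bits = []
    for i in range(0, len(samples), OSR):
        corr = sum(x * t for x, t in zip(samples[i:i + OSR], template))
        rx_bits.append(1 if corr > 0 else 0)

    print("sent   :", tx_bits)
    print("decoded:", rx_bits)
    print("errors :", sum(a != b for a, b in zip(tx_bits, rx_bits)))
    ```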

  2. 100 km differential phase shift quantum key distribution experiment with low jitter up-conversion detectors

    NASA Astrophysics Data System (ADS)

    Diamanti, Eleni; Takesue, Hiroki; Langrock, Carsten; Fejer, M. M.; Yamamoto, Yoshihisa

    2006-12-01

    We present a quantum key distribution experiment in which keys that were secure against all individual eavesdropping attacks allowed by quantum mechanics were distributed over 100 km of optical fiber. We implemented the differential phase shift quantum key distribution protocol and used low timing jitter 1.55 µm single-photon detectors based on frequency up-conversion in periodically poled lithium niobate waveguides and silicon avalanche photodiodes. Based on the security analysis of the protocol against general individual attacks, we generated secure keys at a practical rate of 166 bit/s over 100 km of fiber. The use of the low jitter detectors also increased the sifted key generation rate to 2 Mbit/s over 10 km of fiber.

  3. BitCoin meets Google Trends and Wikipedia: Quantifying the relationship between phenomena of the Internet era

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2013-12-01

    Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events concerning the most popular of the digital currencies - BitCoin - have raised crucial questions about the behavior of its exchange rates, and they offer a field to study the dynamics of a market which consists practically only of speculative traders, with no fundamentalists, as there is no fundamental value to the currency. In this paper, we connect two phenomena of the latest years - digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia - and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value.

  4. BitCoin meets Google Trends and Wikipedia: quantifying the relationship between phenomena of the Internet era.

    PubMed

    Kristoufek, Ladislav

    2013-12-04

    Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events concerning the most popular of the digital currencies--BitCoin--have raised crucial questions about the behavior of its exchange rates, and they offer a field to study the dynamics of a market which consists practically only of speculative traders, with no fundamentalists, as there is no fundamental value to the currency. In this paper, we connect two phenomena of the latest years--digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia--and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value.

  5. FPGA based digital phase-coding quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu

    2015-12-01

    Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach for developing practical QKD systems over fiber channels. In order to improve the phase-coding modulation rate, we proposed a new digital modulation method in this paper and constructed a compact and robust QKD prototype using currently available components in our lab to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual intervention. The quantum bit error rate (QBER) of the system was stable, with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the photon modulation rate of the demo system was only 200 MHz, limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.

  6. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block-coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with additive white Gaussian noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of log2(8) × (8/9) = 2.67 information bits/s/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
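
    The quoted spectral efficiency is just the 8PSK modulation order times the ensemble code rate. A small sketch of that arithmetic, plus one (assumed, not from the article) way the 8/9 ensemble rate could be split so that the least reliable 8PSK bit position gets the strongest component code:

    ```python
    from math import log2

    mod_bits = log2(8)                 # 8PSK carries 3 bits per symbol
    ensemble_rate = 8 / 9
    print("spectral efficiency = %.2f information bits/s/Hz" % (mod_bits * ensemble_rate))

    # Assumed split of component-code rates across the three 8PSK bit positions,
    # chosen so that the average is still 8/9 (stronger coding on less reliable bits).
    component_rates = {"most reliable bit": 18 / 18,
                       "middle bit":        17 / 18,
                       "least reliable bit": 13 / 18}
    average_rate = sum(component_rates.values()) / len(component_rates)
    print("ensemble rate of this split = %.3f" % average_rate)
    ```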

  7. A Parametric Study for the Design of an Optimized Ultrasonic Percussive Planetary Drill Tool.

    PubMed

    Li, Xuan; Harkness, Patrick; Worrall, Kevin; Timoney, Ryan; Lucas, Margaret

    2017-03-01

    Traditional rotary drilling for planetary rock sampling, in situ analysis, and sample return is challenging because the axial force and holding torque requirements are not necessarily compatible with lightweight spacecraft architectures in low-gravity environments. This paper seeks to optimize an ultrasonic percussive drill tool to achieve rock penetration with lower reacted force requirements, with a strategic view toward building an ultrasonic planetary core drill (UPCD) device. The UPCD is a descendant of the ultrasonic/sonic driller/corer technique. In these concepts, a transducer and horn (typically resonant at around 20 kHz) are used to excite a toroidal free mass that oscillates chaotically between the horn tip and drill base at lower frequencies (generally between 10 Hz and 1 kHz). This creates a series of stress pulses that are transferred through the drill bit to the rock surface; when the stress at the drill-bit tip/rock interface exceeds the compressive strength of the rock, it causes fractures that result in fragmentation of the rock. This facilitates augering and downward progress. In order to ensure that the drill-bit tip delivers the greatest effective impulse (the time integral of the drill-bit tip/rock pressure curve exceeding the strength of the rock), parameters such as the spring rates and the masses of the free mass, the drill bit, and the transducer have been varied and compared in both computer simulation and practical experiment. The most interesting findings, and those of particular relevance to deep drilling, indicate that increasing the mass of the drill bit has a limited (or even positive) influence on the rate of effective impulse delivered.

  8. Link performance optimization for digital satellite broadcasting systems

    NASA Astrophysics Data System (ADS)

    de Gaudenzi, R.; Elia, C.; Viola, R.

    The authors introduce the concept of digital direct satellite broadcasting (D-DBS), which allows unprecedented flexibility by providing a large number of audiovisual services. The concept assumes an information rate of 40 Mb/s, which is compatible with practically all present-day transponders. After discussion of the general system concept, the results of transmission system optimization are presented. Channel and interference effects are taken into account. Numerical results show that the scheme with the best performance is trellis-coded 8-PSK (phase shift keying) modulation concatenated with a Reed-Solomon block code. For a net data rate of 40 Mb/s, a bit error rate of 10^-10 can be achieved with an equivalent bit-energy-to-noise-density ratio of 9.5 dB, including channel, interference, and demodulator impairments. A link budget analysis shows how a medium-power direct-to-home TV satellite can provide multimedia services to users equipped with small (60-cm) dish antennas.
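
    A hedged sketch of how the quoted 9.5 dB bit-energy-to-noise-density figure translates into the carrier-to-noise requirement for a 40 Mb/s link; the transponder bandwidths tried below are typical values assumed for illustration, not taken from the article.

    ```python
    from math import log10

    eb_n0_db = 9.5            # required Eb/N0 including impairments (from the abstract)
    bit_rate_bps = 40e6       # 40 Mb/s net information rate

    # C/N0 [dBHz] = Eb/N0 [dB] + 10*log10(bit rate)
    c_n0_dbhz = eb_n0_db + 10 * log10(bit_rate_bps)
    print(f"required C/N0 ~ {c_n0_dbhz:.1f} dBHz")

    # Corresponding C/N for a few assumed transponder noise bandwidths.
    for bandwidth_hz in (27e6, 33e6, 36e6):
        c_n_db = c_n0_dbhz - 10 * log10(bandwidth_hz)
        print(f"  B = {bandwidth_hz / 1e6:.0f} MHz -> C/N ~ {c_n_db:.1f} dB")
    ```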

  9. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.

  10. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.
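
    The patent text describes prestoring mask bits in each 32-bit location so that concatenating them with a 14-bit sample yields a valid 32-bit floating-point word. The sketch below shows one way such a mask could be constructed (sign 0, exponent 127, sample placed in the top of the mantissa); the exact bit layout used in the patent is not given in the abstract, so this arrangement is an assumption.

    ```python
    import struct

    # Assumed mask: sign = 0, exponent = 127 (value in [1, 2)), upper mantissa bits = 0.
    MASK = 127 << 23

    def to_float_word(sample14):
        """Concatenate the prestored mask bits with a 14-bit sample (assumed layout)."""
        assert 0 <= sample14 < (1 << 14)
        word = MASK | (sample14 << (23 - 14))      # sample in the top 14 mantissa bits
        return struct.unpack(">f", struct.pack(">I", word))[0]

    for s in (0, 1, 8191, 16383):
        f = to_float_word(s)
        recovered = round((f - 1.0) * (1 << 14))
        print(f"{s:5d} -> {f:.6f}  (recovered: {recovered})")
    ```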

  11. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
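
    Two small calculations help make the tier-1/rate-control background concrete: the pass count for M bit planes is 3M - 2, and rate control amounts to including coding passes in order of distortion reduction per bit until the bit budget is exhausted. The per-pass numbers below are synthetic, for illustration only.

    ```python
    def num_coding_passes(bit_planes):
        """Tier-1 coding: 3 passes per bit plane, except the MSB plane with only 1."""
        return 3 * bit_planes - 2

    print([(m, num_coding_passes(m)) for m in (1, 4, 8)])

    # Simplified rate control: greedily include passes by distortion reduction per bit.
    # (delta_distortion, delta_bits) for successive passes -- synthetic numbers.
    passes = [(900, 300), (500, 280), (260, 250), (120, 240), (50, 230)]
    bit_budget, bits_used, distortion_removed = 800, 0, 0
    for d_dist, d_bits in passes:                 # already ordered by d_dist / d_bits
        if bits_used + d_bits > bit_budget:
            break
        bits_used += d_bits
        distortion_removed += d_dist
    print(f"bits used {bits_used}/{bit_budget}, distortion removed {distortion_removed}")
    ```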

  12. BitCoin meets Google Trends and Wikipedia: Quantifying the relationship between phenomena of the Internet era

    PubMed Central

    Kristoufek, Ladislav

    2013-01-01

    Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events concerning the most popular of the digital currencies – BitCoin – have raised crucial questions about the behavior of its exchange rates, and they offer a field to study the dynamics of a market which consists practically only of speculative traders, with no fundamentalists, as there is no fundamental value to the currency. In this paper, we connect two phenomena of the latest years – digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia – and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value. PMID:24301322

  13. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase shift keying.

  14. Performance Analysis of a JTIDS/Link-16-type Waveform Transmitted over Slow, Flat Nakagami Fading Channels in the Presence of Narrowband Interference

    DTIC Science & Technology

    2008-12-01

    The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in ... of 1800 bits per second. With the use of FEC coding, the channel data rate is 2250 bits per second; however, the information data rate is still the ... Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps.

  15. Computer modeling and design analysis of a bit rate discrimination circuit based dual-rate burst mode receiver

    NASA Astrophysics Data System (ADS)

    Kota, Sriharsha; Patel, Jigesh; Ghillino, Enrico; Richards, Dwight

    2011-01-01

    In this paper, we demonstrate a computer model for simulating a dual-rate burst-mode receiver that can readily distinguish bit rates of 1.25 Gbit/s and 10.3 Gbit/s and demodulate data bursts with large power variations of above 5 dB. To our knowledge, this is the first such model to demodulate data bursts of different bit rates without using any external control signal such as a reset signal or a bit-rate select signal. The model is based on a burst-mode bit rate discrimination circuit (B-BDC) and makes use of a unique preamble sequence attached to each burst to separate out the data bursts with different bit rates. Here, the model is implemented using a combination of the optical system simulation suite OptSim and the electrical simulation engine SPICE. The reaction time of the burst-mode receiver model is about 7 ns, which corresponds to less than 8 preamble bits at the bit rate of 1.25 Gbit/s. We believe that an accurate and robust simulation model for high-speed burst-mode transmission in GE-PON systems is indispensable: it greatly speeds up ongoing research in the area, saves much of the time and effort involved in carrying out laboratory experiments, and provides flexibility in optimizing various system parameters for better performance of the receiver as a whole. Furthermore, we also study the effects of burst specifications, such as the length of the preamble sequence, and other receiver design parameters on the reaction time of the receiver.

  16. Bit-rate transparent DPSK demodulation scheme based on injection locking FP-LD

    NASA Astrophysics Data System (ADS)

    Feng, Hanlin; Xiao, Shilin; Yi, Lilin; Zhou, Zhao; Yang, Pei; Shi, Jie

    2013-05-01

    We propose and demonstrate a bit-rate transparent differential phase-shift keying (DPSK) demodulation scheme based on injection locking of a multiple-quantum-well (MQW) strained InGaAsP FP-LD. By utilizing the frequency deviation generated by phase modulation and the unstable injection-locking state of a Fabry-Perot laser diode (FP-LD), DPSK-to-polarization-shift-keying (PolSK) and PolSK-to-intensity-modulation (IM) format conversions are realized. We analyze the bit error rate (BER) performance of this demodulation scheme. Experimental results show that different longitudinal modes, bit rates, and seeding powers influence the demodulation performance. We achieve error-free DPSK signal demodulation at bit rates of 10 Gbit/s, 5 Gbit/s, 2.5 Gbit/s, and 1.25 Gbit/s with the same demodulation setting.

  17. Fast and Flexible Successive-Cancellation List Decoders for Polar Codes

    NASA Astrophysics Data System (ADS)

    Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.

    2017-11-01

    Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.

  18. Efficient and robust quantum random number generation by photon number detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.

    2015-08-17

    We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate that we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99% corresponding to 1.97 bits per detected photon number yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
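
    The 1.97 bits per detected photon number come from the entropy of the photon-number distribution resolved by the detector (0 to 4 photons, with everything above lumped together). The sketch below evaluates that entropy for a Poisson-distributed pulse at a few assumed mean photon numbers; the paper's measured detector response differs from an ideal Poisson model, so these values are only indicative.

    ```python
    from math import exp, factorial, log2

    def truncated_poisson(mean, nmax=4):
        """P(n) for n = 0..nmax-1, with P(n >= nmax) lumped into the last bin."""
        probs = [exp(-mean) * mean ** n / factorial(n) for n in range(nmax)]
        return probs + [1.0 - sum(probs)]

    def entropy_bits(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    for mean_photons in (1.0, 1.5, 2.0, 2.5):
        h = entropy_bits(truncated_poisson(mean_photons))
        print(f"mean photon number {mean_photons:.1f} -> {h:.2f} bits per detection")
    ```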

  19. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

    Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, uses multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also, at least 12-bit analog-to-digital conversion is recommended (with 16-bit storage), resulting in a total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but in the case of online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed an earlier single-channel facial EMG/EOG/EEG-based automatic sleep stage classification. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low-complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, a reduced number of quantization steps and a reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (having a low-pass filter at 90 Hz), and the dynamic range to 244 µV, with an 8-bit resolution, resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enable the use of smaller devices for sleep stage classification in home environments.
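
    The two bit-rate figures follow from channels x sampling rate x sample resolution. One plausible reading of the 12.8 kbit/s baseline is four channels (EEG, two EOG, EMG) at 200 Hz and 16-bit storage, against a single facial channel at 50 Hz and 8 bits; the channel counts used below are inferred from the abstract's description, not stated as such.

    ```python
    def recording_bit_rate(channels, sample_rate_hz, bits_per_sample):
        """Raw bit rate of a multichannel physiological recording, in bit/s."""
        return channels * sample_rate_hz * bits_per_sample

    standard = recording_bit_rate(channels=4, sample_rate_hz=200, bits_per_sample=16)
    reduced  = recording_bit_rate(channels=1, sample_rate_hz=50,  bits_per_sample=8)
    print(f"standard montage     : {standard / 1000:.1f} kbit/s")   # 12.8 kbit/s
    print(f"single facial channel: {reduced / 1000:.1f} kbit/s")    # 0.4 kbit/s
    ```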

  20. New stimulation pattern design to improve P300-based matrix speller performance at high flash rate

    NASA Astrophysics Data System (ADS)

    Polprasert, Chantri; Kukieattikool, Pratana; Demeechai, Tanee; Ritcey, James A.; Siwamogsatham, Siwaruk

    2013-06-01

    Objective. We propose a new stimulation pattern design for the P300-based matrix speller aimed at increasing the minimum target-to-target interval (TTI). Approach. Inspired by the simplicity and strong performance of conventional row-column (RC) stimulation, the proposed stimulation is obtained by modifying the RC stimulation through alternating row and column flashes selected according to the proposed design rules. The second flash of the double-flash components is then delayed by a number of flashing instants to increase the minimum TTI. The trade-off inherent in this approach is the reduced randomness within the stimulation pattern. Main results. We test the proposed stimulation pattern and compare its performance in terms of selection accuracy and raw and practical bit rates with the conventional RC flashing paradigm over several flash rates. By increasing the minimum TTI within the stimulation sequence, the proposed stimulation allows more event-related potentials to be identified than the conventional RC stimulation as the flash rate increases. This leads to significant performance improvement in letter selection accuracy and in raw and practical bit rates over the conventional RC stimulation. Significance. These studies demonstrate that significant performance improvement over the RC stimulation is obtained without additional testing or training samples to compensate for the low P300 amplitude at high flash rates. We show that our proposed stimulation is more robust than the RC stimulation to the reduced signal strength caused by the increased flash rate.
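
    The raw bit rate of a matrix speller is usually computed with the Wolpaw information-transfer-rate formula from the number of selectable targets, the selection accuracy, and the selection time. The sketch below assumes a 6x6 matrix and illustrative accuracy/selection-time pairs; these are not the paper's measured results.

    ```python
    from math import log2

    def wolpaw_bits_per_selection(n_targets, accuracy):
        """Information per selection according to the Wolpaw ITR formula."""
        if accuracy <= 1.0 / n_targets:
            return 0.0
        if accuracy >= 1.0:
            return log2(n_targets)
        return (log2(n_targets)
                + accuracy * log2(accuracy)
                + (1 - accuracy) * log2((1 - accuracy) / (n_targets - 1)))

    n_targets = 36                          # 6 x 6 matrix speller (assumed)
    for accuracy, seconds_per_selection in [(0.80, 10.0), (0.90, 10.0), (0.90, 6.0)]:
        bits = wolpaw_bits_per_selection(n_targets, accuracy)
        rate = bits * 60.0 / seconds_per_selection
        print(f"accuracy {accuracy:.2f}, {seconds_per_selection:4.1f} s/selection -> {rate:5.1f} bit/min")
    ```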

  1. Room temperature single-photon detectors for high bit rate quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comandar, L. C.; Patel, K. A.; Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA

    We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.

  2. Field-Deployable Video Cloud Solution

    DTIC Science & Technology

    2016-03-01

    ... restrictions on distribution. File size is dependent on both bit rate and content length. Bit rate is a value measured in bits per second (bps) and is ...

  3. A Security Proof of Measurement Device Independent Quantum Key Distribution: From the View of Information Theory

    NASA Astrophysics Data System (ADS)

    Li, Fang-Yi; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Wen, Hao; Zhao, Yi-Bo; Han, Zheng-Fu

    2014-07-01

    Although some ideal quantum key distribution protocols have been proved to be secure, there have been demonstrations that practical quantum key distribution implementations were hacked due to real-life imperfections. Among these attacks, detector side channel attacks may be the most serious. Recently, a measurement device independent quantum key distribution protocol [Phys. Rev. Lett. 108 (2012) 130503] was proposed in which all detector side channel attacks are removed. Here, a new security proof based on quantum information theory is given. The eavesdropper's information about the sifted key bits is bounded. Then, with this bound, the final secure key bit rate can be obtained.

  4. A high-speed BCI based on code modulation VEP

    NASA Astrophysics Data System (ADS)

    Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai

    2011-04-01

    Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits/min on five subjects with a maximum ITR of 123 bits/min for a single subject.
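
    ITR figures of this kind are usually computed with the standard Wolpaw formula, which combines the number of targets, the selection accuracy, and the time per selection. The sketch below is a generic helper; the 2.5 s selection time and 95% accuracy in the example are assumed values, not taken from the paper:

      from math import log2

      def wolpaw_itr(n_targets, accuracy, selection_time_s):
          # Information transfer rate in bits per minute (Wolpaw formula);
          # assumes accuracy in (0, 1].
          n, p = n_targets, accuracy
          bits = log2(n)
          if 0.0 < p < 1.0:
              bits += p * log2(p) + (1.0 - p) * log2((1.0 - p) / (n - 1))
          return bits * 60.0 / selection_time_s

      # 32 targets, 95% accuracy, one selection every 2.5 s (assumed timing)
      print(wolpaw_itr(32, 0.95, 2.5))   # roughly 107 bits/min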

  5. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
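
    A minimal sketch of the kind of post-processing described above (periodic sampling, self-differencing, and keeping a few least-significant bits). The 8-bit sample width, the difference delay of one sample, and the use of pseudo-random numbers as stand-in chaos samples are assumptions made purely for illustration:

      import numpy as np

      def random_bits_from_samples(samples, n_lsb=5, diff_delay=1):
          # Self-difference 8-bit samples, then keep the n_lsb least
          # significant bits of each difference (illustrative post-processing).
          s = np.asarray(samples, dtype=np.uint8)
          diff = (s[diff_delay:].astype(np.int16) - s[:-diff_delay]) & 0xFF
          bits = ((diff[:, None] >> np.arange(n_lsb)) & 1).astype(np.uint8)
          return bits.ravel()

      # Stand-in for digitized chaotic-intensity samples (real data would come
      # from the photodetector/ADC chain).
      samples = np.random.default_rng(0).integers(0, 256, size=1000, dtype=np.uint8)
      print(random_bits_from_samples(samples)[:16])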

  6. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.

  7. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to instead transmit digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.

  8. Meteor burst communications for LPI applications

    NASA Astrophysics Data System (ADS)

    Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.

    A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
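
    The core of the FAVR idea is that the bit rate tracks the received power so that the energy per bit Eb = Pr/Rb stays constant. A small sketch under assumed numbers (the power decay constant, target Eb, and rate limits are all illustrative, not values from the paper):

      import math

      def favr_bit_rate(received_power_w, target_eb_joules, r_min=100.0, r_max=1e6):
          # Bit rate that keeps Eb = P_r / R_b at the target value, clamped
          # to the modem's supported range (limits are illustrative).
          rate = received_power_w / target_eb_joules
          return min(max(rate, r_min), r_max)

      # Exponentially decaying meteor-trail power (all numbers illustrative)
      p0, tau, target_eb = 1e-12, 0.2, 1e-17   # watts, seconds, joules
      for t in (0.0, 0.2, 0.4):
          pr = p0 * math.exp(-t / tau)
          print(t, round(favr_bit_rate(pr, target_eb)))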

  9. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results with low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations which confirm the accuracy of our approach.
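
    In its simplest single-user, single-path form, averaging the Gaussian tail over the empirical bit-energy distribution captures the spirit of this computation; the full expression in the paper additionally accounts for multiuser interference and multipath. A sketch with made-up energy samples:

      import math, random

      def q_func(x):
          # Gaussian tail probability Q(x).
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def ber_over_energy_distribution(bit_energies, n0):
          # Average Q(sqrt(2*Eb/N0)) over an empirical bit-energy distribution
          # (single-user, single-path simplification of the general approach).
          return sum(q_func(math.sqrt(2.0 * eb / n0))
                     for eb in bit_energies) / len(bit_energies)

      # Stand-in chaotic bit energies fluctuating around 1 (illustrative)
      random.seed(1)
      energies = [random.uniform(0.5, 1.5) for _ in range(10000)]
      print(ber_over_energy_distribution(energies, n0=0.2))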

  10. Purpose-built PDC bit successfully drills 7-in liner equipment and formation: An integrated solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puennel, J.G.A.; Huppertz, A.; Huizing, J.

    1996-12-31

    Historically, drilling out the 7-in. liner equipment has been a time-consuming operation with a limited success ratio. The success of the operation is highly dependent on the type of drill bit employed. Tungsten carbide mills and mill-tooth rock bits required from 7.5 to 11.5 hours, respectively, to drill the pack-off bushings, landing collar, shoe track and shoe. Rates of penetration dropped dramatically when drilling the float equipment. While conventional PDC bits have drilled the liner equipment successfully (averaging 9.7 hours), severe bit damage invariably prevented them from continuing to drill the formation at cost-effective penetration rates. This paper describes the integrated development and application of an IADC M433 Class PDC bit, which was designed specifically to drill out the 7-in. liner equipment and continue drilling the formation at satisfactory penetration rates. The development was the result of a joint investigation in which the operator and bit/liner manufacturers shared their expertise in solving a drilling problem. The heavy-set bit was developed following drill-off tests conducted to investigate the drillability of the 7-in. liner equipment. Key features of the new bit and its application onshore The Netherlands will be presented and analyzed.

  11. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1977-01-01

    One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
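
    As a sketch of how a one-dimensional ADM of this kind operates (one bit per pixel along a scan line, with the step size adapted from the bit history), the following toy encoder/decoder pair uses an illustrative double/halve step rule rather than the report's exact algorithm:

      def adm_encode(samples, step0=1.0, step_min=0.5, step_max=32.0):
          # 1-D adaptive delta modulator: one bit per sample; the step size
          # doubles on runs of identical bits and halves otherwise.
          bits, recon, step, prev = [], 0.0, step0, None
          for x in samples:
              bit = 1 if x >= recon else 0
              if prev is not None:
                  step = min(step * 2, step_max) if bit == prev else max(step / 2, step_min)
              recon += step if bit else -step
              bits.append(bit)
              prev = bit
          return bits

      def adm_decode(bits, step0=1.0, step_min=0.5, step_max=32.0):
          # The decoder repeats the same step adaptation, driven only by the bits.
          recon, out, step, prev = 0.0, [], step0, None
          for bit in bits:
              if prev is not None:
                  step = min(step * 2, step_max) if bit == prev else max(step / 2, step_min)
              recon += step if bit else -step
              out.append(recon)
              prev = bit
          return out

      ramp = [10.0 * (i % 20) / 20.0 for i in range(60)]   # toy scan line
      print([round(v, 1) for v in adm_decode(adm_encode(ramp))[:10]])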

  12. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    PubMed

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    A chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post processing. Here, we propose using a physical broadband white chaos generated by optical heterodyning of two ECLs as the entropy source to construct high-speed random bit generation (RBG) with minimal post processing. The optical heterodyne chaos not only has a white spectrum without signature of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which means a spectrum efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.

  13. Bit-error rate for free-space adaptive optics laser communications.

    PubMed

    Tyson, Robert K

    2002-04-01

    An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.

  14. Spread-spectrum multiple access using wideband noncoherent MFSK

    NASA Technical Reports Server (NTRS)

    Ha, Tri T.; Pratt, Timothy; Maggenti, Mark A.

    1987-01-01

    Two spread-spectrum multiple access systems which use wideband M-ary frequency shift keying (FSK) (MFSK) as the primary modulation are presented. A bit error rate performance analysis is presented and system throughput is calculated for sample C band and Ku band satellite systems. Sample link analyses are included to illustrate power and adjacent satellite interference considerations in practical multiple access systems.

  15. High-speed receiver based on waveguide germanium photodetector wire-bonded to 90nm SOI CMOS amplifier.

    PubMed

    Pan, Huapu; Assefa, Solomon; Green, William M J; Kuchta, Daniel M; Schow, Clint L; Rylyakov, Alexander V; Lee, Benjamin G; Baks, Christian W; Shank, Steven M; Vlasov, Yurii A

    2012-07-30

    The performance of a receiver based on a CMOS amplifier circuit designed with 90 nm ground rules wire-bonded to a waveguide germanium photodetector is characterized at data rates up to 40 Gbps. Both chips were fabricated through the IBM Silicon CMOS Integrated Nanophotonics process on specialty photonics-enabled SOI wafers. At the data rate of 28 Gbps which is relevant to the new generation of optical interconnects, a sensitivity of -7.3 dBm average optical power is demonstrated with 3.4 pJ/bit power efficiency and 0.6 UI horizontal eye opening at a bit-error rate of 10^-12. The receiver operates error-free (bit-error rate < 10^-12) up to 40 Gbps with optimized power supply settings, demonstrating an energy efficiency of 1.4 pJ/bit and 4 pJ/bit at data rates of 32 Gbps and 40 Gbps, respectively, with an average optical power of -0.8 dBm.

  16. A fast rise-rate, adjustable-mass-bit gas puff valve for energetic pulsed plasma experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loebner, Keith T. K., E-mail: kloebner@stanford.edu; Underwood, Thomas C.; Cappelli, Mark A.

    2015-06-15

    A fast rise-rate, variable mass-bit gas puff valve based on the diamagnetic repulsion principle was designed, built, and experimentally characterized. The ability to hold the pressure rise-rate nearly constant while varying the total overall mass bit was achieved via a movable mechanical restrictor that is accessible while the valve is assembled and pressurized. The rise-rates and mass-bits were measured via piezoelectric pressure transducers for plenum pressures between 10 and 40 psig and restrictor positions of 0.02-1.33 cm from the bottom of the linear restrictor travel. The mass-bits were found to vary linearly with the restrictor position at a given plenum pressure, while rise-rates varied linearly with plenum pressure but exhibited low variation over the range of possible restrictor positions. The ability to change the operating regime of a pulsed coaxial plasma deflagration accelerator by means of altering the valve parameters is demonstrated.

  17. Wear and performance: An experimental study on PDC bits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, O.; Azar, J.J.

    1997-07-01

    Real-time drilling data, gathered under full-scale conditions, was analyzed to determine the influence of cutter dullness on PDC-bit rate of penetration. It was found that while drilling in shale, the cutters' wearflat area was not a controlling factor on rate of penetration; however, when drilling in limestone, wearflat area significantly influenced PDC bit penetration performance. Similarly, the presence of diamond lips on PDC cutters was found to be unimportant while drilling in shale, but it greatly enhanced bit performance when drilling in limestone.

  18. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273

  19. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.

  20. Antiwhirl PDC bits increased penetration rates in Alberta drilling. [Polycrystalline Diamond Compact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrosky, D.; Osmak, G.

    1993-07-05

    The antiwhirl PDC bits and an inhibitive mud system contributed to the quicker drilling of the time-sensitive shales. The hole washouts in the intermediate section were dramatically reduced, resulting in better intermediate casing cement jobs. Also, the use of antirotation PDC-drillable cementing plugs eliminated the need to drill out plugs and float equipment with a steel tooth bit and then trip for the PDC bit. By using an antiwhirl PDC bit, at least one trip was eliminated in the intermediate section. Offset data indicated that two to six conventional bits would have been required to drill the intermediate hole interval. The PDC bit was rebuildable and therefore rerunnable even after being used on five wells. In each instance, the cost of replacing chipped cutters was less than the cost of a new insert roller cone bit. The paper describes the antiwhirl bits; the development of the bits; and their application in a clastic sequence, a carbonate sequence, and the Shekilie oil field; the improvement in the rate of penetration; the selection of bottom hole assemblies; washout problems; and drill-out characteristics.

  1. A Tuned-RF Duty-Cycled Wake-Up Receiver with -90 dBm Sensitivity.

    PubMed

    Bdiri, Sadok; Derbel, Faouzi; Kanoun, Olfa

    2017-12-29

    A novel wake-up receiver for wireless sensor networks is introduced. It operates with a modified medium access protocol (MAC), allowing low energy consumption and practical latency. The ultra-low-power wake-up receiver operates with enhanced duty-cycled listening. The analysis of energy models of the duty-cycle-based communication is presented. All the WuRx blocks are studied to obey the duty-cycle operation. For a mean interval time for the data exchange cycle between a transmitter and a receiver of over 1.7 s and a 64-bit wake-up packet detection latency of 32 ms, the average power consumption of the wake-up receiver (WuRx) reaches down to 3 μW. It also features scalable addressing of more than 512 bits at a data rate of 128 kbit/s. At a wake-up packet error rate of 10^-2, the detection sensitivity reaches a minimum of -90 dBm. The combination of the MAC protocol and the WuRx eases the adoption of different kinds of wireless sensor networks. In low traffic communication, the WuRx dramatically saves more energy than a network implementing conventional duty-cycling. In this work, a prototype was realized to evaluate the intended performance.

  2. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that is just being addressed in recent literature since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity in optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (for each color plane or monochrome 0.16 bpp, CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good quality, reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits/pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.

  3. Traffic management mechanism for intranets with available-bit-rate access to the Internet

    NASA Astrophysics Data System (ADS)

    Hassan, Mahbub; Sirisena, Harsha R.; Atiquzzaman, Mohammed

    1997-10-01

    The design of a traffic management mechanism for intranets connected to the Internet via an available-bit-rate access-link is presented. Selection of control parameters for this mechanism for optimum performance is shown through analysis. An estimate for packet loss probability at the access-gateway is derived for random fluctuation of the available bit rate of the access-link. Some implementation strategies of this mechanism in the standard intranet protocol stack are also suggested.

  4. Design considerations for a monolithic, GaAs, dual-mode, QPSK/QASK, high-throughput rate transceiver. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kot, R. A.; Oliver, J. D.; Wilson, S. G.

    1984-01-01

    A monolithic, GaAs, dual mode, quadrature amplitude shift keying and quadrature phase shift keying transceiver with one and two billion bits per second data rates is being considered to achieve a low power, small and ultra high speed communication system for satellite as well as terrestrial purposes. Recent GaAs integrated circuit achievements are surveyed and their constituent device types are evaluated. Design considerations, on an elemental level, of the entire modem are further included for monolithic realization with practical fabrication techniques. Numerous device types, with practical monolithic compatibility, are used in the design of functional blocks with sufficient performance for realization of the transceiver.

  5. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
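
    If each bit level of the constellation is modeled as an independent binary symmetric channel after hard detection, the bit-wise AIR is simply the sum of 1 - h(p_i) over the levels. This is a simplified view of the quantity analyzed above, shown here with purely illustrative error probabilities:

      from math import log2

      def hb(p):
          # Binary entropy.
          return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

      def bicm_hard_air(bit_error_probs):
          # AIR (bits/symbol) of a bit-wise hard-decision decoder, treating each
          # bit level as an independent binary symmetric channel.
          return sum(1.0 - hb(p) for p in bit_error_probs)

      # Illustrative pre-FEC error probabilities for the four bit levels of 16-QAM
      print(bicm_hard_air([0.01, 0.01, 0.03, 0.03]))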

  6. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.

  7. Validation of the VitaBit Sit–Stand Tracker: Detecting Sitting, Standing, and Activity Patterns

    PubMed Central

    Plasqui, Guy

    2018-01-01

    Sedentary behavior (SB) has detrimental consequences and cannot be compensated for through moderate-to-vigorous physical activity (PA). In order to understand and mitigate SB, tools for measuring and monitoring SB are essential. While current direct-to-customer wearables focus on PA, the VitaBit validated in this study was developed to focus on SB. It was tested in a laboratory and in a free-living condition, comparing it to direct observation and to a current best-practice device, the ActiGraph, on a minute-by-minute basis. In the laboratory, the VitaBit yielded specificity and negative predictive rates (NPR) of above 91.2% for sitting and standing, while sensitivity and precision ranged from 74.6% to 85.7%. For walking, all performance values exceeded 97.3%. In the free-living condition, the device revealed performance of over 72.6% for sitting with the ActiGraph as criterion. While sensitivity and precision for standing and walking ranged from 48.2% to 68.7%, specificity and NPR exceeded 83.9%. According to the laboratory findings, high performance for sitting, standing, and walking makes the VitaBit eligible for SB monitoring. As the results are not transferrable to daily life activities, a direct observation study in a free-living setting is recommended. PMID:29543766

  8. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

    During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROP's) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  9. PDC bits: What's needed to meet tomorrow's challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, T.M.; Sinor, L.A.

    1994-12-31

    When polycrystalline diamond compact (PDC) bits were introduced in the mid-1970s they showed tantalizingly high penetration rates in laboratory drilling tests. Single cutter tests indicated that they had the potential to drill very hard rocks. Unfortunately, 20 years later we're still striving to reach the potential that these bits seem to have. Many problems have been overcome, and PDC bits have offered capabilities not possible with roller cone bits. PDC bits provide the most economical bit choice in many areas, but their limited durability has hampered their application in many other areas.

  10. Efficient and universal quantum key distribution based on chaos and middleware

    NASA Astrophysics Data System (ADS)

    Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun

    2017-01-01

    Quantum key distribution (QKD) promises unconditionally secure communications; however, the low bit rate of QKD cannot meet the requirements of high-speed applications. Despite the many solutions that have been proposed in recent years, they are neither efficient at generating secret keys nor compatible with other QKD systems. This paper, based on chaotic cryptography and middleware technology, proposes an efficient and universal QKD protocol that can be directly deployed on top of any existing QKD system without modifying the underlying QKD protocol and optical platform. It initially takes the bit string generated by the QKD system as input, periodically updates the chaotic system, and efficiently outputs the bit sequences. Theoretical analysis and simulation results demonstrate that our protocol can efficiently increase the bit rate of the QKD system as well as securely generate bit sequences with perfect statistical properties. Compared with the existing methods, our protocol is more efficient and universal; it can be rapidly deployed on the QKD system to increase the bit rate when the QKD system becomes the bottleneck of its communication system.
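
    Purely as an illustration of the general idea of seeding a chaotic system with QKD output and emitting an expanded bit sequence, a toy logistic-map expander is sketched below. This is not the paper's protocol, and a simple map like this carries no security claim of its own:

      def expand_key(seed_bits, n_out, r=3.99):
          # Toy expansion: map the seed bits into an initial state of a logistic
          # map, iterate, and emit one bit per iteration by thresholding.
          seed_int = int("".join(str(b) for b in seed_bits), 2)
          x = (seed_int + 1) / (2 ** len(seed_bits) + 2)   # initial state in (0, 1)
          out = []
          for _ in range(n_out):
              x = r * x * (1.0 - x)                        # logistic-map update
              out.append(1 if x >= 0.5 else 0)
          return out

      print(expand_key([1, 0, 1, 1, 0, 0, 1, 0], 32))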

  11. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2005-09-30

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of the report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  12. Performance of a web-based, realtime, tele-ultrasound consultation system over high-speed commercial telecommunication lines.

    PubMed

    Yoo, Sun K; Kim, D K; Jung, S M; Kim, E-K; Lim, J S; Kim, J H

    2004-01-01

    A Web-based, realtime, tele-ultrasound consultation system was designed. The system employed ActiveX control, MPEG-4 coding of full-resolution ultrasound video (640 x 480 pixels at 30 frames/s) and H.320 videoconferencing. It could be used via a Web browser. The system was evaluated over three types of commercial line: a cable connection, ADSL and VDSL. Three radiologists assessed the quality of compressed and uncompressed ultrasound video-sequences from 16 cases (10 abnormal livers, four abnormal kidneys and two abnormal gallbladders). The radiologists' scores showed that, at a given frame rate, increasing the bit rate was associated with increasing quality; however, at a certain threshold bit rate the quality did not increase significantly. The peak signal to noise ratio (PSNR) was also measured between the compressed and uncompressed images. In most cases, the PSNR increased as the bit rate increased, and increased as the number of dropped frames increased. There was a threshold bit rate, at a given frame rate, at which the PSNR did not improve significantly. Taking into account both sets of threshold values, a bit rate of more than 0.6 Mbit/s, at 30 frames/s, is suggested as the threshold for the maintenance of diagnostic image quality.

  13. Ka-Band, Multi-Gigabit-Per-Second Transceiver

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Wintucky, Edwin G.; Smith, Francis J.; Harris, Johnny M.; Landon, David G.; Haddadin, Osama S.; McIntire, William K.; Sun, June Y.

    2011-01-01

    A document discusses a multi-Gigabit-per-second, Ka-band transceiver with a software-defined modem (SDM) capable of digitally encoding/decoding data and compensating for linear and nonlinear distortions in the end-to-end system, including the traveling-wave tube amplifier (TWTA). This innovation can increase data rates of space-to-ground communication links, and has potential application to NASA's future space-based Earth observation system. The SDM incorporates an extended version of the industry-standard DVB-S2 and an LDPC rate 9/10 FEC codec. The SDM supports a suite of waveforms, including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The Ka-band TWTA delivers an output power on the order of 200 W with efficiency greater than 60%, and a passband of at least 3 GHz. The modem and the TWTA together enable a data rate of 20 Gbps with a low bit error rate (BER). The payload data rates for spacecraft in NASA's integrated space communications network can be increased by an order of magnitude (>10×) over the current state of practice. This innovation enhances the data rate by using bandwidth-efficient modulation techniques, which transmit a higher number of bits per Hertz of bandwidth than the currently used quadrature phase shift keying (QPSK) waveforms.

  14. Ultra fast quantum key distribution over a 97 km installed telecom fiber with wavelength division multiplexing clock synchronization.

    PubMed

    Tanaka, Akihiro; Fujiwara, Mikio; Nam, Sae W; Nambu, Yoshihiro; Takahashi, Seigo; Maeda, Wakako; Yoshino, Ken-ichiro; Miki, Shigehito; Baek, Burm; Wang, Zhen; Tajima, Akio; Sasaki, Masahide; Tomita, Akihisa

    2008-07-21

    We demonstrated ultra fast BB84 quantum key distribution (QKD) transmission at 625 MHz clock rate through a 97 km field-installed fiber using practical clock synchronization based on wavelength-division multiplexing (WDM). We succeeded in over-one-hour stable key generation at a high sifted key rate of 2.4 kbps and a low quantum bit error rate (QBER) of 2.9%. The asymptotic secure key rate was estimated to be 0.78-0.82 kbps from the transmission data with the decoy method of average photon numbers 0, 0.15, and 0.4 photons/pulse.

  15. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    "Smart" BIT (reference: RADC-TR-85-198) is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory.

  16. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
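
    The equal-slope (Lagrangian) principle behind rate-distortion optimal allocation can be illustrated with the classic high-rate model D_i(R_i) = var_i * 2^(-2*R_i), under which the optimal per-slice rates have a closed form. This is only a schematic stand-in for the paper's algorithms; the variances and total budget below are made up:

      import math

      def allocate_bits(variances, total_bits):
          # Closed-form allocation for D_i(R_i) = var_i * 2**(-2*R_i): every slice
          # ends up at the same distortion-rate slope (negative rates would be
          # clipped to zero in a practical implementation).
          n = len(variances)
          avg = total_bits / n
          log_gm = sum(math.log2(v) for v in variances) / n   # log geometric mean
          return [avg + 0.5 * (math.log2(v) - log_gm) for v in variances]

      # Three slices with different activity, 6 bits/sample total (illustrative)
      print(allocate_bits([4.0, 1.0, 0.25], 6.0))   # -> [3.0, 2.0, 1.0]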

  17. Modulation and synchronization technique for MF-TDMA system

    NASA Technical Reports Server (NTRS)

    Faris, Faris; Inukai, Thomas; Sayegh, Soheil

    1994-01-01

    This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.

  18. Four channel Laser Firing Unit using laser diodes

    NASA Technical Reports Server (NTRS)

    Rosner, David, Sr.; Spomer, Edwin, Sr.

    1994-01-01

    This paper describes the accomplishments and status of PS/EDD's (Pacific Scientific/Energy Dynamics Division) internal research and development effort to prototype and demonstrate a practical four channel laser firing unit (LFU) that uses laser diodes to initiate pyrotechnic events. The LFU individually initiates four ordnance devices using the energy from four diode lasers carried over the fiber optics. The LFU demonstrates end-to-end optical built-in test (BIT) capabilities. Both Single Fiber Reflective BIT and Dual Fiber Reflective BIT approaches are discussed and reflection loss data are presented. This paper includes detailed discussions of the advantages and disadvantages of both BIT approaches, all-fire and no-fire levels, and BIT detection levels. The following topics are also addressed: electronic control and BIT circuits, fiber optic sizing and distribution, and an electromechanical shutter type safe/arm device. This paper shows the viability of laser diode initiation systems and single fiber BIT for typical military applications.

  19. Using game theory for perceptual tuned rate control algorithm in video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual property. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve accurate bit rates with good perceptual quality, and to maintain a stable buffer level.

  20. Coherent detection and digital signal processing for fiber optic communications

    NASA Astrophysics Data System (ADS)

    Ip, Ezra

    The drive towards higher spectral efficiency in optical fiber systems has generated renewed interest in coherent detection. We review different detection methods, including noncoherent, differentially coherent, and coherent detection, as well as hybrid detection methods. We compare the modulation methods that are enabled and their respective performances in a linear regime. An important system parameter is the number of degrees of freedom (DOF) utilized in transmission. Polarization-multiplexed quadrature-amplitude modulation maximizes spectral efficiency and power efficiency as it uses all four available DOF contained in the two field quadratures in the two polarizations. Dual-polarization homodyne or heterodyne downconversion are linear processes that can fully recover the received signal field in these four DOF. When downconverted signals are sampled at the Nyquist rate, compensation of transmission impairments can be performed using digital signal processing (DSP). Software-based receivers benefit from the robustness of DSP, flexibility in design, and ease of adaptation to time-varying channels. Linear impairments, including chromatic dispersion (CD) and polarization-mode dispersion (PMD), can be compensated quasi-exactly using finite impulse response filters. In practical systems, sampling the received signal at 3/2 times the symbol rate is sufficient to enable an arbitrary amount of CD and PMD to be compensated, given a sufficiently long equalizer whose tap length scales linearly with transmission distance. Depending on the transmitted constellation and the target bit error rate, the analog-to-digital converter (ADC) should have around 5 to 6 bits of resolution. Digital coherent receivers are naturally suited for the implementation of feedforward carrier recovery, which has better linewidth tolerance than phase-locked loops, and does not suffer from feedback delay constraints. Differential bit encoding can be used to prevent catastrophic receiver failure due to cycle slips. In systems where nonlinear effects are concentrated mostly at fiber locations with small accumulated dispersion, nonlinear phase de-rotation is a low-complexity algorithm that can partially mitigate nonlinear effects. For systems with arbitrary dispersion maps, however, backpropagation is the only universal technique that can jointly compensate dispersion and fiber nonlinearity. Backpropagation requires solving the nonlinear Schrodinger equation at the receiver, and has high computational cost. Backpropagation is most effective when dispersion compensation fibers are removed, and when signal processing is performed at three times oversampling. Backpropagation can improve system performance and increase transmission distance. With anticipated advances in analog-to-digital converters and integrated circuit technology, DSP-based coherent receivers at bit rates up to 100 Gb/s should become practical in the near future.
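
    As one concrete example of the feedforward carrier recovery mentioned above, a block-averaged Viterbi-Viterbi fourth-power estimator for QPSK can be sketched as follows. The block length, noise level, and phase-drift rate are illustrative choices, not values from the thesis:

      import numpy as np

      def viterbi_viterbi_cpe(symbols, block_len=64):
          # Feedforward carrier phase estimation for QPSK: the 4th power strips
          # the modulation, block averaging reduces noise, and consecutive block
          # estimates are unwrapped to limit cycle slips.
          out = np.empty_like(symbols)
          prev = 0.0
          for start in range(0, len(symbols), block_len):
              blk = symbols[start:start + block_len]
              # minus sign accounts for the pi/4 offset of the QPSK constellation
              phase = np.angle(np.sum(-(blk ** 4))) / 4.0
              phase += np.round((prev - phase) / (np.pi / 2.0)) * np.pi / 2.0
              out[start:start + block_len] = blk * np.exp(-1j * phase)
              prev = phase
          return out

      # Toy QPSK stream with slow carrier drift and additive noise
      rng = np.random.default_rng(3)
      tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
      drift = np.exp(1j * 1e-3 * np.arange(4096))
      noise = 0.05 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
      print(np.round(viterbi_viterbi_cpe(tx * drift + noise)[:4], 2))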

  1. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix is derived using visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
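
    The quantize/dequantize step that such a quantization matrix controls can be sketched as below; the matrix used here is an arbitrary ramp for illustration, not the patented perceptually optimized matrix:

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, qmatrix):
          # Forward 2-D DCT of an 8x8 block, then divide each coefficient by the
          # matching matrix entry and round (larger entry -> coarser step).
          return np.round(dctn(block, norm="ortho") / qmatrix).astype(np.int32)

      def dequantize_block(qcoeffs, qmatrix):
          return idctn(qcoeffs * qmatrix, norm="ortho")

      # Illustrative matrix: quantization gets coarser toward high frequencies
      q = 4.0 + 2.0 * np.add.outer(np.arange(8), np.arange(8))
      block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
      err = block - dequantize_block(quantize_block(block, q), q)
      print(round(float(np.abs(err).mean()), 2))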

  2. Multiple speed expandable bit synchronizer

    NASA Technical Reports Server (NTRS)

    Bundinger, J. M.

    1979-01-01

    A multiple speed bit synchronizer was designed for installation in an inertial navigation system data decoder to extract non-return-to-zero level data and clock signal from biphase level data. The circuit automatically senses one of four pre-determined biphase data rates and synchronizes the proper clock rate to the data. Through a simple expansion of the basic design, synchronization of more than four binarily related data rates can be accomplished. The design provides an easily adaptable, low cost, low power alternative to external bit synchronizers with additional savings in size and weight.

  3. Design, Implementation, and Operational Methodologies for Sub-arcsecond Attitude Determination, Control, and Stabilization of the Super-pressure Balloon-Borne Imaging Telescope (SuperBIT)

    NASA Astrophysics Data System (ADS)

    Javier Romualdez, Luis

    Scientific balloon-borne instrumentation offers an attractive, competitive, and effective alternative to space-borne missions when considering the overall scope, cost, and development timescale required to design and launch scientific instruments. In particular, the balloon-borne environment provides a near-space regime that is suitable for a number of modern astronomical and cosmological experiments, where the atmospheric interference suffered by ground-based instrumentation is negligible at stratospheric altitudes. This work is centered around the analytical strategies and implementation considerations for the attitude determination and control of SuperBIT, a scientific balloon-borne payload capable of meeting the strict sub-arcsecond pointing and image stability requirements demanded by modern cosmological experiments. Broadly speaking, the designed stability specifications of SuperBIT coupled with its observational efficiency, image quality, and accessibility rivals state-of-the-art astronomical observatories such as the Hubble Space Telescope. To this end, this work presents an end-to-end design methodology for precision pointing balloon-borne payloads such as SuperBIT within an analytical yet implementationally grounded context. Simulation models of SuperBIT are analytically derived to aid in pre-assembly trade-off and case studies that are pertinent to the dynamic balloon-borne environment. From these results, state estimation techniques and control methodologies are extensively developed, leveraging the analytical framework of simulation models and design studies. This pre-assembly design phase is physically validated during assembly, integration, and testing through implementation in real-time hardware and software, which bridges the gap between analytical results and practical application. SuperBIT attitude determination and control is demonstrated throughout two engineering test flights that verify pointing and image stability requirements in flight, where the post-flight results close the overall design loop by suggesting practical improvements to pre-design methodologies. Overall, the analytical and practical results presented in this work, though centered around the SuperBIT project, provide generically useful and implementationally viable methodologies for high precision balloon-borne instrumentation, all of which are validated, justified, and improved both theoretically and practically. As such, the continuing development of SuperBIT, built from the work presented in this thesis, strives to further the potential for scientific balloon-borne astronomy in the near future.

  4. Practical scheme for error control using feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  5. Finite-key analysis for the 1-decoy state QKD protocol

    NASA Astrophysics Data System (ADS)

    Rusca, Davide; Boaron, Alberto; Grünenfelder, Fadri; Martin, Anthony; Zbinden, Hugo

    2018-04-01

    It has been shown that in the asymptotic case of infinite key length, the 2-decoy state Quantum Key Distribution (QKD) protocol outperforms the 1-decoy state protocol. Here, we present a finite-key analysis of the 1-decoy method. Interestingly, we find that for practical block sizes of up to 10^8 bits, the 1-decoy protocol achieves higher secret key rates than the 2-decoy protocol for almost all experimental settings. Since using only one decoy is also easier to implement, we conclude that it is the best choice for QKD in most common practical scenarios.

  6. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    PubMed

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of quality smoothness and bit-rate smoothness during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Qp) clip scheme is proposed to optimize the quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Qp clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Qp clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
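
    A minimal sketch of the clipping idea: constrain the rate-control Qp to a window around the previous frame's Qp, and widen the window when the frame complexity ratio indicates a large change. The window sizes and thresholds below are assumptions for illustration, not the paper's derived values:

      def clip_qp(qp_rc, qp_prev, complexity_ratio, base_window=2, wide_window=4):
          # Constrain the rate-control Qp to a window around the previous frame's
          # Qp; allow a wider window when the complexity ratio signals a big change.
          window = wide_window if not (0.67 <= complexity_ratio <= 1.5) else base_window
          return max(qp_prev - window, min(qp_prev + window, qp_rc))

      # Rate control requests Qp 34 after a frame coded at Qp 28 with only a
      # modest complexity change, so the clip limits the jump.
      print(clip_qp(34, 28, complexity_ratio=1.1))   # -> 30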

  7. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.

  8. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
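
    One of the listed methods, the pseudo-random cover sequence (bit scrambler), simply XORs the data with a maximal-length LFSR sequence so that long runs without transitions cannot occur; descrambling is the same XOR. The polynomial below (x^7 + x^6 + 1) is an illustrative choice, not necessarily the one selected for the shuttle link:

      def lfsr_sequence(length, taps=(7, 6), nbits=7, state=0x7F):
          # Fibonacci LFSR with polynomial x^7 + x^6 + 1 (illustrative choice),
          # producing a maximal-length 0/1 cover sequence.
          out = []
          for _ in range(length):
              out.append(state & 1)
              fb = 0
              for t in taps:
                  fb ^= (state >> (t - 1)) & 1
              state = (state >> 1) | (fb << (nbits - 1))
          return out

      def scramble(data_bits, cover):
          # XOR with the cover sequence; applying it twice restores the data.
          return [d ^ c for d, c in zip(data_bits, cover)]

      data = [0] * 16                              # long run with no transitions
      cover = lfsr_sequence(len(data))
      print(scramble(data, cover))                 # transitions restored
      print(scramble(scramble(data, cover), cover) == data)   # round trip -> True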

  9. Compact quantum random number generator based on superluminescent light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Wei, Shihai; Yang, Jie; Fan, Fan; Huang, Wei; Li, Dashuang; Xu, Bingjie

    2017-12-01

    By measuring the amplified spontaneous emission (ASE) noise of superluminescent light emitting diodes, we propose and realize a practical quantum random number generator (QRNG). In the QRNG, after the detection and amplification of the ASE noise, the data acquisition and randomness extraction, which are integrated in a field-programmable gate array (FPGA), are both implemented in real time, and the final random bit sequences are delivered to a host computer with a real-time generation rate of 1.2 Gbps. Further, to achieve compactness, all the components of the QRNG are integrated on three independent printed circuit boards with a compact design, and the QRNG is packed in a small enclosure sized 140 mm × 120 mm × 25 mm. The final random bit sequences can pass all the NIST-STS and DIEHARD tests.

  10. Robustness of 40 Gb/s ASK modulation formats in the practical system infrastructure

    NASA Astrophysics Data System (ADS)

    Pincemin, Erwan; Tan, Antoine; Bezard, Aude; Tonello, Alessandro; Wabnitz, Stefano; Ania-Castañòn, Juan-Diego; Turitsyn, Sergei

    2006-12-01

    In this work, we theoretically and experimentally analyzed the resilience of 40 Gb/s amplitude shift keying modulation formats to transmission impairments in standard single-mode fiber lines as well as to optical filtering introduced by the optical add/drop multiplexer cascade. Our study is a pre-requisite to assess the implementation of cost-effective 40 Gb/s modulation technology in next generation high bit-rate robust optical transport networks.

  11. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.

  12. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
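
    The original LOOP program is a FORTRAN 77 simulation that is not reproduced here; the following Python sketch is a generic first-order AFC loop with a one-pole (R-C-style) filter, intended only to illustrate the kind of frequency-error tracking the abstract describes. The gains, filter coefficient and noise model are assumed values, not parameters from the program.

        import numpy as np

        def afc_loop(f0_error, loop_gain, rc_alpha, ebn0_db, n_bits=2000, seed=1):
            """Toy first-order AFC loop: track a residual carrier-frequency error.

            f0_error : initial frequency error (normalized to the bit rate)
            loop_gain: proportional gain of the first-order loop
            rc_alpha : one-pole (R-C) filter coefficient, 0 < rc_alpha < 1
            ebn0_db  : bit-energy-to-noise-density ratio at the discriminator input
            """
            rng = np.random.default_rng(seed)
            noise_std = 10 ** (-ebn0_db / 20.0)   # crude mapping of Eb/N0 to discriminator noise
            f_err = f0_error
            filt = 0.0
            history = np.empty(n_bits)
            for k in range(n_bits):
                # Frequency discriminator output: true error plus noise.
                disc = f_err + noise_std * rng.standard_normal()
                # First-order R-C loop filter.
                filt = (1 - rc_alpha) * filt + rc_alpha * disc
                # Close the loop: correct the local oscillator.
                f_err -= loop_gain * filt
                history[k] = f_err
            return history

        hist = afc_loop(f0_error=0.2, loop_gain=0.1, rc_alpha=0.3, ebn0_db=10.0)
        print(f"mean residual error = {hist[-500:].mean():+.4f}, "
              f"std = {hist[-500:].std():.4f}")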

  13. A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality

    NASA Astrophysics Data System (ADS)

    Liu, Li; Zhuang, Xinhua

    2009-01-01

    It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted to the newest H.264/AVC encoder directly because of the well-known chicken-and-egg dilemma that results from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated by using closed-form approximations of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all testing sequences.
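
    The paper's own closed-form rate-distortion approximation is not given in the abstract; the sketch below uses the textbook high-rate Gaussian model R = 0.5*log2(sigma^2/D) per block to turn stage-1 residual variances and a target distortion into a frame-level bit target, purely as an illustration of the idea.

        import numpy as np

        def frame_bit_target(residual_variances, target_distortion, pixels_per_block):
            """Frame-level bit target for a constant-distortion goal (illustrative).

            Uses the textbook Gaussian rate-distortion approximation
                R_i = 0.5 * log2(sigma_i^2 / D)  for sigma_i^2 > D, else 0 bits,
            applied per block, with sigma_i^2 taken from the stage-1 prediction
            residuals.  The paper's own closed-form model is not reproduced here.
            """
            var = np.asarray(residual_variances, dtype=float)
            bits_per_pixel = 0.5 * np.log2(np.maximum(var / target_distortion, 1.0))
            return float(np.sum(bits_per_pixel * pixels_per_block))

        # Example: 100 blocks of 16x16 pixels with varying residual activity.
        rng = np.random.default_rng(3)
        variances = rng.uniform(1.0, 400.0, size=100)
        budget = frame_bit_target(variances, target_distortion=30.0, pixels_per_block=256)
        print(f"frame bit target ~ {budget / 1000:.1f} kbit")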

  14. Effects of size on three-cone bit performance in laboratory drilled shale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, A.D.; DiBona, B.G.; Sandstrom, J.L.

    1982-09-01

    The effects of size on the performance of 3-cone bits were measured during laboratory drilling tests in shale at simulated downhole conditions. Four Reed HP-SM 3-cone bits with diameters of 6 1/2, 7 7/8, 9 1/2 and 11 inches were used to drill Mancos shale with water-based mud. The tests were conducted at constant borehole pressure, two conditions of hydraulic horsepower per square inch of bit area, three conditions of rotary speed and four conditions of weight-on-bit per inch of bit diameter. The resulting penetration rates and torques were measured. Statistical techniques were used to analyze the data.

  15. Space shuttle data handling and communications considerations.

    NASA Technical Reports Server (NTRS)

    Stoker, C. J.; Minor, R. G.

    1971-01-01

    Operational and development flight instrumentation, data handling subsystems and communication requirements of the space shuttle orbiter are discussed. Emphasis is placed on data gathering methods, crew display data, computer processing, recording, and telemetry by means of a digital data bus. Also considered are overall conceptual communication system aspects and design features allowing a proper specification of telemetry encoders and instrumentation recorders. An adaptive bit rate concept is proposed to handle the telemetry bit rates, which vary with the amount of operational and experimental data to be transmitted. A split-phase encoding technique is proposed for telemetry to cope with the excessive bit jitter and low bit transition density which may affect television performance.

  16. Single photon quantum cryptography.

    PubMed

    Beveratos, Alexios; Brouri, Rosa; Gacoin, Thierry; Villing, André; Poizat, Jean-Philippe; Grangier, Philippe

    2002-10-28

    We report the full implementation of a quantum cryptography protocol using a stream of single photon pulses generated by a stable and efficient source operating at room temperature. The single photon pulses are emitted on demand by a single nitrogen-vacancy color center in a diamond nanocrystal. The quantum bit error rate is less than 4.6% and the secure bit rate is 7700 bits/s. The overall performance of our system reaches a domain where single photons have a measurable advantage over an equivalent system based on attenuated light pulses.

  17. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.

  18. Counterfactual quantum cryptography based on weak coherent states

    NASA Astrophysics Data System (ADS)

    Yin, Zhen-Qiang; Li, Hong-Wei; Yao, Yao; Zhang, Chun-Mei; Wang, Shuang; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2012-08-01

    In the “counterfactual quantum cryptography” scheme [T.-G. Noh, Phys. Rev. Lett. 103, 230501 (2009)], two legitimate distant peers may share secret-key bits even when the information carriers do not travel in the quantum channel. The security of this protocol with an ideal single-photon source has been proved by Yin et al. [Z.-Q. Yin, H. W. Li, W. Chen, Z. F. Han, and G. C. Guo, Phys. Rev. A 82, 042335 (2010)]. In this paper, we prove the security of the counterfactual-quantum-cryptography scheme based on a commonly used weak-coherent-laser source by considering a general collective attack. The basic assumption of this proof is that the efficiency and dark-count rate of a single-photon detector are consistent for any n-photon Fock states. Then, through randomizing the phases of the encoding weak coherent states, Eve's ancilla will be transformed into a classical mixture. Finally, the lower bound of the secret-key-bit rate and a performance analysis for the practical implementation are both given.

  19. Attacking a practical quantum-key-distribution system with wavelength-dependent beam-splitter and multiwavelength sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hong-Wei; Zhengzhou Information Science and Technology Institute, Zhengzhou, 450004; Wang, Shuang

    2011-12-15

    It is well known that the unconditional security of quantum-key distribution (QKD) can be guaranteed by quantum mechanics. However, practical QKD systems have some imperfections, which can be controlled by the eavesdropper to attack the secret key. With current experimental technology, a realistic beam splitter, made by fused biconical technology, has a wavelength-dependent property. Based on this fatal security loophole, we propose a wavelength-dependent attacking protocol, which can be applied to all practical QKD systems with passive state modulation. Moreover, we experimentally attack a practical polarization-encoding QKD system to obtain all the secret key information at the cost of only increasing the quantum bit error rate from 1.3% to 1.4%.

  20. Best-Practice Criteria for Practical Security of Self-Differencing Avalanche Photodiode Detectors in Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Koehler-Sidki, A.; Dynes, J. F.; Lucamarini, M.; Roberts, G. L.; Sharpe, A. W.; Yuan, Z. L.; Shields, A. J.

    2018-04-01

    Fast-gated avalanche photodiodes (APDs) are the most commonly used single photon detectors for high-bit-rate quantum key distribution (QKD). Their robustness against external attacks is crucial to the overall security of a QKD system, or even an entire QKD network. We investigate the behavior of a gigahertz-gated, self-differencing (In,Ga)As APD under strong illumination, a tactic Eve often uses to bring detectors under her control. Our experiment and modeling reveal that the negative feedback by the photocurrent safeguards the detector from being blinded through reducing its avalanche probability and/or strengthening the capacitive response. Based on this finding, we propose a set of best-practice criteria for designing and operating fast-gated APD detectors to ensure their practical security in QKD.

  1. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The learning-based R-D model is proposed as a way to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.

  2. Phase-encoded measurement device independent quantum key distribution without a shared reference frame

    NASA Astrophysics Data System (ADS)

    Zhuo-Dan, Zhu; Shang-Hong, Zhao; Chen, Dong; Ying, Sun

    2018-07-01

    In this paper, a phase-encoded measurement device independent quantum key distribution (MDI-QKD) protocol without a shared reference frame is presented, which can generate secure keys between two parties even when the quantum channel or interferometer introduces an unknown and slowly time-varying phase. The corresponding secret key rate and single-photon bit error rate are analysed with a single-photon source (SPS) and a weak coherent source (WCS), respectively, taking finite-key analysis into account. The numerical simulations show that the modified phase-encoded MDI-QKD protocol has a clear superiority in both maximal secure transmission distance and key generation rate, while possessing improved robustness and practical security in the high-speed case. Moreover, the rejection of the frame-calibrating part will intrinsically reduce the consumption of resources as well as the potential security flaws of practical MDI-QKD systems.

  3. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.

  4. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
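
    Both patents hinge on dividing each DCT coefficient by an entry of a quantization matrix. The sketch below shows the generic mechanics (8x8 DCT followed by matrix quantization) using the standard JPEG luminance table; the patented luminance/contrast masking and error pooling that adapt the matrix to the image are not implemented here.

        import numpy as np

        def dct_matrix(n=8):
            """Orthonormal DCT-II basis matrix."""
            k = np.arange(n)
            C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            C[0, :] = np.sqrt(1.0 / n)
            return C

        def quantize_block(block, qmatrix):
            """Forward DCT of an 8x8 block followed by matrix quantization."""
            C = dct_matrix(8)
            coeffs = C @ (block - 128.0) @ C.T          # level-shifted 2-D DCT
            return np.round(coeffs / qmatrix).astype(int)

        # Standard JPEG luminance quantization table (a fixed, non-adapted matrix).
        Q50 = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

        rng = np.random.default_rng(0)
        block = rng.integers(0, 256, size=(8, 8)).astype(float)
        print(quantize_block(block, Q50))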

  5. Audiovisual signal compression: the 64/P codecs

    NASA Astrophysics Data System (ADS)

    Jayant, Nikil S.

    1996-02-01

    Video codecs operating at integral multiples of 64 kbps are well-known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform -- quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voice band and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely-sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps), and voiceband modems that represent high (32 kbps), medium (16 kbps) and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for sub-multiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance. Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality even in the very best of these systems. In a related part of our talk, we discuss the role of preprocessing and postprocessing subsystems which serve to enhance the performance of an otherwise standard codec. Examples of these (sometimes proprietary) subsystems are automatic face-tracking prior to the coding of a head-and-shoulders scene, and adaptive postfiltering after conventional decoding, to reduce generic classes of artifacts in low bit rate video. The talk concludes with a summary of technology targets and research directions. We discuss targets in terms of four fundamental parameters of coder performance: quality, bit rate, delay and complexity; and we emphasize the need for measuring and maximizing the composite quality of the audiovisual signal. In discussing research directions, we examine progress and opportunities in two fundamental approaches for bit rate reduction: removal of statistical redundancy and reduction of perceptual irrelevancy; we speculate on the value of techniques such as analysis-by-synthesis that have proved to be quite valuable in speech coding, and we examine the prospect of integrating speech and image processing for developing next-generation technology for audiovisual communications.

  6. Development of a Tool Condition Monitoring System for Impregnated Diamond Bits in Rock Drilling Applications

    NASA Astrophysics Data System (ADS)

    Perez, Santiago; Karakus, Murat; Pellet, Frederic

    2017-05-01

    The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit, which ultimately leads to a less than optimal drilling performance. For this reason, this paper aims at investigating the applicability of artificial intelligence-based techniques in order to monitor the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches in order to predict the wear state of the bits. One approach is based on the combination of the aforementioned variables and another on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, although the subset composed of AErms and tob excels due to its high classification performance and fewer input variables.
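
    As an illustration of the classification step, the sketch below runs a plain k-nearest-neighbour vote on a synthetic (AErms, torque-on-bit) feature set; the class means, spreads and the choice of k are invented for demonstration and do not come from the paper, which evaluates several different pattern recognition algorithms.

        import numpy as np

        def knn_predict(train_X, train_y, test_X, k=5):
            """Plain k-nearest-neighbour classifier (Euclidean distance, majority vote)."""
            preds = []
            for x in test_X:
                dist = np.linalg.norm(train_X - x, axis=1)
                nearest = train_y[np.argsort(dist)[:k]]
                preds.append(np.bincount(nearest).argmax())
            return np.array(preds)

        # Synthetic (AErms, torque-on-bit) features: the class statistics are invented.
        rng = np.random.default_rng(7)
        sharp = rng.normal([0.8, 120.0], [0.10, 15.0], size=(200, 2))   # label 0 = sharp
        blunt = rng.normal([1.4, 180.0], [0.15, 20.0], size=(200, 2))   # label 1 = blunt
        X = np.vstack([sharp, blunt])
        y = np.array([0] * 200 + [1] * 200)

        X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the two features
        idx = rng.permutation(len(y))
        train, test = idx[:300], idx[300:]
        acc = np.mean(knn_predict(X[train], y[train], X[test]) == y[test])
        print(f"hold-out classification rate: {acc:.2%}")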

  7. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP).

    PubMed

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimulus encoding method that encodes the SSVEP signal using frequency-shift keying (FSK) is developed. In this method, each stimulus is controlled by an FSK signal containing three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2," respectively. Unlike common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit-stream form, while "Bit 2" indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4 using an ADS1299 at a sample rate of 250 SPS. Before the EEG signal is quadrature demodulated, it is detrended and band-pass filtered using FFT-based FIR filtering to remove interference. The valid peaks of the processed signal are found by calculating its derivative and are converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n - 1 (where n is the length of the bit command) stimuli while keeping the ITR the same. The method is suitable for implementing stimuli on a monitor where the frequencies and phases available for coding stimuli are limited, as well as for portable BCI devices that are not capable of performing complex calculations.
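
    A minimal sketch of the encoding idea: each stimulus identifier is expanded into a sequence of "Bit 0"/"Bit 1" frequencies terminated by the "Bit 2" frequency. The three frequency values and the identifier length used here are hypothetical, not taken from the paper.

        # Illustrative trinary FSK encoding of stimulus identifiers (frequencies are
        # hypothetical; the paper drives each stimulus with its own FSK control signal).
        F0, F1, F2 = 6.0, 7.5, 9.0   # Hz: "Bit 0", "Bit 1", terminator "Bit 2"

        def encode_stimulus(stim_id, n_bits):
            """Binary identifier of stim_id (MSB first) followed by the 'Bit 2' terminator."""
            bits = [(stim_id >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]
            return [F1 if b else F0 for b in bits] + [F2]

        # With n_bits = 5 the scheme can label up to 2**5 - 1 = 31 distinct stimuli
        # while re-using only three flicker frequencies.
        for sid in (0, 5, 30):
            print(sid, encode_stimulus(sid, n_bits=5))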

  8. A novel chaotic stream cipher and its application to palmprint template protection

    NASA Astrophysics Data System (ADS)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-04-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfies the security requirements of cipher design. The stream cipher is then employed to generate cancelable competitive-code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and the equal error rate can reach 0.02%. The performance of the palmprint template protection scheme demonstrates the good practicability and security of the proposed stream cipher.
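
    The paper's PRBG is built from a coupled nonlinear dynamic filter, which is not reproduced here. As a generic illustration of deriving a bit stream from a chaotic orbit, the sketch below uses a logistic map and slices a few bits per iteration; it should not be read as the proposed cipher.

        import numpy as np

        def logistic_prbg(x0=0.41, r=3.99, n_bits=4096, burn_in=1000, bits_per_step=4):
            """Generic chaotic pseudorandom bit generator (logistic map, illustrative only;
            the paper uses a coupled nonlinear dynamic filter instead)."""
            x = x0
            for _ in range(burn_in):            # discard the transient
                x = r * x * (1.0 - x)
            out = []
            while len(out) < n_bits:
                x = r * x * (1.0 - x)
                # Take a few middle bits of a fixed-point representation of x.
                word = int(x * (1 << 32))
                out.extend((word >> (8 + i)) & 1 for i in range(bits_per_step))
            return np.array(out[:n_bits], dtype=np.uint8)

        bits = logistic_prbg()
        print("ones fraction:", bits.mean())    # should sit close to 0.5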

  9. Simultaneous classical communication and quantum key distribution using continuous variables*

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2016-10-01

    Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  10. New PDC bit optimizes drilling performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besson, A.; Gudulec, P. le; Delwiche, R.

    1996-05-01

    The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new-generation PDC bit tripled the rate of penetration (ROP) and increased the potential footage per bit by five times. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control PDC cutter quality; use of an advanced cutter layout defined by 3D software; use of cutter face design code for optimized cleaning and cooling; and mastering vibration reduction features, including spiraled blades.

  11. Hamming and Accumulator Codes Concatenated with MPSK or QAM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel

    2009-01-01

    In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
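
    A compact sketch of steps 1, 2 and 4 of the scheme: demultiplex the input into parallel streams, encode each with a short (7,4) Hamming constituent code, and pass each through a rate-1 accumulator (running mod-2 sum). The block interleaver and the MPSK/QAM mapping are omitted, and the stream count and block sizes are arbitrary choices for the example.

        import numpy as np

        # Generator matrix of the (7,4) Hamming code (systematic form).
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

        def hamming_encode(stream):
            """Encode a bit stream (length multiple of 4) with the (7,4) Hamming code."""
            blocks = stream.reshape(-1, 4)
            return (blocks @ G) % 2

        def accumulate(stream):
            """Rate-1 accumulator code: running XOR (mod-2 cumulative sum) of the bits."""
            return np.cumsum(stream) % 2

        rng = np.random.default_rng(0)
        data = rng.integers(0, 2, size=64, dtype=np.uint8)

        n_streams = 4
        streams = data.reshape(n_streams, -1)                    # step 1: demultiplex
        coded = [hamming_encode(s).ravel() for s in streams]     # step 2: outer Hamming codes
        # (step 3, block interleaving, omitted)
        accumulated = [accumulate(c) for c in coded]             # step 4: inner accumulator codes
        print([a[:8] for a in accumulated])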

  12. Performance of Serially Concatenated Convolutional Codes with Binary Modulation in AWGN and Noise Jamming over Rayleigh Fading Channels

    DTIC Science & Technology

    2001-09-01

    In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) with BPSK and DPSK modulation are analyzed in AWGN and noise jamming over Rayleigh fading channels.

  13. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    NASA Astrophysics Data System (ADS)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. So, the study of PDC bit failure based on stick-slip vibration analysis is crucial to prolonging the service life of PDC bits and improving the ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus the PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools due to the more severe stick-slip vibration. Moreover, reducing the WOB (weight on bit) and improving the driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed by use of an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.

  14. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (-1.96 dB) when compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with an unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
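
    The well-known 2/π (-1.96 dB) low-SNR penalty is easy to reproduce numerically for the simpler known-threshold case. The Monte Carlo sketch below compares the variance of a full-resolution mean estimator with a 1-bit estimator that inverts P(y > 0) = Phi(theta/sigma); the unknown-offset analysis of the paper is more involved and is not attempted here.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        theta, sigma = 0.05, 1.0          # weak signal: low-SNR regime
        n, trials = 2000, 4000

        est_full, est_1bit = [], []
        for _ in range(trials):
            y = theta + sigma * rng.standard_normal(n)
            est_full.append(y.mean())                        # infinite-resolution estimator
            p_hat = np.clip(np.mean(y > 0), 1e-6, 1 - 1e-6)  # 1-bit ADC, known zero threshold
            est_1bit.append(sigma * norm.ppf(p_hat))         # invert P(y > 0) = Phi(theta/sigma)

        var_full = np.var(est_full)
        var_1bit = np.var(est_1bit)
        print(f"variance ratio (1-bit / full) = {var_1bit / var_full:.2f}  "
              f"(theory: pi/2 ~ {np.pi / 2:.2f} at low SNR)")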

  15. Some Processing and Dynamic-Range Issues in Side-Scan Sonar Work

    NASA Astrophysics Data System (ADS)

    Asper, V. L.; Caruthers, J. W.

    2007-05-01

    Often side-scan sonar data are collected in such a way that they afford little opportunity to do more than simply display them as images. These images are often limited in dynamic range and stored only in an 8-bit tiff format of numbers representing less than true intensity values. Furthermore, there is little prior knowledge during a survey of the best range in which to set those eight bits. This can result in clipping of strong targets and/or of shadow depth, so that the bits that can be recovered from the image are not fully representative of target or bottom backscatter strengths. Several top-of-the-line sonars do have a means of logging high-bit-rate digital data (sometimes only as an option), but only dedicated specialists pay much attention to such data, if they record them at all. Most users of side-scan sonars are interested only in the images. Discussed in this paper are issues related to storing and processing high-bit-rate digital data to preserve their integrity for future enhanced, after-the-fact use and the ability to recover actual backscatter strengths. This paper discusses issues in the use of high-bit-rate digital side-scan sonar data. This work was supported by the Office of Naval Research, Code 321OA, and the Naval Oceanographic Office, Mine Warfare Program.

  16. Quantum key distribution with passive decoy state selection

    NASA Astrophysics Data System (ADS)

    Mauerer, Wolfgang; Silberhorn, Christine

    2007-05-01

    We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.

  17. On the Mutual Information of Multi-hop Acoustic Sensors Network in Underwater Wireless Communication

    DTIC Science & Technology

    2014-05-01

    The number of received bits in error is determined, and the bit-error-rate is computed as the number of bit errors divided by the total number of bits in the transmitted signal.
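
    The surviving fragment defines the bit-error-rate as errors divided by transmitted bits; the sketch below applies exactly that definition in a BPSK-over-AWGN Monte Carlo (the modulation and channel model are assumptions, not details taken from the report).

        import numpy as np

        def ber_bpsk_awgn(ebn0_db, n_bits=200_000, seed=0):
            """Monte Carlo bit-error-rate: errors divided by total transmitted bits."""
            rng = np.random.default_rng(seed)
            bits = rng.integers(0, 2, size=n_bits)
            symbols = 2 * bits - 1                       # BPSK mapping {0,1} -> {-1,+1}
            ebn0 = 10 ** (ebn0_db / 10.0)
            noise = rng.standard_normal(n_bits) / np.sqrt(2 * ebn0)
            decisions = (symbols + noise) > 0
            return np.mean(decisions != bits)

        for snr in (0, 4, 8):
            print(f"Eb/N0 = {snr} dB  ->  BER ~ {ber_bpsk_awgn(snr):.2e}")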

  18. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.

  19. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
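
    To see why a well-chosen code gives very high reliability even at large channel error rates, one can compute the binomial tail probability that a word exceeds its correctable error count on a memoryless channel. The sketch below does this for an illustrative 255-symbol word correcting 16 errors; it is not the paper's cascaded-scheme analysis, which accounts for the inner code as well.

        from math import comb

        def p_uncorrectable(n, t, eps):
            """P(more than t errors in an n-symbol word) on a memoryless channel with error rate eps."""
            return sum(comb(n, k) * eps**k * (1 - eps)**(n - k) for k in range(t + 1, n + 1))

        # Example: a 255-symbol word correcting t = 16 errors, at several error rates.
        for eps in (0.05, 0.02, 0.01):
            print(f"symbol error rate {eps}: word failure probability "
                  f"{p_uncorrectable(255, 16, eps):.2e}")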

  20. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.

  1. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  2. A compact presentation of DSN array telemetry performance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1982-01-01

    The telemetry performance of an arrayed receiver system, including radio losses, is often given by a family of curves giving bit error rate vs bit SNR, with tracking loop SNR at one receiver held constant along each curve. This study shows how to process this information into a more compact, useful format in which the minimal total signal power and optimal carrier suppression, for a given fixed bit error rate, are plotted vs data rate. Examples for baseband-only combining are given. When appropriate dimensionless variables are used for plotting, receiver arrays with different numbers of antennas and different threshold tracking loop bandwidths look much alike, and a universal curve for optimal carrier suppression emerges.

  3. Practical remarks on the heart rate and saturation measurement methodology

    NASA Astrophysics Data System (ADS)

    Kowal, M.; Kubal, S.; Piotrowski, P.; Staniec, K.

    2017-05-01

    A surface reflection-based method for measuring heart rate and saturation has been introduced; it has a significant advantage over legacy methods in that it lends itself to use in special applications, such as those where a person's mobility is of prime importance (e.g. during a miner's work) and where traditional clips cannot be used. Then, a complete ATmega1281-based microcontroller platform has been described for performing the computational tasks of signal processing and wireless transmission. In the next section, remarks are provided regarding the basic signal processing rules, beginning with raw voltage samples of the converted optical signals, their acquisition, storage and smoothing. This chapter ends with practical remarks demonstrating an exponential dependence between the minimum measurable heart rate and the readout resolution at different sampling frequencies for different cases of averaging depth (in bits). The following section is devoted strictly to the heart rate and hemoglobin oxygenation (saturation) measurement with the use of the presented platform, referenced to measurements obtained with a stationary certified pulse oximeter.

  4. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

    The convolutional code is punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit errors; errors are likely to be isolated and to be correctable by the convolutional decoder. A shortened Reed-Solomon technique is employed first; the code is shortened depending upon the data.

  5. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP)

    PubMed Central

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimulus encoding method that encodes the SSVEP signal using frequency-shift keying (FSK) is developed. In this method, each stimulus is controlled by an FSK signal containing three different frequencies that represent “Bit 0,” “Bit 1” and “Bit 2,” respectively. Unlike common BFSK in digital communication, “Bit 0” and “Bit 1” compose the unique identifier of a stimulus in binary bit-stream form, while “Bit 2” indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4 using an ADS1299 at a sample rate of 250 SPS. Before the EEG signal is quadrature demodulated, it is detrended and band-pass filtered using FFT-based FIR filtering to remove interference. The valid peaks of the processed signal are found by calculating its derivative and are converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n − 1 (where n is the length of the bit command) stimuli while keeping the ITR the same. The method is suitable for implementing stimuli on a monitor where the frequencies and phases available for coding stimuli are limited, as well as for portable BCI devices that are not capable of performing complex calculations. PMID:28626393

  6. Practical quantum private query of blocks based on unbalanced-state Bennett-Brassard-1984 quantum-key-distribution protocol

    NASA Astrophysics Data System (ADS)

    Wei, Chun-Yan; Gao, Fei; Wen, Qiao-Yan; Wang, Tian-Yin

    2014-12-01

    Until now, the only kind of practical quantum private query (QPQ), quantum-key-distribution (QKD)-based QPQ, has focused on the retrieval of a single bit. In fact, a meaningful message is generally composed of multiple adjacent bits (i.e., a multi-bit block). To obtain a message from the database, the user Alice has to query l times to get each ai. In this condition, the server Bob could gain Alice's privacy once he obtains the address she queried in any of the l queries, since each ai contributes to the message Alice retrieves. Apparently, the longer the retrieved message is, the worse the user privacy becomes. To solve this problem, via an unbalanced-state technique and based on a variant of the multi-level BB84 protocol, we present a protocol for QPQ of blocks, which allows the user to retrieve a multi-bit block from the database in one query. Our protocol is somewhat like the high-dimension version of the first QKD-based QPQ protocol proposed by Jacobi et al., but some nontrivial modifications are necessary.

  7. Practical quantum private query of blocks based on unbalanced-state Bennett-Brassard-1984 quantum-key-distribution protocol

    PubMed Central

    Wei, Chun-Yan; Gao, Fei; Wen, Qiao-Yan; Wang, Tian-Yin

    2014-01-01

    Until now, the only kind of practical quantum private query (QPQ), quantum-key-distribution (QKD)-based QPQ, has focused on the retrieval of a single bit. In fact, a meaningful message is generally composed of multiple adjacent bits (i.e., a multi-bit block). To obtain a message from the database, the user Alice has to query l times to get each ai. In this condition, the server Bob could gain Alice's privacy once he obtains the address she queried in any of the l queries, since each ai contributes to the message Alice retrieves. Apparently, the longer the retrieved message is, the worse the user privacy becomes. To solve this problem, via an unbalanced-state technique and based on a variant of the multi-level BB84 protocol, we present a protocol for QPQ of blocks, which allows the user to retrieve a multi-bit block from the database in one query. Our protocol is somewhat like the high-dimension version of the first QKD-based QPQ protocol proposed by Jacobi et al., but some nontrivial modifications are necessary. PMID:25518810

  8. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.

  9. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond-product drill bit-fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of the report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.

  10. Dual-Pulse Pulse Position Modulation (DPPM) for Deep-Space Optical Communications: Performance and Practicality Analysis

    NASA Technical Reports Server (NTRS)

    Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel

    2012-01-01

    Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with a photon-counting detector has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for the symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provided no essential information-theoretic gains over single-pulse PPM.
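
    The bandwidth-efficiency argument can be made concrete by counting the information per PPM frame: a single pulse in M slots carries log2(M) bits, while two pulses carry log2(C(M, 2)) bits over the same frame duration. The sketch below tabulates the ratio; the error-rate expressions derived in the paper are not reproduced here.

        from math import comb, log2

        def bits_per_frame(num_slots, pulses):
            """Information carried by one PPM frame with the given number of pulses."""
            return log2(comb(num_slots, pulses))

        for m in (16, 64, 256):
            single = bits_per_frame(m, 1)
            dual = bits_per_frame(m, 2)
            print(f"M = {m:3d}: single-pulse {single:5.2f} bit/frame, "
                  f"dual-pulse {dual:5.2f} bit/frame ({dual / single:.2f}x)")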

  11. PDC Bit Testing at Sandia Reveals Influence of Chatter in Hard-Rock Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAYMOND,DAVID W.

    1999-10-14

    Polycrystalline diamond compact (PDC) bits have yet to be routinely applied to drilling the hard-rock formations characteristic of geothermal reservoirs. Most geothermal production wells are currently drilled with tungsten-carbide-insert roller-cone bits. PDC bits have significantly improved penetration rates and bit life beyond roller-cone bits in the oil and gas industry, where soft to medium-hard rock types are encountered. If PDC bits could be used to double current penetration rates in hard rock, geothermal well-drilling costs could be reduced by 15 percent or more. PDC bits exhibit reasonable life in hard-rock wear testing using the relatively rigid setups typical of laboratory testing. Unfortunately, field experience indicates otherwise. The prevailing mode of failure encountered by PDC bits returning from hard-rock formations in the field is catastrophic, presumably due to impact loading. These failures usually occur in advance of any appreciable wear that might dictate cutter replacement. Self-induced bit vibration, or "chatter", is one of the mechanisms that may be responsible for impact damage to PDC cutters in hard-rock drilling. Chatter is more severe in hard-rock formations since they induce significant dynamic loading on the cutter elements. Chatter is a phenomenon whereby the drillstring becomes dynamically unstable and excessive sustained vibrations occur. Unlike forced vibration, the force (i.e., weight on bit) that drives self-induced vibration is coupled with the response it produces. Many of the chatter principles derived in the machine tool industry are applicable to drilling. It is a simple matter to make changes to a machine tool to study the chatter phenomenon. This is not the case with drilling. Chatter occurs in field drilling due to the flexibility of the drillstring. Hence, laboratory setups must be made compliant to observe chatter.

  12. An online hybrid BCI system based on SSVEP and EMG

    NASA Astrophysics Data System (ADS)

    Lin, Ke; Cinetto, Andrea; Wang, Yijun; Chen, Xiaogang; Gao, Shangkai; Gao, Xiaorong

    2016-04-01

    Objective. A hybrid brain-computer interface (BCI) is a device combined with at least one other communication system that takes advantage of both parts to build a link between humans and machines. To increase the number of targets and the information transfer rate (ITR), electromyogram (EMG) and steady-state visual evoked potential (SSVEP) were combined to implement a hybrid BCI. A multi-choice selection method based on EMG was developed to enhance the system performance. Approach. A 60-target hybrid BCI speller was built in this study. A single trial was divided into two stages: a stimulation stage and an output selection stage. In the stimulation stage, SSVEP and EMG were used together. Every stimulus flickered at its given frequency to elicit SSVEP. All of the stimuli were divided equally into four sections with the same frequency set. The frequency of each stimulus in a section was different. SSVEPs were used to discriminate targets in the same section. Different sections were classified using EMG signals from the forearm. Subjects were asked to make a different number of fists according to the target section. Canonical Correlation Analysis (CCA) and mean filtering were used to classify SSVEP and EMG separately. In the output selection stage, the top two optimal choices were given. The first choice, with the highest probability of an accurate classification, was the default output of the system. Subjects were required to make a fist to select the second choice only if the second choice was correct. Main results. The online results obtained from ten subjects showed that the mean accurate classification rate and ITR were 81.0% and 83.6 bits min^-1, respectively, using only the first-choice selection. The ITR of the hybrid system was significantly higher than the ITR of either of the two single modalities (EMG: 30.7 bits min^-1, SSVEP: 60.2 bits min^-1). After the addition of the second-choice selection and the correction task, the accurate classification rate and ITR were enhanced to 85.8% and 90.9 bits min^-1. Significance. These results suggest that the hybrid system proposed here is suitable for practical use.
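
    BCI spellers conventionally report ITR with the Wolpaw formula, which combines the number of targets, the classification accuracy and the selection time. The sketch below plugs in the reported 60 targets and 81% accuracy with an assumed selection time (not given in the abstract), so the output is only of the same order as the reported 83.6 bits/min.

        from math import log2

        def wolpaw_itr(n_targets, accuracy, trial_seconds):
            """Information transfer rate in bits/min (Wolpaw formula)."""
            p, n = accuracy, n_targets
            bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
            return 60.0 * bits / trial_seconds

        # 60-target speller, 81% accuracy, assuming ~3.5 s per selection (illustrative).
        print(f"{wolpaw_itr(60, 0.81, 3.5):.1f} bits/min")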

  13. High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.

    This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.

  14. Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator

    NASA Technical Reports Server (NTRS)

    Ranadive, V.; Sheridan, T. B.

    1981-01-01

    The product of the frame rate (F) in frames per second, the resolution (R) in total pixels, and the grayscale (G) in bits equals the transmission bit rate in bits per second. Thus, for a fixed channel capacity there are tradeoffs between F, R and G in the actual sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in the MASTER/SLAVE mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels and 16 levels (4 bits) of grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution and grayscale could be independently reduced without preventing the operator from accomplishing his/her task. Threshold points were found beyond which degradation would prevent any successful performance. A general conclusion is that a well-trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbits/sec.
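
    The stated product relation is easy to check for the undegraded setting described in the abstract:

        frame_rate = 28          # frames per second
        resolution = 128 * 128   # pixels per frame
        grayscale_bits = 4       # 16 grey levels

        bit_rate = frame_rate * resolution * grayscale_bits
        print(f"{bit_rate / 1000:.0f} kbit/s")   # ~1835 kbit/s for the full-quality picture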

  15. Technology Development and Field Trials of EGS Drilling Systems at Chocolate Mountain

    DOE Data Explorer

    Steven Knudsen

    2012-01-01

    Polycrystalline diamond compact (PDC) bits are routinely used in the oil and gas industry for drilling medium to hard rock but have not been adopted for geothermal drilling, largely due to past reliability issues and higher purchase costs. The Sandia Geothermal Research Department has recently completed a field demonstration of the applicability of advanced synthetic diamond drill bits for production geothermal drilling. Two commercially-available PDC bits were tested in a geothermal drilling program in the Chocolate Mountains in Southern California. These bits drilled the granitic formations with significantly better Rate of Penetration (ROP) and bit life than the roller cone bit they are compared with. Drilling records and bit performance data along with associated drilling cost savings are presented herein. The drilling trials have demonstrated PDC bit drilling technology has matured for applicability and improvements to geothermal drilling. This will be especially beneficial for development of Enhanced Geothermal Systems whereby resources can be accessed anywhere within the continental US by drilling to deep, hot resources in hard, basement rock formations.

  16. Note: optical receiver system for 152-channel magnetoencephalography.

    PubMed

    Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong

    2014-11-01

    An optical receiver system comprising 13 serial-data restore/synchronizer modules and a single module combiner converted optical 32-bit serial data into 32-bit synchronous parallel data so that a computer could acquire 152-channel magnetoencephalography (MEG) signals. Each serial-data restore/synchronizer module identified the 32 channel-voltage bits within 48-bit streaming serial data and then consecutively reproduced the 32-bit serial data 13 times, running on a synchronous clock. After one of the 13 reproduced streams was selected in each module, the module combiner converted it into 32-bit parallel data, which was carried to a 32-port digital input board in a computer. When the receiver system, together with optical transmitters, was applied to 152-channel superconducting quantum interference device sensors, the MEG system maintained a field noise level of 3 fT/√Hz @ 100 Hz at a sample rate of 1 kSample/s per channel.

  17. Experimental bit commitment based on quantum communication and special relativity.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

    Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrarily large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented.

  18. Visual Perception Based Rate Control Algorithm for HEVC

    NASA Astrophysics Data System (ADS)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, LCU bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bit-rate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality across various video applications.
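
    The general idea of weighting LCU-level bit allocation by a perceptual weight can be sketched as follows; the weights and the frame budget are hypothetical, and the actual λ/QP adjustment of the proposed algorithm is not reproduced here.

      # Allocate a frame's bit budget across LCUs in proportion to perceptual weights
      # (illustrative only; the weights here are made up, not the paper's model).
      def allocate_bits(frame_budget_bits, perceptual_weights):
          total = sum(perceptual_weights)
          return [frame_budget_bits * w / total for w in perceptual_weights]

      weights = [1.0, 2.5, 0.8, 1.7]           # e.g. higher weight for bright or moving LCUs
      print(allocate_bits(120_000, weights))   # more bits go to perceptually important LCUs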

  19. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with constraint length K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures of very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
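
    For orientation, the familiar union-bound form that such transfer-function techniques evaluate is sketched below for BPSK over an AWGN channel. The weight spectrum used here is a made-up example, not one of the codes searched in the report, and the recursive transfer-function evaluation of the paper is not reproduced.

      import math

      def q_function(x):
          # Gaussian tail probability Q(x)
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def union_bound_ber(rate, eb_n0_linear, bit_weight_spectrum):
          """Union bound on BER for a rate-1/N convolutional code with BPSK on AWGN.

          bit_weight_spectrum maps Hamming distance d to the total information-bit
          weight B_d of all error events at that distance (hypothetical values below).
          """
          return sum(B_d * q_function(math.sqrt(2.0 * d * rate * eb_n0_linear))
                     for d, B_d in bit_weight_spectrum.items())

      spectrum = {10: 36, 12: 211, 14: 1404}            # illustrative, not a real code
      print(union_bound_ber(0.5, 10 ** (4.0 / 10), spectrum))   # Eb/N0 = 4 dB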

  20. Guidelines for Design and Test of a Built-In Self Test (BIST) Circuit For Space Radiation Studies of High-Speed IC Technologies

    NASA Technical Reports Server (NTRS)

    Carts, M. A.; Marshall, P. W.; Reed, R.; Curie, S.; Randall, B.; LaBel, K.; Gilbert, B.; Daniel, E.

    2006-01-01

    Serial Bit Error Rate Testing under radiation to characterize single particle induced errors in high-speed IC technologies generally involves specialized test equipment common to the telecommunications industry. As bit rates increase, testing is complicated by the rapidly increasing cost of equipment able to test at speed. Furthermore, as rates extend into the tens of billions of bits per second, test equipment ceases to be broadband, a distinct disadvantage for exploring SEE mechanisms in the target technologies. In this presentation the authors detail the testing accomplished in the CREST project and apply the knowledge gained to establish a set of guidelines suitable for designing arbitrarily high speed radiation effects tests.

  1. Sagnac secret sharing over telecom fiber networks.

    PubMed

    Bogdanski, Jan; Ahrens, Johan; Bourennane, Mohamed

    2009-01-19

    We report the first Sagnac quantum secret sharing (in three- and four-party implementations) over 1550 nm single mode fiber (SMF) networks, using a single qubit protocol with phase encoding. Our secret sharing experiment has been based on a single qubit protocol, which has opened the door to practical secret sharing implementation over fiber telecom channels and in free space. The previous quantum secret sharing proposals were based on multiparticle entangled states, which are difficult to implement in practice and not scalable. Our experimental data in the three-party implementation show stable (with regard to birefringence drift) quantum secret sharing transmissions at total Sagnac transmission loop distances of 55-75 km with quantum bit error rates (QBER) of 2.3-2.4% for the mean photon number μ = 0.1 and 1.7-2.1% for μ = 0.3. In the four-party case we have achieved quantum secret sharing transmissions at total Sagnac transmission loop distances of 45-55 km with QBER of 3.0-3.7% for μ = 0.1 and 1.8-3.0% for μ = 0.3. The stability of the quantum transmission has been achieved thanks to our new concept for compensation of SMF birefringence effects in the Sagnac loop, based on a polarization control system and a polarization insensitive phase modulator. The measurement results have shown the feasibility of quantum secret sharing over telecom fiber networks in the Sagnac configuration, using standard fiber telecom components.

  2. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commenced planning for Phase 1 testing--recommendations for bits and fluids.

  3. Practical gigahertz quantum key distribution robust against channel disturbance.

    PubMed

    Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; He, De-Yong; Hui, Cong; Hao, Peng-Lei; Fan-Yuan, Guan-Jie; Wang, Chao; Zhang, Li-Jun; Kuang, Jie; Liu, Shu-Feng; Zhou, Zheng; Wang, Yong-Gang; Guo, Guang-Can; Han, Zheng-Fu

    2018-05-01

    Quantum key distribution (QKD) provides an attractive solution for secure communication. However, channel disturbance severely limits its application when a QKD system is transferred from the laboratory to the field. Here a high-speed Faraday-Sagnac-Michelson QKD system is proposed that can automatically compensate for the channel polarization disturbance, which largely avoids the intermittent operation caused by abrupt environmental changes. Over a 50 km fiber channel with 30 Hz polarization scrambling, the practicality of this phase-coding QKD system was characterized with an interference fringe visibility of 99.35% over 24 h and a stable secure key rate of 306 kbit/s over seven days without active polarization alignment.

  4. Resonant Tunneling Analog-To-Digital Converter

    NASA Technical Reports Server (NTRS)

    Broekaert, T. P. E.; Seabaugh, A. C.; Hellums, J.; Taddiken, A.; Tang, H.; Teng, J.; vanderWagt, J. P. A.

    1995-01-01

    As sampling rates continue to increase, current analog-to-digital converter (ADC) device technologies will soon reach a practical resolution limit. This limit will most profoundly affect satellite and military systems used, for example, for electronic countermeasures, electronic and signal intelligence, and phased array radar. New device and circuit concepts will be essential for continued progress. We describe a novel, folded-architecture ADC which could enable a technological discontinuity in ADC performance. The converter technology is based on the integration of multiple resonant tunneling diodes (RTD) and hetero-junction transistors on an indium phosphide substrate. The RTD consists of a layered semiconductor hetero-structure AlAs/InGaAs/AlAs (2/4/2 nm) clad on either side by heavily doped InGaAs contact layers. Compact quantizers based around the RTD offer a reduction in the number of components and in the input capacitance. Because the component count and capacitance scale with the number of bits N, rather than with 2^N as in the flash ADC, speed can be significantly increased. A 4-bit, 2-GSps quantizer circuit is under development to evaluate the performance potential. Circuit designs for ADC conversion with a resolution of 6 bits at 25 GSps may be enabled by the resonant tunneling approach.
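
    The scaling argument can be made concrete with a small count. This is only a sketch: a flash ADC needs on the order of 2^N - 1 comparators, while the folded quantizer described above scales roughly with N; the exact device counts of the RTD quantizer are not given in the abstract.

      # Illustrative component-count comparison: flash ADC versus an N-scaling folded quantizer.
      for n_bits in (4, 6, 8):
          flash = 2 ** n_bits - 1
          folded = n_bits
          print(f"{n_bits}-bit: flash ~{flash} comparators vs ~{folded} folding stages")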

  5. Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.

    2003-01-01

    Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts with the average number of bits in a burst being dependent on both the ion LET and on the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second rank latch containing the data specifying the state of each switch in the 33x17 matrix.

  6. High speed, very large (8 megabyte) first in/first out buffer memory (FIFO)

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1989-01-01

    A fast FIFO (First In First Out) memory buffer capable of storing data at rates of 100 megabytes per second. The invention includes a data packer which concatenates small bit data words into large bit data words, a memory array having individual data storage addresses adapted to store the large bit data words, a data unpacker into which large bit data words from the array can be read and reconstructed into small bit data words, and a controller to control and keep track of the individual data storage addresses in the memory array into which data from the packer is being written and data to the unpacker is being read.
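
    A software analogue of the packer/unpacker pair is sketched below, assuming 8-bit input words concatenated into 32-bit words; the actual hardware word sizes are not stated in the abstract.

      # Pack four 8-bit words into one 32-bit word (first word in the most significant byte),
      # and unpack again -- a software sketch of the FIFO's packer/unpacker idea.
      def pack(words8):
          assert len(words8) == 4 and all(0 <= w < 256 for w in words8)
          return (words8[0] << 24) | (words8[1] << 16) | (words8[2] << 8) | words8[3]

      def unpack(word32):
          return [(word32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]

      w = pack([0x12, 0x34, 0x56, 0x78])
      print(hex(w), unpack(w))   # 0x12345678, [18, 52, 86, 120]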

  7. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  8. Deformation Recording Process In Polymer-Metal Bilayers And Its Use For Optical Storage

    NASA Astrophysics Data System (ADS)

    Cornet, Jean A.

    1983-11-01

    A non-antireflective polymer-metal bilayer structure, encapsulated inside a closed construction, is used for digital data storage in the Thomson-CSF Gigadisc. In this paper, a simple model is presented for microdeformation recording in the medium. This model enables a good understanding of the readout signal as a function of the recording power and leads to some practical consequences. Useful polymers and metallic layers are identified and the disc performance is reported. It is shown that recording using laser diodes can be performed at bit rates up to 14 Mbit/s with a laser power of 7 mW at the disc entry face, at a disc speed of 1200 rpm. Moreover, a working range of 4 mW, as defined by a 3 dB attenuation, is demonstrated. Discs from pilot production exhibit raw bit error rates at the level of 2×10⁻⁵. For usual environmental conditions, the disc behaviour is compatible with shelf and archival life on the scale of 10 years. Finally, the processes for both layer deposition and disc construction are easy and cost effective. It is concluded that the Gigadisc can successfully enter the marketplace today.

  9. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their use increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization or video games, because even the smallest alteration of a 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods, built to resist desired alterations of the model as well as malicious attacks intended to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR); the probability of hash collision is also calculated. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.

  10. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.

  11. Security of six-state quantum key distribution protocol with threshold detectors

    PubMed Central

    Kato, Go; Tamaki, Kiyoshi

    2016-01-01

    The security of quantum key distribution (QKD) is established by a security proof, and the security proof places some assumptions on the devices that constitute a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of a photon-number-resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is technologically demanding compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of implementing the six-state protocol. Here, we develop the security proof for the six-state protocol and show that we can use the threshold detector for the six-state protocol. Importantly, the bit error rate threshold for key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates the feasibility of the six-state protocol with practical devices. PMID:27443610

  12. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of the analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.

  13. Enhancing Heart-Beat-Based Security for mHealth Applications.

    PubMed

    Seepers, Robert M; Strydis, Christos; Sourdis, Ioannis; De Zeeuw, Chris I

    2017-01-01

    In heart-beat-based security, a security key is derived from the time difference between consecutive heart beats (the inter-pulse interval, IPI), which may, subsequently, be used to enable secure communication. While heart-beat-based security holds promise in mobile health (mHealth) applications, there currently exists no work that provides a detailed characterization of the delivered security in a real system. In this paper, we evaluate the strength of IPI-based security keys in the context of entity authentication. We investigate several aspects that should be considered in practice, including subjects with reduced heart-rate variability (HRV), different sensor-sampling frequencies, intersensor variability (i.e., how accurate each entity may measure heart beats) as well as average and worst-case-authentication time. Contrary to the current state of the art, our evaluation demonstrates that authentication using multiple, less-entropic keys may actually increase the key strength by reducing the effects of intersensor variability. Moreover, we find that the maximal key strength of a 60-bit key varies between 29.2 bits and only 5.7 bits, depending on the subject's HRV. To improve security, we introduce the inter-multi-pulse interval (ImPI), a novel method of extracting entropy from the heart by considering the time difference between nonconsecutive heart beats. Given the same authentication time, using the ImPI for key generation increases key strength by up to 3.4 × (+19.2 bits) for subjects with limited HRV, at the cost of an extended key-generation time of 4.8 × (+45 s).
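
    To illustrate the basic IPI-to-key-bit idea, a simplified sketch follows: real schemes add Gray coding and error correction, and the ImPI generalization described above considers nonconsecutive beats. The quantization step, the number of bits kept per interval and the IPI values are all hypothetical.

      # Derive key bits from inter-pulse intervals (IPIs) by keeping a few low-order
      # bits of each quantized interval. Simplified illustration only.
      def ipi_key_bits(ipis_ms, bits_per_ipi=4, quantum_ms=1.0):
          key = []
          for ipi in ipis_ms:
              level = int(round(ipi / quantum_ms))
              for b in range(bits_per_ipi):          # low-order bits carry most of the entropy
                  key.append((level >> b) & 1)
          return key

      print(ipi_key_bits([812.0, 845.3, 798.7, 831.1]))   # hypothetical IPIs in milliseconds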

  14. Smaller Footprint Drilling System for Deep and Hard Rock Environments; Feasibility of Ultra-High-Speed Diamond Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TerraTek, A Schlumberger Company

    2008-12-31

    The two phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill 'faster and deeper' possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the 'ultra-high rotary speed drilling system' is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm - usually well below 5,000 rpm. This document provides the progress through two phases of the program entitled 'Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling' for the period starting 30 June 2003 and concluding 31 March 2009. The accomplishments of Phases 1 and 2 are summarized as follows: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance (see Black and Judzis); (2) TerraTek designed and planned Phase 1 bench scale experiments (see Black and Judzis). Improvements were made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs were developed to provide a more consistent product with consistent performance. A test matrix for the final core bit testing program was completed; (3) TerraTek concluded small-scale cutting performance tests; (4) Analysis of Phase 1 data indicated that there is decreased specific energy as the rotational speed increases; (5) Technology transfer, as part of Phase 1, was accomplished with technical presentations to the industry (see Judzis, Boucher, McCammon, and Black); (6) TerraTek prepared a design concept for the high speed drilling test stand, which was planned around the proposed high speed mud motor concept. Alternative drives for the test stand were explored; a high speed hydraulic motor concept was finally used; (7) The high speed system was modified to accommodate larger drill bits than originally planned; (8) Prototype mud turbine motors and the high speed test stand were used to drive the drill bits at high speed; (9) Three different rock types were used during the testing: Sierra White granite, Crab Orchard sandstone, and Colton sandstone. The drill bits used included diamond impregnated bits, a polycrystalline diamond compact (PDC) bit, a thermally stable PDC (TSP) bit, and a hybrid TSP and natural diamond bit; and (10) The drill bits were run at rotary speeds up to 5500 rpm and weight on bit (WOB) to 8000 lbf. During Phase 2, the ROP as measured in depth of cut per bit revolution generally increased with increased WOB. The performance was mixed with increased rotary speed, with the depth of cut of the impregnated drill bit generally increasing and that of the TSP and hybrid TSP drill bits generally decreasing. The ROP in ft/hr generally increased with all bits with increased WOB and rotary speed. The mechanical specific energy generally improved (decreased) with increased WOB and was mixed with increased rotary speed.

  15. Increasing BCI communication rates with dynamic stopping towards more practical use: an ALS study

    NASA Astrophysics Data System (ADS)

    Mainsah, B. O.; Collins, L. M.; Colwell, K. A.; Sellers, E. W.; Ryan, D. B.; Caves, K.; Throckmorton, C. S.

    2015-02-01

    Objective. The P300 speller is a brain-computer interface (BCI) that can possibly restore communication abilities to individuals with severe neuromuscular disabilities, such as amyotrophic lateral sclerosis (ALS), by exploiting elicited brain signals in electroencephalography (EEG) data. However, accurate spelling with BCIs is slow due to the need to average data over multiple trials to increase the signal-to-noise ratio (SNR) of the elicited brain signals. Probabilistic approaches to dynamically control data collection have shown improved performance in non-disabled populations; however, validation of these approaches in a target BCI user population has not occurred. Approach. We have developed a data-driven algorithm for the P300 speller based on Bayesian inference that improves spelling time by adaptively selecting the number of trials based on the acute SNR of a user’s EEG data. We further enhanced the algorithm by incorporating information about the user’s language. In this current study, we test and validate the algorithms online in a target BCI user population, by comparing the performance of the dynamic stopping (DS) (or early stopping) algorithms against the current state-of-the-art method, static data collection, where the amount of data collected is fixed prior to online operation. Main results. Results from online testing of the DS algorithms in participants with ALS demonstrate a significant increase in communication rate as measured in bits/min (100-300%), and theoretical bit rate (100-550%), while maintaining selection accuracy. Participants also overwhelmingly preferred the DS algorithms. Significance. We have developed a viable BCI algorithm that has been tested in a target BCI population which has the potential for translation to improve BCI speller performance towards more practical use for communication.
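
    The core of a dynamic-stopping rule of this kind can be sketched as follows. The prior, the confidence threshold, the number of characters and the per-trial likelihoods are all hypothetical, and the actual algorithm additionally folds in a language model.

      # Stop collecting P300 trials once the posterior probability of some character
      # exceeds a confidence threshold (illustrative sketch of dynamic stopping).
      # trial_likelihoods[t][c] is the likelihood of character c given trial t's EEG scores.
      def dynamic_stop(trial_likelihoods, n_chars=36, threshold=0.95, max_trials=10):
          posterior = [1.0 / n_chars] * n_chars            # uniform prior over characters
          for t, likelihood in enumerate(trial_likelihoods[:max_trials], start=1):
              posterior = [p * l for p, l in zip(posterior, likelihood)]
              s = sum(posterior)
              posterior = [p / s for p in posterior]       # Bayesian update and renormalize
              best = max(posterior)
              if best >= threshold:
                  return posterior.index(best), t          # selected character, trials used
          return posterior.index(max(posterior)), max_trials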

  16. Increasing BCI communication rates with dynamic stopping towards more practical use: an ALS study.

    PubMed

    Mainsah, B O; Collins, L M; Colwell, K A; Sellers, E W; Ryan, D B; Caves, K; Throckmorton, C S

    2015-02-01

    The P300 speller is a brain-computer interface (BCI) that can possibly restore communication abilities to individuals with severe neuromuscular disabilities, such as amyotrophic lateral sclerosis (ALS), by exploiting elicited brain signals in electroencephalography (EEG) data. However, accurate spelling with BCIs is slow due to the need to average data over multiple trials to increase the signal-to-noise ratio (SNR) of the elicited brain signals. Probabilistic approaches to dynamically control data collection have shown improved performance in non-disabled populations; however, validation of these approaches in a target BCI user population has not occurred. We have developed a data-driven algorithm for the P300 speller based on Bayesian inference that improves spelling time by adaptively selecting the number of trials based on the acute SNR of a user's EEG data. We further enhanced the algorithm by incorporating information about the user's language. In this current study, we test and validate the algorithms online in a target BCI user population, by comparing the performance of the dynamic stopping (DS) (or early stopping) algorithms against the current state-of-the-art method, static data collection, where the amount of data collected is fixed prior to online operation. Results from online testing of the DS algorithms in participants with ALS demonstrate a significant increase in communication rate as measured in bits/min (100-300%), and theoretical bit rate (100-550%), while maintaining selection accuracy. Participants also overwhelmingly preferred the DS algorithms. We have developed a viable BCI algorithm that has been tested in a target BCI population which has the potential for translation to improve BCI speller performance towards more practical use for communication.

  17. Increasing BCI Communication Rates with Dynamic Stopping Towards More Practical Use: An ALS Study

    PubMed Central

    Mainsah, B. O.; Collins, L. M.; Colwell, K. A.; Sellers, E. W.; Ryan, D. B.; Caves, K.; Throckmorton, C. S.

    2015-01-01

    Objective The P300 speller is a brain-computer interface (BCI) that can possibly restore communication abilities to individuals with severe neuromuscular disabilities, such as amyotrophic lateral sclerosis (ALS), by exploiting elicited brain signals in electroencephalography data. However, accurate spelling with BCIs is slow due to the need to average data over multiple trials to increase the signal-to-noise ratio of the elicited brain signals. Probabilistic approaches to dynamically control data collection have shown improved performance in non-disabled populations; however, validation of these approaches in a target BCI user population has not occurred. Approach We have developed a data-driven algorithm for the P300 speller based on Bayesian inference that improves spelling time by adaptively selecting the number of trials based on the acute signal-to-noise ratio of a user’s electroencephalography data. We further enhanced the algorithm by incorporating information about the user’s language. In this current study, we test and validate the algorithms online in a target BCI user population, by comparing the performance of the dynamic stopping (or early stopping) algorithms against the current state-of-the-art method, static data collection, where the amount of data collected is fixed prior to online operation. Main Results Results from online testing of the dynamic stopping algorithms in participants with ALS demonstrate a significant increase in communication rate as measured in bits/sec (100-300%), and theoretical bit rate (100-550%), while maintaining selection accuracy. Participants also overwhelmingly preferred the dynamic stopping algorithms. Significance We have developed a viable BCI algorithm that has been tested in a target BCI population which has the potential for translation to improve BCI speller performance towards more practical use for communication. PMID:25588137

  18. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-03-23

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  19. Practical quantum private query of blocks based on unbalanced-state Bennett-Brassard-1984 quantum-key-distribution protocol.

    PubMed

    Wei, Chun-Yan; Gao, Fei; Wen, Qiao-Yan; Wang, Tian-Yin

    2014-12-18

    Until now, the only kind of practical quantum private query (QPQ), quantum-key-distribution (QKD)-based QPQ, focuses on the retrieval of a single bit. In fact, a meaningful message is generally composed of multiple adjacent bits (i.e., a multi-bit block). To obtain a message a_1a_2···a_l from the database, the user Alice has to query l times to get each a_i. In this case, the server Bob could compromise Alice's privacy once he obtains the address she queried in any of the l queries, since each a_i contributes to the message Alice retrieves. Apparently, the longer the retrieved message is, the worse the user privacy becomes. To solve this problem, via an unbalanced-state technique and based on a variant of the multi-level BB84 protocol, we present a protocol for QPQ of blocks, which allows the user to retrieve a multi-bit block from the database in one query. Our protocol is somewhat like the high-dimension version of the first QKD-based QPQ protocol proposed by Jacobi et al., but some nontrivial modifications are necessary.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modeste Nguimdo, Romain, E-mail: Romain.Nguimdo@vub.ac.be; Tchitnga, Robert; Woafo, Paul

    We numerically investigate the possibility of using coupling to increase the complexity of the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that the complex behaviors generated in such coupled systems, together with post-processing, are suitable for generating bit streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) on such chaotic signals, each point being simultaneously converted to 16 bits (or 8 bits), we find that the binary sequence constructed by keeping the 10 (or 2) least significant bits passes statistical tests of randomness, meaning that bit streams with random properties can be achieved with an overall bit rate of up to 10 × 100 Mb/s = 1 Gbit/s (or 2 × 100 Mb/s = 200 Mbit/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap compared to optical and electro-optical systems.
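
    The post-processing step described above, keeping only the least significant bits of each digitized sample, can be sketched as follows; the sample values are illustrative. At the 100-Msample/s extraction rate quoted in the abstract, keeping 10 bits per sample gives the stated 1 Gbit/s.

      # Build a bit stream from 16-bit samples of a chaotic signal by keeping only the
      # k least significant bits of each sample (post-processing sketch).
      def lsb_bitstream(samples_16bit, keep_bits=10):
          bits = []
          for s in samples_16bit:
              for b in range(keep_bits):
                  bits.append((s >> b) & 1)
          return bits

      samples = [0x3A7F, 0x1C42, 0x7FD3]        # hypothetical 16-bit ADC values
      print(lsb_bitstream(samples, keep_bits=10))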

  1. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, aiming to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to show that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.

  2. Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2016-01-01

    Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate, JPEG-formatted color images may allow images to be compressed further while maintaining equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3 while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.

  3. A microcomputer-based data acquisition system for ECG, body and ambient temperatures measurement during bathing.

    PubMed

    Uokawa, Y; Yonezawa, Y; Caldwell, W M; Hahn, A W

    2000-01-01

    A data acquisition system employing a low power 8 bit microcomputer has been developed for heart rate variability monitoring before, during and after bathing. The system consists of three integral chest electrodes, two temperature sensors, an instrumentation amplifier, a low power 8-bit single chip microcomputer (SMC) and a 4 MB compact flash memory (CFM). The ECG from the electrodes is converted to an 8-bit digital format at a 1 ms rate by an A/D converter in the SMC. Both signals from the body and ambient temperature sensors are converted to an 8-bit digital format every 1 second. These data are stored by the CFM. The system is powered by a rechargeable 3.6 V lithium battery. The 4 x 11 x 1 cm system is encapsulated in epoxy and silicone, yielding a total volume of 44 cc. The weight is 100 g.
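
    A back-of-the-envelope check of how long the 4 MB flash memory lasts at the stated sampling rates (a sketch that ignores any framing overhead, which the abstract does not specify, and takes 4 MB as 4 × 1024 × 1024 bytes):

      # ECG: 8 bits every 1 ms; two temperature channels: 8 bits each every 1 s.
      ecg_bytes_per_s = 1000             # 1 byte per 1-ms ECG sample
      temp_bytes_per_s = 2               # body + ambient, 1 byte each per second
      total_per_s = ecg_bytes_per_s + temp_bytes_per_s

      capacity_bytes = 4 * 1024 * 1024   # 4 MB compact flash memory
      seconds = capacity_bytes / total_per_s
      print(f"~{seconds / 60:.0f} minutes of recording")   # roughly 70 minutes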

  4. A new optical post-equalization based on self-imaging

    NASA Astrophysics Data System (ADS)

    Guizani, S.; Cheriti, A.; Razzak, M.; Boulslimani, Y.; Hamam, H.

    2005-09-01

    Driven by the world's growing need for communication bandwidth, progress is constantly being reported in building newer fibers capable of handling the rapid increase in traffic. However, building an optical fiber link is a major investment, one that is very expensive to replace. A major impairment that restricts the achievement of higher bit rates over standard single mode fiber is chromatic dispersion. This is particularly problematic for systems operating in the 1550 nm band, where the dispersion-limited reach decreases rapidly in inverse proportion to the square of the bit rate. For the first time, to the best of our knowledge, this paper illustrates a new all-optical technique to post-compensate chromatic dispersion in fiber using the temporal Talbot effect at rates exceeding 40 Gbit/s. We propose a new optical post-equalization solution based on the self-imaging (Talbot) effect.
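
    The inverse-square behaviour mentioned above can be illustrated with the usual scaling estimate for the dispersion-limited reach, L ∝ 1/(|D|·B²). The proportionality constant below is only a rough rule of thumb for NRZ signals on standard fiber; it is an assumption for illustration, not a value taken from the paper.

      # Rough dispersion-limited reach scaling for standard single-mode fiber at 1550 nm.
      # Rule of thumb (assumed): about 1000 ps/nm of accumulated dispersion is tolerable
      # at 10 Gbit/s, and the tolerance scales as 1/B**2.
      D_PS_PER_NM_KM = 17.0              # typical SMF dispersion at 1550 nm

      def reach_km(bit_rate_gbps, tolerance_ps_per_nm_at_10g=1000.0):
          tolerance = tolerance_ps_per_nm_at_10g * (10.0 / bit_rate_gbps) ** 2
          return tolerance / D_PS_PER_NM_KM

      for b in (10, 40):
          print(f"{b} Gbit/s -> ~{reach_km(b):.0f} km uncompensated reach")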

  5. An overview of Space Communication Artificial Intelligence for Link Evaluation Terminal (SCAILET) Project

    NASA Technical Reports Server (NTRS)

    Shahidi, Anoosh K.; Schlegelmilch, Richard F.; Petrik, Edward J.; Walters, Jerry L.

    1991-01-01

    A software application to assist end-users of the link evaluation terminal (LET) for satellite communications is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving (220/110 Mbps) capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit-pattern as a modulated signal to the satellite. The HBR LET can determine the bit error rate (BER) under various atmospheric conditions by comparing the transmitted bit pattern with the received bit pattern. An algorithm for power augmentation will be applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions.
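
    The comparison itself is simple; the sketch below shows the bit-error-rate calculation such a terminal performs on a known pattern. The patterns and the injected error are hypothetical.

      # BER = number of mismatched bits / total bits compared.
      def bit_error_rate(transmitted, received):
          assert len(transmitted) == len(received)
          errors = sum(t != r for t, r in zip(transmitted, received))
          return errors / len(transmitted)

      tx = [1, 0, 1, 1, 0, 0, 1, 0] * 1000
      rx = tx.copy()
      rx[5] ^= 1                      # inject a single bit error
      print(bit_error_rate(tx, rx))   # 1 / 8000 = 1.25e-4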

  6. A wide bandwidth CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K.; Wallace, R. W.; Robinson, C. R.

    1978-01-01

    A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low power digital storage and particularly to the problem of interfacing widely varying data rates. CCD shift-register memories (8-kbit) were used to construct a feasibility-model 128-kbit buffer memory system. Serial data at rates between 150 kHz and 4.0 MHz can be stored in 4-kbit, randomly accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not increase in proportion to memory capacity, the power requirements per bit of storage can be reduced significantly in a larger system.

  7. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    PubMed

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs, based on mutually coupled chaotic lasers, are synchronized. Using information theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  8. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  9. Single and Multi-Pulse Low-Energy Conical Theta Pinch Inductive Pulsed Plasma Thruster Performance

    NASA Technical Reports Server (NTRS)

    Hallock, Ashley K.; Martin, Adam; Polzin, Kurt; Kimberlin, Adam; Eskridge, Richard

    2013-01-01

    Conical theta pinch (CTP) IPPTs were fabricated and tested at cone angles of 20deg, 38deg, and 60deg, and direct single-pulse impulse bit measurements were performed with continuous gas flow. Single-pulse performance was highest for the 38deg angle, with an impulse bit of approx. 1 mN-s for both argon and xenon. Estimated efficiencies were low, but not unexpectedly so based on historical data trends and the direction of the force vector in the CTP. A capacitor charging system was assembled to provide rapid recharging of the capacitor bank, permitting repetition-rate operation. The IPPT was operated at a repetition rate of 5 Hz at a maximum average power of 2.5 kW, representing, to our knowledge, the highest average power for a repetitively pulsed thruster. The average thrust in repetition-rate mode (at 5 kV, 75 sccm argon) was greater than simply multiplying the single-pulse impulse bit by the repetition rate.

  10. Simultaneous classical communication and quantum key distribution using continuous variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing

    Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
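
    A toy model of the encoding idea, with the classical bit carried by a large displacement and the key material by a small Gaussian modulation on top: the amplitudes, variances and decision rule below are assumptions for illustration only, not the paper's parameters or its security analysis.

      import random

      # Encode a classical bit as +/-A and add a small Gaussian modulation for CV-QKD.
      A, SIGMA_QKD, SIGMA_NOISE = 10.0, 0.5, 0.2      # hypothetical amplitudes/variances

      def encode(bit):
          qkd_value = random.gauss(0.0, SIGMA_QKD)     # Gaussian random number for key distillation
          return (A if bit else -A) + qkd_value, qkd_value

      def decode(quadrature):
          bit = quadrature > 0.0                        # classical bit from the sign
          residual = quadrature - (A if bit else -A)    # residual carries the QKD modulation
          return bit, residual

      x, qkd = encode(1)
      x += random.gauss(0.0, SIGMA_NOISE)               # channel/receiver noise
      print(decode(x), "sent QKD value:", round(qkd, 3))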

  11. Simultaneous classical communication and quantum key distribution using continuous variables

    DOE PAGES

    Qi, Bing

    2016-10-26

    Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  12. Hash function based on chaotic map lattices.

    PubMed

    Wang, Shihong; Hu, Gang

    2007-06-01

    A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
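
    A minimal illustration of hashing with a coupled chaotic map lattice follows. The coupling scheme, message injection and output mapping below are simplified assumptions for illustration and do not reproduce the authors' construction or its security properties.

      # Toy hash based on a ring of coupled logistic maps. Illustrative only.
      def logistic(x, r=3.99):
          return r * x * (1.0 - x)

      def cml_hash(message: bytes, n_sites=8, rounds=32):
          state = [((i + 1) * 0.1) % 1.0 for i in range(n_sites)]     # fixed initial lattice
          eps = 0.3                                                    # coupling strength
          for byte in message:
              state[0] = (state[0] + (byte + 1) / 257.0) % 1.0         # inject one message byte
              for _ in range(rounds):                                  # let the lattice mix
                  new = []
                  for i in range(n_sites):
                      left = state[(i - 1) % n_sites]
                      new.append((1 - eps) * logistic(state[i]) + eps * logistic(left))
                  state = new
          return bytes(int(x * 255) & 0xFF for x in state)             # 64-bit digest here

      print(cml_hash(b"hello").hex(), cml_hash(b"hellp").hex())        # small input change, very different digest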

  13. Hash function based on chaotic map lattices

    NASA Astrophysics Data System (ADS)

    Wang, Shihong; Hu, Gang

    2007-06-01

    A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.

  14. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    NASA Astrophysics Data System (ADS)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra-prediction process of the H.264 video coding standard is used to code the first frame (the intra frame) of a video and achieves better coding efficiency than earlier video coding standards. Intra-frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, which increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode-decision algorithms for intra-frame coding. Previous fast mode-decision intra-prediction algorithms for H.264, based on various techniques, suffered from increased bit rate and degraded picture quality (PSNR) across quantization parameters: they reduced computational complexity or saved encoding time, but at the cost of higher bit rate and lower picture quality. To avoid this penalty, a better approach was developed. This paper applies a Gaussian pulse to intra-frame coding with the diagonal down-left intra-prediction mode to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, the Gaussian pulse is multiplied with the 4x4 frequency-domain coefficients of each 4x4 sub-macroblock of the current frame before quantization. Multiplying each 4x4 integer-transformed block by the Gaussian pulse scales the coefficients in a known, controllable and reversible manner without intermixing them, which prevents the picture from being badly degraded at higher quantization parameters. The proposed work was implemented in MATLAB and the JM 18.6 reference software, and PSNR, bit rate and compression of intra frames were measured for YUV video sequences at QCIF resolution under different quantization parameters for the diagonal down-left intra-prediction mode. The simulation results are tabulated and compared with the previous algorithm of Tian et al.; the proposed algorithm reduces bit rate by 30.98% on average while maintaining consistent picture quality for QCIF sequences.
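
    The core operation can be sketched as an element-wise scaling of each 4x4 transform block by a Gaussian-shaped mask before quantization, inverted after dequantization. The mask width (sigma) and quantization step below are assumptions for illustration, not the values used in the paper, and the sketch omits the H.264 integer transform and entropy coding.

      import numpy as np

      # Element-wise Gaussian scaling of a 4x4 transform block before quantization,
      # undone after dequantization (illustrative; sigma and qstep are assumptions).
      def gaussian_mask(size=4, sigma=2.0):
          idx = np.arange(size)
          g = np.exp(-(idx ** 2) / (2.0 * sigma ** 2))
          return np.outer(g, g)                      # separable 2-D Gaussian, peak of 1 at DC

      def encode_block(coeffs4x4, qstep=8.0, sigma=2.0):
          return np.round(coeffs4x4 * gaussian_mask(4, sigma) / qstep)

      def decode_block(levels4x4, qstep=8.0, sigma=2.0):
          return levels4x4 * qstep / gaussian_mask(4, sigma)   # reversible up to quantization error

      block = np.array([[120., 30., 8., 2.],
                        [ 25., 10., 3., 1.],
                        [  6.,  3., 1., 0.],
                        [  2.,  1., 0., 0.]])
      print(decode_block(encode_block(block)))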

  15. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit-depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit-depth images via 8-bit-depth codecs in the following way. First, an input 16-bit-depth image is mapped into two 8-bit-depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit-depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
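
    The byte-plane split described above amounts to a few lines of array arithmetic (a sketch only; the codec calls themselves and the choice of compression parameters are omitted):

      import numpy as np

      # Split a 16-bit image into MSB and LSB 8-bit images, then recombine.
      def split_planes(img16):
          msb = (img16 >> 8).astype(np.uint8)      # most significant bytes
          lsb = (img16 & 0xFF).astype(np.uint8)    # least significant bytes
          return msb, lsb

      def merge_planes(msb, lsb):
          return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

      img = np.array([[0, 1023], [40000, 65535]], dtype=np.uint16)
      msb, lsb = split_planes(img)
      assert np.array_equal(merge_planes(msb, lsb), img)   # lossless round trip before coding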

  16. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
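
    For a quick sense of scale, the approximation can be evaluated directly (the numbers below are illustrative assumptions, not values from the paper):

        # Illustrative only: assumed code parameters, not values from the paper.
        d_H = 6        # distance-related factor in the approximation
        N = 63         # block length
        P_s = 1e-4     # block error probability at the operating SNR
        P_b = (d_H / N) * P_s
        print(f"approximate bit error probability: {P_b:.2e}")   # about 9.5e-06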

  17. Skyrmion-skyrmion and skyrmion-edge repulsions in skyrmion-based racetrack memory

    NASA Astrophysics Data System (ADS)

    Zhang, Xichao; Zhao, G. P.; Fangohr, Hans; Liu, J. Ping; Xia, W. X.; Xia, J.; Morvan, F. J.

    2015-01-01

    Magnetic skyrmions are promising for building next-generation magnetic memories and spintronic devices due to their stability, small size and the extremely low currents needed to move them. In particular, skyrmion-based racetrack memory is attractive for information technology, where skyrmions are used to store information as data bits instead of traditional domain walls. Here we numerically demonstrate the impacts of skyrmion-skyrmion and skyrmion-edge repulsions on the feasibility of skyrmion-based racetrack memory. The reliable and practicable spacing between consecutive skyrmionic bits on the racetrack as well as the ability to adjust it are investigated. Clogging of skyrmionic bits is found at the end of the racetrack, leading to the reduction of skyrmion size. Further, we demonstrate an effective and simple method to avoid the clogging of skyrmionic bits, which ensures the elimination of skyrmionic bits beyond the reading element. Our results give guidance for the design and development of future skyrmion-based racetrack memory.

  18. Deterministic MDI QKD with two secret bits per shared entangled pair

    NASA Astrophysics Data System (ADS)

    Zebboudj, Sofia; Omar, Mawloud

    2018-03-01

    Although quantum key distribution schemes have been proven theoretically secure, they are based on assumptions about the devices that are not yet satisfied with today's technology. The measurement-device-independent scheme has been proposed to shorten the gap between theory and practice by removing all detector side-channel attacks. On the other hand, two-way quantum key distribution schemes have been proposed to raise the secret key generation rate. In this paper, we propose a new quantum key distribution scheme able to achieve a relatively high secret key generation rate based on two-way quantum key distribution that also inherits the robustness of the measurement-device-independent scheme against detector side-channel attacks.

  19. Field trial of differential-phase-shift quantum key distribution using polarization independent frequency up-conversion detectors.

    PubMed

    Honjo, T; Yamamoto, S; Yamamoto, T; Kamada, H; Nishida, Y; Tadanaga, O; Asobe, M; Inoue, K

    2007-11-26

    We report a field trial of differential phase shift quantum key distribution (QKD) using polarization independent frequency up-conversion detectors. A frequency up-conversion detector is a promising device for achieving a high key generation rate when combined with a high clock rate QKD system. However, its polarization dependence prevents it from being applied to practical QKD systems. In this paper, we employ a modified polarization diversity configuration to eliminate the polarization dependence. Applying this method, we performed a long-term stability test using a 17.6-km installed fiber. We successfully demonstrated stable operation for 6 hours and achieved a sifted key generation rate of 120 kbps and an average quantum bit error rate of 3.14 %. The sifted key generation rate was not the estimated value but the effective value, which means that the sifted key was continuously generated at a rate of 120 kbps for 6 hours.

  20. Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications

    DTIC Science & Technology

    2010-09-01

    Only fragments of this report were recovered. The excerpt evaluates the bit error rate performance of four proposed CPM modes, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver. The recoverable captions are: Figure 1, bit error rate performance of various CPM modes with coherent and noncoherent detection; Table 2, a summary of parameters with coherent results (crosses) and noncoherent results (diamonds); a reference to Figure 3 showing a corresponding relationship is truncated in the excerpt.

  1. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Astrophysics Data System (ADS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
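
    As a rough sanity check on figures like these, the sketch below (assuming ideal coherent BPSK over an additive white Gaussian noise channel; the numbers are illustrative, not from the paper) finds the Eb/N0 needed for a 10^-5 bit-error rate without coding and subtracts the quoted 2.5 dB gain:

        import numpy as np
        from scipy.special import erfc
        from scipy.optimize import brentq

        def bpsk_ber(ebno_db):
            # Uncoded BPSK bit error rate over an AWGN channel.
            ebno = 10 ** (ebno_db / 10.0)
            return 0.5 * erfc(np.sqrt(ebno))

        # Eb/N0 giving a 1e-5 bit error rate for uncoded BPSK (about 9.6 dB).
        uncoded_db = brentq(lambda x: bpsk_ber(x) - 1e-5, 0.0, 15.0)
        coded_db = uncoded_db - 2.5   # apply the quoted hard-decision coding gain
        print(f"uncoded: {uncoded_db:.2f} dB, with 2.5 dB gain: {coded_db:.2f} dB")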

  2. Subjective quality evaluation of low-bit-rate video

    NASA Astrophysics Data System (ADS)

    Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.

    2001-06-01

    A subjective quality evaluation was performed to quantify viewer responses to visual defects that appear in low bit rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion compensated encoders - Sorenson Video, H.263+ and a wavelet-based coder - operating at five bit/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers were collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.

  3. A Dynamic Model for C3 Information Incorporating the Effects of Counter C3

    DTIC Science & Technology

    1980-12-01

    Only fragments of this report were recovered. The excerpt describes a simple first-order linear model of C3 information in which H = 0 when the birth and death rates exactly cancel one another; the rates are expressed per unit time and refer to the average behavior of the entire system ensemble, much as species birth and death rates are typically measured in births (or deaths) per unit time. The recovered list of model terms includes uncertainty death rates resulting from data inputs (bits/bit per unit time) and counter-C3 terms; the surrounding equations are not recoverable.

  4. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-01-01

    As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3, ν = 6 (with 2^ν = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme that has as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16 state rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.

  5. Enhancing Performance and Bit Rates in a Brain-Computer Interface System With Phase-to-Amplitude Cross-Frequency Coupling: Evidences From Traditional c-VEP, Fast c-VEP, and SSVEP Designs.

    PubMed

    Dimitriadis, Stavros I; Marimpis, Avraam D

    2018-01-01

    A brain-computer interface (BCI) is a channel of communication that transforms brain activity into specific commands for manipulating a personal computer or other home or electrical devices. In other words, a BCI is an alternative way of interacting with the environment by using brain activity instead of muscles and nerves. For that reason, BCI systems are of high clinical value for targeted populations suffering from neurological disorders. In this paper, we present a new processing approach in three publicly available BCI data sets: (a) a well-known multi-class (N = 6) coded-modulated Visual Evoked potential (c-VEP)-based BCI system for able-bodied and disabled subjects; (b) a multi-class (N = 32) c-VEP with slow and fast stimulus representation; and (c) a steady-state Visual Evoked potential (SSVEP) multi-class (N = 5) flickering BCI system. Estimating cross-frequency coupling (CFC) and namely δ-θ [δ: (0.5-4 Hz), θ: (4-8 Hz)] phase-to-amplitude coupling (PAC) within sensor and across experimental time, we succeeded in achieving high classification accuracy and Information Transfer Rates (ITR) in the three data sets. Our approach outperformed the originally presented ITR on the three data sets. The bit rates obtained for both the disabled and able-bodied subjects reached the fastest reported level of 324 bits/min with the PAC estimator. Additionally, our approach outperformed alternative signal features such as the relative power (29.73 bits/min) and raw time series analysis (24.93 bits/min) and also the original reported bit rates of 10-25 bits/min. In the second data set, we succeeded in achieving an average ITR of 124.40 ± 11.68 for the slow 60 Hz and an average ITR of 233.99 ± 15.75 for the fast 120 Hz. In the third data set, we succeeded in achieving an average ITR of 106.44 ± 8.94. Current methodology outperforms any previous methodologies applied to each of the three freely available BCI datasets.
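
    For reference, bit rates such as these are usually reported as the Wolpaw information transfer rate, which can be computed as below (a generic sketch; the class count, accuracy and selection time are example values, not taken from the data sets above):

        import numpy as np

        def itr_bits_per_min(n_classes, accuracy, trial_seconds):
            # Wolpaw ITR: bits per selection scaled by selections per minute.
            p = accuracy
            bits = np.log2(n_classes)
            if 0 < p < 1:
                bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n_classes - 1))
            return bits * (60.0 / trial_seconds)

        # Example: a 6-class system at 95% accuracy with 2 s per selection.
        print(f"{itr_bits_per_min(6, 0.95, 2.0):.1f} bits/min")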

  6. Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.

    PubMed

    Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W

    2014-01-27

    We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.

  7. Adaptive intercolor error prediction coder for lossless color (RGB) picture compression

    NASA Astrophysics Data System (ADS)

    Mann, Y.; Peretz, Y.; Mitchell, Harvey B.

    2001-09-01

    Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
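
    A minimal sketch of the intercolor-error-prediction idea (not the authors' exact algorithm; the spatial predictor and stand-in image are assumptions): the spatial prediction error of a reference plane, e.g. green, is used to predict the error of another plane, and only the difference would be entropy coded.

        import numpy as np

        def left_prediction_error(plane):
            # Simple spatial predictor: each pixel predicted by its left neighbour.
            pred = np.zeros_like(plane)
            pred[:, 1:] = plane[:, :-1]
            return plane.astype(np.int32) - pred.astype(np.int32)

        def iep_residual(target_plane, reference_plane):
            # Intercolor error prediction: the reference plane's spatial prediction
            # error predicts the target plane's error; the remaining residual is
            # typically smaller in magnitude and cheaper to entropy code.
            return left_prediction_error(target_plane) - left_prediction_error(reference_plane)

        rgb = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)   # stand-in image
        residual_r = iep_residual(rgb[:, :, 0], rgb[:, :, 1])          # R predicted from G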

  8. Study on the Effect of Diamond Grain Size on Wear of Polycrystalline Diamond Compact Cutter

    NASA Astrophysics Data System (ADS)

    Abdul-Rani, A. M.; Che Sidid, Adib Akmal Bin; Adzis, Azri Hamim Ab

    2018-03-01

    The drilling operation is one of the most crucial steps in the oil and gas industry, as it proves the availability of oil and gas under the ground. The Polycrystalline Diamond Compact (PDC) bit is a type of bit which is gaining popularity due to its high Rate of Penetration (ROP). However, a PDC bit can easily wear out, especially when drilling hard rock. The purpose of this study is to identify the relationship between the diamond grain size and the wear rate of the PDC cutter using a simulation-based study with FEA software (ABAQUS). The wear rates of a PDC cutter with different diamond grain sizes were calculated from simulated cuts of the cutters against granite. The result of this study shows that the smaller the diamond grain size, the higher the wear resistance of the PDC cutter.

  9. Secure Communication via a Recycling of Attenuated Classical Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, IV, Amos M.

    We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus, making continuous secure two-way communication via one-time pads practical.

  11. Methodology and method and apparatus for signaling with capacity optimized constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2011-01-01

    Communication systems are described in which the transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.

  12. Laboratory Equipment for Investigation of Coring Under Mars-like Conditions

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Cooper, G.

    2004-12-01

    To develop a suitable drill bit and set of operating conditions for Mars sample coring applications, it is essential to make tests under conditions that match those of the mission. The goal of the laboratory test program was to determine the drilling performance of diamond-impregnated bits under simulated Martian conditions, particularly those of low pressure and low temperature in a carbon dioxide atmosphere. For this purpose, drilling tests were performed in a vacuum chamber kept at a pressure of 5 torr. Prior to drilling, a rock, soil or a clay sample was cooled down to minus 80 degrees Celsius (Zacny et al, 2004). Thus, all Martian conditions, except the low gravity were simulated in the controlled environment. Input drilling parameters of interest included the weight on bit and rotational speed. These two independent variables were controlled from a PC station. The dependent variables included the bit reaction torque, the depth of the bit inside the drilled hole and the temperatures at various positions inside the drilled sample, in the center of the core as it was being cut and at the bit itself. These were acquired every second by a data acquisition system. Additional information such as the rate of penetration and the drill power were calculated after the test was completed. The weight of the rock and the bit prior to and after the test were measured to aid in evaluating the bit performance. In addition, the water saturation of the rock was measured prior to the test. Finally, the bit was viewed under the Scanning Electron Microscope and the Stereo Optical Microscope. The extent of the bit wear and its salient features were captured photographically. The results revealed that drilling or coring under Martian conditions in a water saturated rock is different in many respects from drilling on Earth. This is mainly because the Martian atmospheric pressure is in the vicinity of the pressure at the triple point of water. Thus ice, heated by contact with the rotating bit, sublimed and released water vapor. The volumetric expansion of ice turning into a vapor was over 150 000 times. This continuously generated volume of gas effectively cleared the freeze-dried rock cuttings from the bottom of the hole. In addition, the subliming ice provided a powerful cooling effect that kept the bit cold and preserved the core in its original state. Keeping the rock core below freezing also reduced drastically the chances of cross contamination. To keep the bit cool in near vacuum conditions where convective cooling is poor, some intermittent stops would have to be made. Under virtually the same drilling conditions, coring under Martian low temperature and pressure conditions consumed only half the power while doubling the rate of penetration as compared to drilling under Earth atmospheric conditions. However, the rate of bit wear was much higher under Martian conditions (Zacny and Cooper, 2004) References Zacny, K. A., M. C. Quayle, and G. A. Cooper (2004), Laboratory drilling under Martian conditions yields unexpected results, J. Geophys. Res., 109, E07S16, doi:10.1029/2003JE002203. Zacny, K. A., and G. A. Cooper (2004), Investigation of diamond-impregnated drill bit wear while drilling under Earth and Mars conditions, J. Geophys. Res., 109, E07S10, doi:10.1029/2003JE002204. Acknowledgments The research supported by the NASA Astrobiology, Science and Technology Instrument Development (ASTID) program.

  13. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would need 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. A lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a scheme of compression based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach for the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performances of the new encoder to those of the first encoder.

  14. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  15. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    NASA Technical Reports Server (NTRS)

    Swift, G.

    2002-01-01

    JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  16. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high speed data transmission. However, the bit error rate of a digital signal receiver is high under poor channel conditions. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of 2DPSK signals received by coherent demodulation. According to the theory of SR, a nonlinear receiver model is established, which is used to receive 2DPSK signals under small signal-to-noise ratio (SNR) circumstances (between -15 dB and 5 dB), and compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the nonlinear system model based on SR declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
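
    A minimal sketch of the kind of bistable stochastic-resonance element such a receiver is built around (a generic overdamped double-well model with assumed parameters; the paper's actual receiver model and parameter values are not reproduced here):

        import numpy as np

        def sr_filter(x, a=1.0, b=1.0, dt=1e-3):
            # Overdamped bistable system dy/dt = a*y - b*y**3 + x(t),
            # integrated with the Euler method; x is the noisy input.
            y = np.zeros_like(x)
            for k in range(1, len(x)):
                y[k] = y[k - 1] + dt * (a * y[k - 1] - b * y[k - 1] ** 3 + x[k - 1])
            return y

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        weak_signal = 0.3 * np.cos(2 * np.pi * 5 * t)             # sub-threshold input
        noisy = weak_signal + 0.8 * np.random.randn(t.size)       # low-SNR observation
        out = sr_filter(noisy, dt=1 / fs)                         # noise-aided response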

  17. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.

  18. Fast and memory efficient text image compression with JBIG2.

    PubMed

    Ye, Yan; Cosman, Pamela

    2003-01-01

    In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
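
    A toy sketch of protecting only a small portion of each frame (illustrative assumptions throughout: a one-time-pad XOR stands in for the encryption step and a Hamming(7,4) code stands in for the error correction applied solely to that encrypted portion; this is not the paper's signaling-encryption design):

        import numpy as np

        G = np.array([[1, 0, 0, 0, 1, 1, 0],      # Hamming(7,4) generator matrix [I | P]
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def hamming74_encode(bits4):
            # Encode 4 information bits into a 7-bit codeword (mod-2 arithmetic).
            return bits4 @ G % 2

        def protect_frame(frame_bits, pad_bits, enc_len=4):
            # Encrypt only the first enc_len bits with a one-time pad, then apply
            # error-correction coding to that small portion; the remaining bits
            # are passed through unchanged in this sketch.
            encrypted = frame_bits[:enc_len] ^ pad_bits[:enc_len]
            return np.concatenate([hamming74_encode(encrypted), frame_bits[enc_len:]])

        frame = np.random.randint(0, 2, 32)
        pad = np.random.randint(0, 2, 32)
        tx = protect_frame(frame, pad)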

  20. Analog Correlator Based on One Bit Digital Correlator

    NASA Technical Reports Server (NTRS)

    Prokop, Norman (Inventor); Krasowski, Michael (Inventor)

    2017-01-01

    A two input time domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them. The resulting Hamming distance approximates the correlation time delay with high accuracy.
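
    A rough sketch of the one-bit correlation idea (a generic software analogue, not the patented circuit; the threshold rule, test signal and delay are assumptions): hard-limit both inputs, then take the lag that minimizes the Hamming distance between the two bit streams as the delay estimate.

        import numpy as np

        def hard_limit(x):
            # Adaptive thresholding: 1 above the signal mean, 0 otherwise.
            return (x > x.mean()).astype(np.uint8)

        def one_bit_delay(x, y, max_lag):
            # The lag minimizing the Hamming distance between the two bit
            # streams approximates the time delay separating the inputs.
            a, b = hard_limit(x), hard_limit(y)
            best_lag, best_dist = 0, None
            for lag in range(max_lag + 1):
                d = np.count_nonzero(a[:len(a) - lag] ^ b[lag:])
                if best_dist is None or d < best_dist:
                    best_lag, best_dist = lag, d
            return best_lag

        t = np.arange(2000)
        sig = np.sin(2 * np.pi * t / 200) + 0.2 * np.random.randn(t.size)
        delayed = np.roll(sig, 37)                        # stand-in delayed copy
        print(one_bit_delay(sig, delayed, max_lag=100))   # expected: about 37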

  1. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.

  2. Quantum key distribution in a multi-user network at gigahertz clock rates

    NASA Astrophysics Data System (ADS)

    Fernandez, Veronica; Gordon, Karen J.; Collins, Robert J.; Townsend, Paul D.; Cova, Sergio D.; Rech, Ivan; Buller, Gerald S.

    2005-07-01

    In recent years quantum information research has led to the discovery of a number of remarkable new paradigms for information processing and communication. These developments include quantum cryptography schemes that offer unconditionally secure information transport guaranteed by quantum-mechanical laws. Such potentially disruptive security technologies could be of high strategic and economic value in the future. Two major issues confronting researchers in this field are the transmission range (typically <100 km) and the key exchange rate, which can be as low as a few bits per second at long optical fiber distances. This paper describes further research of an approach to significantly enhance the key exchange rate in an optical fiber system at distances in the range of 1-20 km. We will present results on a number of application scenarios, including point-to-point links and multi-user networks. Quantum key distribution systems have been developed, which use standard telecommunications optical fiber, and which are capable of operating at clock rates of up to 2 GHz. They implement a polarization-encoded version of the B92 protocol and employ vertical-cavity surface-emitting lasers with emission wavelengths of 850 nm as weak coherent light sources, as well as silicon single-photon avalanche diodes as the single photon detectors. The point-to-point quantum key distribution system exhibited a quantum bit error rate of 1.4%, and an estimated net bit rate greater than 100,000 bits per second for a 4.2 km transmission range.

  3. Dry and noncontact EEG sensors for mobile brain-computer interfaces.

    PubMed

    Chi, Yu Mike; Wang, Yu-Te; Wang, Yijun; Maier, Christoph; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2012-03-01

    Dry and noncontact electroencephalographic (EEG) electrodes, which do not require gel or even direct scalp coupling, have been considered as an enabler of practical, real-world, brain-computer interface (BCI) platforms. This study compares wet electrodes to dry and through hair, noncontact electrodes within a steady state visual evoked potential (SSVEP) BCI paradigm. The construction of a dry contact electrode, featuring fingered contact posts and active buffering circuitry is presented. Additionally, the development of a new, noncontact, capacitive electrode that utilizes a custom integrated, high-impedance analog front-end is introduced. Offline tests on 10 subjects characterize the signal quality from the different electrodes and demonstrate that acquisition of small amplitude, SSVEP signals is possible, even through hair using the new integrated noncontact sensor. Online BCI experiments demonstrate that the information transfer rate (ITR) with the dry electrodes is comparable to that of wet electrodes, completely without the need for gel or other conductive media. In addition, data from the noncontact electrode, operating on the top of hair, show a maximum ITR in excess of 19 bits/min at 100% accuracy (versus 29.2 bits/min for wet electrodes and 34.4 bits/min for dry electrodes), a level that has never been demonstrated before. The results of these experiments show that both dry and noncontact electrodes, with further development, may become a viable tool for both future mobile BCI and general EEG applications.

  4. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  5. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  6. Experimental quantum key distribution with simulated ground-to-satellite photon losses and processing limitations

    NASA Astrophysics Data System (ADS)

    Bourgoin, Jean-Philippe; Gigov, Nikolay; Higgins, Brendon L.; Yan, Zhizhong; Meyer-Scott, Evan; Khandani, Amir K.; Lütkenhaus, Norbert; Jennewein, Thomas

    2015-11-01

    Quantum key distribution (QKD) has the potential to improve communications security by offering cryptographic keys whose security relies on the fundamental properties of quantum physics. The use of a trusted quantum receiver on an orbiting satellite is the most practical near-term solution to the challenge of achieving long-distance (global-scale) QKD, currently limited to a few hundred kilometers on the ground. This scenario presents unique challenges, such as high photon losses and restricted classical data transmission and processing power due to the limitations of a typical satellite platform. Here we demonstrate the feasibility of such a system by implementing a QKD protocol, with optical transmission and full post-processing, in the high-loss regime using minimized computing hardware at the receiver. Employing weak coherent pulses with decoy states, we demonstrate the production of secure key bits at up to 56.5 dB of photon loss. We further illustrate the feasibility of a satellite uplink by generating a secure key while experimentally emulating the varying losses predicted for realistic low-Earth-orbit satellite passes at 600 km altitude. With a 76 MHz source and including finite-size analysis, we extract 3374 bits of a secure key from the best pass. We also illustrate the potential benefit of combining multiple passes together: while one suboptimal "upper-quartile" pass produces no finite-sized key with our source, the combination of three such passes allows us to extract 165 bits of a secure key. Alternatively, we find that by increasing the signal rate to 300 MHz it would be possible to extract 21 570 bits of a secure finite-sized key in just a single upper-quartile pass.

  7. The changing face of P300 BCIs: a comparison of stimulus changes in a P300 BCI involving faces, emotion, and movement.

    PubMed

    Jin, Jing; Allison, Brendan Z; Kaufmann, Tobias; Kübler, Andrea; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2012-01-01

    One of the most common types of brain-computer interfaces (BCIs) is called a P300 BCI, since it relies on the P300 and other event-related potentials (ERPs). In the canonical P300 BCI approach, items on a monitor flash briefly to elicit the necessary ERPs. Very recent work has shown that this approach may yield lower performance than alternate paradigms in which the items do not flash but instead change in other ways, such as moving, changing colour or changing to characters overlaid with faces. The present study sought to extend this research direction by parametrically comparing different ways to change items in a P300 BCI. Healthy subjects used a P300 BCI across six different conditions. Three conditions were similar to our prior work, providing the first direct comparison of characters flashing, moving, and changing to faces. Three new conditions also explored facial motion and emotional expression. The six conditions were compared across objective measures such as classification accuracy and bit rate as well as subjective measures such as perceived difficulty. In line with recent studies, our results indicated that the character flash condition resulted in the lowest accuracy and bit rate. All four face conditions (mean accuracy >91%) yielded significantly better performance than the flash condition (mean accuracy = 75%). Objective results reaffirmed that the face paradigm is superior to the canonical flash approach that has dominated P300 BCIs for over 20 years. The subjective reports indicated that the conditions that yielded better performance were not considered especially burdensome. Therefore, although further work is needed to identify which face paradigm is best, it is clear that the canonical flash approach should be replaced with a face paradigm when aiming at increasing bit rate. However, the face paradigm has to be further explored with practical applications particularly with locked-in patients.

  8. 50 Mbps free space direct detection laser diode optical communication system with Q = 4 PPM signaling

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Davidson, Frederic; Field, Christopher

    1990-01-01

    A 50 Mbps direct detection optical communication system for use in an intersatellite link was constructed with an AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector. The system used a Q = 4 PPM format. The receiver consisted of a maximum likelihood PPM detector and a timing recovery subsystem. The PPM slot clock was recovered at the receiver by using a transition detector followed by a PLL. The PPM word clock was recovered by using a second PLL whose input was derived from the presence of back-to-back PPM pulses contained in the received random PPM pulse sequences. The system achieved a bit error rate of 0.000001 at less than 50 detected signal photons/information bit. The receiver was capable of acquiring and maintaining slot and word synchronization for received signal levels greater than 20 photons/information bit, at which the receiver bit error rate was about 0.01.
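
    A minimal sketch of maximum-likelihood detection for Q = 4 PPM under direct detection (a generic Poisson photon-count model with assumed signal and background levels, not the hardware receiver described above): each word is decoded by picking the slot with the largest count.

        import numpy as np

        rng = np.random.default_rng(0)
        Q = 4                     # PPM order: two bits per word
        n_words = 10000
        ks = 12.0                 # assumed mean signal photons in the pulsed slot
        kb = 0.5                  # assumed mean background photons per slot

        words = rng.integers(0, Q, n_words)                   # transmitted slot indices
        counts = rng.poisson(kb, (n_words, Q))                 # background counts in every slot
        counts[np.arange(n_words), words] += rng.poisson(ks, n_words)   # add signal photons

        decided = counts.argmax(axis=1)                        # ML decision: largest count wins
        print(f"word error rate: {np.mean(decided != words):.4f}")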

  9. Application of Rosenbrock search technique to reduce the drilling cost of a well in Bai-Hassan oil field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aswad, Z.A.R.; Al-Hadad, S.M.S.

    1983-03-01

    The powerful Rosenbrock search technique, which optimizes both the search directions using the Gram-Schmidt procedure and the step size using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in the Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model shows the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as the mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit with a small increase or decrease in rotary speed resulted in a significant decrease in the drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour saving in the total drilling time were possible under certain conditions.

  10. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in^2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between the iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in^2, respectively, at a bit error rate of 10^-6.

  11. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
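
    For intuition, the homography induced by a scene plane between two camera views, which this kind of global-plus-local parameterization builds on, is H = K (R - t n^T / d) K^-1 for the plane n^T X + d = 0, where K holds the camera intrinsics, (R, t) the inter-frame motion and (n, d) the plane parameters. A small sketch with made-up numbers (all values below are assumptions, not from the paper):

        import numpy as np

        def plane_homography(K, R, t, n, d):
            # Homography induced by the plane n^T X + d = 0 between two views:
            # H = K (R - t n^T / d) K^(-1).
            return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
        R = np.eye(3)                                                 # no rotation between frames
        t = np.array([0.1, 0.0, 0.0])                                 # small sideways translation
        n = np.array([0.0, 0.0, 1.0])                                 # fronto-parallel plane normal
        d = -5.0                                                      # plane at depth 5 (n^T X + d = 0)

        H = plane_homography(K, R, t, n, d)
        p = np.array([400.0, 300.0, 1.0])        # pixel in homogeneous coordinates
        q = H @ p
        print(q[:2] / q[2])                       # predicted pixel in the other view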

  12. QoS mapping algorithm for ETE QoS provisioning

    NASA Astrophysics Data System (ADS)

    Wu, Jian J.; Foster, Gerry

    2002-08-01

    End-to-End (ETE) Quality of Service (QoS) is critical for next generation wireless multimedia communication systems. To meet ETE QoS requirements, the Universal Mobile Telecommunication System (UMTS) must not only meet the 3GPP QoS requirements [1-2] but also map external network QoS classes to UMTS QoS classes. There are four QoS classes in UMTS: Conversational, Streaming, Interactive and Background. There are eight QoS classes for LANs in IEEE 802.1 (one reserved). ATM has four QoS categories: Constant Bit Rate (CBR) - highest priority, short queue for strict Cell Delay Variation (CDV); Variable Bit Rate (VBR) - second highest priority, short queues for real time, longer queues for non-real time; Guaranteed Frame Rate (GFR)/Unspecified Bit Rate (UBR) with Minimum Desired Cell Rate (MDCR) - intermediate priority, dependent on the service provider; and UBR/Available Bit Rate (ABR) - lowest priority, long queues, large delay variation. DiffServ (DS) has a six-bit DS codepoint (DSCP) available to determine a datagram's priority relative to other datagrams, and therefore up to 64 QoS classes are available from the IPv4 and IPv6 DSCP. Different organisations have tried to solve the QoS issues from their own perspective. However, none of them has a full picture of end-to-end QoS classes and how to map among all QoS classes. Therefore, a universal QoS needs to be created and a new set of QoS classes to enable end-to-end (ETE) QoS provisioning is required. In this paper, a new set of ETE QoS classes is proposed and a mapping algorithm for the different QoS classes proposed by different organisations is given. With our proposal, ETE QoS mapping and control can be implemented.

  13. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.

  14. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  15. Generation and transmission of DPSK signals using a directly modulated passive feedback laser.

    PubMed

    Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C

    2012-12-10

    The generation of differential-phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709-Gb/s, 14-Gb/s and 16-Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection for a bit rate of 10.709-Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single mode fiber at a bit rate of 10.709-Gb/s is achieved with a received optical power of -45 dBm at a bit-error-ratio of 3.8 × 10^-3 and a 49 dB loss margin.

  16. The transmission of low frequency medical data using delta modulation techniques.

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Dawson, C. T.

    1972-01-01

    The transmission of low-frequency medical data using delta modulation techniques is described. The delta modulators are used to distribute the low-frequency data into the passband of the telephone lines. Both adaptive and linear delta modulators are considered. Optimum bit rates to minimize distortion and intersymbol interference are discussed. Vibrocardiographic waves are analyzed as a function of bit rate and delta modulator configuration to determine their reproducibility for medical evaluation.
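
    A minimal sketch of a linear (non-adaptive) delta modulator of the kind referred to above (a generic one-bit encoder/decoder; the step size, sampling rate and stand-in waveform are assumptions, not the authors' hardware):

        import numpy as np

        def delta_modulate(x, step=0.05):
            # One bit per sample: 1 if the input is above the running estimate,
            # 0 otherwise; the estimate then moves by +/- step accordingly.
            bits = np.zeros(len(x), dtype=np.uint8)
            est = 0.0
            for k, sample in enumerate(x):
                bits[k] = 1 if sample > est else 0
                est += step if bits[k] else -step
            return bits

        def delta_demodulate(bits, step=0.05):
            # Reconstruct by accumulating +/- step for each received bit.
            return np.cumsum(np.where(bits == 1, step, -step))

        fs = 1000.0                                    # assumed bit rate, Hz
        t = np.arange(0, 1, 1 / fs)
        signal = 0.5 * np.sin(2 * np.pi * 1.2 * t)     # stand-in low-frequency medical waveform
        rec = delta_demodulate(delta_modulate(signal))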

  17. Present state of HDTV coding in Japan and future prospect

    NASA Astrophysics Data System (ADS)

    Murakami, Hitomi

    The development status of HDTV digital codecs in Japan is evaluated; several bit-rate-reduction codecs have been developed for 1125 lines/60-field HDTV, and performance trials have been conducted through satellite and optical fiber links. Prospective development efforts will attempt to achieve more efficient coding schemes able to reduce the bit rate to as little as 45 Mbps, as well as to apply coding schemes to asynchronous transfer mode (ATM) networks.

  18. Design of pseudo-symmetric high bit rate, bend insensitive optical fiber applicable for high speed FTTH

    NASA Astrophysics Data System (ADS)

    Makouei, Somayeh; Koozekanani, Z. D.

    2014-12-01

    In this paper, through a sophisticated modification of the modal-field distribution and a new design procedure, a single-mode fiber with ultra-low bending loss and pseudo-symmetric high uplink and downlink bit rates, appropriate for fiber-to-the-home (FTTH) operation, is presented. The bending-loss reduction and dispersion management are carried out by means of a genetic algorithm. The remarkable feature of this methodology is that it designs a bend-insensitive fiber without reducing the core radius or the MFD. Simulation results show a bending loss of 1.27×10^-2 dB/turn at 1.55 μm for a 5 mm curvature radius. The MFD and Aeff are 9.03 μm and 59.11 μm^2. Moreover, the upstream and downstream bit rates are approximately 2.38 Gbit/s-km and 3.05 Gbit/s-km.

  19. Transmission of 2.5 Gbit/s Spectrum-sliced WDM System for 50 km Single-mode Fiber

    NASA Astrophysics Data System (ADS)

    Ahmed, Nasim; Aljunid, Sayed Alwee; Ahmad, R. Badlisha; Fadil, Hilal Adnan; Rashid, Mohd Abdur

    2011-06-01

    The transmission of a spectrum-sliced WDM channel at 2.5 Gbit/s over 50 km of single-mode fiber using a channel spacing of only 0.4 nm is reported. We have investigated the system performance using the NRZ modulation format. The proposed system is compared with a conventional system. The system performance is characterized by the received bit-error-rate (BER) as a function of the system bit rate. Simulation results show that the NRZ modulation format performs well at a 2.5 Gbit/s system bit rate. Using this narrow-channel spectrum-slicing technique, the total number of multiplexed channels in a WDM system can be increased greatly. Therefore, the 0.4 nm channel spacing spectrum-sliced WDM system is highly recommended for long-distance optical access networks, such as the Metro Area Network (MAN), Fiber-to-the-Building (FTTB) and Fiber-to-the-Home (FTTH).

  20. Entangled quantum key distribution over two free-space optical links.

    PubMed

    Erven, C; Couteau, C; Laflamme, R; Weihs, G

    2008-10-13

    We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover distances of 435 m and 1,325 m, respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom-written software to perform the complete QKD protocol including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.

  1. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Konz, Daniel W. (Inventor); Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor); Winkelmann, Joseph P. (Inventor)

    2006-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is then converted into digital signals and transmitted back to the controller. In one embodiment, the bus controller sends commands and data at a defined bit rate, and the network device interface senses this bit rate and sends data back to the bus controller using the defined bit rate.

  2. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
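
    The optimal bit assignment the abstract refers to can be illustrated with the classical high-rate reverse water-filling solution obtained from Lagrange multipliers and KKT conditions; this is a minimal sketch under that textbook assumption, not the paper's exact formulation, and the sub-band variances and bit budget below are hypothetical.

        import numpy as np

        def allocate_bits(subband_vars, total_bits):
            """Distribute total_bits across sub-bands so that the summed quantization
            distortion sigma_i^2 * 2^(-2*b_i) is minimized. The Lagrangian/KKT solution is
            b_i = B_avg + 0.5*log2(sigma_i^2 / geometric_mean), with negative allocations
            clipped to zero and the budget redistributed over the remaining sub-bands."""
            subband_vars = np.asarray(subband_vars, dtype=float)
            n = len(subband_vars)
            active = np.ones(n, dtype=bool)
            while True:
                geo_mean = np.exp(np.mean(np.log(subband_vars[active])))
                bits = np.zeros(n)
                bits[active] = total_bits / active.sum() \
                    + 0.5 * np.log2(subband_vars[active] / geo_mean)
                if (bits >= 0).all():
                    return bits
                active &= bits > 0          # KKT: drop sub-bands whose allocation went negative

        # Hypothetical sub-band variances (coarse to fine) and a 24-bit total budget
        print(allocate_bits([100.0, 25.0, 4.0, 1.0], total_bits=24))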

  3. Pattern recognition of electronic bit-sequences using a semiconductor mode-locked laser and spatial light modulators

    NASA Astrophysics Data System (ADS)

    Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.

    2010-04-01

    A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates of around 10⁻¹⁸. This technique has potential for high-speed, high accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.

  4. A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip

    NASA Technical Reports Server (NTRS)

    Timoc, C.; Tran, T.; Wongso, J.

    1992-01-01

    This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
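
    As a rough software analogue of one correlator channel (not the chip's actual logic design), a 1-bit lag channel reduces to sign quantization, an XNOR, and a counter; the test signals below are hypothetical.

        import numpy as np

        def one_bit_correlate(x, y, lags):
            """Correlate two sign-quantized (1-bit) streams over the given lags.
            Each lag acts like one correlator channel: XNOR the bit streams and
            count agreements, as a bank of 32-bit counters would accumulate."""
            bx = np.asarray(x) >= 0          # 1-bit quantization keeps only the sign
            by = np.asarray(y) >= 0
            counts = []
            for lag in lags:
                a, b = bx[lag:], by[:len(by) - lag]
                counts.append(int(np.sum(~(a ^ b))))   # XNOR, then count matches
            return counts

        # Hypothetical test: a shared sinusoid plus independent noise on each input
        t = np.arange(4096)
        sig = np.sin(2 * np.pi * t / 64)
        print(one_bit_correlate(sig + np.random.randn(4096),
                                sig + np.random.randn(4096), lags=range(8)))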

  5. Real-time implementation of second generation of audio multilevel information coding

    NASA Astrophysics Data System (ADS)

    Ali, Murtaza; Tewfik, Ahmed H.; Viswanathan, V.

    1994-03-01

    This paper describes a real-time implementation of a novel wavelet-based audio compression method. This method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure is used to allocate bits to the transform coefficients in an adaptive fashion. The bit allocation procedure has been designed to take advantage of the masking effect in human hearing. The procedure minimizes the number of bits required to represent each frame of audio signals at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 Kbits/sec. Our implementation uses two ASPI Elf boards, each of which is built around a TI TMS320C31 DSP chip. The time required for encoding of a mono CD signal is about 92 percent of real time and that for decoding about 61 percent.

  6. Direct bit detection receiver noise performance analysis for 32-PSK and 64-PSK modulated signals

    NASA Astrophysics Data System (ADS)

    Ahmed, Iftikhar

    1987-12-01

    Simple two-channel receivers for 32-PSK and 64-PSK modulated signals have been proposed which allow digital data (namely bits) to be recovered directly instead of the traditional approach of symbol detection followed by symbol-to-bit mappings. This allows for binary rather than M-ary receiver decisions, reduces the amount of signal processing operations and permits parallel recovery of the bits. The noise performance of these receivers, quantified by the Bit Error Rate (BER) assuming an Additive White Gaussian Noise interference model, is evaluated as a function of Eb/No, the signal-to-noise ratio, and the transmitted phase angles of the signals. The performance results of the direct bit detection receivers (DBDR), when compared to those of conventional phase measurement receivers, demonstrate that DBDRs are optimum in the BER sense. The simplicity of the receiver implementations and the BER of the delivered data make DBDRs attractive for high-speed, spectrally efficient digital communication systems.

  7. Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results

    NASA Astrophysics Data System (ADS)

    Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.

    2005-12-01

    We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and M-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.

  8. A Tuned-RF Duty-Cycled Wake-Up Receiver with −90 dBm Sensitivity

    PubMed Central

    Derbel, Faouzi; Kanoun, Olfa

    2017-01-01

    A novel wake-up receiver for wireless sensor networks is introduced. It operates with a modified medium access protocol (MAC), allowing low-energy consumption and practical latency. The ultra-low-power wake-up receiver operates with enhanced duty-cycled listening. The analysis of energy models of the duty-cycle-based communication is presented. All the WuRx blocks are studied to obey the duty-cycle operation. For a mean interval time for the data exchange cycle between a transmitter and a receiver over 1.7 s and a 64-bit wake-up packet detection latency of 32 ms, the average power consumption of the wake-up receiver (WuRx) reaches down to 3 μW. It also features scalable addressing of more than 512 bits at a data rate of 128 kbit/s. At a wake-up packet error rate of 10⁻², the detection sensitivity reaches a minimum of −90 dBm. The combination of the MAC protocol and the WuRx eases the adoption of different kinds of wireless sensor networks. In low traffic communication, the WuRx dramatically saves more energy than that of a network that is implementing conventional duty-cycling. In this work, a prototype was realized to evaluate the intended performance. PMID:29286345
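
    The duty-cycling trade-off behind the quoted 3 μW figure can be illustrated with a generic average-power model (not the specific energy model analyzed in the paper); all numbers below are hypothetical.

        def average_power(p_listen, p_sleep, t_listen, period):
            """Average power of a duty-cycled wake-up receiver that listens for
            t_listen seconds out of every `period` seconds and sleeps otherwise."""
            duty = t_listen / period
            return duty * p_listen + (1.0 - duty) * p_sleep

        # Hypothetical numbers: 100 uW while listening, 0.5 uW asleep, 1 ms listen every 50 ms
        print(average_power(100e-6, 0.5e-6, 1e-3, 50e-3))   # about 2.5 uW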

  9. A new thermal model for bone drilling with applications to orthopaedic surgery.

    PubMed

    Lee, JuEun; Rabin, Yoed; Ozdoganlar, O Burak

    2011-12-01

    This paper presents a new thermal model for bone drilling with applications to orthopaedic surgery. The new model combines a unique heat-balance equation for the system of the drill bit and the chip stream, an ordinary heat diffusion equation for the bone, and heat generation at the drill tip, arising from the cutting process and friction. Modeling of the drill bit-chip stream system assumes an axial temperature distribution and a lumped heat capacity effect in the transverse cross-section. The new model is solved numerically using a tailor-made finite-difference scheme for the drill bit-chip stream system, coupled with a classic finite-difference method for the bone. The theoretical investigation addresses the significance of heat transfer between the drill bit and the bone, heat convection from the drill bit to the surroundings, and the effect of the initial temperature of the drill bit on the developing thermal field. Using the new model, a parametric study on the effects of machining conditions and drill-bit geometries on the resulting temperature field in the bone and the drill bit is presented. Results of this study indicate that: (1) the maximum temperature in the bone decreases with increased chip flow; (2) the transient temperature distribution is strongly influenced by the initial temperature; (3) the continued cooling (irrigation) of the drill bit reduces the maximum temperature even when the tip is distant from the cooled portion of the drill bit; and (4) the maximum temperature increases with increasing spindle speed, increasing feed rate, decreasing drill-bit diameter, increasing point angle, and decreasing helix angle. The model is expected to be useful in determination of optimum drilling conditions and drill-bit geometries. Copyright © 2011. Published by Elsevier Ltd.

  10. Internet research: improving traditional community analysis before launching a practice.

    PubMed

    Barresi, B; Scott, C

    2000-01-01

    Optometric practice management experts have always recommended that optometrists thoroughly research the communities in which they are considering practicing. Until the Internet came along, demographic research was possible but often daunting. Today, say these authors, it's becoming quite a bit easier ... and they show us how.

  11. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    NASA Astrophysics Data System (ADS)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.

  12. All-optical electron spin quantum computer with ancilla bits for operations in each coupled-dot cell

    NASA Astrophysics Data System (ADS)

    Ohshima, Toshio

    2000-12-01

    A cellular quantum computer with a spin qubit and ancilla bits in each cell is proposed. The whole circuit works only with the help of external optical pulse sequences. In the operation, some of the ancilla bits are activated, and autonomous single- and two-qubit operations are performed. In the sleep mode of a cell, the decoherence of the qubit is negligibly small. Since only two cells at most are active at once, the coherence can be maintained for a sufficiently long time for practical purposes. A device structure using a coupled-quantum-dot array with possible operation and measurement schemes is also proposed.

  13. Bubble memory module

    NASA Technical Reports Server (NTRS)

    Bohning, O. D.; Becker, F. J.

    1980-01-01

    The design, fabrication, and test of a partially populated prototype recorder using 100-kilobit serial chips are described. Electrical interface, operating modes, and mechanical design of several module configurations are discussed. Fabrication and test of the module demonstrated the practicality of multiplexing, resulting in lower power, weight, and volume. This effort resulted in the completion of a module consisting of a fully engineered printed circuit storage board populated with 5 of 8 possible cells and a wire wrapped electronics board. The module interface is 16 bits parallel at a maximum of 1.33 megabits per second data rate on either of two interface buses.

  14. Getting something out of nothing in the measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Tan, Yong-Gang; Cai, Qing-Yu; Yang, Hai-Feng; Hu, Yao-Hua

    2015-11-01

    Because of the monogamy of entanglement, the measurement-device-independent quantum key distribution is immune to side-information leakage from the measurement devices. When the correlated measurement outcomes are generated from the dark counts, no entanglement is actually obtained. However, secure key bits can still be proven to be generated from these measurement outcomes. In particular, we give numerical studies on the contributions of dark counts to the key generation rate in practical decoy-state MDI-QKD, where a signal source, a weaker decoy source and a vacuum decoy source are used by either legitimate key distributer.

  15. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    NASA Astrophysics Data System (ADS)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors in the channel. However, an erroneous packet may contain both erroneous and correct bits, and hence it may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant error-correction scheme for the transmitted packet, in which two received copies are XORed to obtain the bit locations of the erroneous bits. Thereafter, the packet is corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC primarily designed for wireless communication, with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC cannot correct double-bit errors that occur in the same bit location of both erroneous copies of the packet. A hybrid technique is proposed to exploit the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC to the Gilbert two-state channel model is studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
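
    A minimal sketch of the basic PC step described above, assuming an integrity check (e.g. a CRC) is available to confirm the corrected packet; the helper names and packets are hypothetical, and APC's additional received copies are not modeled.

        from itertools import product

        def packet_combine(copy1, copy2, is_valid):
            """Packet combining (PC): XOR two received copies to locate the bit positions
            where they disagree, then try inverting those bits in one copy until the
            integrity check passes."""
            diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
            for pattern in product([0, 1], repeat=len(diff)):
                candidate = list(copy1)
                for pos, flip in zip(diff, pattern):
                    candidate[pos] ^= flip
                if is_valid(candidate):
                    return candidate
            return None   # e.g. identical errors in the same position of both copies (PC's limitation)

        # Hypothetical 8-bit packet with one error in each received copy
        original = [1, 0, 1, 1, 0, 0, 1, 0]
        rx1 = original[:]; rx1[2] ^= 1
        rx2 = original[:]; rx2[5] ^= 1
        print(packet_combine(rx1, rx2, is_valid=lambda p: p == original))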

  16. Layered video transmission over multirate DS-CDMA wireless systems

    NASA Astrophysics Data System (ADS)

    Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.

    2003-05-01

    In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.

  17. Fly Photoreceptors Demonstrate Energy-Information Trade-Offs in Neural Coding

    PubMed Central

    Niven, Jeremy E; Anderson, John C; Laughlin, Simon B

    2007-01-01

    Trade-offs between energy consumption and neuronal performance must shape the design and evolution of nervous systems, but we lack empirical data showing how neuronal energy costs vary according to performance. Using intracellular recordings from the intact retinas of four fly species, Drosophila melanogaster, D. virilis, Calliphora vicina, and Sarcophaga carnaria, we measured the rates at which homologous R1–6 photoreceptors of these species transmit information from the same stimuli and estimated the energy they consumed. In all species, both information rate and energy consumption increase with light intensity. Energy consumption rises from a baseline, the energy required to maintain the dark resting potential. This substantial fixed cost, ∼20% of a photoreceptor's maximum consumption, causes the unit cost of information (ATP molecules hydrolysed per bit) to fall as information rate increases. The highest information rates, achieved at bright daylight levels, differed according to species, from ∼200 bits s⁻¹ in D. melanogaster to ∼1,000 bits s⁻¹ in S. carnaria. Comparing species, the fixed cost, the total cost of signalling, and the unit cost (cost per bit) all increase with a photoreceptor's highest information rate to make information more expensive in higher performance cells. This law of diminishing returns promotes the evolution of economical structures by severely penalising overcapacity. Similar relationships could influence the function and design of many neurons because they are subject to similar biophysical constraints on information throughput. PMID:17373859

  18. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
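
    The fixed-equalizer training step mentioned above is conventionally a least-squares fit of 2D taps to a known (PRBS-like) pattern; the sketch below assumes a toy 3-track linear ITI channel, a trivial identity target, and hypothetical tap spans, so it illustrates the idea rather than the paper's exact design.

        import numpy as np

        def train_2d_equalizer(readback, known_bits, half_span=(1, 2)):
            """Fit 2D equalizer taps by least squares so that the equalized
            main-track output matches the known training bits."""
            rows, cols = 2 * half_span[0] + 1, 2 * half_span[1] + 1
            n_tracks, n_bits = readback.shape
            center = n_tracks // 2
            X, y = [], []
            for k in range(half_span[1], n_bits - half_span[1]):
                patch = readback[center - half_span[0]:center + half_span[0] + 1,
                                 k - half_span[1]:k + half_span[1] + 1]
                X.append(patch.ravel())
                y.append(known_bits[k])
            taps, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
            return taps.reshape(rows, cols)

        # Hypothetical 3-track readback: each track picks up 30% ITI from its neighbours plus noise
        bits = np.random.choice([-1.0, 1.0], size=(3, 2000))
        readback = bits + 0.3 * np.roll(bits, 1, axis=0) + 0.3 * np.roll(bits, -1, axis=0) \
                   + 0.1 * np.random.randn(3, 2000)
        print(train_2d_equalizer(readback, bits[1]))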

  19. Modulation/demodulation techniques for satellite communications. Part 1: Background

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1981-01-01

    Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.

  20. A novel PON-based mobile distributed cluster of antennas approach to provide impartial and broadband services to end users

    NASA Astrophysics Data System (ADS)

    Sana, Ajaz; Saddawi, Samir; Moghaddassi, Jalil; Hussain, Shahab; Zaidi, Syed R.

    2010-01-01

    In this research paper we propose a novel Passive Optical Network (PON) based Mobile Worldwide Interoperability for Microwave Access (WiMAX) access network architecture to provide high capacity and performance multimedia services to mobile WiMAX users. Passive Optical Networks (PONs) do not require powered equipment; hence they cost less and need less network management. WiMAX technology emerges as a viable candidate for the last-mile solution. In conventional WiMAX access networks, the base stations and Multiple Input Multiple Output (MIMO) antennas are connected by point-to-point lines. In theory, the maximum WiMAX bandwidth is assumed to be 70 Mbit/s over 31 miles. In reality, WiMAX can provide only one or the other: when operating near the maximum range, the bit error rate increases and a lower bit rate must be used. Lowering the range allows a device to operate at higher bit rates. Our focus in this research paper is to increase both range and bit rate by utilizing a distributed cluster of MIMO antennas connected to WiMAX base stations with PON based topologies. A novel quality of service (QoS) algorithm is also proposed to provide admission control and scheduling to serve classified traffic. The proposed architecture presents a flexible and scalable system design with different performance requirements and complexity.

  1. Improving TCP Network Performance by Detecting and Reacting to Packet Reordering

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Ostermann, Shawn; Allman, Mark

    2003-01-01

    There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single flow tests. While single flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of loss as high as bit error rates of 10⁻⁵.

  2. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) scheme is proposed and investigated in a software reconfigurable multiband UWB-over-fiber system. To compensate for the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10⁻³ after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).

  3. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo

    2016-12-01

    Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  4. Two-dimensional distributed-phase-reference protocol for quantum key distribution.

    PubMed

    Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo

    2016-12-22

    Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  5. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    PubMed Central

    Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo

    2016-01-01

    Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable. PMID:28004821

  6. Reducing temperature elevation of robotic bone drilling.

    PubMed

    Feldmann, Arne; Wandel, Jasmin; Zysset, Philippe

    2016-12-01

    This research work aims at reducing temperature elevation of bone drilling. An extensive experimental study was conducted which focused on the investigation of three main measures to reduce the temperature elevation as used in industry: irrigation, interval drilling and drill bit designs. Different external irrigation rates (0 ml/min, 15 ml/min, 30 ml/min), continuously drilled interval lengths (2 mm, 1 mm, 0.5 mm) as well as two drill bit designs were tested. A custom single flute drill bit was designed with a higher rake angle and smaller chisel edge to generate less heat compared to a standard surgical drill bit. A new experimental setup was developed to measure drilling forces and torques as well as the 2D temperature field at any depth using a high resolution thermal camera. The results show that external irrigation is a main factor in reducing temperature elevation, not primarily because of its cooling effect but rather because it prevents drill bit clogging. During drilling, the build-up of bone material in the drill bit flutes results in excessive temperatures due to an increase in thrust forces and torques. Drilling in intervals allows the removal of bone chips and cleaning of the flutes when the drill bit is extracted, as well as cooling of the bone in-between intervals, which limits the accumulation of heat. However, reducing the length of the drilled interval was found to be beneficial for temperature reduction only when using the newly designed drill bit, due to its improved cutting geometry. To evaluate possible tissue damage caused by the generated heat increase, cumulative equivalent minutes (CEM43) were calculated, and it was found that the combination of small interval length (0.5 mm), high irrigation rate (30 ml/min) and the newly designed drill bit was the only parameter combination which allowed drilling below the time-thermal threshold for tissue damage. In conclusion, an optimized drilling method has been found which might also enable drilling in more delicate procedures such as that performed during minimally invasive robotic cochlear implantation. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. Energy-efficient human body communication receiver chipset using wideband signaling scheme.

    PubMed

    Song, Seong-Jun; Cho, Namjun; Kim, Sunyoung; Yoo, Hoi-Jun

    2007-01-01

    This paper presents an energy-efficient wideband signaling receiver for communication channels using the human body as a data transmission medium. The wideband signaling scheme with the direct-coupled interface provides the energy-efficient transmission of multimedia data around the human body. The wideband signaling receiver incorporates a receiver analog front-end (AFE) exploiting a wideband symmetric triggering technique and an all-digital clock-and-data recovery (CDR) circuit with a quadratic sampling technique. The AFE operates at a 10-Mb/s data rate with an input sensitivity of -27 dBm and an operational bandwidth of 200 MHz. The CDR recovers clock and data at 2 Mb/s with a bit error rate of 10⁻⁷. The receiver chipset consumes only 5 mW from a 1-V supply, thereby achieving a bit energy of 2.5 nJ/bit.

  8. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors.

    PubMed

    Belkacem, Abdelkader Nasreddine; Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

    EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby helping achieve better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with opened and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.

  9. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors

    PubMed Central

    Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

    EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby helping achieve better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with opened and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control. PMID:26690500

  10. Node synchronization schemes for the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Swanson, L.; Arnold, S.

    1992-01-01

    The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, as well as a performance comparison.

  11. Effects of pore pressure and mud filtration on drilling rates in a permeable sandstone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, A.D.; DiBona, B.; Sandstrom, J.

    1983-10-01

    During laboratory drilling tests in a permeable sandstone, the effects of pore pressure and mud filtration on penetration rates were measured. Four water-base muds were used to drill four saturated sandstone samples. The drilling tests were conducted at constant borehole pressure with different back pressures maintained on the filtrate flowing from the bottom of the sandstone samples. Bit weight was also varied. Filtration rates were measured while drilling and with the bit off bottom and mud circulating. Penetration rates were found to be related to the difference between the filtration rates measured while drilling and circulating. There was no observed correlation between standard API filtration measurements and penetration rate.

  12. Effects of pore pressure and mud filtration on drilling rates in a permeable sandstone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, A.D.; Dearing, H.L.; DiBona, B.G.

    1985-09-01

    During laboratory drilling tests in a permeable sandstone, the effects of pore pressure and mud filtration on penetration rates were measured. Four water-based muds were used to drill four saturated sandstone samples. The drilling tests were conducted at constant borehole pressure while different backpressures were maintained on the filtrate flowing from the bottom of the sandstone samples. Bit weight was varied also. Filtration rates were measured while circulating mud during drilling and with the bit off bottom. Penetration rates were found to be related qualitatively to the difference between the filtration rates measured while drilling and circulating. There was no observed correlation between standard API filtration measurements and penetration rate.

  13. Quantum communication for satellite-to-ground networks with partially entangled states

    NASA Astrophysics Data System (ADS)

    Chen, Na; Quan, Dong-Xiao; Pei, Chang-Xing; Yang-Hong

    2015-02-01

    To realize practical wide-area quantum communication, a satellite-to-ground network with partially entangled states is developed in this paper. For efficiency and security reasons, the existing method of quantum communication in distributed wireless quantum networks with partially entangled states cannot be applied directly to the proposed quantum network. Based on this point, an efficient and secure quantum communication scheme with partially entangled states is presented. In our scheme, the source node performs teleportation only after an end-to-end entangled state has been established by entanglement swapping with partially entangled states. Thus, the security of quantum communication is guaranteed. The destination node recovers the transmitted quantum bit with the help of an auxiliary quantum bit and specially defined unitary matrices. Detailed calculations and simulation analyses show that the probability of successfully transferring a quantum bit in the presented scheme is high. In addition, the auxiliary quantum bit provides a heralded mechanism for successful communication. Based on the critical components presented in this article, efficient, secure, and practical wide-area quantum communication can be achieved. Project supported by the National Natural Science Foundation of China (Grant Nos. 61072067 and 61372076), the 111 Project (Grant No. B08038), the Fund from the State Key Laboratory of Integrated Services Networks (Grant No. ISN 1001004), and the Fundamental Research Funds for the Central Universities (Grant Nos. K5051301059 and K5051201021).

  14. Performance of the ICAO standard core service modulation and coding techniques

    NASA Technical Reports Server (NTRS)

    Lodge, John; Moher, Michael

    1988-01-01

    Aviation binary phase shift keying (A-BPSK) is described and simulated performance results are given that demonstrate robust performance in the presence of hardlimiting amplifiers. The performance of coherently-detected A-BPSK with rate 1/2 convolutional coding is given. The performance loss due to the Rician fading was shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery was described. This scheme exhibits similar performance to coherent detection at high bit error rates, while it is superior at lower bit error rates.

  15. Digital Signal Processing For Low Bit Rate TV Image Codecs

    NASA Astrophysics Data System (ADS)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are at various stages of development. Some companies have already brought the codecs into the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay etc. To transmit the original color video on a 56 KBPS network requires a bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.

  16. Autosophy: an alternative vision for satellite communication, compression, and archiving

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus; Holtz, Eric; Kalienky, Diana

    2006-08-01

    Satellite communication and archiving systems are now designed according to an outdated Shannon information theory where all data is transmitted in meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any media - satellites, radio, Internet, cable - without needing to be reformatted. This may result in orders of magnitude improvements for all communication and archiving systems.

  17. VLSI for High-Speed Digital Signal Processing

    DTIC Science & Technology

    1994-09-30

    particular, the design, layout, and fabrication of integrated circuits. The primary project for this grant has been the design and implementation of a ... targeted at 33.36 dB, and the FRSBC algorithm, targeted at 0.5 bits/pixel, respectively. The filter ..., as shown in Fig. 6, is used, yielding a total of 16 subbands. The rates, in bits per pixel (bpp), and the peak signal

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wasner, Evan; Bearden, Sean; Žutić, Igor, E-mail: zigor@buffalo.edu

    Digital operation of lasers with injected spin-polarized carriers provides improved operation over their conventional counterparts with spin-unpolarized carriers. Such spin-lasers can attain much higher bit rates, crucial for optical communication systems. The overall quality of a digital signal in these two types of lasers is compared using eye diagrams and quantified by improved Q-factors and bit-error-rates in spin-lasers. Surprisingly, the optimal performance of spin-lasers requires finite, not infinite, spin-relaxation times, giving guidance for the design of future spin-lasers.

  19. Quantum cryptography with entangled photons

    PubMed

    Jennewein; Simon; Weihs; Weinfurter; Zeilinger

    2000-05-15

    By realizing a quantum cryptography system based on polarization entangled photon pairs we establish highly secure keys, because a single photon source is approximated and the inherent randomness of quantum measurements is exploited. We implement a novel key distribution scheme using Wigner's inequality to test the security of the quantum channel, and, alternatively, realize a variant of the BB84 protocol. Our system has two completely independent users separated by 360 m, and generates raw keys at rates of 400-800 bits/s with bit error rates around 3%.

  20. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
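
    A minimal sketch of that last step, assuming the usual Gaussian (Q-function) approximation for a binary decision with the threshold midway between the two signal levels; the eye opening and noise standard deviation below are hypothetical stand-ins for the statistics extracted from the measured S-parameters.

        from math import erfc, sqrt

        def ber_from_gaussian(level_separation, noise_std):
            """BER of a binary signal in Gaussian noise with a mid-level threshold:
            BER = Q(d / (2*sigma)) = 0.5 * erfc(d / (2*sqrt(2)*sigma))."""
            return 0.5 * erfc(level_separation / (2.0 * sqrt(2.0) * noise_std))

        # Hypothetical numbers: 1.0 V eye opening, 80 mV rms noise
        print(ber_from_gaussian(1.0, 0.08))   # roughly 2e-10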

  1. Region of interest video coding for low bit-rate transmission of carotid ultrasound videos over 3G wireless networks.

    PubMed

    Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos

    2007-01-01

    Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region-of-interest-based ultrasound video compression study which shows that a significant reduction of the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.

  2. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also present from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.
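
    One simple way the "derived real-time BER from frame sync errors" measurement can be formed is to treat the known frame-sync word as the only reference bits in the live stream and ratio its observed bit errors; the sketch and counts below are hypothetical.

        def ber_from_sync_errors(bit_errors_in_sync, frames_observed, sync_length_bits):
            """Estimate channel BER from errors seen in the known frame-sync pattern,
            the only part of live telemetry whose bits are known in advance."""
            return bit_errors_in_sync / (frames_observed * sync_length_bits)

        # Hypothetical: 13 erroneous sync bits over 10,000 frames with a 32-bit sync word
        print(ber_from_sync_errors(13, 10_000, 32))   # about 4.1e-5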

  3. Dispersion and dispersion slope compensation impact on high channel bit rate optical signal transmission degradation

    NASA Astrophysics Data System (ADS)

    Hamidine, Mahamadou; Yuan, Xiuhua

    2011-11-01

    In this article a numerical simulation is carried out on a single channel optical transmission system with a channel bit rate greater than 40 Gb/s to investigate optical signal degradation due to the impact of dispersion and dispersion slope of both transmitting and dispersion compensating fibers. By independently varying the input signal power and the dispersion slope of both transmitting and dispersion compensating fibers of an optical link utilizing a channel bit rate of 86 Gb/s, a good quality factor (Q factor) is obtained with a dispersion slope compensation ratio change of ±10% for a faithful transmission. With this ratio change a minimum Q factor of 16 dB is obtained in the presence of an amplifier noise figure of 5 dB and fiber nonlinearity effects at an input signal power of 5 dBm and 3 spans of 100 km standard single-mode fiber with a dispersion (D) value of 17 ps/(nm·km).

  4. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  5. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1], which was proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm can improve both the bit rate and the image quality, it can still be further improved to obtain a much lower bit rate for the LASIS interference pattern, whose special optical characteristics arise from the pushing and sweeping in the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the block currently being processed. Experiments show the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for the LASIS interference hyper-spectral sequences.

  6. Security of quantum key distribution with multiphoton components

    PubMed Central

    Yin, Hua-Lei; Fu, Yao; Mao, Yingqiu; Chen, Zeng-Bing

    2016-01-01

    Most qubit-based quantum key distribution (QKD) protocols extract the secure key merely from single-photon component of the attenuated lasers. However, with the Scarani-Acin-Ribordy-Gisin 2004 (SARG04) QKD protocol, the unconditionally secure key can be extracted from the two-photon component by modifying the classical post-processing procedure in the BB84 protocol. Employing the merits of SARG04 QKD protocol and six-state preparation, one can extract secure key from the components of single photon up to four photons. In this paper, we provide the exact relations between the secure key rate and the bit error rate in a six-state SARG04 protocol with single-photon, two-photon, three-photon, and four-photon sources. By restricting the mutual information between the phase error and bit error, we obtain a higher secure bit error rate threshold of the multiphoton components than previous works. Besides, we compare the performances of the six-state SARG04 with other prepare-and-measure QKD protocols using decoy states. PMID:27383014

  7. An adaptive P300-based online brain-computer interface.

    PubMed

    Lenhardt, Alexander; Kaper, Matthias; Ritter, Helge J

    2008-04-01

    The P300 component of an event-related potential is widely used in conjunction with brain-computer interfaces (BCIs) to translate the subject's intent, by mere thoughts, into commands to control artificial devices. A well-known application is the spelling of words, where letters are selected by focusing attention on the target letter. In this paper, we present a P300-based online BCI which reaches very competitive performance in terms of information transfer rates. In addition, we propose an online method that optimizes information transfer rates and/or accuracies. This is achieved by an algorithm which dynamically limits the number of subtrial presentations according to the subject's current online performance in real time. We present results of two studies based on 19 different healthy subjects in total who participated in our experiments (seven subjects in the first and 12 subjects in the second one). In the first study, peak information transfer rates of up to 92 bits/min with an accuracy of 100% were achieved by one subject, with a mean of 32 bits/min at about 80% accuracy. The second experiment employed a dynamic classifier which enables the user to optimize bit rates and/or accuracies by limiting the number of subtrial presentations according to the current online performance of the subject. At the fastest setting, mean information transfer rates could be improved to 50.61 bits/min (i.e., 13.13 symbols/min). The most accurate results, with 87.5% accuracy, showed a transfer rate of 29.35 bits/min.
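
    Information transfer rates of this kind are conventionally computed from the accuracy, the number of selectable symbols, and the selection rate. The sketch below implements the widely used Wolpaw formula under the assumption of a 6x6 (36-symbol) speller matrix; it is an illustrative reimplementation, not the authors' evaluation code, and the example numbers are placeholders.

    ```python
    import math

    def wolpaw_itr(accuracy, n_classes, selections_per_min):
        """Wolpaw information transfer rate in bits/min:
        bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
        p, n = accuracy, n_classes
        if p >= 1.0:
            bits_per_selection = math.log2(n)
        else:
            bits_per_selection = (math.log2(n) + p * math.log2(p)
                                  + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits_per_selection * selections_per_min

    # Illustrative call: 36-symbol matrix, 87.5% accuracy, 13.13 selections/min
    print(wolpaw_itr(0.875, 36, 13.13))
    ```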

  8. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  9. Word-Synchronous Optical Sampling of Periodically Repeated OTDM Data Words for True Waveform Visualization

    NASA Astrophysics Data System (ADS)

    Benkler, Erik; Telle, Harald R.

    2007-06-01

    An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details which cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.

  10. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  11. Single and Multi-Pulse Low-Energy Conical Theta Pinch Inductive Pulsed Plasma Thruster Performance

    NASA Technical Reports Server (NTRS)

    Hallock, A. K.; Martin, A. K.; Polzin, K. A.; Kimberlin, A. C.; Eskridge, R. H.

    2013-01-01

    Impulse bits produced by conical theta-pinch inductive pulsed plasma thrusters possessing cone angles of 20 deg, 38 deg, and 60 deg were quantified for 500 J/pulse operation by direct measurement using a hanging-pendulum thrust stand. All three cone angles were tested in single-pulse mode, with the 38-deg model producing the highest impulse bits, at roughly 1 mN-s, operating on both argon and xenon propellants. A capacitor charging system, assembled to support repetitively pulsed thruster operation, permitted testing of the 38-deg thruster at a repetition rate of 5 Hz at power levels of 0.9, 1.6, and 2.5 kW. The average thrust measured during multiple-pulse operation exceeded the value obtained when the single-pulse impulse bit is multiplied by the repetition rate.

  12. Optical Security System Based on the Biometrics Using Holographic Storage Technique with a Simple Data Format

    NASA Astrophysics Data System (ADS)

    Jun, An Won

    2006-01-01

    We implement a first practical holographic security system using electrical biometrics that combines optical encryption and digital holographic memory technologies. The optical information for identification includes a facial picture, a name, and a fingerprint, which are spatially multiplexed by a random phase mask used as a decryption key. For decryption in our biometric security system, a bit-error-detection method is used that compares the digital bits of a live fingerprint with those of the fingerprint information extracted from the hologram.

  13. Inexpensive programmable clock for a 12-bit computer

    NASA Technical Reports Server (NTRS)

    Vrancik, J. E.

    1972-01-01

    An inexpensive programmable clock was built for a digital PDP-12 computer. The instruction list includes skip on flag; clear the flag, clear the clock, and stop the clock; and preset the counter with the contents of the accumulator and start the clock. The clock counts at a rate determined by an external oscillator and causes an interrupt and sets a flag when a 12-bit overflow occurs. An overflow can occur after 1 to 4096 counts. The clock can be built for a total parts cost of less than $100, including power supply and I/O connector. Slight modifications permit its use on larger machines (16-bit, 24-bit, etc.), and logic-level shifting can make it compatible with any computer.
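
    The counting behaviour described above is easy to state in a few lines: the 12-bit counter is preset from the accumulator and the interrupt fires on overflow, so the interval is (4096 - preset) ticks of the external oscillator. The sketch below is a hypothetical illustration of that arithmetic, not the original hardware logic.

    ```python
    def ticks_until_overflow(preset):
        """A 12-bit counter preset to `preset` overflows after 4096 - preset counts (1 to 4096)."""
        assert 0 <= preset <= 0xFFF
        return 4096 - preset

    def interrupt_interval_seconds(preset, oscillator_hz):
        """Time from start until the overflow interrupt at the external oscillator rate."""
        return ticks_until_overflow(preset) / oscillator_hz

    print(ticks_until_overflow(0))                          # 4096 counts (longest interval)
    print(interrupt_interval_seconds(0xFFF, 1_000_000))     # 1 count at 1 MHz -> 1e-06 s
    ```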

  14. Servo-integrated patterned media by hybrid directed self-assembly.

    PubMed

    Xiao, Shuaigang; Yang, Xiaomin; Steiner, Philip; Hsu, Yautzong; Lee, Kim; Wago, Koichi; Kuo, David

    2014-11-25

    A hybrid directed self-assembly approach is developed to fabricate unprecedented servo-integrated bit-patterned media templates, by combining sphere-forming block copolymers with 5 teradot/in.² resolution capability, nanoimprint and optical lithography with overlay control. Nanoimprint generates prepatterns with different dimensions in the data field and servo field, respectively, and optical lithography controls the selective self-assembly process in either field. Two distinct directed self-assembly techniques, low-topography graphoepitaxy and high-topography graphoepitaxy, are elegantly integrated to create bit-patterned templates with flexible embedded servo information. Spinstand magnetic test at 1 teradot/in.² shows a low bit error rate of 10^-2.43, indicating fully functioning bit-patterned media and great potential of this approach for fabricating future ultra-high-density magnetic storage media.

  15. New PDC cutters improve drilling efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mensa-Wilmot, G.

    1997-10-27

    New polycrystalline diamond compact (PDC) cutters increase penetration rates and cumulative footage through improved abrasion, impact, interface strength, thermal stability, and fatigue characteristics. Studies of formation characterization, vibration analysis, hydraulic layouts, and bit selection continue to improve and expand PDC bit applications. The paper discusses development philosophy, performance characteristics and requirements, Types A, B, and C cutters, and combinations.

  16. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB string at Alice and Bob is balanced in the sense that its entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSB at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
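
    A rough Monte Carlo check of this two-bit labelling (a sign bit plus a magnitude threshold) is straightforward to code. The channel model, threshold grid, and sample size below are illustrative assumptions rather than the authors' exact setup, but the resulting joint mutual information should land in the neighbourhood of the ~0.22 bits/sample quoted above.

    ```python
    import numpy as np

    def two_bit_labels(x, thresh):
        """2-bit label per sample: MSB = sign, LSB = magnitude above a threshold."""
        msb = (x > 0).astype(np.int64)
        lsb = (np.abs(x) > thresh).astype(np.int64)
        return 2 * msb + lsb                                   # labels in {0, 1, 2, 3}

    def mutual_information(a, b):
        """Empirical mutual information (bits) between two 4-ary label sequences."""
        joint = np.bincount(4 * a + b, minlength=16).reshape(4, 4).astype(float)
        joint /= joint.sum()
        pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

    rng = np.random.default_rng(0)
    n, snr_db = 1_000_000, -3.0
    noise_var = 10 ** (-snr_db / 10)                           # unit-variance signal at Alice
    alice = rng.normal(0.0, 1.0, n)
    bob = alice + rng.normal(0.0, np.sqrt(noise_var), n)       # Bob observes a noisy copy

    # Sweep the magnitude threshold (scaled to each side's standard deviation), keep the best
    results = [(mutual_information(two_bit_labels(alice, t),
                                   two_bit_labels(bob, t * bob.std())), t)
               for t in np.linspace(0.3, 1.5, 13)]
    print(max(results))
    ```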

  17. Effect of digital scrambling on satellite communication links

    NASA Technical Reports Server (NTRS)

    Dessouky, K.

    1985-01-01

    Digital data scrambling has been considered for communication systems using NRZ symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System (TDRSS) links. Conclusions regarding the usefulness of scrambling are also given.
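
    An additive (synchronous) scrambler of the kind discussed above XORs the NRZ data with a pseudo-random sequence from an LFSR, and descrambling is the same operation with the same seed. The polynomial, seed, and all-zeros test pattern in the sketch below are illustrative choices, not the TDRSS-specified scrambler.

    ```python
    def lfsr_pn(length, seed=0x7FFF):
        """PN bit sequence from a 15-stage LFSR with polynomial x^15 + x^14 + 1
        (an illustrative maximal-length choice, not a standard-mandated one)."""
        state, out = seed, []
        for _ in range(length):
            bit = ((state >> 14) ^ (state >> 13)) & 1
            out.append(bit)
            state = ((state << 1) | bit) & 0x7FFF
        return out

    def scramble(bits, seed=0x7FFF):
        """Additive scrambling: XOR data with the PN sequence; applying it twice restores the data."""
        return [d ^ p for d, p in zip(bits, lfsr_pn(len(bits), seed))]

    def transitions(bits):
        return sum(a != b for a, b in zip(bits, bits[1:]))

    data = [0] * 64                    # worst case for a symbol synchronizer: no transitions
    tx = scramble(data)
    print(transitions(data), transitions(tx), scramble(tx) == data)   # 0, many, True
    ```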

  18. New coding advances for deep space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.

    1987-01-01

    Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10 to the -6th at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.

  19. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  20. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  1. Studying the rapid bioconversion of lignocellulosic sugars into ethanol using high cell density fermentations with cell recycle

    PubMed Central

    2014-01-01

    Background The Rapid Bioconversion with Integrated recycle Technology (RaBIT) process reduces capital costs, processing times, and biocatalyst cost for biochemical conversion of cellulosic biomass to biofuels by reducing total bioprocessing time (enzymatic hydrolysis plus fermentation) to 48 h, increasing biofuel productivity (g/L/h) twofold, and recycling biocatalysts (enzymes and microbes) to the next cycle. To achieve these results, RaBIT utilizes 24-h high cell density fermentations along with cell recycling to solve the slow/incomplete xylose fermentation issue, which is critical for lignocellulosic biofuel fermentations. Previous studies utilizing similar fermentation conditions showed a decrease in xylose consumption when recycling cells into the next fermentation cycle. Eliminating this decrease is critical for RaBIT process effectiveness for high cycle counts. Results Nine different engineered microbial strains (including Saccharomyces cerevisiae strains, Scheffersomyces (Pichia) stipitis strains, Zymomonas mobilis 8b, and Escherichia coli KO11) were tested under RaBIT platform fermentations to determine their suitability for this platform. Fermentation conditions were then optimized for S. cerevisiae GLBRCY128. Three different nutrient sources (corn steep liquor, yeast extract, and wheat germ) were evaluated to improve xylose consumption by recycled cells. Capacitance readings were used to accurately measure viable cell mass profiles over five cycles. Conclusion The results showed that not all strains are capable of effectively performing the RaBIT process. Acceptable performance is largely correlated to the specific xylose consumption rate. Corn steep liquor was found to reduce the deleterious impacts of cell recycle and improve specific xylose consumption rates. The viable cell mass profiles indicated that reduction in specific xylose consumption rate, not a drop in viable cell mass, was the main cause for decreasing xylose consumption. PMID:24847379

  2. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.

  3. Bandwidth reduction for video-on-demand broadcasting using secondary content insertion

    NASA Astrophysics Data System (ADS)

    Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy

    2005-01-01

    An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.

  4. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  5. Cepstral domain modification of audio signals for data embedding: preliminary results

    NASA Astrophysics Data System (ADS)

    Gopalan, Kaliappan

    2004-06-01

    A method of embedding data in an audio signal using cepstral domain modification is described. Building on successful embedding in the spectral points of perceptually masked regions in each frame of speech, the technique was first extended to embedding in the log spectral domain. This extension resulted in approximately 62 bits/s of embedding with less than 2 percent bit error rate (BER) for a clean cover speech (from the TIMIT database), and about 2.5 percent for a noisy speech (from an air traffic controller database), when all frames - including silence and transitions between voiced and unvoiced segments - were used. The bit error rate increased significantly when the log spectrum in the vicinity of a formant was modified. In the next procedure, embedding by altering the mean cepstral values of two ranges of indices was studied. Tests on both a noisy utterance and a clean utterance indicated barely noticeable perceptual change in speech quality when the lower range of cepstral indices - corresponding to the vocal tract region - was modified in accordance with the data. With an embedding capacity of approximately 62 bits/s - using one bit per frame regardless of frame energy or type of speech - initial results showed a BER of less than 1.5 percent for a payload of 208 embedded bits using the clean cover speech. A BER of less than 1.3 percent resulted for the noisy host with a capacity of 316 bits. When the cepstrum was modified in the region of excitation, the BER increased to over 10 percent. With quantization causing no significant problem, the technique warrants further studies with different cepstral ranges and sizes. Pitch-synchronous cepstrum modification, for example, may be more robust to attacks. In addition, cepstrum modification in regions of speech that are perceptually masked - analogous to embedding in frequency-masked regions - may yield imperceptible stego audio with low BER.

  6. Design of high-speed burst mode clock and data recovery IC for passive optical network

    NASA Astrophysics Data System (ADS)

    Yan, Minhui; Hong, Xiaobin; Huang, Wei-Ping; Hong, Jin

    2005-09-01

    The design of a high bit rate burst-mode clock and data recovery (BMCDR) circuit for gigabit passive optical networks (GPON) is described. A top-down design flow is established, and some of the key issues related to behavioural-level modeling are addressed in consideration of the complexity of the BMCDR integrated circuit (IC). A precise Simulink behavioural model accounting for the saturation of the frequency control voltage is therefore developed for the BMCDR, and the parameters of the circuit blocks can be readily adjusted and optimized based on this behavioural model. The newly designed BMCDR utilizes a 0.18-um standard CMOS technology and is shown to be capable of operating at a bit rate of 2.5 Gbps, with a recovery time of one bit period in our simulation. The developed behavioural model is verified by comparison with detailed circuit simulation.

  7. Experimental study of entanglement evolution in the presence of bit-flip and phase-shift noises

    NASA Astrophysics Data System (ADS)

    Liu, Xia; Cao, Lian-Zhen; Zhao, Jia-Qiang; Yang, Yang; Lu, Huai-Xin

    2017-10-01

    Because of its important role both in fundamental theory and in applications of quantum information, the evolution of entanglement in a quantum system under decoherence has attracted wide attention in recent years. In this paper, we experimentally generate a high-fidelity maximally entangled two-qubit state and present an experimental study of the decoherence properties of an entangled pair of qubits under collective (non-collective) bit-flip and phase-shift noises. The results show that the entanglement decrease depends on the type of noise (collective or non-collective, and bit-flip or phase-shift) and on the number of qubits which are subject to the noise. When two qubits are depolarized by passing through the non-collective noisy channel, the decay rate is larger than that for the collective noise. When both qubits pass through the depolarizing noisy channel, the decay rate is larger than that for one qubit.

  8. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  9. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  10. Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamrick, Todd

    2011-01-01

    Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
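
    For reference, the conventional mechanical specific energy expression that this optimization starts from (Teale's formulation, in common oilfield units) can be evaluated directly. The sketch below is only that baseline formula with placeholder numbers, not the single-parameter rewritten form derived in the work.

    ```python
    import math

    def mse_psi(wob_lbf, torque_ftlbf, rpm, rop_ft_per_hr, bit_diameter_in):
        """Conventional mechanical specific energy in psi:
        MSE = WOB/A + 120*pi*RPM*T/(A*ROP), with bit area A in in^2,
        torque T in ft-lbf, and rate of penetration ROP in ft/hr."""
        area = math.pi * (bit_diameter_in / 2.0) ** 2
        axial_term = wob_lbf / area
        rotary_term = (120.0 * math.pi * rpm * torque_ftlbf) / (area * rop_ft_per_hr)
        return axial_term + rotary_term

    # Placeholder inputs: 8.5-in bit, 25,000 lbf WOB, 120 RPM, 5,000 ft-lbf torque, 60 ft/hr
    print(f"{mse_psi(25_000, 5_000, 120, 60, 8.5):,.0f} psi")
    ```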

  11. Estimating Hardness from the USDC Tool-Bit Temperature Rise

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Sherrit, Stewart

    2008-01-01

    A method of real-time quantification of the hardness of a rock or similar material involves measurement of the temperature, as a function of time, of the tool bit of an ultrasonic/sonic drill corer (USDC) that is being used to drill into the material. The method is based on the idea that, other things being about equal, the rate of rise of temperature and the maximum temperature reached during drilling increase with the hardness of the drilled material. In this method, the temperature is measured by means of a thermocouple embedded in the USDC tool bit near the drilling tip. The hardness of the drilled material can then be determined through correlation of the temperature-rise-versus-time data with time-dependent temperature rises determined in finite-element simulations of, and/or experiments on, drilling at various known rates of advance or known power levels through materials of known hardness. The figure presents an example of empirical temperature-versus-time data for a particular 3.6-mm USDC bit, driven at an average power somewhat below 40 W, drilling through materials of various hardness levels. The temperature readings from within a USDC tool bit can also be used for purposes other than estimating the hardness of the drilled material. For example, they can be especially useful as feedback to control the driving power to prevent thermal damage to the drilled material, the drill bit, or both. In the case of drilling through ice, the temperature readings could be used as a guide to maintaining sufficient drive power to prevent jamming of the drill by preventing refreezing of melted ice in contact with the drill.

  12. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB for post-forward-error-correction bit error rates of 10^-7 and 10^-12, respectively, for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
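
    A 2-bit soft-decision front-end of this kind divides the received amplitude into four confidence regions which the LDPC decoder consumes as log-likelihood ratios. The thresholds and LLR magnitudes in the sketch below are placeholders for illustration, not the measured values of the demonstrated circuit.

    ```python
    def two_bit_soft_decision(sample, t):
        """Classify a received amplitude into one of four regions:
        0 = strong '0', 1 = weak '0', 2 = weak '1', 3 = strong '1'."""
        if sample < -t:
            return 0
        if sample < 0.0:
            return 1
        if sample < t:
            return 2
        return 3

    # Placeholder LLR table (positive values favour bit '0'); a real decoder would calibrate these
    LLR = {0: +4.0, 1: +1.0, 2: -1.0, 3: -4.0}

    received = [-0.9, -0.2, 0.1, 1.3]
    print([LLR[two_bit_soft_decision(s, t=0.5)] for s in received])   # [4.0, 1.0, -1.0, -4.0]
    ```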

  13. Optimization of Wireless Transceivers under Processing Energy Constraints

    NASA Astrophysics Data System (ADS)

    Wang, Gaojian; Ascheid, Gerd; Wang, Yanlu; Hanay, Oner; Negra, Renato; Herrmann, Matthias; Wehn, Norbert

    2017-09-01

    Focus of the article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power which can be dissipated without forced cooling. To achieve a maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, suitable processing efficient transmission schemes together with energy efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short range wireless transmitters working at carrier frequencies around 60 GHz and bandwidths between 1 GHz and 10 GHz.

  14. Development of a high capacity bubble domain memory element and related epitaxial garnet materials for application in spacecraft data recorders. Item 1: Development of a high capacity memory element

    NASA Technical Reports Server (NTRS)

    Besser, P. J.

    1977-01-01

    Several versions of the 100K bit chip, which is configured as a single serial loop, were designed, fabricated and evaluated. Design and process modifications were introduced into each succeeding version to increase device performance and yield. At an intrinsic field rate of 150 KHz the final design operates from -10 C to +60 C with typical bias margins of 12 and 8 percent, respectively, for continuous operation. Asynchronous operation with first bit detection on start-up produces essentially the same margins over the temperature range. Cost projections made from fabrication yield runs on the 100K bit devices indicate that the memory element cost will be less than 10 millicents/bit in volume production.

  15. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  16. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.

  17. Heat-assisted magnetic recording of bit-patterned media beyond 10 Tb/in²

    NASA Astrophysics Data System (ADS)

    Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk

    2016-03-01

    The limits of the areal storage density achievable with heat-assisted magnetic recording are unknown. We addressed this central question and investigated the areal density of bit-patterned media. We analyzed the detailed switching behavior of a recording bit under various external conditions, allowing us to compute the bit error rate of a write process (shingled and conventional) for various grain spacings, write head positions, and write temperatures. Hence, we were able to optimize the areal density, yielding values beyond 10 Tb/in². Our model is based on the Landau-Lifshitz-Bloch equation and uses hard magnetic recording grains with a 5-nm diameter and 10-nm height. It assumes a realistic distribution of the Curie temperature of the underlying material, the grain size, as well as the grain and head positions.

  18. Effects of drilling parameters in numerical simulation to the bone temperature elevation

    NASA Astrophysics Data System (ADS)

    Akhbar, Mohd Faizal Ali; Malik, Mukhtar; Yusoff, Ahmad Razlan

    2018-04-01

    Drilling into bone can produce a significant amount of heat, which can cause bone necrosis. Understanding the influence of the drilling parameters on heat generation is necessary to prevent thermal necrosis of the bone. The aim of this study is to investigate the influence of drilling parameters on bone temperature elevation. Drilling simulations of various combinations of drill bit diameter, rotational speed, and feed rate were performed using the finite element software DEFORM-3D. A full-factorial design of experiments (DOE) and two-way analysis of variance (ANOVA) were utilised to examine the effect of the drilling parameters and their interactions on the bone temperature. A maximum bone temperature elevation of 58% was demonstrated within the parameter range of this study. Feed rate was found to be the main parameter influencing bone temperature elevation during the drilling process, followed by drill diameter and rotational speed. The interaction between drill bit diameter and feed rate was found to significantly influence the bone temperature. It is found that the use of a low rotational speed, a small drill bit diameter, and a high feed rate is able to minimize the elevation of bone temperature for safer surgical operations.

  19. Cross-reactivity between methylisothiazolinone, octylisothiazolinone and benzisothiazolinone using a modified local lymph node assay.

    PubMed

    Schwensen, J F; Menné Bonefeld, C; Zachariae, C; Agerbeck, C; Petersen, T H; Geisler, C; Bollmann, U E; Bester, K; Johansen, J D

    2017-01-01

    In the light of the exceptionally high rates of contact allergy to the preservative methylisothiazolinone (MI), information about cross-reactivity between MI, octylisothiazolinone (OIT) and benzisothiazolinone (BIT) is needed. To study cross-reactivity between MI and OIT, and between MI and BIT. Immune responses to MI, OIT and BIT were studied in vehicle- and MI-sensitized female CBA mice by a modified local lymph node assay. The inflammatory response was measured by ear thickness, cell proliferation of CD4+ and CD8+ T cells, and CD19+ B cells in the auricular draining lymph nodes. MI induced significant, strong, concentration-dependent immune responses in the draining lymph nodes following a sensitization phase of three consecutive days. Groups of MI-sensitized mice were challenged on day 23 with 0.4% MI, 0.7% OIT and 1.9% BIT - concentrations corresponding to their individual EC3 values. No statistically significant difference in proliferation of CD4+ and CD8+ T cells was observed between mice challenged with MI compared with mice challenged with BIT and OIT. The data indicate cross-reactivity between MI, OIT and BIT when the potency of the chemical was taken into account in the choice of challenge concentration. This means that MI-sensitized individuals may react to OIT and BIT if exposed to sufficient concentrations. © 2016 British Association of Dermatologists.

  20. Improved P300 speller performance using electrocorticography, spectral features, and natural language processing.

    PubMed

    Speier, William; Fried, Itzhak; Pouratian, Nader

    2013-07-01

    The P300 speller is a system designed to restore communication to patients with advanced neuromuscular disorders. This study was designed to explore the potential improvement from using electrocorticography (ECoG) compared to the more traditional usage of electroencephalography (EEG). We tested the P300 speller on two epilepsy patients with temporary subdural electrode arrays over the occipital and temporal lobes respectively. We then performed offline analysis to determine the accuracy and bit rate of the system and integrated spectral features into the classifier and used a natural language processing (NLP) algorithm to further improve the results. The subject with the occipital grid achieved an accuracy of 82.77% and a bit rate of 41.02, which improved to 96.31% and 49.47 respectively using a language model and spectral features. The temporal grid patient achieved an accuracy of 59.03% and a bit rate of 18.26 with an improvement to 75.81% and 27.05 respectively using a language model and spectral features. Spatial analysis of the individual electrodes showed best performance using signals generated and recorded near the occipital pole. Using ECoG and integrating language information and spectral features can improve the bit rate of a P300 speller system. This improvement is sensitive to the electrode placement and likely depends on visually evoked potentials. This study shows that there can be an improvement in BCI performance when using ECoG, but that it is sensitive to the electrode location. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Active tracking system for visible light communication using a GaN-based micro-LED and NRZ-OOK.

    PubMed

    Lu, Zhijian; Tian, Pengfei; Chen, Hong; Baranowski, Izak; Fu, Houqiang; Huang, Xuanqi; Montes, Jossue; Fan, Youyou; Wang, Hongyi; Liu, Xiaoyan; Liu, Ran; Zhao, Yuji

    2017-07-24

    Visible light communication (VLC) holds the promise of a high-speed wireless network for indoor applications and competes with 5G radio frequency (RF) systems. Although the breakthrough of gallium nitride (GaN) based micro-light-emitting-diodes (micro-LEDs) increases the -3 dB modulation bandwidth exceptionally, from tens of MHz to hundreds of MHz, the light collected onto a fast photoreceiver drops dramatically, which determines the signal-to-noise ratio (SNR) of the VLC link. To fully implement a practical high-data-rate VLC link enabled by a GaN-based micro-LED, focusing optics and a tracking system are required. In this paper, we demonstrate an active on-chip tracking system for VLC using a GaN-based micro-LED and non-return-to-zero on-off keying (NRZ-OOK). Using this novel technique, the field of view (FOV) was enlarged to 120° and data rates up to 600 Mbps at a bit error rate (BER) of 2.1×10^-4 were achieved without manual focusing. This paper demonstrates the establishment of a VLC physical link with communication quality enhanced by orders of magnitude, making it suitable for practical communication applications.

  2. Testing fine motor coordination via telehealth: effects of video characteristics on reliability and validity.

    PubMed

    Hoenig, Helen M; Amis, Kristopher; Edmonds, Carol; Morgan, Michelle S; Landerman, Lawrence; Caves, Kevin

    2017-01-01

    Background There is limited research about the effects of video quality on the accuracy of assessments of physical function. Methods A repeated measures study design was used to assess reliability and validity of the finger-nose test (FNT) and the finger-tapping test (FTT) carried out with 50 veterans who had impairment in gross and/or fine motor coordination. Videos were scored by expert raters under eight differing conditions, including in-person, high-definition video with slow motion review, and standard-speed videos with varying bit rates and frame rates. Results FTT inter-rater reliability was excellent with slow motion video (ICC 0.98-0.99) and good (ICC 0.59) under the normal-speed conditions. Inter-rater reliability for FNT 'attempts' was excellent (ICC 0.97-0.99) for all viewing conditions; for FNT 'misses' it was good to excellent (ICC 0.89) with slow motion review but substantially worse (ICC 0.44) on the normal-speed videos. FTT criterion validity (i.e. compared to slow motion review) was excellent (β = 0.94) for the in-person rater and good (β = 0.77) on normal-speed videos. Criterion validity for FNT 'attempts' was excellent under all conditions (r ≥ 0.97) and for FNT 'misses' it was good to excellent under all conditions (β = 0.61-0.81). Conclusions In general, the inter-rater reliability and validity of the FNT and FTT assessed via video technology are similar to standard clinical practices, but are enhanced with slow motion review and/or a higher bit rate.

  3. Security of two-state and four-state practical quantum bit-commitment protocols

    NASA Astrophysics Data System (ADS)

    Loura, Ricardo; Arsenović, Dušan; Paunković, Nikola; Popović, Duška B.; Prvanović, Slobodan

    2016-12-01

    We study cheating strategies against a practical four-state quantum bit-commitment protocol [A. Danan and L. Vaidman, Quant. Info. Proc. 11, 769 (2012)], 10.1007/s11128-011-0284-4 and its two-state variant [R. Loura et al., Phys. Rev. A 89, 052336 (2014)], 10.1103/PhysRevA.89.052336 when the underlying quantum channels are noisy and the cheating party is constrained to using single-qubit measurements only. We show that simply inferring the transmitted photons' states by using the Breidbart basis, optimal for ambiguous (minimum-error) state discrimination, does not directly produce an optimal cheating strategy for this bit-commitment protocol. We introduce a strategy, based on certain postmeasurement processes and show it to have better chances at cheating than the direct approach. We also study to what extent sending forged geographical coordinates helps a dishonest party in breaking the binding security requirement. Finally, we investigate the impact of imperfect single-photon sources in the protocols. Our study shows that, in terms of the resources used, the four-state protocol is advantageous over the two-state version. The analysis performed can be straightforwardly generalized to any finite-qubit measurement, with the same qualitative results.

  4. Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits

    PubMed Central

    Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey

    2016-01-01

    Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least kBT ln(2) of heat be dissipated from the memory into the environment, where kB is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between “information thermodynamics” and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology. PMID:26998519
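
    The bound quoted above is straightforward to evaluate numerically; at room temperature it amounts to a few zeptojoules per bit. The values below use only standard physical constants and are not specific to the reported experiment.

    ```python
    import math

    k_B = 1.380649e-23            # Boltzmann constant, J/K
    T = 300.0                     # room temperature, K

    e_landauer = k_B * T * math.log(2)
    print(f"{e_landauer:.3e} J per bit")                       # ~2.87e-21 J
    print(f"{e_landauer / 1.602176634e-19:.4f} eV per bit")    # ~0.0179 eV
    ```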

  5. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate in the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of the bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.

  6. Percussive Augmenter of Rotary Drills (PARoD)

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Hasenoehrl, Jennifer; Bar-Cohen, Yoseph; Sherrit, Stewart; Bao, Xiaoqi; Chang, Zensheu; Ostlund, Patrick; Aldrich, Jack

    2013-01-01

    Increasingly, NASA exploration mission objectives include sample acquisition tasks for in-situ analysis or for potential sample return to Earth. To address the requirements for samplers that could be operated at the conditions of the various bodies in the solar system, a piezoelectric actuated percussive sampling device was developed that requires low preload (as low as 10 N) which is important for operation at low gravity. This device can be made as light as 400 g, can be operated using low average power, and can drill rocks as hard as basalt. Significant improvement of the penetration rate was achieved by augmenting the hammering action by rotation and use of a fluted bit to provide effective cuttings removal. Generally, hammering is effective in fracturing drilled media while rotation of fluted bits is effective in cuttings removal. To benefit from these two actions, a novel configuration of a percussive mechanism was developed to produce an augmenter of rotary drills. The device was called Percussive Augmenter of Rotary Drills (PARoD). A breadboard PARoD was developed with a 6.4 mm (0.25 in) diameter bit and was demonstrated to increase the drilling rate of rotation alone by 1.5 to over 10 times. The test results of this configuration were published in a previous publication. Further, a larger PARoD breadboard with a 50.8 mm (2.0 in) diameter bit was developed and tested. This paper presents the design, analysis and test results of the large diameter bit percussive augmenter.

  7. Preliminary design for a standard 10^7 bit Solid State Memory (SSM)

    NASA Technical Reports Server (NTRS)

    Hayes, P. J.; Howle, W. M., Jr.; Stermer, R. L., Jr.

    1978-01-01

    A modular concept with three separate modules roughly separating bubble domain technology, control logic technology, and power supply technology was employed. These modules were, respectively, the standard memory module (SMM), the data control unit (DCU), and the power supply module (PSM). The storage medium was provided by bubble domain chips organized into memory cells. These cells and the circuitry for parallel data access to the cells make up the SMM. The DCU provides a flexible serial data interface to the SMM. The PSM provides adequate power to enable one DCU and one SMM to operate simultaneously at the maximum data rate. The SSM was designed to handle asynchronous data rates from dc to 1.024 Mbs with a bit error rate of less than 1 error in 10 to the eighth power bits. Two versions of the SSM, a serial data memory and a dual parallel data memory, were specified using the standard modules. The SSM specification includes requirements for radiation hardness, temperature and mechanical environments, dc magnetic field emission and susceptibility, electromagnetic compatibility, and reliability.

  8. Blue Laser Diode Enables Underwater Communication at 12.4 Gbps

    PubMed Central

    Wu, Tsai-Chen; Chi, Yu-Chieh; Wang, Huai-Yung; Tsai, Cheng-Ting; Lin, Gong-Ru

    2017-01-01

    To enable high-speed underwater wireless optical communication (UWOC) in tap-water and seawater environments over long distances, a 450-nm blue GaN laser diode (LD) directly modulated by pre-leveled 16-quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) data was employed to implement its maximal transmission capacity of up to 10 Gbps. The proposed UWOC in tap water provided a maximal allowable communication bit rate increase from 5.2 to 12.4 Gbps with the corresponding underwater transmission distance significantly reduced from 10.2 to 1.7 m, exhibiting a bit rate/distance decaying slope of −0.847 Gbps/m. When conducting the same type of UWOC in seawater, light scattering induced by impurities attenuated the blue laser power, thereby degrading the transmission with a slightly higher decay ratio of 0.941 Gbps/m. The blue LD based UWOC enables a 16-QAM OFDM bit rate of up to 7.2 Gbps for transmission in seawater more than 6.8 m. PMID:28094309
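
    The tap-water decay slope quoted above follows directly from the two endpoint measurements; the one-line check below is purely illustrative arithmetic.

    ```python
    rate_near, dist_near = 12.4, 1.7     # Gbps at 1.7 m
    rate_far, dist_far = 5.2, 10.2       # Gbps at 10.2 m

    slope = (rate_far - rate_near) / (dist_far - dist_near)
    print(f"{slope:.3f} Gbps/m")         # about -0.847 Gbps/m, matching the figure above
    ```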

  9. Development of the Low-cost Analog-to-Digital Converter (for nuclear physics experiments) with PC sound card

    NASA Astrophysics Data System (ADS)

    Sugihara, Kenkoh

    2009-10-01

    A low-cost ADC (Analogue-to-Digital Converter) with embedded shaping for the undergraduate physics laboratory is developed using a home-made circuit and a PC sound card. Even though an ADC is an essential part of an experimental setup, commercially available ones are very expensive and are scarce in undergraduate laboratory experiments. The system developed in the present work is designed for a gamma-ray spectroscopy laboratory with NaI(Tl) counters, but is not limited to it. For this purpose, the system performance is set to a sampling rate of 1 kHz with 10-bit resolution, using a typical PC sound card with a 44.1-kHz or higher sampling rate and a 16-bit resolution ADC, together with an added shaping circuit. Details of the system and the status of development will be presented.

  10. The selection of Lorenz laser parameters for transmission in the SMF 3rd transmission window

    NASA Astrophysics Data System (ADS)

    Gajda, Jerzy K.; Niesterowicz, Andrzej; Zeglinski, Grzegorz

    2003-10-01

    The work presents simulation results for a transmission line using the standard ITU-T G.652 fiber. The parameters of the Lorenz laser determine electrical signal parameters such as the eye pattern, jitter, BER, S/N, Q-factor, and scattering diagram. For a short line, lasers with a linewidth larger than 100 MHz can be used. In the paper, cases for 10 Gbit/s and 40 Gbit/s transmission and fiber lengths of 30 km, 50 km, and 70 km are calculated. The average open eye patterns were 1×10⁻⁵-120×10⁻⁵. The Q factor was 10-23 dB. In the calculations the bit error rate (BER) was 10⁻⁴⁰-10⁻⁴. If the bandwidth of the Lorenz laser increases from 10 MHz to 500 MHz, the transmission distance decreases from 70 km to 30 km. The bit rate of the transmitter is also very important for the transmission distance: if the bit rate increases from 10 Gbit/s to 40 Gbit/s, the transmission distance for the single-mode fiber G.652 will decrease from 70 km to 5 km.

  11. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10⁻⁴ on the available data sets.
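    The verification step described above, comparing binary iris codes with a Hamming distance and sweeping a threshold to trade FAR against FRR, can be illustrated with a short sketch. The code below is a generic illustration, not the authors' implementation: the code length, the 10% flip rate, and the threshold are arbitrary placeholder values.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Normalized Hamming distance between two binary feature vectors."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def far_frr(genuine_dists, impostor_dists, threshold):
    """FAR = fraction of impostor comparisons accepted; FRR = fraction of genuine comparisons rejected."""
    far = sum(d <= threshold for d in impostor_dists) / len(impostor_dists)
    frr = sum(d > threshold for d in genuine_dists) / len(genuine_dists)
    return far, frr

# Toy data: a random 2048-bit code, noisy genuine copies (~10% bits flipped), and unrelated impostor codes.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048)
genuine = [hamming_distance(a, np.where(rng.random(2048) < 0.10, 1 - a, a)) for _ in range(20)]
impostor = [hamming_distance(a, rng.integers(0, 2, 2048)) for _ in range(20)]
print(far_frr(genuine, impostor, threshold=0.32))
```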

  12. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  13. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
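    To see how error-correction efficiency feeds into the final key, the sketch below evaluates the standard asymptotic BB84-style secure fraction 1 − h(e) − f·h(e), where h is the binary entropy, e the quantum bit error rate, and f ≥ 1 the reconciliation efficiency (f = 1 is the Shannon limit). The QBER and efficiency values are illustrative and not taken from the paper.

```python
import math

def binary_entropy(e: float) -> float:
    """Binary entropy h(e) in bits."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * math.log2(e) - (1 - e) * math.log2(1 - e)

def secure_fraction(qber: float, f_ec: float) -> float:
    """Asymptotic BB84-style secure key fraction: privacy amplification removes
    h(e), and error correction discloses f_ec * h(e) bits per sifted bit."""
    return max(0.0, 1.0 - binary_entropy(qber) - f_ec * binary_entropy(qber))

ideal = secure_fraction(0.02, 1.0)        # perfect (Shannon-limit) reconciliation
practical = secure_fraction(0.02, 1.10)   # 10% reconciliation overhead (illustrative)
print(practical / ideal)                  # fraction of the ideal secure key rate retained
```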

  14. Smaller Footprint Drilling System for Deep and Hard Rock Environments; Feasibility of Ultra-High-Speed Diamond Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis; Alan Black; Homer Robertson

    2006-03-01

    The two phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility of concept research effort aimed at development that will ultimately result in the ability to reliably drill ''faster and deeper'' possibly with smaller, more mobile rigs. The principle focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energymore » and loads. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled ''Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling'' for the period starting 1 October 2004 through 30 September 2005. Additionally, research activity from 1 October 2005 through 28 February 2006 is included in this report: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek designed and planned Phase I bench scale experiments. Some difficulties continue in obtaining ultra-high speed motors. Improvements have been made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs have been provided to vendors for production. A more consistent product is required to minimize the differences in bit performance. A test matrix for the final core bit testing program has been completed. (3) TerraTek is progressing through Task 3 ''Small-scale cutting performance tests''. (4) Significant testing has been performed on nine different rocks. (5) Bit balling has been observed on some rock and seems to be more pronounces at higher rotational speeds. (6) Preliminary analysis of data has been completed and indicates that decreased specific energy is required as the rotational speed increases (Task 4). This data analysis has been used to direct the efforts of the final testing for Phase I (Task 5). (7) Technology transfer (Task 6) has begun with technical presentations to the industry (see Judzis).« less

  15. Expeditious reconciliation for practical quantum key distribution

    NASA Astrophysics Data System (ADS)

    Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.

    2004-08-01

    The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
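    The core interactive step of Cascade-like reconciliation is a binary parity search that locates a single error inside a block whose overall parities disagree. A minimal sketch of that step is shown below; it is not the authors' optimized, FEC-assisted variant.

```python
def parity(bits):
    return sum(bits) % 2

def binary_search_error(alice: list, bob: list) -> int:
    """Locate one error position in a block whose overall parities differ,
    by recursively comparing parities of halves (as in Cascade-like protocols)."""
    lo, hi = 0, len(alice)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid        # the error is in the first half
        else:
            lo = mid        # otherwise it must be in the second half
    return lo

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob = alice.copy()
bob[5] ^= 1                 # introduce a single bit error
assert binary_search_error(alice, bob) == 5
```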

  16. Experimental demonstration of robust entanglement distribution over reciprocal noisy channels assisted by a counter-propagating classical reference light.

    PubMed

    Ikuta, Rikizo; Nozaki, Shota; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki

    2017-07-06

    Embedding a quantum state in a decoherence-free subspace (DFS) formed by multiple photons is one of the promising methods for robust entanglement distribution of photonic states over collective noisy channels. In practice, however, such a scheme suffers from a low efficiency proportional to transmittance of the channel to the power of the number of photons forming the DFS. The use of a counter-propagating coherent pulse can improve the efficiency to scale linearly in the channel transmission, but it achieves only protection against phase noises. Recently, it was theoretically proposed [Phys. Rev. A 87, 052325(2013)] that the protection against bit-flip noises can also be achieved if the channel has a reciprocal property. Here we experimentally demonstrate the proposed scheme to distribute polarization-entangled photon pairs against a general collective noise including the bit flip noise and the phase noise. We observed an efficient sharing rate scaling while keeping a high quality of the distributed entangled state. Furthermore, we show that the method is applicable not only to the entanglement distribution but also to the transmission of arbitrary polarization states of a single photon.

  17. 24-Hour Relativistic Bit Commitment.

    PubMed

    Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo

    2016-09-30

    Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.

  18. Efficient heralding of O-band passively spatial-multiplexed photons for noise-tolerant quantum key distribution.

    PubMed

    Liu, Mao Tong; Lim, Han Chuen

    2014-09-22

    When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.

  19. Prefixed-threshold real-time selection method in free-space quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Wenyuan; Xu, Feihu; Lo, Hoi-Kwong

    2018-03-01

    Free-space quantum key distribution allows two parties to share a random key with unconditional security, between ground stations, between mobile platforms, and even in satellite-ground quantum communications. Atmospheric turbulence causes fluctuations in transmittance, which further affect the quantum bit error rate and the secure key rate. Previous postselection methods to combat atmospheric turbulence require a threshold value determined after all quantum transmission. In contrast, here we propose a method where we predetermine the optimal threshold value even before quantum transmission. Therefore, the receiver can discard useless data immediately, thus greatly reducing data storage requirements and computing resources. Furthermore, our method can be applied to a variety of protocols, including, for example, not only single-photon BB84 but also asymptotic and finite-size decoy-state BB84, which can greatly increase its practicality.
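    The practical gain of a prefixed threshold is that low-transmittance slots can be dropped as they arrive, before any post-processing or storage. The sketch below is a simplified illustration with made-up slot records and an arbitrary threshold value; the actual protocol fixes the threshold by optimizing the expected key rate before transmission.

```python
def prefixed_threshold_filter(slots, eta_threshold):
    """Keep only time slots whose monitored transmittance meets the
    pre-determined threshold; discarded slots never need to be stored."""
    return [s for s in slots if s["transmittance"] >= eta_threshold]

# Illustrative slots: monitored channel transmittance and sifted bits collected in that slot.
slots = [
    {"transmittance": 0.30, "sifted_bits": 1200},
    {"transmittance": 0.05, "sifted_bits": 900},   # deep fade: discarded immediately
    {"transmittance": 0.22, "sifted_bits": 1100},
]
kept = prefixed_threshold_filter(slots, eta_threshold=0.10)
print(sum(s["sifted_bits"] for s in kept))
```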

  20. Channel Modeling

    NASA Astrophysics Data System (ADS)

    Schmitz, Arne; Schinnenburg, Marc; Gross, James; Aguiar, Ana

    For any communication system the Signal-to-Interference-plus-Noise-Ratio (SINR) of the link is a fundamental metric. Recall (cf. Chapter 9) that the SINR is defined as the ratio between the received power of the signal of interest and the sum of all "disturbing" power sources (i.e. interference and noise). From information theory it is known that a higher SINR increases the maximum possible error-free transmission rate (referred to as the Shannon capacity [417]) of any communication system, and vice versa. Likewise, the higher the SINR, the lower the bit error rate in practical systems. While one aspect of the SINR is the sum of all disturbing power sources, another issue is the received power. This depends on the transmitted power, the antennas used, possibly on signal processing techniques and ultimately on the channel gain between transmitter and receiver.
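    As a worked illustration of these two quantities, the sketch below computes a linear SINR from received signal, interference, and noise powers and the corresponding Shannon capacity C = B·log2(1 + SINR). The power and bandwidth values are arbitrary examples.

```python
import math

def sinr(p_signal_w: float, interference_w, noise_w: float) -> float:
    """Signal-to-Interference-plus-Noise Ratio (linear scale)."""
    return p_signal_w / (sum(interference_w) + noise_w)

def shannon_capacity_bps(bandwidth_hz: float, sinr_linear: float) -> float:
    """Upper bound on the error-free transmission rate of an AWGN-like channel."""
    return bandwidth_hz * math.log2(1 + sinr_linear)

gamma = sinr(1e-9, [2e-11, 5e-11], 1e-10)       # illustrative received powers in watts
print(shannon_capacity_bps(20e6, gamma) / 1e6)  # capacity in Mbit/s for a 20 MHz channel
```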

  1. How Good Is Good Enough?

    ERIC Educational Resources Information Center

    Wiggins, Grant

    2014-01-01

    Education has a long-standing practice of turning worthwhile learning goals into lists of bits. One might even say that this practice is the original sin in curriculum design: take a complex whole, divide it into small pieces, string those together in a rigid sequence of instruction and testing, and call completion of this sequence…

  2. Contextualising Vocational Knowledge: A Theoretical Framework and Illustrations from Culinary Education

    ERIC Educational Resources Information Center

    Heusdens, W. T.; Bakker, A.; Baartman, L. K. J.; De Bruijn, E.

    2016-01-01

    The nature of knowledge in vocational education is often described in dichotomies such as theory versus practice or general versus specific. Although different scholars now acknowledge that vocational knowledge is more than putting bits of theoretical and practical knowledge together, it is still unclear how vocational knowledge should be…

  3. To Love Your Country as Your Mother: Patriotism after 9/11

    ERIC Educational Resources Information Center

    Wingo, Ajume

    2007-01-01

    The practical power of appeals to patriotism implies that patriotism in one form or another is here to stay. As such, arguments for the repudiation of patriotism cannot avoid seeming a bit utopian or ethereal. Practically speaking, we cannot repudiate patriotism and still have effective functioning states. To that end, political philosophers…

  4. Spin-Valve and Spin-Tunneling Devices: Read Heads, MRAMs, Field Sensors

    NASA Astrophysics Data System (ADS)

    Freitas, P. P.

    Hard disk magnetic data storage is increasing at a steady rate in terms of units sold, with 144 million drives sold in 1998 (107 million for desktops, 18 million for portables, and 19 million for enterprise drives), corresponding to a total business of US$34 billion [1]. The growing need for storage coming from new PC operating systems, Internet applications, and a foreseen explosion of applications connected to consumer electronics (digital TV, video, digital cameras, GPS systems, etc.) keeps the magnetics community actively looking for new solutions concerning media, heads, tribology, and system electronics. Current state-of-the-art disk drives (January 2000), using dual inductive-write, magnetoresistive-read (MR) integrated heads, reach areal densities of 15 to 23 bit/μm², capable of putting a full 20 GB on one platter (a 2-hour film occupies 10 GB). Densities beyond 80 bit/μm² have already been demonstrated in the laboratory (Fujitsu 87 bit/μm² - Intermag 2000, Hitachi 81 bit/μm², Read-Rite 78 bit/μm², Seagate 70 bit/μm² - the last three demos all done in the first 6 months of 2000, with IBM having demonstrated 56 bit/μm² already at the end of 1999). At densities near 60 bit/μm², the linear bit size is ~43 nm, and the width of the written tracks is ~0.23 μm. Areal density in commercial drives is increasing steadily at a rate of nearly 100% per year [1], and consumer products above 60 bit/μm² are expected by 2002. These remarkable achievements are only possible through a stream of technological innovations in media [2], write heads [3], read heads [4], and system electronics [5]. In this chapter, recent advances in spin valve materials and spin valve sensor architectures, low resistance tunnel junctions and tunnel junction head architectures will be addressed.

  5. A hybrid-type quantum random number generator

    NASA Astrophysics Data System (ADS)

    Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu

    2016-05-01

    This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals, which is practical and intuitive, and generates the initial random number sources from a combination of multiple existing random number sources. A time-to-amplitude converter and multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and proves that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).
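    A common way to turn photon-arrival timing into raw bits is to compare consecutive inter-detection intervals; the sketch below illustrates that generic idea with simulated exponential waiting times. It is not the specific multi-source combination scheme of the paper, and the raw bits would still require post-processing before use.

```python
import random

def intervals_to_bits(intervals):
    """Derive one raw bit from each pair of consecutive inter-detection
    intervals: 1 if the first is longer, 0 otherwise (equal pairs are skipped).
    Generic illustration only, not the exact scheme of the paper."""
    bits = []
    for t1, t2 in zip(intervals[0::2], intervals[1::2]):
        if t1 != t2:
            bits.append(1 if t1 > t2 else 0)
    return bits

# Simulated exponential waiting times between single-photon detections (1 MHz mean rate).
random.seed(1)
intervals = [random.expovariate(1e6) for _ in range(1000)]
print(len(intervals_to_bits(intervals)))
```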

  6. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
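    Importance sampling estimates rare bit-error probabilities by drawing noise samples from a biased density and reweighting them with the likelihood ratio. The sketch below estimates the Gaussian tail probability P(Z > a), the form a BPSK bit-error probability takes, using a sampling density shifted to the decision threshold; the threshold a = 4 and the sample count are arbitrary choices.

```python
import math
import random

def ber_importance_sampling(a: float = 4.0, n: int = 100_000, seed: int = 0) -> float:
    """Estimate p = P(Z > a) for Z ~ N(0,1) by sampling from N(a,1) and reweighting."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        x = random.gauss(a, 1.0)                  # biased sampling density g
        if x > a:
            total += math.exp(a * a / 2 - a * x)  # likelihood ratio f(x)/g(x)
    return total / n

est = ber_importance_sampling()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))       # closed-form Gaussian tail for comparison
print(est, exact)
```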

  7. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  8. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.

  9. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, which depends on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
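    For reference, the AMBTC step that the proposed method builds on encodes each block as a bit plane plus a low and a high reconstruction mean. A minimal sketch is given below, without the quadtree partitioning or edge detection described in the abstract; the sample block values are arbitrary.

```python
import numpy as np

def ambtc_encode(block: np.ndarray):
    """Absolute Moment BTC: keep the bit plane plus a low and a high mean."""
    mean = block.mean()
    bitplane = block >= mean
    high = block[bitplane].mean() if bitplane.any() else mean
    low = block[~bitplane].mean() if (~bitplane).any() else mean
    return bitplane, float(low), float(high)

def ambtc_decode(bitplane: np.ndarray, low: float, high: float) -> np.ndarray:
    """Reconstruct the block from the bit plane and the two means."""
    return np.where(bitplane, high, low)

block = np.array([[12, 15, 200, 210],
                  [10, 14, 198, 205],
                  [11, 16, 202, 208],
                  [13, 12, 199, 207]], dtype=float)
bp, lo, hi = ambtc_encode(block)
print(np.abs(block - ambtc_decode(bp, lo, hi)).mean())   # mean absolute reconstruction error
```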

  10. LDPC product coding scheme with extrinsic information for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2017-05-01

    Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.

  11. A high-speed digital signal processor for atmospheric radar, part 7.3A

    NASA Technical Reports Server (NTRS)

    Brosnahan, J. W.; Woodard, D. M.

    1984-01-01

    The Model SP-320 device is a monolithic realization of a complex general purpose signal processor, incorporating such features as a 32-bit ALU, a 16-bit x 16-bit combinatorial multiplier, and a 16-bit barrel shifter. The SP-320 is designed to operate as a slave processor to a host general purpose computer in applications such as coherent integration of a radar return signal in multiple ranges, or dedicated FFT processing. Presently available is an I/O module conforming to the Intel Multichannel interface standard; other I/O modules will be designed to meet specific user requirements. The main processor board includes input and output FIFO (First In First Out) memories, both with depths of 4096 W, to permit asynchronous operation between the source of data and the host computer. This design permits burst data rates in excess of 5 MW/s.

  12. Optical transmission modules for multi-channel superconducting quantum interference device readouts.

    PubMed

    Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong

    2013-12-01

    We developed an optical transmission module consisting of 16-channel analog-to-digital converter (ADC), digital-noise filter, and one-line serial transmitter, which transferred Superconducting Quantum Interference Device (SQUID) readout data to a computer by a single optical cable. A 16-channel ADC sent out SQUID readouts data with 32-bit serial data of 8-bit channel and 24-bit voltage data at a sample rate of 1.5 kSample/s. A digital-noise filter suppressed digital noises generated by digital clocks to obtain SQUID modulation as large as possible. One-line serial transmitter reformed 32-bit serial data to the modulated data that contained data and clock, and sent them through a single optical cable. When the optical transmission modules were applied to 152-channel SQUID magnetoencephalography system, this system maintained a field noise level of 3 fT/√Hz @ 100 Hz.

  13. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
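    The general idea of modulo-based embedding, nudging a host value to the nearest value whose residue carries the payload rather than overwriting its low-order bits, can be sketched as follows. This is a generic illustration of the principle, not the patented method with its key-driven permutation; clipping at the valid sample range is ignored.

```python
def embed_mod(host: int, payload: int, k: int = 2) -> int:
    """Embed k payload bits by moving the host value to the nearest value
    whose residue modulo 2**k equals the payload (generic illustration).
    The introduced error is at most 2**(k-1), versus up to 2**k - 1 for bit replacement."""
    m = 1 << k
    delta = (payload - host) % m
    if delta > m // 2:
        delta -= m                      # moving down is closer
    return host + delta

def extract_mod(stego: int, k: int = 2) -> int:
    """Recover the payload as the residue of the stego value."""
    return stego % (1 << k)

host = 137
stego = embed_mod(host, payload=3, k=2)
print(stego, extract_mod(stego), abs(stego - host))
```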

  14. Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    2002-03-01

    This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and better tolerant of bit-flip errors.

  15. Will available bit rate (ABR) services give us the capability to offer virtual LANs over wide-area ATM networks?

    NASA Astrophysics Data System (ADS)

    Ferrandiz, Ana; Scallan, Gavin

    1995-10-01

    The available bit rate (ABR) service allows connections to exceed their negotiated data rates during the life of the connections when excess capacity is available in the network. These connections are subject to flow control from the network in the event of network congestion. The ability to dynamically adjust the data rate of the connection can provide improved utilization of the network and be a valuable service to end users. An ABR-type service is therefore appropriate for the transmission of bursty LAN traffic over a wide area network in a manner that is more efficient and cost effective than allocating bandwidth at the peak cell rate. This paper describes the ABR service and discusses whether it is realistic to operate a LAN-like service over a wide area using ABR.

  16. Real-time motion-based H.263+ frame rate control

    NASA Astrophysics Data System (ADS)

    Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay

    1998-12-01

    Most existing H.263+ rate control algorithms, e.g. the one adopted in the test model of the near-term (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports the variable bit rate (VBR) channel through the adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control based on the inherent motion information within a sliding window in the underlying video is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
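    The frame-rate side of such a controller can be caricatured as a mapping from motion activity in a sliding window to an encoding frame rate. The sketch below uses invented thresholds and rates purely for illustration; the actual algorithm also adjusts the quantization parameter and tracks the available channel bandwidth.

```python
def choose_frame_rate(motion_window, base_fps: int = 30, low: float = 2.0, high: float = 8.0) -> int:
    """Pick an encoding frame rate from average motion activity over a sliding
    window: drop frames when motion is low, keep the full rate when motion is high.
    Thresholds and rates here are purely illustrative."""
    activity = sum(motion_window) / len(motion_window)
    if activity < low:
        return base_fps // 3        # slow scene: temporal quality matters less
    if activity < high:
        return base_fps // 2
    return base_fps                 # fast motion: preserve temporal quality

print(choose_frame_rate([1.2, 0.8, 1.5]), choose_frame_rate([9.0, 12.0, 10.5]))
```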

  17. SEMICONDUCTOR INTEGRATED CIRCUITS A 10-bit 200-kS/s SAR ADC IP core for a touch screen SoC

    NASA Astrophysics Data System (ADS)

    Xingyuan, Tong; Yintang, Yang; Zhangming, Zhu; Wenfang, Sheng

    2010-10-01

    Based on a 5 MSBs (most-significant-bits)-plus-5 LSBs (least-significant-bits) C-R hybrid D/A conversion and low-offset pseudo-differential comparison approach, with capacitor array axially symmetric layout topology and resistor string low gradient mismatch placement method, an 8-channel 10-bit 200-kS/s SAR ADC (successive-approximation-register analog-to-digital converter) IP core for a touch screen SoC (system-on-chip) is implemented in a 0.18 μm 1P5M CMOS logic process. Design considerations for the touch screen SAR ADC are included. With a 1.8 V power supply, the DNL (differential non-linearity) and INL (integral non-linearity) of this converter are measured to be about 0.32 LSB and 0.81 LSB respectively. With an input frequency of 91 kHz at 200-kS/s sampling rate, the spurious-free dynamic range and effective-number-of-bits are measured to be 63.2 dB and 9.15 bits respectively, and the power is about 136 μW. This converter occupies an area of about 0.08 mm2. The design results show that it is very suitable for touch screen SoC applications.

  18. Ku-band signal design study. [for space shuttle orbiter communication links

    NASA Technical Reports Server (NTRS)

    Lindsey, W. L.; Woo, K. T.

    1977-01-01

    The acquisition/tracking performance of a practical squaring loop in which the times two multiplier is mechanized as a limiter/multiplier combination is evaluated. This squaring approach serves to produce the absolute value of the arriving signal as opposed to the perfect square law action which is required in order to render acquisition and tracking performance equivalent to that of a Costas loop. The Ku-Band orbiter signal design for the forward link is assessed. Acquisition time results and acquisition and tracking thresholds are summarized. A tradeoff study which pertains to bit synchronization techniques for the high rate Ku-Band channel is included and an optimum selection is made based upon the appropriate design constraints.

  19. Applications of NTNU/SINTEF Drillability Indices in Hard Rock Tunneling

    NASA Astrophysics Data System (ADS)

    Zare, S.; Bruland, A.

    2013-01-01

    Drillability indices, i.e., the Drilling Rate Index™ (DRI), Bit Wear Index™ (BWI), Cutter Life Index™ (CLI), and Vickers Hardness Number Rock (VHNR), are indirect measures of rock drillability. These indices are recognized as providing practical characterization of rock properties used in the Norwegian University of Science and Technology (NTNU) time and cost prediction models available for hard rock tunneling and surface excavation. The tests form the foundation of various hard rock equipment capacity and performance prediction methods. In this paper, application of the tests for tunnel boring machine (TBM) and drill and blast (D&B) tunneling is investigated and the impact of the indices on excavation time and costs is presented.

  20. Digital scrambling for shuttle communication links: Do drawbacks outweigh advantages?

    NASA Technical Reports Server (NTRS)

    Dessouky, K.

    1985-01-01

    Digital data scrambling has been considered for communication systems using NRZ (non-return to zero) symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System links. Conclusions regarding the usefulness of scrambling are also given.

  1. Demonstration of low power penalty of silicon Mach-Zehnder modulator in long-haul transmission.

    PubMed

    Yi, Huaxiang; Long, Qifeng; Tan, Wei; Li, Li; Wang, Xingjun; Zhou, Zhiping

    2012-12-03

    We demonstrate error-free 80 km transmission by a silicon carrier-depletion Mach-Zehnder modulator at 10 Gbps with a power penalty as low as 1.15 dB. The devices were evaluated through bit-error-rate characterization at the system level. The silicon Mach-Zehnder modulator was also compared with a lithium niobate Mach-Zehnder modulator in back-to-back and long-haul transmission, respectively, and the negative chirp parameter of the silicon modulator was verified experimentally. The low power penalty indicates a practical application for the silicon modulator in middle- or long-distance transmission systems.

  2. Measurement-Device-Independent Quantum Cryptography

    NASA Astrophysics Data System (ADS)

    Tang, Zhiyuan

    Quantum key distribution (QKD) enables two legitimate parties to share a secret key even in the presence of an eavesdropper. The unconditional security of QKD is based on the fundamental laws of quantum physics. Original security proofs of QKD are based on a few assumptions, e.g., perfect single photon sources and perfect single-photon detectors. However, practical implementations of QKD systems do not fully comply with such assumptions due to technical limitations. The gap between theory and implementations leads to security loopholes in most QKD systems, and several attacks have been launched on sophisticated QKD systems. Particularly, the detectors have been found to be the most vulnerable part of QKD. Much effort has been put to build side-channel-free QKD systems. Solutions such as security patches and device-independent QKD have been proposed. However, the former are normally ad-hoc, and cannot close unidentified loopholes. The latter, while having the advantages of removing all assumptions on devices, is impractical to implement today. Measurement-device-independent QKD (MDI-QKD) turns out to be a promising solution to the security problem of QKD. In MDI-QKD, all security loopholes, including those yet-to-be discovered, have been removed from the detectors, the most critical part in QKD. In this thesis, we investigate issues related to the practical implementation and security of MDI-QKD. We first present a demonstration of polarization-encoding MDI-QKD. Taking finite key effect into account, we achieve a secret key rate of 0.005 bit per second (bps) over 10 km spooled telecom fiber, and a 1600-bit key is distributed. This work, together with other demonstrations, shows the practicality of MDI-QKD. Next we investigate a critical assumption of MDI-QKD: perfect state preparation. We apply the loss-tolerant QKD protocol and adapt it to MDI-QKD to quantify information leakage due to imperfect state preparation. We then present an experimental demonstration of MDI-QKD over 10 km and 40 km of spooled fiber, which for the first time considers the impact of inaccurate polarization state preparation on the secret key rate. This would not have been possible under previous security proofs, given the same amount of state preparation flaws.

  3. Line-of-Sight Data Link Test Set

    DTIC Science & Technology

    1976-06-01

    spheric layer model for layer refraction or a surface reflectivity model for ground reflection paths. Measurement of the channel impulse response...the model is exercised over a path consisting of only a constant direct component. The test would consist of measuring the modem demodulator bit...direct and a fading direct component. The test typically would consist of measuring the bit error-rate over a range of average signal-to-noise

  4. The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results

    NASA Technical Reports Server (NTRS)

    Katz, J.

    1982-01-01

    The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. It is indicated that the throughput efficiency can be achieved not only with a Reed-Solomon code, as originally predicted, but with a less complex code as well.

  5. Classical and quantum communication without a shared reference frame.

    PubMed

    Bartlett, Stephen D; Rudolph, Terry; Spekkens, Robert W

    2003-07-11

    We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.

  6. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10¹¹ bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 × 10⁻⁶ BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 × 10⁻⁷ and (2) further modifying to include a bit error correction capability to attain a BER of 2 × 10⁻⁹. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.

  7. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n·k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
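    For contrast with the parallel decimation-matrix approach of the patent, the sketch below shows the ordinary serial form of such a generator: a Fibonacci LFSR producing a PRBS-7 test pattern (feedback polynomial x⁷ + x⁶ + 1) and a simple bit-error-rate comparison against a received stream. The injected error pattern is arbitrary.

```python
def prbs7(n_bits: int, state: int = 0x7F):
    """Generate a PRBS-7 sequence (x^7 + x^6 + 1) with a serial Fibonacci LFSR.
    Real BER testers parallelize this; here it is one bit per step."""
    out = []
    for _ in range(n_bits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        out.append(state & 1)
        state = ((state << 1) | newbit) & 0x7F
    return out

def bit_error_rate(reference, received) -> float:
    errors = sum(a != b for a, b in zip(reference, received))
    return errors / len(reference)

tx = prbs7(10_000)
rx = [b ^ (1 if i % 997 == 0 else 0) for i, b in enumerate(tx)]   # inject sparse errors
print(bit_error_rate(tx, rx))
```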

  8. Tracking and data system support for the Mariner Mars 1971 mission. Prelaunch phase through first trajectory correction maneuver, volume 1

    NASA Technical Reports Server (NTRS)

    Laeser, R. P.; Textor, G. P.; Kelly, L. B.; Kelly, M.

    1972-01-01

    The DSN command system provided the capability to enter commands in a computer at the deep space stations for transmission to the spacecraft. The high-rate telemetry system operated at 16,200 bits/sec. This system will permit return to DSS 14 of full-resolution television pictures from the spacecraft tape recorder, plus the other science experiment data, during the two playback periods of each Goldstone pass planned for each corresponding orbit. Other features included 4800 bits/sec modem high-speed data lines from all deep space stations to Space Flight Operations Facility (SFOF) and the Goddard Space Flight Center, as well as 50,000 bits/sec wideband data lines from DSS 14 to the SFOF, thus providing the capability for data flow of two 16,200 bits/sec high-rate telemetry data streams in real time. The TDS performed prelaunch training and testing and provided support for the Mariner Mars 1971/Mission Operations System training and testing. The facilities of the ETR, DSS 71, and stations of the MSFN provided flight support coverage at launch and during the near-earth phase. The DSSs 12, 14, 41, and 51 of the DSN provided the deep space phase support from 30 May 1971 through 4 June 1971.

  9. Towards a ternary NIRS-BCI: single-trial classification of verbal fluency task, Stroop task and unconstrained rest

    NASA Astrophysics Data System (ADS)

    Schudlo, Larissa C.; Chau, Tom

    2015-12-01

    Objective. The majority of near-infrared spectroscopy (NIRS) brain-computer interface (BCI) studies have investigated binary classification problems. Limited work has considered differentiation of more than two mental states, or multi-class differentiation of higher-level cognitive tasks using measurements outside of the anterior prefrontal cortex. Improvements in accuracies are needed to deliver effective communication with a multi-class NIRS system. We investigated the feasibility of a ternary NIRS-BCI that supports mental states corresponding to verbal fluency task (VFT) performance, Stroop task performance, and unconstrained rest using prefrontal and parietal measurements. Approach. Prefrontal and parietal NIRS signals were acquired from 11 able-bodied adults during rest and performance of the VFT or Stroop task. Classification was performed offline using bagging with a linear discriminant base classifier trained on a 10 dimensional feature set. Main results. VFT, Stroop task and rest were classified at an average accuracy of 71.7% ± 7.9%. The ternary classification system provided a statistically significant improvement in information transfer rate relative to a binary system controlled by either mental task (0.87 ± 0.35 bits/min versus 0.73 ± 0.24 bits/min). Significance. These results suggest that effective communication can be achieved with a ternary NIRS-BCI that supports VFT, Stroop task and rest via measurements from the frontal and parietal cortices. Further development of such a system is warranted. Accurate ternary classification can enhance communication rates offered by NIRS-BCIs, improving the practicality of this technology.
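    Information transfer rates of the kind quoted above are commonly computed with the Wolpaw formula, which converts the number of classes and the classification accuracy into bits per trial. The sketch below uses that standard formula with illustrative pacing; the paper's exact computation may differ.

```python
import math

def wolpaw_bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate per trial (a standard BCI metric)."""
    if accuracy >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

trials_per_minute = 2.0                                  # illustrative trial pacing
print(wolpaw_bits_per_trial(3, 0.717) * trials_per_minute)   # ternary system
print(wolpaw_bits_per_trial(2, 0.80) * trials_per_minute)    # binary system for comparison
```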

  10. Chaos-based wireless communication resisting multipath effects.

    PubMed

    Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso

    2017-09-01

    In additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieve high bit transmission rate and low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.

  11. Chaos-based wireless communication resisting multipath effects

    NASA Astrophysics Data System (ADS)

    Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso

    2017-09-01

    In additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieve high bit transmission rate and low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.

  12. Methods to ensure optimal off-bottom and drill bit distance under pellet impact drilling

    NASA Astrophysics Data System (ADS)

    Kovalyov, A. V.; Isaev, Ye D.; Vagapov, A. R.; Urnish, V. V.; Ulyanova, O. S.

    2016-09-01

    The paper describes pellet impact drilling, which could be used to increase the drilling speed and the rate of penetration when drilling hard rock for various purposes. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The paper presents a survey of methods for ensuring an optimal off-bottom drill bit distance. The analysis of methods shows that the issue is topical and requires further research.

  13. Critical side channel effects in random bit generation with multiple semiconductor lasers in a polarization-based quantum key distribution system.

    PubMed

    Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Kim, Kap-Joong; Kim, Jong-Hoi; Youn, Chun Ju

    2017-08-21

    Most polarization-based BB84 quantum key distribution (QKD) systems utilize multiple lasers to generate one of four polarization quantum states randomly. However, random bit generation with multiple lasers can potentially open critical side channels that significantly endanger the security of QKD systems. In this paper, we show unnoticed side channels of temporal disparity and intensity fluctuation, which possibly exist in the operation of multiple semiconductor laser diodes. Experimental results show that the side channels can enormously degrade the security performance of QKD systems. An important system issue for the improvement of the quantum bit error rate (QBER), related to the laser driving condition, is further addressed with experimental results.

  14. 2 GHz clock quantum key distribution over 260 km of standard telecom fiber.

    PubMed

    Wang, Shuang; Chen, Wei; Guo, Jun-Fu; Yin, Zhen-Qiang; Li, Hong-Wei; Zhou, Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2012-03-15

    We report a demonstration of quantum key distribution (QKD) over a standard telecom fiber exceeding 50 dB in loss and 250 km in length. The differential phase shift QKD protocol was chosen and implemented with a 2 GHz system clock rate. By careful optimization of the 1 bit delayed Faraday-Michelson interferometer and the use of the superconducting single photon detector (SSPD), we achieved a quantum bit error rate below 2% when the fiber length was no more than 205 km, and of 3.45% for a 260 km fiber with 52.9 dB loss. We also improved the quantum efficiency of SSPD to obtain a high key rate for 50 km length.

  15. High-resolution LCOS microdisplay with sub-kHz frame rate for high performance, high precision 3D sensor

    NASA Astrophysics Data System (ADS)

    Lazarev, Grigory; Bonifer, Stefanie; Engel, Philip; Höhne, Daniel; Notni, Gunther

    2017-06-01

    We report on the implementation of a liquid crystal on silicon (LCOS) microdisplay with 1920 by 1080 resolution and a 720 Hz frame rate. The driving solution is FPGA-based. The input signal is converted from an ultrahigh-resolution HDMI 2.0 signal into HD frames, which follow at the specified 720 Hz frame rate. Alternatively, the signal is generated directly on the FPGA with a built-in pattern generator. The display shows switching times below 1.5 ms at the selected working temperature. The bit depth of the addressed image reaches 8 bit within each frame. The microdisplay is used in a fringe-projection-based 3D sensing system implemented by Fraunhofer IOF.

  16. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
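    The errors-and-erasures idea in Scheme 2 can be summarized in two small helpers: marking low-reliability positions as erasures using the inner decoder's side information, and checking the Reed-Solomon correctability condition 2e + s ≤ n − k. The reliability values and threshold below are illustrative placeholders.

```python
def rs_errors_and_erasures_correctable(n: int, k: int, n_errors: int, n_erasures: int) -> bool:
    """An (n, k) Reed-Solomon code corrects e errors and s erasures when
    2*e + s <= n - k (the design distance is n - k + 1)."""
    return 2 * n_errors + n_erasures <= n - k

def mark_erasures(reliabilities, threshold: float):
    """Positions whose inner-decoder reliability falls below the threshold are
    erased before outer RS decoding (illustrative use of side information)."""
    return [i for i, r in enumerate(reliabilities) if r < threshold]

print(rs_errors_and_erasures_correctable(255, 223, n_errors=10, n_erasures=12))  # True
print(mark_erasures([0.9, 0.2, 0.8, 0.1], threshold=0.3))                        # [1, 3]
```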

  18. Ultralow-Power Digital Correlator for Microwave Polarimetry

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; Hass, K. Joseph

    2004-01-01

    A recently developed high-speed digital correlator is especially well suited for processing readings of a passive microwave polarimeter. This circuit computes the autocorrelations of, and the cross-correlations among, data in four digital input streams representing samples of in-phase (I) and quadrature (Q) components of two intermediate-frequency (IF) signals, denoted A and B, that are generated in heterodyne reception of two microwave signals. The IF signals arriving at the correlator input terminals have been digitized to three levels (-1, 0, 1) at a sampling rate up to 500 MHz. Two bits (representing sign and magnitude) are needed to represent the instantaneous datum in each input channel; hence, eight bits are needed to represent the four input signals during any given cycle of the sampling clock. The accumulation (integration) time for the correlation is programmable in increments of 2^8 cycles of the sampling clock, up to a maximum of 2^24 cycles. The basic functionality of the correlator is embodied in 16 correlation slices, each of which contains identical logic circuits and counters (see figure). The first stage of each correlation slice is a logic gate that computes one of the desired correlations (for example, the autocorrelation of the I component of A, or the negative of the cross-correlation of the I component of A and the Q component of B). The sampling of the logic-gate output is controlled by the sampling-clock signal, and an 8-bit counter increments in every clock cycle in which the logic gate asserts its output. The most significant bit of the 8-bit counter is sampled by a 16-bit counter with a clock signal at 1/2^8 the frequency of the sampling clock. The 16-bit counter is incremented every time the 8-bit counter rolls over.
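
    A hedged software sketch of one correlation slice as described above: two three-level (-1, 0, +1) streams feed a logic gate, an 8-bit counter counts the gate assertions, and a 16-bit counter counts its rollovers. The "positive product" gate condition is an illustrative assumption about the correlating logic, not the flight design.

      import random

      def correlation_slice(stream_a, stream_b):
          """Return the 16-bit rollover count and residual 8-bit count for one slice."""
          pre = 0   # 8-bit prescaler counter
          acc = 0   # 16-bit accumulator, incremented on each prescaler rollover
          for x, y in zip(stream_a, stream_b):
              if x * y > 0:               # logic-gate output for this sampling-clock cycle
                  pre = (pre + 1) & 0xFF
                  if pre == 0:            # the 8-bit counter rolled over
                      acc = (acc + 1) & 0xFFFF
          return acc, pre

      a = [random.choice((-1, 0, 1)) for _ in range(1 << 16)]
      print(correlation_slice(a, a))      # autocorrelation-style slice on a single stream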

  19. Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software

    NASA Astrophysics Data System (ADS)

    Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg

    2017-09-01

    100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to their limits. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low-power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with very low protocol processing overhead.

  20. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    Various elements, such as radio frequency interference (RFI), may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data become indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
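
    As a small illustration of the error-detection piece, the following sketch computes a CRC-16 over a frame using the CCITT generator polynomial (x^16 + x^12 + x^5 + 1) commonly associated with the CCSDS recommendation; the initial value and test frame are illustrative assumptions.

      def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
          """Bit-serial CRC-16 with generator polynomial 0x1021 (x^16 + x^12 + x^5 + 1)."""
          crc = init
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
                  crc &= 0xFFFF
          return crc

      frame = b"example telemetry frame"
      print(hex(crc16_ccitt(frame)))   # appended at transmit, recomputed at receive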

  1. Selectively Encrypted Pull-Up Based Watermarking of Biometric data

    NASA Astrophysics Data System (ADS)

    Shinde, S. A.; Patel, Kushal S.

    2012-10-01

    Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to provide additional security against attack. As wireless devices are battery-powered, they have limited computational capabilities; therefore, to reduce energy consumption we use selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism we expect greater time efficiency during both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.

  2. Automatic speech recognition research at NASA-Ames Research Center

    NASA Technical Reports Server (NTRS)

    Coler, Clayton R.; Plummer, Robert P.; Huff, Edward M.; Hitchcock, Myron H.

    1977-01-01

    A trainable acoustic pattern recognizer manufactured by Scope Electronics is presented. The voice command system (VCS) encodes speech by sampling 16 bandpass filters with center frequencies in the range from 200 to 5000 Hz. Variations in speaking rate are compensated for by a compression algorithm that subdivides each utterance into eight subintervals in such a way that the amount of spectral change within each subinterval is the same. The recorded filter values within each subinterval are then reduced to a 15-bit representation, giving a 120-bit encoding for each utterance. The VCS incorporates a simple recognition algorithm that utilizes five training samples of each word in a vocabulary of up to 24 words. The recognition rate of approximately 85 percent correct for untrained speakers and 94 percent correct for trained speakers was not considered adequate for flight systems use. Therefore, the built-in recognition algorithm was disabled, and the VCS was modified to transmit the 120-bit encodings to an external computer for recognition.
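
    A hedged sketch of the time-compression idea: split an utterance into eight subintervals that each contain the same amount of cumulative spectral change, measured here as the summed absolute frame-to-frame difference of the 16 filter outputs. The distance measure and frame handling are assumptions for illustration, not the VCS design.

      import numpy as np

      def equal_change_boundaries(filter_frames, n_intervals=8):
          """filter_frames: (n_frames, 16) filter-bank samples -> subinterval frame indices."""
          change = np.abs(np.diff(filter_frames, axis=0)).sum(axis=1)  # per-frame spectral change
          cumulative = np.concatenate(([0.0], np.cumsum(change)))
          targets = np.linspace(0.0, cumulative[-1], n_intervals + 1)
          return np.searchsorted(cumulative, targets)

      frames = np.abs(np.random.randn(200, 16))   # stand-in for one utterance
      print(equal_change_boundaries(frames))      # 9 boundary indices delimiting 8 subintervals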

  3. Results of NanTroSEIZE Expeditions Stages 1 & 2: Deep-sea Coring Operations on-board the Deep-sea Drilling Vessel Chikyu and Development of Coring Equipment for Stage 3

    NASA Astrophysics Data System (ADS)

    Shinmoto, Y.; Wada, K.; Miyazaki, E.; Sanada, Y.; Sawada, I.; Yamao, M.

    2010-12-01

    The Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE) has carried out several drilling expeditions in the Kumano Basin off the Kii Peninsula of Japan with the deep-sea scientific drilling vessel Chikyu. Core sampling runs were carried out during the expeditions using an advanced multiple wireline coring system which can continuously core into sections of undersea formations. The core recovery rate with the Rotary Core Barrel (RCB) system was rather low compared with other methods such as the Hydraulic Piston Coring System (HPCS) and the Extended Shoe Coring System (ESCS). Drilling conditions such as hole collapse and sea conditions such as high ship-heave motions need to be analyzed along with differences in lithology, formation hardness, water depth and coring depth in order to develop coring tools, such as the core barrel or core bit, that will yield the highest core recovery and quality. The core bit is especially important for good recovery of high-quality cores; however, the PDC cutters were severely damaged during the NanTroSEIZE Stage 1 and 2 expeditions due to severe drilling conditions. In Stage 1 (riserless coring) the average core recovery was rather low at 38% with the RCB, and many difficulties such as borehole collapse, stick-slip and stuck pipe occurred, damaging several of the PDC cutters. In Stage 2, a new core bit design was deployed and core recovery improved to 67% for the riserless system and 85% with the riser. However, due to harsh drilling conditions, the PDC core bit and all of the PDC cutters were completely worn down. Another original core bit was also deployed; however, core recovery performance was low even for plate-boundary core samples. This study aims to identify the influence of the RCB system specifically on the recovery rates at each of the holes drilled in the NanTroSEIZE coring expeditions. The drilling parameters such as weight-on-bit, torque, rotary speed and flow rate were analyzed, and conditions such as formation, tools, and sea conditions which directly affect core recovery have been categorized. Also discussed will be the further development of coring equipment such as the core bit and core barrel for the NanTroSEIZE Stage 3 expeditions, which aim to reach a depth of 7000 m below the sea floor into harder formations under extreme drilling conditions.

  4. High-velocity frictional strength across the Tohoku-Oki megathrust determined from surface drilling torque

    NASA Astrophysics Data System (ADS)

    Ujiie, K.; Inoue, T.; Ishiwata, J.

    2015-12-01

    Frictional strength at seismic slip rates is key to evaluating fault weakening and rupture propagation during earthquakes. The Japan Trench Fast Drilling Project (JFAST) drilled through the shallow plate-boundary thrust, where huge displacements of ~50 m occurred during the 2011 Tohoku-Oki earthquake. To determine the downhole frictional strength at the drilled site (Site C0019), we analyzed surface drilling data. The equivalent slip rate estimated from the rotation rate and the inner and outer radii of the drill bit ranges from 0.8 to 1.3 m/s. The measured torque includes the frictional torque between the drill string and the borehole wall, the viscous torque between the drill string and the seawater/drilling fluid, and the drilling torque between the drill bit and the sediments. We subtracted the former two from the measured torque using the torque data from bottom-up rotating operations at several depths. Then, the shear stress was calculated from the drilling torque, taking the configuration of the drill bit into consideration. The normal stress was estimated from the weight-on-bit data and the projected area of the drill bit. Assuming negligible cohesion, the frictional strength was obtained by dividing the shear stress by the normal stress. The results show a clear contrast in high-velocity frictional strength across the plate-boundary thrust: the friction coefficient of frontal prism sediments (hemipelagic mudstones) in the hanging wall is 0.1-0.2, while that of subducting sediments (hemipelagic to pelagic mudstones and chert) in the footwall increases to 0.2-0.4. The friction coefficient of smectite-rich pelagic clay in the plate-boundary thrust is ~0.1, which is consistent with that obtained from high-velocity (1.3 m/s) friction experiments and temperature measurements. We conclude that surface drilling torque provides useful data for obtaining a continuous downhole frictional strength profile.
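
    A minimal sketch of the torque-to-friction conversion described above, with illustrative (not JFAST) bit geometry and drilling values: shear stress comes from the drilling torque and the bit's annular face, normal stress from the weight on bit, and their ratio gives the friction coefficient under the negligible-cohesion assumption.

      import math

      def friction_coefficient(drilling_torque, weight_on_bit, r_outer, r_inner):
          """Friction estimate from drilling torque (N*m) and weight on bit (N).

          Assumes uniform contact stress on an annular bit face of radii
          r_inner/r_outer (m) and negligible cohesion.
          """
          area = math.pi * (r_outer**2 - r_inner**2)                       # projected bit area
          r_eff = (2.0 / 3.0) * (r_outer**3 - r_inner**3) / (r_outer**2 - r_inner**2)
          shear_stress = drilling_torque / (r_eff * area)                  # tau = T / (r_eff * A)
          normal_stress = weight_on_bit / area                             # sigma_n = WOB / A
          return shear_stress / normal_stress

      print(friction_coefficient(drilling_torque=5.0e3, weight_on_bit=5.0e4,
                                 r_outer=0.108, r_inner=0.050))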

  5. Microprocessor design for GaAs technology

    NASA Astrophysics Data System (ADS)

    Milutinovic, Veljko M.

    Recent advances in the design of GaAs microprocessor chips are examined in chapters contributed by leading experts; the work is intended as reading material for a graduate engineering course or as a practical R&D reference. Topics addressed include the methodology used for the architecture, organization, and design of GaAs processors; GaAs device physics and circuit design; design concepts for microprocessor-based GaAs systems; a 32-bit GaAs microprocessor; a 32-bit processor implemented in GaAs JFET; and a direct coupled-FET-logic E/D-MESFET experimental RISC machine. Drawings, micrographs, and extensive circuit diagrams are provided.

  6. Applying EVM to Satellite on Ground and In-Orbit Testing - Better Data in Less Time

    NASA Technical Reports Server (NTRS)

    Peters, Robert; Lebbink, Elizabeth-Klein; Lee, Victor; Model, Josh; Wezalis, Robert; Taylor, John

    2008-01-01

    Using Error Vector Magnitude (EVM) in satellite integration and test allows rapid verification of the Bit Error Rate (BER) performance of a satellite link and is particularly well suited to measurement of low bit rate satellite links where it can result in a major reduction in test time (about 3 weeks per satellite for the Geosynchronous Operational Environmental Satellite [GOES] satellites during ground test) and can provide diagnostic information. Empirical techniques developed to predict BER performance from EVM measurements and lessons learned about applying these techniques during GOES N, O, and P integration test and post launch testing, are discussed.
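
    One commonly quoted mapping from measured EVM to BER, shown here for QPSK under additive Gaussian noise, gives the flavor of such predictions; this sketch is an illustration of the general idea and is not the GOES-specific empirical calibration described above.

      import math

      def ber_from_evm_qpsk(evm_rms):
          """Approximate QPSK BER from RMS EVM expressed as a fraction (0.1 = 10%)."""
          # With Gaussian noise, EVM ~ 1/sqrt(SNR), so BER ~ Q(1/EVM) for QPSK.
          q_arg = 1.0 / evm_rms
          return 0.5 * math.erfc(q_arg / math.sqrt(2.0))

      for evm in (0.15, 0.20, 0.30):
          print(f"EVM {evm:.0%} -> BER ~ {ber_from_evm_qpsk(evm):.2e}")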

  7. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  8. Optical modulator system

    NASA Technical Reports Server (NTRS)

    Brand, J.

    1972-01-01

    The fabrication, test, and delivery of an optical modulator system which will operate with a mode-locked Nd:YAG laser emitting at either 1.06 or 0.53 micrometers is discussed. The delivered hardware operates at data rates up to 400 Mbps and includes a 0.53 micrometer electrooptic modulator, a 1.06 micrometer electrooptic modulator with power supply, and signal processing electronics with power supply. The modulators contain solid state drivers which accept digital signals with MECL logic levels, temperature controllers to maintain a stable thermal environment for the modulator crystals, and automatic electronic compensation to maximize the extinction ratio. The modulators use two lithium tantalate crystals cascaded in a double pass configuration. The signal processing electronics include encoding electronics which are capable of digitizing analog signals between the limits of ±0.75 volts at a maximum rate of 80 megasamples per second with 5-bit resolution. The digital samples are serialized and made available as a 400 Mbps serial NRZ data source for the modulators. A pseudorandom (PN) generator is also included in the signal processing electronics. This data source generates PN sequences with lengths between 31 bits and 32,767 bits in a serial NRZ format at rates up to 400 Mbps.
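
    A minimal sketch of a maximal-length LFSR that produces a 2^15 - 1 = 32,767-bit PN sequence of the kind mentioned above; the tap polynomial x^15 + x^14 + 1 is one common maximal choice assumed here for illustration, not the delivered hardware's generator.

      def pn_sequence(n_bits=15, taps=(15, 14), seed=1):
          """Generate one period of a Fibonacci LFSR PN sequence as a list of 0/1 bits."""
          state = seed
          period = (1 << n_bits) - 1
          out = []
          for _ in range(period):
              out.append(state & 1)                    # serial NRZ output bit
              feedback = 0
              for tap in taps:                         # XOR of the tapped stages
                  feedback ^= (state >> (tap - 1)) & 1
              state = (state >> 1) | (feedback << (n_bits - 1))
          return out

      seq = pn_sequence()
      print(len(seq), sum(seq))   # 32767 chips; 16384 ones if the polynomial is maximal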

  9. Sequenced subjective accents for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Vlek, R. J.; Schaefer, R. S.; Gielen, C. C. A. M.; Farquhar, J. D. R.; Desain, P.

    2011-06-01

    Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented from non-accented beats on a single trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits/min over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.
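
    For context, the Wolpaw definition often used to turn classification accuracy into a BCI bit rate is sketched below; whether this exact definition was used in the study is an assumption made for illustration.

      import math

      def wolpaw_bits_per_trial(n_classes, accuracy):
          """Information transfer per trial for an n-class selection at the given accuracy."""
          if accuracy <= 1.0 / n_classes:
              return 0.0
          p, n = accuracy, n_classes
          return (math.log2(n) + p * math.log2(p)
                  + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

      # e.g. binary accent/non-accent decisions at 60.4% accuracy and 0.5 s per trial
      bits = wolpaw_bits_per_trial(2, 0.604)
      print(f"{bits:.3f} bit/trial -> {bits * 120:.1f} bits/min at 120 trials per minute")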

  10. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operation of the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken with a standard BER tester for verification. We found that the results of the two methods agreed to within the same order of magnitude and within 50%. The integrated interconnect was also investigated in an optoelectronic processing architecture for a digital halftoning image processor. Error diffusion networks implemented through the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
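
    A minimal sketch of the Gaussian eye-diagram BER estimate alluded to above: from the means and standard deviations of the "1" and "0" levels measured on the eye, compute the Q factor and map it to BER. The numerical values are illustrative.

      import math

      def ber_from_eye(mu1, mu0, sigma1, sigma0):
          """Estimate BER from eye-diagram statistics under a Gaussian-noise model."""
          q = (mu1 - mu0) / (sigma1 + sigma0)           # Q factor at the optimal threshold
          return 0.5 * math.erfc(q / math.sqrt(2.0))    # BER = Q(q) for Gaussian rails

      print(ber_from_eye(mu1=1.0, mu0=0.1, sigma1=0.07, sigma0=0.05))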

  11. Performance evaluations of hybrid modulation with different optical labels over PDQ in high bit-rate OLS network systems.

    PubMed

    Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W

    2016-11-14

    Two orthogonal-modulation optical label switching (OLS) schemes, in which a polarization multiplexing-differential quadrature phase shift keying (POLMUX-DQPSK, or PDQ) payload is modulated with either a duobinary (DB) label or a pulse position modulation (PPM) label, are studied in high bit-rate OLS networks. The BER performance of hybrid modulation with payload and label signals is discussed and evaluated in theory and simulation. Theoretical BER expressions for PDQ, PDQ-DB and PDQ-PPM are given using an analysis method for hybrid-modulation encoding at different bit-rate ratios of payload and label. The theoretical derivations show that the payload under hybrid modulation has a certain receiver-sensitivity gain compared with a payload without a label. The size of the payload BER gain obtained from hybrid modulation depends on the type of label. The simulation results are consistent with the theoretical conclusions. The extinction ratio (ER) conflict between intensity- and phase-type hybrid encoding can be balanced and optimized in an OLS system with hybrid modulation. The BER analysis method for hybrid-modulation encoding in OLS systems can be applied to other n-ary hybrid-modulation or combined-modulation systems.

  12. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.

    PubMed

    Maani, Ehsan; Katsaggelos, Aggelos K

    2009-09-01

    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.

  13. Performance Analysis of OCDMA Based on AND Detection in FTTH Access Network Using PIN & APD Photodiodes

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.

    2011-06-01

    To compare PIN photodetectors and avalanche photodiodes (APDs), a system using the double-weight (DW) code was evaluated for optical spectrum CDMA performance in an FTTH network with a point-to-multipoint (P2MP) application. The performance of the PIN detector versus the APD is compared through simulation using OptiSystem software version 7. In this paper two networks were designed, one using a PIN photodetector and the second using an APD photodiode, each evaluated with and without an erbium-doped fiber amplifier (EDFA). It is found that the APD photodiode in this system performs better than the PIN photodetector in all simulation results. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. We also study a proposed detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs are used to encode and decode the spectral amplitude coding, namely the double-weight (DW) code, in Optical Code Division Multiple Access (OCDMA). The performance is characterized through the bit error rate (BER), the bit rate (BR), and the received power at various bit rates.

  14. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    PubMed

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.

  15. Video watermarking for mobile phone applications

    NASA Astrophysics Data System (ADS)

    Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.

    2005-08-01

    Nowadays, alongside the traditional voice signal, music, video, and 3D characters are becoming common data to be run, stored and/or processed on mobile phones. Hence, protecting the related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are turned into practice by adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low-rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.

  16. Enhanced intercarrier interference mitigation based on encoded bit-sequence distribution inside optical superchannels

    NASA Astrophysics Data System (ADS)

    Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero

    2016-10-01

    In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios of nonspectral flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level for any digital and analog pulse shaping, and the bit-data level and its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit-error-rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, a back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.

  17. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  18. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  19. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  20. The Quanta Image Sensor: Every Photon Counts

    PubMed Central

    Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel

    2016-01-01

    The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
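
    A minimal sketch of forming output pixels from single-bit QIS bit planes: sum the binary jot values inside an (x, y, t) cubicle. The kernel size and array shapes are illustrative.

      import numpy as np

      def cubicle_image(bit_planes, kx=16, ky=16, kt=8):
          """bit_planes: (T, H, W) array of 0/1 jot values -> (T/kt, H/ky, W/kx) pixel stack."""
          t, h, w = bit_planes.shape
          trimmed = bit_planes[:t - t % kt, :h - h % ky, :w - w % kx]
          cub = trimmed.reshape(t // kt, kt, h // ky, ky, w // kx, kx)
          return cub.sum(axis=(1, 3, 5))    # photoelectron count per cubicle

      planes = (np.random.rand(64, 256, 256) < 0.05).astype(np.uint8)  # sparse photon hits
      print(cubicle_image(planes).shape)    # (8, 16, 16)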

  1. Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mehdi; Megías, David

    This paper proposes a novel robust audio watermarking algorithm to embed data and extract it in a bit-exact manner based on changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on the comparison between the original and the MP3 compressed/decompressed signal and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps), without significant perceptual distortion (ODG about -0.25), and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.

  2. Ultrasonic/Sonic Rotary-Hammer Drills

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Sherrit, Stewart; Bar-Cohen, Yoseph; Bao, Xiaoqi; Kassab, Steve

    2010-01-01

    The ultrasonic/sonic rotary-hammer drill (USRoHD) is a recent addition to the collection of apparatuses based on the ultrasonic/sonic drill corer (USDC). As described below, the USRoHD has several features, not present in a basic USDC, that increase efficiency and provide some redundancy against partial failure. USDCs and related apparatuses were conceived for boring into, and/or acquiring samples of, rock or other hard, brittle materials of geological interest. They have been described in numerous previous NASA Tech Briefs articles. To recapitulate: A USDC can be characterized as a lightweight, low-power, piezoelectrically driven jackhammer in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. A basic USDC includes a piezoelectric stack, an ultrasonic transducer horn connected to the stack, a free mass (free in the sense that it can bounce axially a short distance between hard stops on the horn and the bit), and a tool bit. The piezoelectric stack creates ultrasonic vibrations that are mechanically amplified by the horn. The bouncing of the free mass between the hard stops generates the sonic vibrations. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that, unlike in conventional twist drilling, little applied axial force is needed to make the apparatus advance into the material of interest. There are numerous potential applications for USDCs and related apparatuses in geological exploration on Earth and on remote planets. In early USDC experiments, it was observed that accumulation of cuttings in a drilled hole causes the rate of penetration of the USDC to decrease steeply with depth, and that the rate of penetration can be increased by removing the cuttings. The USRoHD concept provides for removal of cuttings in the same manner as that of a twist drill: A USRoHD includes a USDC and a motor with gearhead (see figure). The USDC provides the bit hammering and the motor provides the bit rotation. Like a twist drill bit, the shank of the tool bit of the USRoHD is fluted. As in the operation of a twist drill, the rotation of the fluted drill bit removes cuttings from the drilled hole. The USRoHD tool bit is tipped with a replaceable crown having cutting teeth on its front surface. The teeth are shaped to promote fracturing of the rock face through a combination of hammering and rotation of the tool bit. Helical channels on the outer cylindrical surface of the crown serve as a continuation of the fluted surface of the shank, helping to remove cuttings. In the event of a failure of the USDC, the USRoHD can continue to operate with reduced efficiency as a twist drill. Similarly, in the event of a failure of the gearmotor, the USRoHD can continue to operate with reduced efficiency as a USDC.

  3. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.

  4. A Wearable Healthcare System With a 13.7 μA Noise Tolerant ECG Processor.

    PubMed

    Izumi, Shintaro; Yamashita, Ken; Nakano, Masanao; Kawaguchi, Hiroshi; Kimura, Hiromitsu; Marumoto, Kyoji; Fuchikami, Takaaki; Fujimori, Yoshikazu; Nakajima, Hiroshi; Shiga, Toshikazu; Yoshimoto, Masahiko

    2015-10-01

    To prevent lifestyle diseases, wearable bio-signal monitoring systems for daily life monitoring have attracted attention. Wearable systems have strict size and weight constraints, which impose significant limitations on the battery capacity and the signal-to-noise ratio of bio-signals. This report describes an electrocardiograph (ECG) processor for use with a wearable healthcare system. It comprises an analog front end, a 12-bit ADC, a robust Instantaneous Heart Rate (IHR) monitor, a 32-bit Cortex-M0 core, and 64 Kbyte of Ferroelectric Random Access Memory (FeRAM). The IHR monitor uses a short-term autocorrelation (STAC) algorithm to improve heart-rate detection accuracy even in noisy conditions. The ECG processor chip consumes 13.7 μA for the heart-rate logging application.
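
    A hedged sketch of heart-rate estimation from short-term autocorrelation, the general idea behind a STAC-style IHR monitor: the lag that maximizes the autocorrelation of a short ECG window gives the beat period. The window length, search range, and toy waveform are assumptions for illustration, not the chip's algorithm.

      import numpy as np

      def ihr_from_autocorrelation(ecg_window, fs, min_bpm=40, max_bpm=200):
          """Return an instantaneous heart rate (bpm) estimate for one ECG window."""
          x = ecg_window - ecg_window.mean()
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]      # non-negative lags
          lo, hi = int(fs * 60 / max_bpm), int(fs * 60 / min_bpm)
          lag = lo + int(np.argmax(ac[lo:hi]))                   # beat period in samples
          return 60.0 * fs / lag

      fs = 250                                                   # Hz, illustrative sampling rate
      t = np.arange(0, 4, 1 / fs)
      toy_ecg = np.sin(2 * np.pi * 1.2 * t) ** 15                # crude 72-bpm stand-in
      print(round(ihr_from_autocorrelation(toy_ecg, fs), 1))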

  5. Compensation for first-order polarization-mode dispersion by using a novel tunable compensator

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Ning, Tigang; Pei, Shanshan; Xing, Yujun; Jian, Shuisheng

    2005-01-01

    Polarization-related impairments have become a critical issue for high-data-rate optical systems, particularly when considering polarization-mode dispersion (PMD). Consequently, compensation of PMD, especially first-order PMD, is necessary to maintain adequate performance in long-haul systems at a high bit rate of 10 Gb/s or beyond. In this paper, we successfully demonstrate automatic and tunable compensation of first-order polarization-mode dispersion. Furthermore, we report a statistical assessment of this tunable compensator at 10 Gbit/s. Experimental results, including bit error rate measurements, compare well with theory, demonstrating the compensator's efficiency at 10 Gbit/s. The first-order PMD was at most 274 ps before PMD compensation and lower than 7 ps after PMD compensation.

  6. Video on phone lines: technology and applications

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell

    1996-03-01

    Recent advances in communications signal processing and VLSI technology are fostering tremendous interest in transmitting high-speed digital data over ordinary telephone lines at bit rates substantially above the ISDN Basic Access rate (144 kbit/s). Two new technologies, high-bit-rate digital subscriber lines and asymmetric digital subscriber lines, promise transmission over most of the embedded loop plant at 1.544 Mbit/s and beyond. Stimulated by these research promises, rapid advances in video coding techniques, and the related standards activity, information networks around the globe are now exploring possible business opportunities for offering quality video services (such as distance learning, telemedicine, and telecommuting) through this high-speed digital transport capability in the copper loop plant. Visual communications for residential customers have become more feasible than ever, both technically and economically.

  7. Achieving the Holevo bound via a bisection decoding protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosati, Matteo; Giovannetti, Vittorio

    2016-06-15

    We present a new decoding protocol to realize transmission of classical information through a quantum channel at asymptotically maximum capacity, achieving the Holevo bound and thus the optimal communication rate. At variance with previous proposals, our scheme recovers the message bit by bit, making use of a series of “yes-no” measurements, organized in bisection fashion, thus determining which codeword was sent in log2 N steps, N being the number of codewords.
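
    A classical toy sketch of the bisection idea: a sequence of "yes/no" set-membership answers halves the candidate set each step, so one of N codewords is identified in log2 N steps. The oracle below is a classical stand-in for the protocol's quantum measurements.

      import math

      def bisection_decode(n_codewords, in_lower_half):
          """in_lower_half(lo, mid) -> True if the sent codeword index is < mid."""
          lo, hi = 0, n_codewords
          steps = 0
          while hi - lo > 1:
              mid = (lo + hi) // 2
              if in_lower_half(lo, mid):
                  hi = mid
              else:
                  lo = mid
              steps += 1
          return lo, steps

      sent = 1234
      decoded, steps = bisection_decode(4096, lambda lo, mid: sent < mid)
      print(decoded, steps, math.ceil(math.log2(4096)))   # 1234 12 12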

  8. Quasi-elastic light scattering: Signal storage, correlation, and spectrum analysis under control of an 8-bit microprocessor

    NASA Astrophysics Data System (ADS)

    Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter

    1987-03-01

    The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs. Shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses but not the number of overflows determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one, the power spectrum needs a three times higher pulse rate for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantage of the present system is the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be done with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
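
    A hedged sketch of computing a correlation function offline from stored 8-bit photon-count samples, as the system above does after acquisition; the channel count, normalization, and Poisson stand-in data are illustrative.

      import numpy as np

      def autocorrelation(counts, n_channels=64):
          """Unnormalized photon autocorrelation G(k) for lags k = 0..n_channels-1."""
          counts = np.asarray(counts, dtype=np.float64)
          n = len(counts)
          return np.array([np.dot(counts[:n - k], counts[k:]) / (n - k)
                           for k in range(n_channels)])

      samples = np.random.poisson(lam=3.0, size=49_000)   # stand-in for stored sample points
      g = autocorrelation(samples)
      print(g[:4] / g[0])                                  # normalized to the zero-lag value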

  9. A 13.56-Mbps pulse delay modulation based transceiver for simultaneous near-field data and power transmission.

    PubMed

    Kiani, Mehdi; Ghovanloo, Maysam

    2015-02-01

    A fully-integrated near-field wireless transceiver has been presented for simultaneous data and power transmission across inductive links, which operates based on the pulse delay modulation (PDM) technique. PDM is a low-power carrier-less modulation scheme that offers wide bandwidth along with robustness against strong power-carrier interference, which makes it suitable for implantable neuroprosthetic devices, such as retinal implants. To transmit each bit, a pattern of narrow pulses is generated at the same frequency as the power carrier across the transmitter (Tx) data coil with specific time delays to initiate decaying ringing across the tuned receiver (Rx) data coil. This ringing shifts the zero-crossing times of the undesired power-carrier interference on the Rx data coil, resulting in a phase shift between the signals across the Rx power and data coils, from which the data bit stream can be recovered. A PDM transceiver prototype was fabricated in a 0.35-μm standard CMOS process, occupying 1.6 mm^2. The transceiver achieved a measured 13.56 Mbps data rate with a raw bit error rate (BER) of 4.3×10^-7 at a 10 mm distance between figure-8 data coils, despite a signal-to-interference ratio (SIR) of -18.5 dB across the Rx data coil. At the same time, a class-D power amplifier, operating at 13.56 MHz, delivered 42 mW of regulated power across a separate pair of high-Q power coils, aligned with the data coils. The PDM data Tx and Rx power consumptions were 960 pJ/bit and 162 pJ/bit, respectively, at a 1.8 V supply voltage.

  10. Percussive Augmenter of Rotary Drills (PARoD)

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Bar-Cohen, Yoseph; Sherrit, Stewart; Bao, Xiaoqi; Chang, Zensheu; Donnelly, Chris; Aldrich, Jack

    2012-01-01

    Increasingly, NASA exploration mission objectives include sample acquisition tasks for in-situ analysis or for potential sample return to Earth. To address the requirements for samplers that could be operated at the conditions of various bodies in the solar system, a piezoelectrically actuated percussive sampling device was developed that requires low preload (as low as 10 N), which is important for operation at low gravity. This device can be made as light as 400 g, can be operated using low average power, and can drill rocks as hard as basalt. A significant improvement of the penetration rate was achieved by augmenting the hammering action with rotation and using a fluted bit to provide effective cuttings removal. Generally, hammering is effective in fracturing the drilled media while rotation of fluted bits is effective in cuttings removal. To benefit from these two actions, a novel configuration of a percussive mechanism was developed to produce an augmenter of rotary drills. The device was called the Percussive Augmenter of Rotary Drills (PARoD). A breadboard PARoD was developed with a 6.4 mm (0.25 in) diameter bit and was demonstrated to increase the drilling rate of rotation alone by 1.5 to over 10 times. Further, a large PARoD breadboard with a 50.8 mm diameter bit was developed and its tests are currently underway. This paper presents the design, analysis and preliminary test results of the percussive augmenter.

  11. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method has also been proposed for the identification of the temporal identifier for the legacy H.264/AVC base-layer packets. Simulation results also show that this enables a scenario where the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time per view component of only 0.38 ms.

  12. An optical disk archive for a data base management system

    NASA Technical Reports Server (NTRS)

    Thomas, Douglas T.

    1985-01-01

    An overview is given of a data base management system that can catalog and archive data at rates up to 50M bits/sec. Emphasis is on the laser disk system that is used for the archive. All key components in the system (3 Vax 11/780s, a SEL 32/2750, a high speed communication interface, and the optical disk) are interfaced to a 100M bits/sec 16-port fiber optic bus to achieve the high data rates. The basic data unit is an autonomous data packet. Each packet contains a primary and secondary header and can be up to a million bits in length. The data packets are recorded on the optical disk at the same time the packet headers are being used by the relational data base management software ORACLE to create a directory independent of the packet recording process. The user then interfaces to the VAX that contains the directory for a quick-look scan or retrieval of the packet(s). The total system functions are distributed between the VAX and the SEL. The optical disk unit records the data with an argon laser at 100M bits/sec from its buffer, which is interfaced to the fiber optic bus. The same laser is used in the read cycle by reducing the laser power. Additional information is given in the form of outlines, charts, and diagrams.

  13. Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications

    NASA Astrophysics Data System (ADS)

    Choi, Jinseok; Evans, Brian L.; Gatherer, Alan

    2017-12-01

    In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
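
    A hedged sketch of the flavor of the closed-form bit-allocation result quoted above: the number of bits assigned to each RF chain grows with log2 of that chain's SNR raised to the 1/3 power, shifted to meet a total-bit budget. The shift-and-round rule is an illustrative simplification, not the paper's exact algorithm.

      import numpy as np

      def allocate_adc_bits(snr_per_chain, total_bits):
          """Assign ADC bits per RF chain in proportion to log2(SNR^(1/3))."""
          raw = np.log2(np.asarray(snr_per_chain, dtype=float) ** (1.0 / 3.0))
          raw = raw - raw.mean() + total_bits / len(raw)        # shift toward the bit budget
          return np.clip(np.round(raw), 1, None).astype(int)    # at least 1 bit per ADC

      snrs = [2.0, 8.0, 30.0, 120.0]    # per-chain SNRs on a linear scale (illustrative)
      print(allocate_adc_bits(snrs, total_bits=16))             # e.g. [3 4 4 5]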

  14. Design of a High-Speed and Compact Electro-Optic Modulator using Silicon-Germanium HBT

    NASA Astrophysics Data System (ADS)

    Neogi, Tuhin Guha

    Optical interconnects between electronics systems have attracted significant attention and development for a number of years because optical links have demonstrated potential advantages for high speed, low power, and interference immunity. With increasing system speed and greater bandwidth requirements, the distance over which optical communication is useful has continually decreased to chip-to-chip and on-chip levels. Monolithic integration of photonics and electronics will significantly reduce the cost of optical components and further combine the functionalities of chips on the same or different boards or systems. Modulators are one of the fundamental building blocks for optical interconnects. High-speed modulation and low driving voltage are the keys to the device's practical use. In this study two separate designs show that using a graded-base SiGe HBT we can modulate light at high speeds with moderate length and dynamic power consumption. The first design analyzes the terminal characteristics of the HBT, and a close match is obtained in comparison with npn HBTs in IBM's 8HP technology. This suggests that the modulator can be manufactured using the IBM 8HP fabrication process. At a sub-collector depth of 0.4 μm and a base-emitter swing of 0 V to 1.1 V, this model predicts a bit rate of 80 Gbit/s. Optical simulations predict a π phase-shift length (Lπ) of 240.8 μm with an extinction ratio of 7.5 dB at a wavelength of 1.55 μm. Additionally, the trade-off between the switching speed, Lπ and propagation loss with a thinner sub-collector is analyzed and reported. The dynamic power consumption is reported to be 3.6 pJ/bit. The second design examines a theoretical aggressively scaled SiGe HBT that may approximate a device two generations more advanced than available today. At a base-emitter swing of 0 V to 1.0 V, this model predicts a bit rate of 250 Gbit/s. Optical simulations predict a π phase-shift length (Lπ) of 204 μm, with an extinction ratio of 13.2 dB at a wavelength of 1.55 μm. The dynamic power consumption is reported to be 2.01 pJ/bit. This study also discusses the design of driver circuitry at 80 Gbit/s with a voltage swing of 1.03 V. Finally, the use of slow-wave structures and the use of the SiGe HBT as a linear analog modulator are introduced.

  15. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bit/s.

  16. Method of joint bit rate/modulation format identification and optical performance monitoring using asynchronous delay-tap sampling for radio-over-fiber systems

    NASA Astrophysics Data System (ADS)

    Guesmi, Latifa; Menif, Mourad

    2016-08-01

    In the context of carrying a wide variety of modulation formats and data rates for home networks, this study covers radio-over-fiber (RoF) technology, where there is a need for alternative management, automated fault diagnosis, and format identification. RoF signals in an optical link are also impaired by various linear and nonlinear effects, including chromatic dispersion, polarization mode dispersion, amplified spontaneous emission noise, and so on. Hence, for this purpose, we investigated a sampling method based on asynchronous delay-tap sampling in conjunction with a cross-correlation function for joint bit rate/modulation format identification and optical performance monitoring. Three modulation formats with different data rates are used to demonstrate the validity of this technique, which achieves high identification accuracy over wide monitoring ranges.

  17. Experimental demonstration of the optical multi-mesh hypercube: scaleable interconnection network for multiprocessors and multicomputers.

    PubMed

    Louri, A; Furlonge, S; Neocleous, C

    1996-12-10

    A prototype of a novel topology for scaleable optical interconnection networks called the optical multi-mesh hypercube (OMMH) is experimentally demonstrated at data rates as high as 150 Mbit/s (2⁷ − 1 nonreturn-to-zero pseudo-random data pattern) at a bit error rate of 10⁻¹³ per link by the use of commercially available devices. OMMH is a scaleable network [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] architecture that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation method is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbit/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, misalignment sensitivity of the free-space links, low power (1 mW) of the individual VCSELs, and noise.

  18. A fully integrated mixed-signal neural processor for implantable multichannel cortical recording.

    PubMed

    Sodagar, Amir M; Wise, Kensall D; Najafi, Khalil

    2007-06-01

    A 64-channel neural processor has been developed for use in an implantable neural recording microsystem. In the Scan Mode, the processor is capable of detecting neural spikes by programmable positive, negative, or window thresholding. Spikes are tagged with their associated channel addresses and formed into 18-bit data words that are sent serially to the external host. In the Monitor Mode, two channels can be selected and viewed at high resolution for studies where the entire signal is of interest. The processor runs from a 3-V supply and a 2-MHz clock, with a channel scan rate of 64 kS/s and an output bit rate of 2 Mbps.
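
    As an illustration of the spike-tagging scheme described above, the following minimal Python sketch emulates programmable positive/negative/window thresholding and packs a channel address together with a payload into an 18-bit word. The 6-bit address plus 12-bit payload split is an assumption made for illustration; the record does not specify the actual word layout.

        # Hypothetical software emulation of programmable spike detection and word packing.
        # The 6-bit channel address (64 channels) + 12-bit payload split is an assumption.

        def detect_spike(sample, mode, low, high):
            """Return True if 'sample' crosses the programmed threshold."""
            if mode == "positive":
                return sample > high
            if mode == "negative":
                return sample < low
            if mode == "window":
                return low < sample < high      # window thresholding
            raise ValueError("unknown mode")

        def pack_word(channel, payload):
            """Tag a 12-bit payload with a 6-bit channel address into an 18-bit word."""
            assert 0 <= channel < 64 and 0 <= payload < 4096
            return (channel << 12) | payload

        # Example: channel 5 fires and its sample payload is packed for serial output.
        if detect_spike(0.8, "window", low=0.5, high=1.0):
            print(f"{pack_word(5, 0x3A7):018b}")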

  19. Performance analysis of bi-directional broadband passive optical network using erbium-doped fiber amplifier

    NASA Astrophysics Data System (ADS)

    Almalaq, Yasser; Matin, Mohammad A.

    2014-09-01

    The broadband passive optical network (BPON) can deliver high-speed data, voice, and video services to homes and small businesses. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is reduced cost: because BPON uses a passive splitter, maintenance costs between the provider and the customer side remain manageable. In the proposed research, the BPON is evaluated with a bit error rate (BER) analyzer, which reports the maximum Q factor, the minimum bit error rate, and the eye height.

  20. Quantum random number generator based on quantum nature of vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.

    2017-11-01

    A quantum random number generator (QRNG) allows obtaining true random bit sequences. In a QRNG based on the quantum nature of vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare the mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model of a QRNG based on homodyne detection. The descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive the relation between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.
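
    The bit-extraction principle for such vacuum-fluctuation QRNGs can be sketched as follows; the Gaussian samples stand in for the differential homodyne current, and the median comparison is a common, but here assumed, post-processing choice not detailed in the record.

        import numpy as np

        # Minimal sketch: simulate the differential current of a homodyne detector measuring
        # vacuum fluctuations (Gaussian-distributed quadrature) and extract one raw bit per
        # sample by comparing against the median. Illustration of the principle only.

        rng = np.random.default_rng()
        samples = rng.normal(loc=0.0, scale=1.0, size=10_000)   # simulated differential current
        threshold = np.median(samples)                           # removes any DC offset
        raw_bits = (samples > threshold).astype(np.uint8)

        print(raw_bits[:16], "ones fraction:", raw_bits.mean())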

  1. Drilling plastic formations using highly polished PDC cutters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.H.; Lund, J.B.; Anderson, M.

    1995-12-31

    Highly plastic and over-pressured formations are troublesome for both roller cone and PDC bits. Thus far, attempts to increase penetration rates in these formations have centered around re-designing the bit or modifying the cutting structure. These efforts have produced only moderate improvements. This paper presents both laboratory and field data to illustrate the benefits of applying a mirror-polished surface to the face of PDC cutters in drilling stressed formations. These cutters are similar to traditional PDC cutters, with the exception of the reflective mirror finish applied to the diamond table surfaces prior to their installation in the bit. Results of tests conducted in a single point cutter apparatus and a full-scale drilling simulator will be presented and discussed. Field results will be presented that demonstrate the effectiveness of polished cutters in both water- and oil-based muds. Increases in penetration rates of 300-400% have been observed in the Wilcox formation and other highly pressured shales. Typically, the beneficial effects of polished cutters have been realized at depths greater than 7000 ft, and with mud weights exceeding 12 ppg.

  2. Experimental research of adaptive OFDM and OCT precoding with a high SE for VLLC system

    NASA Astrophysics Data System (ADS)

    Liu, Shuang-ao; He, Jing; Chen, Qinghui; Deng, Rui; Zhou, Zhihua; Chen, Shenghai; Chen, Lin

    2017-09-01

    In this paper, an adaptive orthogonal frequency division multiplexing (OFDM) modulation scheme with 128/64/32/16-quadrature amplitude modulation (QAM) and orthogonal circulant matrix transform (OCT) precoding is proposed and experimentally demonstrated for a visible laser light communication (VLLC) system with a cost-effective 450-nm blue-light laser diode (LD). The performance is compared with the conventional adaptive discrete Fourier transform-spread (DFT-spread) OFDM scheme, the 32-QAM OCT-precoded OFDM scheme, the 64-QAM OCT-precoded OFDM scheme, and the adaptive OCT-precoded OFDM scheme. The experimental results show that OCT precoding can achieve a relatively flat signal-to-noise ratio (SNR) curve and provides a performance improvement in bit error rate (BER). Furthermore, the BER of the proposed OFDM signal with a raw bit rate of 5.04 Gb/s after 5-m free-space transmission is below the 20% soft-decision forward error correction (SD-FEC) threshold of 2.4 × 10⁻², and a spectral efficiency (SE) of 4.2 bit/s/Hz can be successfully achieved.

  3. Perceptually tuned low-bit-rate video codec for ATM networks

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien

    1996-02-01

    In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.

  4. Utilizing a language model to improve online dynamic data collection in P300 spellers.

    PubMed

    Mainsah, Boyla O; Colwell, Kenneth A; Collins, Leslie M; Throckmorton, Chandra S

    2014-07-01

    P300 spellers provide a means of communication for individuals with severe physical limitations, especially those with locked-in syndrome, such as amyotrophic lateral sclerosis. However, P300 speller use is still limited by relatively low communication rates due to the multiple data measurements that are required to improve the signal-to-noise ratio of event-related potentials for increased accuracy. Therefore, the amount of data collection has competing effects on accuracy and spelling speed. Adaptively varying the amount of data collection prior to character selection has been shown to improve spelling accuracy and speed. The goal of this study was to optimize a previously developed dynamic stopping algorithm that uses a Bayesian approach to control data collection by incorporating a priori knowledge via a language model. Participants (n = 17) completed online spelling tasks using the dynamic stopping algorithm, with and without a language model. The addition of the language model improved participant performance from a mean theoretical bit rate of 46.12 bits/min at 88.89% accuracy to 54.42 bits/min at 90.36% accuracy.
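
    A minimal sketch of Bayesian dynamic stopping with a language-model prior is given below. The alphabet, likelihood values, and the 0.95 stopping threshold are illustrative placeholders; the study's actual classifier scores and threshold are not given in this record.

        # Illustrative sketch: start from a language-model prior over candidate characters,
        # update the posterior after each flash, and stop collecting data once one candidate
        # is sufficiently probable (or a flash budget is exhausted).

        def dynamic_stop(prior, likelihood_stream, threshold=0.95, max_flashes=100):
            """prior: dict char -> P(char) from a language model.
            likelihood_stream: iterable of dicts char -> p(evidence | char), one per flash."""
            posterior = dict(prior)
            best = max(posterior, key=posterior.get)
            for n, lik in enumerate(likelihood_stream, start=1):
                for c in posterior:
                    posterior[c] *= lik[c]                     # Bayes update (unnormalized)
                total = sum(posterior.values())
                posterior = {c: p / total for c, p in posterior.items()}
                best = max(posterior, key=posterior.get)
                if posterior[best] >= threshold or n >= max_flashes:
                    break
            return best, posterior[best]

        prior = {"A": 0.6, "B": 0.3, "C": 0.1}                 # language-model prior (placeholder)
        flashes = [{"A": 0.8, "B": 0.4, "C": 0.3}] * 5          # per-flash likelihoods (placeholders)
        print(dynamic_stop(prior, flashes))                     # stops early once "A" is confident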

  5. An Ultra-Low Power Charge Redistribution Successive Approximation Register A/D Converter for Biomedical Applications.

    PubMed

    Koppa, Santosh; Mohandesi, Manouchehr; John, Eugene

    2016-12-01

    Power consumption is one of the key design constraints in biomedical devices such as pacemakers that are powered by small non-rechargeable batteries over their entire lifetime. In these systems, analog-to-digital converters (ADCs) are used as the interface between the analog world and the digital domain and play a key role. In this paper we present the design of an 8-bit charge redistribution successive approximation register (CR-SAR) analog-to-digital converter in standard TSMC 0.18-μm CMOS technology for low-power and low-data-rate devices such as pacemakers. The 8-bit optimized CR-SAR ADC achieves low power of less than 250 nW at a conversion rate of 1 KB/s. The ADC achieves an integral nonlinearity (INL) of less than 0.22 least significant bit (LSB) and a differential nonlinearity (DNL) of less than 0.04 LSB, compared to the standard requirement that the INL and DNL errors be less than 0.5 LSB. The designed ADC operates from a 1-V supply voltage, converting inputs ranging from 0 V to 250 mV.

  6. The possibility of applying spectral redundancy in DWDM systems on existing long-distance FOCLs for increasing the data transmission rate and decreasing nonlinear effects and double Rayleigh scattering without changes in the communication channel

    NASA Astrophysics Data System (ADS)

    Nekuchaev, A. O.; Shuteev, S. A.

    2014-04-01

    A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, for example, uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission of 32 bits/cycle. In the new method, one of 124 wavelengths with a duration of one cycle each (at any time instant, no more than 16 obligatory different wavelengths), carrying 4 bits, is transmitted at every instant of a 1/16 cycle, with an average power of 15 conventional units and a rate of 64 bits/cycle. The cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing deletions and errors simultaneously.
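
    The quoted per-cycle throughputs follow from simple arithmetic, sketched below for reference (illustrative check only).

        # Back-of-the-envelope check of the throughputs quoted in the abstract.

        nrz_wavelengths    = 32
        nrz_bits_per_cycle = nrz_wavelengths * 1                  # one NRZ bit per wavelength per cycle

        slots_per_cycle    = 16                                   # one symbol every 1/16 cycle
        bits_per_symbol    = 4                                    # each transmitted wavelength carries 4 bits
        new_bits_per_cycle = slots_per_cycle * bits_per_symbol    # = 64

        print(nrz_bits_per_cycle, new_bits_per_cycle)             # 32 vs 64 bits per cycle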

  7. Experimental Demonstration of Long-Range Underwater Acoustic Communication Using a Vertical Sensor Array

    PubMed Central

    Zhao, Anbang; Zeng, Caigao; Hui, Juan; Ma, Lin; Bi, Xuejie

    2017-01-01

    This paper proposes a composite channel virtual time reversal mirror (CCVTRM) for vertical sensor array (VSA) processing and applies it to long-range underwater acoustic (UWA) communication in shallow water. Because of the weak signal-to-noise ratio (SNR), the channel impulse response of each sensor of the VSA cannot be accurately estimated, so the traditional passive time reversal mirror (PTRM) cannot perform well in long-range UWA communication in shallow water. However, CCVTRM only needs to estimate the composite channel of the VSA to accomplish time reversal mirror (TRM) processing, which can effectively mitigate the inter-symbol interference (ISI) and reduce the bit error rate (BER). In addition, the calculation of CCVTRM is simpler than that of the traditional PTRM. A UWA communication experiment using a VSA of 12 sensors was conducted in the South China Sea. The experiment achieved very low BER communication at a rate of 66.7 bit/s over an 80 km range. The results of the sea trial demonstrate that CCVTRM is feasible and can be applied to long-range UWA communication in shallow water. PMID:28653976

  8. Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1997-01-01

    In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate-4/5, 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 × 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect channel state information (CSI).

  9. Conceptual design of a 10 to the 8th power bit magnetic bubble domain mass storage unit and fabrication, test and delivery of a feasibility model

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The conceptual design of a highly reliable 10 to the 8th power-bit bubble domain memory for the space program is described. The memory has random access to blocks of closed-loop shift registers, and utilizes self-contained bubble domain chips with on-chip decoding. Trade-off studies show that the highest reliability and lowest power dissipation are obtained when the memory is organized on a bit-per-chip basis. The final design has 800 bits/register, 128 registers/chip, 16 chips/plane, and 112 planes, of which only seven are activated at a time. A word has 64 data bits + 32 check bits, used in a 16-adjacent code to provide correction of any combination of errors in one plane. A 100-kHz maximum rotational frequency keeps power low (equal to or less than 25 watts) and also allows asynchronous operation. The data rate is 6.4 megabits/sec, and the access time is 200 msec to an 800-word block plus an additional 4 msec (average) to a word. The fabrication and operation are also described for a 64-bit bubble domain memory chip designed to test the concept of on-chip magnetic decoding. Access to one of the chip's four shift registers for the read, write, and clear functions is by means of bubble domain decoders utilizing the interaction between a conductor line and a bubble.
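
    The quoted organization can be checked with a short calculation; the sketch below multiplies out the hierarchy and applies the 64-data-bit / 96-bit word ratio (illustrative arithmetic only).

        # Worked check of the memory organization quoted in the abstract.

        bits_per_register  = 800
        registers_per_chip = 128
        chips_per_plane    = 16
        planes             = 112

        total_bits    = bits_per_register * registers_per_chip * chips_per_plane * planes
        data_fraction = 64 / (64 + 32)                      # 64 data bits out of each 96-bit word

        print(f"raw capacity  : {total_bits:.3e} bits")                      # ~1.8e8 bits
        print(f"data capacity : {total_bits * data_fraction:.3e} bits")      # ~1.2e8, i.e. ~10^8 data bits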

  10. The Effects of Bit Wear on Respirable Silica Dust, Noise and Productivity: A Hammer Drill Bench Study.

    PubMed

    Carty, Paul; Cooper, Michael R; Barr, Alan; Neitzel, Richard L; Balmes, John; Rempel, David

    2017-07-01

    Hammer drills are used extensively in commercial construction for drilling into concrete for tasks including rebar installation for structural upgrades and anchor bolt installation. This drilling task can expose workers to respirable silica dust and noise. The aim of this pilot study was to evaluate the effects of bit wear on respirable silica dust, noise, and drilling productivity. Test bits were worn to three states by drilling consecutive holes to different cumulative drilling depths: 0, 780, and 1560 cm. Each state of bit wear was evaluated by three trials (nine trials total). For each trial, an automated laboratory test bench system drilled 41 holes, 1.3 cm in diameter and 10 cm deep, into concrete block at a rate of one hole per minute using a commercially available hammer drill and masonry bits. During each trial, dust was continuously captured by two respirable and one inhalable sampling trains and noise was sampled with a noise dosimeter. The room was thoroughly cleaned between trials. When comparing results for the sharp (0 cm) versus dull bit (1560 cm), the mean respirable silica increased from 0.41 to 0.74 mg m⁻³ in sampler 1 (P = 0.012) and from 0.41 to 0.89 mg m⁻³ in sampler 2 (P = 0.024); these levels are above the NIOSH recommended exposure limit of 0.05 mg m⁻³. Likewise, mean noise levels increased from 112.8 to 114.4 dBA (P < 0.00001). Drilling productivity declined with increasing wear, from 10.16 to 7.76 mm s⁻¹ (P < 0.00001). Increasing bit wear was associated with increasing respirable silica dust and noise and reduced drilling productivity. The levels of dust and noise produced by these experimental conditions would require dust capture, hearing protection, and possibly respiratory protection. The findings support the adoption of a bit replacement program by construction contractors. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  11. Optimization of Mud Hammer Drilling Performance--A Program to Benchmark the Viability of Advanced Mud Hammer Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2006-03-01

    Operators continue to look for ways to improve hard rock drilling performance through emerging technologies. A consortium of Department of Energy, operator and industry participants put together an effort to test and optimize mud driven fluid hammers as one emerging technology that has shown promise to increase penetration rates in hard rock. The thrust of this program has been to test and record the performance of fluid hammers in full scale test conditions including hard formations at simulated depth, high density/high solids drilling muds, and realistic fluid power levels. This paper details the testing and results of testing two 7-3/4 inch diameter mud hammers with 8-1/2 inch hammer bits. A Novatek MHN5 and an SDS Digger FH185 mud hammer were tested with several bit types, with performance being compared to a conventional (IADC Code 537) tricone bit. These tools functionally operated in all of the simulated downhole environments. The performance was in the range of the baseline tricone or better at lower borehole pressures, but at higher borehole pressures the performance was in the lower range or below that of the baseline tricone bit. A new drilling mode was observed while operating the MHN5 mud hammer. This mode was noticed as the weight on bit (WOB) was in transition from low to high applied load. During this new ''transition drilling mode'', performance was substantially improved and in some cases outperformed the tricone bit. Improvements were noted for the SDS tool while drilling with a more aggressive bit design. Future work includes the optimization of these or the next generation tools for operating in higher density and higher borehole pressure conditions and improving bit design and technology based on the knowledge gained from this test program.

  12. Missile Manufacturing Technology Conference Held at Hilton Head Island, South Carolina on 22-26 September 1975. Panel Presentations. Test Equipment

    DTIC Science & Technology

    1975-01-01

    in the computer in 16-bit parallel computer DIO transfers at the maximum computer I/O speed. It then transmits this data in a bit-serial echo ... maximum DIO rate under computer interrupt control. The LCI also provides station interrupt information for transfer to the computer under computer ... been in daily operation since 1973. The SAM-D Missile system is currently in the Engineering Development phase which precedes the Production and

  13. VINSON/AUTOVON Interface Applique for the Modem, Digital Data, AN/GSC-38

    DTIC Science & Technology

    1980-11-01

    Measurement / Indication / Result: before Step 6, none, noise and beeping are heard in handset; after Step 7, none, noise and beeping disappear. Condition / Measurement ... linear range due to the compression used. Lowering the levels below the compression range may give increased linearity, but may cause signal-to-noise ... are encountered where the bit error rate at 16 KB/S results in objectionable audio noise or causes the KY-58 to squelch. On these channels the bit

  14. Some practical universal noiseless coding techniques, part 3, module PSl14,K+

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1991-01-01

    The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than the larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.

  15. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2⁸) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
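
    The role of interleaving described above can be illustrated with a minimal block interleaver/deinterleaver; the depth and symbol stream below are placeholders, not the simulator's actual implementation.

        # Minimal block interleaver sketch: write codewords row-wise into a depth x n array and
        # read out column-wise, so a burst of channel errors is spread across several codewords.

        def interleave(symbols, depth):
            """Write symbols row-wise into a depth x n array, read out column-wise."""
            assert len(symbols) % depth == 0
            n = len(symbols) // depth
            rows = [symbols[i * n:(i + 1) * n] for i in range(depth)]
            return [rows[r][c] for c in range(n) for r in range(depth)]

        def deinterleave(symbols, depth):
            """Inverse of interleave (column-wise write, row-wise read)."""
            assert len(symbols) % depth == 0
            n = len(symbols) // depth
            cols = [symbols[c * depth:(c + 1) * depth] for c in range(n)]
            return [cols[c][r] for r in range(depth) for c in range(n)]

        data = list(range(10))                     # two 5-symbol "codewords", depth 2 (placeholder)
        assert deinterleave(interleave(data, 2), 2) == data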

  16. Verification testing of the compression performance of the HEVC screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng

    2017-09-01

    This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.

  17. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
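
    A rough sketch of the idea, under the assumption of a normal approximation built from the first and second sample moments of bit errors per codeword (not the paper's exact derivation), might look as follows.

        import math

        # Illustrative normal-approximation confidence interval for BER when bit errors arrive
        # in dependent clumps (per codeword), using first and second sample moments.

        def ber_confidence_interval(errors_per_codeword, bits_per_codeword, z=1.96):
            n = len(errors_per_codeword)
            mean = sum(errors_per_codeword) / n
            var = sum((e - mean) ** 2 for e in errors_per_codeword) / (n - 1)
            ber = mean / bits_per_codeword
            half_width = z * math.sqrt(var / n) / bits_per_codeword
            return max(ber - half_width, 0.0), ber + half_width     # clamp at zero

        # Hypothetical per-codeword error counts and codeword length (placeholders).
        lo, hi = ber_confidence_interval([0, 0, 3, 0, 1, 0, 0, 2], bits_per_codeword=1784)
        print(lo, hi)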

  18. Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system

    NASA Astrophysics Data System (ADS)

    Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.

    2017-11-01

    Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because the high efficiency achievable with the OCDMA technique allows the fibre bandwidth to be used fully. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN), and orthogonality between users in the system. The SeQ code shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes; it enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ achieves a good BER performance of 10⁻²⁵ at 155 Mbps, in comparison with the 622 Mbps, 1 Gbps, and 2 Gbps bit rates. From the plotted graph, a 155 Mbps bit rate is fast enough for FTTH and LAN networks. These conclusions are based on the superior performance of the SeQ code, which offers an opportunity for OCDMA systems to deliver better quality of service in optical access networks for future generations.

  19. Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

    PubMed

    Li, Mengfan; Li, Wei; Zhou, Huihui

    2016-02-01

    Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the visual stimuli meanings and help them more effectively concentrate on their mental activities.
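
    Information transfer rates of this kind are commonly reported with the Wolpaw formula, sketched below; whether this exact definition was used in the study is not stated in the record, and the number of classes and selection time are placeholders.

        import math

        # Standard Wolpaw information-transfer-rate formula often used in BCI studies.
        # N (selectable classes) and T (seconds per selection) are illustrative placeholders.

        def wolpaw_itr_bits_per_min(accuracy, n_classes, seconds_per_selection):
            p, n = accuracy, n_classes
            bits_per_selection = (math.log2(n) + p * math.log2(p)
                                  + (1 - p) * math.log2((1 - p) / (n - 1)))
            return bits_per_selection * 60.0 / seconds_per_selection

        print(wolpaw_itr_bits_per_min(0.9325, n_classes=36, seconds_per_selection=2.0))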

  20. Miniaturized module for the wireless transmission of measurements with Bluetooth.

    PubMed

    Roth, H; Schwaibold, M; Moor, C; Schöchlin, J; Bolz, A

    2002-01-01

    The wiring of patients for obtaining medical measurements has many disadvantages. In order to limit these, a miniaturized module was developed which digitizes analog signals and sends them wirelessly to the receiver using Bluetooth. Bluetooth is especially suitable for this application because distances of up to 10 m are possible with low power consumption and robust, encrypted transmission. The module consists of a Bluetooth chip that is initialized by a microcontroller in such a way that connections from other Bluetooth receivers can be accepted. The signals are then transmitted to the distant end. The maximum bit rate of the 23 mm x 30 mm module is 73.5 kBit/s. At 4.7 kBit/s, the current consumption is 12 mA.

  1. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    NASA Astrophysics Data System (ADS)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-09-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.
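
    The core double random phase encoding transform underlying such schemes can be sketched as below; the fingerprint-derived keys and the Fourier-hologram bit coding proposed in the paper are not reproduced, and the random masks are plain placeholders.

        import numpy as np

        # Classical double random phase encoding (DRPE): multiply the input by a random phase
        # mask, Fourier transform, multiply by a second random phase mask in the Fourier plane,
        # and inverse transform. Decryption reverses the steps with the same (genuine) keys.

        def drpe_encrypt(img, phase1, phase2):
            field = img * np.exp(2j * np.pi * phase1)
            spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phase2)
            return np.fft.ifft2(spectrum)

        def drpe_decrypt(cipher, phase1, phase2):
            spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
            return np.fft.ifft2(spectrum) * np.exp(-2j * np.pi * phase1)

        rng = np.random.default_rng(0)
        img = rng.integers(0, 2, size=(64, 64)).astype(float)     # a binary bit-pattern image
        p1, p2 = rng.random((64, 64)), rng.random((64, 64))        # placeholder phase keys
        recovered = np.abs(drpe_decrypt(drpe_encrypt(img, p1, p2), p1, p2))
        print(np.allclose(recovered, img))                         # True with the genuine keys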

  2. Performance of convolutionally encoded noncoherent MFSK modem in fading channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.

    1976-01-01

    The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered under both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combatting channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.

  3. Practical quantum private query with better performance in resisting joint-measurement attack

    NASA Astrophysics Data System (ADS)

    Wei, Chun-Yan; Wang, Tian-Yin; Gao, Fei

    2016-04-01

    As a kind of practical protocol, quantum-key-distribution (QKD)-based quantum private queries (QPQs) have drawn lots of attention. However, the joint-measurement (JM) attack poses a noticeable threat to database security in such protocols. That is, by a JM attack a malicious user can illegally elicit many more items from the database than the average amount an honest one can obtain. Taking Jacobi et al.'s protocol as an example, by a JM attack a malicious user can obtain as many as 500 bits, instead of the expected 2.44 bits, from a 10⁴-bit database in one query. It is a noticeable security flaw in theory, and would also arise in application with the development of quantum memories. To solve this problem, we propose a QPQ protocol based on a two-way QKD scheme, which behaves much better in resisting the JM attack. Concretely, the user Alice cannot get more database items by conducting a JM attack on the qubits because she has to send them back to Bob (the database holder) before knowing which of them should be jointly measured. Furthermore, a JM attack by both Alice and Bob would be detected with certain probability, which is quite different from previous protocols. Moreover, our protocol retains the good characteristics of QKD-based QPQs, e.g., it is loss tolerant and robust against the quantum memory attack.

  4. Audio Steganography with Embedded Text

    NASA Astrophysics Data System (ADS)

    Teck Jian, Chua; Chai Wen, Chuah; Rahman, Nurul Hidayah Binti Ab.; Hamid, Isredza Rahmi Binti A.

    2017-08-01

    Audio steganography is about hiding a secret message in audio. It is a technique used to secure the transmission of secret information or to hide its existence. It may also provide confidentiality for the secret message if the message is encrypted. To date, most steganography software, such as Mp3Stego and DeepSound, uses a block cipher such as the Advanced Encryption Standard or the Data Encryption Standard to encrypt the secret message. This is good security practice. However, the encrypted message may become too long to embed in the audio and may cause distortion of the cover audio if the secret message is long. Hence, there is a need to encrypt the message with a stream cipher before embedding it into the audio, because a stream cipher encrypts bit by bit, whereas a block cipher encrypts fixed-length blocks and produces a longer output. Therefore, an audio steganography scheme that embeds text encrypted with the Rivest Cipher 4 (RC4) stream cipher is designed, developed, and tested in this project.
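
    A minimal sketch of the idea (RC4 keystream encryption followed by least-significant-bit embedding in audio samples) is shown below; it illustrates the principle only and is not the project's implementation. Note that RC4 is no longer considered secure for new designs.

        def rc4_keystream(key: bytes):
            """RC4 key scheduling plus pseudo-random generation (illustration only)."""
            S = list(range(256))
            j = 0
            for i in range(256):
                j = (j + S[i] + key[i % len(key)]) % 256
                S[i], S[j] = S[j], S[i]
            i = j = 0
            while True:
                i = (i + 1) % 256
                j = (j + S[i]) % 256
                S[i], S[j] = S[j], S[i]
                yield S[(S[i] + S[j]) % 256]

        def rc4_crypt(data: bytes, key: bytes) -> bytes:
            """Stream-cipher XOR: the ciphertext has exactly the same length as the plaintext."""
            ks = rc4_keystream(key)
            return bytes(b ^ next(ks) for b in data)

        def embed_lsb(samples, payload: bytes):
            """Hide payload bits in the least significant bit of successive integer samples."""
            bits = [(byte >> k) & 1 for byte in payload for k in range(7, -1, -1)]
            out = list(samples)
            for idx, bit in enumerate(bits):
                out[idx] = (out[idx] & ~1) | bit
            return out

        # Hypothetical usage: encrypt a short message and embed it in integer PCM sample values.
        cipher = rc4_crypt(b"secret message", b"demo-key")
        stego_samples = embed_lsb(list(range(200)), cipher)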

  5. Control mechanism of double-rotator-structure ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, SONG; Liping, YAN

    2017-03-01

    A double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, namely massively parallel data-bit computing and processor reconfigurability; it can handle thousands of data bits in parallel and can run much faster than electronic computers and other optical computing systems reported so far. In order to put the DRSTOP into practical application, this paper establishes a series of methods, namely a task classification method, a data-bit allocation method, a control information generation method, a control information formatting and sending method, and a method for obtaining decoded results. These methods form the control mechanism of the DRSTOP and make it an automated computing platform. Compared with traditional computing tools, the DRSTOP platform can ease the contradiction between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designs a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible, and efficient.

  6. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
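
    The per-block mode decision described above can be sketched as a Lagrangian rate-distortion comparison; the cost figures and lambda below are placeholders, not values from the paper.

        # Sketch of a rate-distortion mode decision: each block of voxels is coded either in
        # intra mode or by motion compensation from the previous frame, whichever minimizes
        # the Lagrangian cost J = D + lambda * R.

        def choose_mode(intra_cost, mc_cost, lam):
            """Each cost is a (distortion, rate_bits) pair; return the cheaper coding mode."""
            j_intra = intra_cost[0] + lam * intra_cost[1]
            j_mc = mc_cost[0] + lam * mc_cost[1]
            return "intra" if j_intra <= j_mc else "motion-compensated"

        print(choose_mode(intra_cost=(120.0, 900), mc_cost=(150.0, 200), lam=0.1))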

  7. Quantum-capacity-approaching codes for the detected-jump channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grassl, Markus; Wei Zhaohui; Ji Zhengfeng

    2010-12-15

    The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
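
    For reference, the capacity of a binary channel with erasure probability e and bit-flip probability p is a textbook quantity; the sketch below computes it, with the mapping of (e, p) to the physical jump parameters left abstract since it is not given in this record.

        import math

        # Capacity (bits per channel use) of a binary-input channel that erases a bit with
        # probability e and flips it with probability p (binary symmetric erasure channel).

        def h2(x):
            """Binary entropy in bits."""
            if x in (0.0, 1.0):
                return 0.0
            return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

        def bsec_capacity(erasure_prob, flip_prob):
            e, p = erasure_prob, flip_prob
            assert 0 <= e < 1 and 0 <= p and e + p <= 1
            return (1 - e) * (1 - h2(p / (1 - e)))

        print(bsec_capacity(0.1, 0.01))     # illustrative parameter values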

  8. Placing Evidence-Based Interventions at the Fingertips of School Social Workers.

    PubMed

    Castillo, Humberto López; Rivers, Tommi; Randall, Catherine; Gaughan, Ken; Ojanen, Tiina; Massey, Oliver Tom; Burton, Donna

    2016-07-01

    Through a university-community collaborative partnership, the perceived needs of evidence-based practices (EBPs) among school social workers (SSWs) in a large school district in central Florida was assessed. A survey (response rate = 83.6%) found that although 70% of SSWs claim to use EBPs in their everyday practice, 40% do not know where to find them, which may partially explain why 78% of respondents claim to spend 1 to 4 h every week looking for adequate EBPs. From this needs assessment, the translational model was used to address these perceived needs. A systematic review of the literature found 40 tier 2 EBPs, most of which (23%) target substance use, abuse, and dependence. After discussion with academic and community partners, the stakeholders designed, discussed, and implemented a searchable, online, password-protected, interface of these tier 2 EBPs, named Evidence-Based Intervention Toolkit (eBIT). Lessons learned, future directions, and implications of this "one-stop shop" for behavioral health are discussed.

  9. Placing Evidence-based Interventions at the Fingertips of School Social Workers

    PubMed Central

    Castillo, Humberto López; Rivers, Tommi; Randall, Catherine; Gaughan, Ken; Ojanen, Tiina; Massey, Oliver “Tom”; Burton, Donna

    2015-01-01

    Through a university-community collaborative partnership, the perceived needs of evidence-based practices (EBP) among school social workers (SSW) in a large school district in central Florida was assessed. A survey (response rate = 83.6%) found that although 70% of SSW claim to use EBP in their everyday practice, 40% do not know where to find them, which may partially explain why 78% of respondents claim to spend 1 to 4 hours every week looking for adequate EBP. From this needs assessment, the translational model was used to address these perceived needs. A systematic review of the literature found forty Tier 2 EBP, most of which (23%) target substance use, abuse, and dependence. After discussion with academic and community partners, the stakeholders designed, discussed, and implemented a searchable, online, password-protected, interface of these Tier 2 EBP, named eBIT (evidence-Based Intervention Toolkit). Lessons learned, future directions, and implications of this “one-stop shop” for behavioral health are discussed. PMID:26659382

  10. Practical ultrasonic transducers for high-temperature applications using bismuth titanate and Ceramabind 830

    NASA Astrophysics Data System (ADS)

    Xu, Janet L.; Batista, Caio F. G.; Tittmann, Bernhard R.

    2018-04-01

    Structural health monitoring of large valve bodies in high-temperature environments such as power plants faces several limitations: commercial transducers are not rated for such high temperatures, gel couplants will evaporate, and measurements cannot be made in-situ. To solve this, we have furthered the work of Ledford in applying a practical transducer in liquid form which hardens and air dries directly onto the substrate. The transducer material is a piezoceramic film composed of bismuth titanate and a high-temperature binding agent, Ceramabind 830. The effects of several fabrication conditions were studied to optimize transducer performance and ensure repeatability. These fabrication conditions include humidity, binder ratio, water ratio, substrate roughness, and film thickness. The final product is stable for both reactive and non-reactive substrates, has a quick fabrication time, and has an operating temperature up to the Curie temperature of BIT, 650°C, well beyond the safe operating temperature of PZT (150°C).

  11. The 10 to the 8th power bit solid state spacecraft data recorder. [utilizing bubble domain memory technology

    NASA Technical Reports Server (NTRS)

    Murray, G. W.; Bohning, O. D.; Kinoshita, R. Y.; Becker, F. J.

    1979-01-01

    The results are summarized of a program to demonstrate the feasibility of Bubble Domain Memory Technology as a mass memory medium for spacecraft applications. The design, fabrication and test of a partially populated 10 to the 8th power Bit Data Recorder using 100 Kbit serial bubble memory chips is described. Design tradeoffs, design approach and performance are discussed. This effort resulted in a 10 to the 8th power bit recorder with a volume of 858.6 cu in and a weight of 47.2 pounds. The recorder is plug reconfigurable, having the capability of operating as one, two or four independent serial channel recorders or as a single sixteen bit byte parallel input recorder. Data rates up to 1.2 Mb/s in a serial mode and 2.4 Mb/s in a parallel mode may be supported. Fabrication and test of the recorder demonstrated the basic feasibility of Bubble Domain Memory technology for such applications. Test results indicate the need for improvement in memory element operating temperature range and detector performance.

  12. A high SFDR 6-bit 20-MS/s SAR ADC based on time-domain comparator

    NASA Astrophysics Data System (ADS)

    Xue, Han; Hua, Fan; Qi, Wei; Huazhong, Yang

    2013-08-01

    This paper presents a 6-bit 20-MS/s high spurious-free dynamic range (SFDR), low-power successive approximation register analog-to-digital converter (SAR ADC) for radio-frequency (RF) transceiver front-ends, especially for wireless sensor network (WSN) applications. This ADC adopts a modified common-centroid symmetry layout and a successive approximation register reset circuit to improve the linearity and dynamic range. Prototyped in a 0.18-μm 1P6M CMOS technology, the ADC achieves a peak SFDR of 55.32 dB and an effective number of bits (ENOB) of 5.1 bits at 10 MS/s. At a sample rate of 20 MS/s and the Nyquist input frequency, an SFDR of 47.39 dB and an ENOB of 4.6 bits are achieved. The differential nonlinearity (DNL) is less than 0.83 LSB and the integral nonlinearity (INL) is less than 0.82 LSB. The experimental results indicate that this SAR ADC consumes a total of 522 μW and occupies 0.98 mm².

  13. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public-key cryptosystem supporting 512-bit, 1024-bit, and 2048-bit keys based on the Montgomery algorithm, achieving clock-cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for the circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely the input/output modules, the register module, the arithmetic module, and the control module. We applied the concept of the systolic array to design the RSA encryption/decryption chip using the VHDL hardware description language and verified it using the TSMC/CIC 0.35-μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm² (4.58 × 4.58 mm² with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
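
    The binary (square-and-multiply) exponentiation mentioned above can be sketched as follows; in the chip each multiplication step would be a Montgomery modular multiplication mapped onto the systolic array, whereas here ordinary modular reduction stands in for it, so this is a functional sketch of the control flow only.

        # Binary (square-and-multiply) modular exponentiation: scan the exponent bit by bit,
        # squaring for every bit and multiplying only when the bit is 1.

        def mod_exp_binary(base, exponent, modulus):
            result = 1
            base %= modulus
            while exponent:
                if exponent & 1:                   # multiply step for a '1' exponent bit
                    result = (result * base) % modulus
                base = (base * base) % modulus     # square step for every bit
                exponent >>= 1
            return result

        # Quick self-check against Python's built-in three-argument pow.
        assert mod_exp_binary(7, 560, 561) == pow(7, 560, 561)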

  14. Multi-bit wavelength coding phase-shift-keying optical steganography based on amplified spontaneous emission noise

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng

    2018-01-01

    In this paper, a multi-bit wavelength coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selection switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength coding PSK system is simulated to show the efficiency of our proposed method. The simulation results demonstrate that the stealth signal, after being encoded and modulated, is well hidden in both the time and spectral domains under the public channel and the noise existing in the system. Besides, even if the principle of this scheme and the existence of the stealth channel are known to an eavesdropper, the probability of recovering the stealth data is less than 0.02 if the key is unknown. Thus it can protect the security of the stealth channel more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10⁻⁹ bit error rate, and the public channel has no influence on the reception of the stealth channel.

  15. Compact FPGA-based beamformer using oversampled 1-bit A/D converters.

    PubMed

    Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt

    2005-05-01

    A compact medical ultrasound beamformer architecture that uses oversampled 1-bit analog-to-digital (A/D) converters is presented. Sparse sample processing is used, as the echo signal for the image lines is reconstructed in 512 equidistant focal points along the line through its in-phase and quadrature components. That information is sufficient for presenting a B-mode image and creating a color flow map. The high sampling rate provides the necessary delay resolution for the focusing. The low channel data width (1 bit) makes it possible to construct compact beamformer logic. The signal reconstruction is done using finite impulse response (FIR) filters applied on selected bit sequences of the delta-sigma modulator output stream. The approach allows a multichannel beamformer to fit in a single field programmable gate array (FPGA) device. A 32-channel beamformer is estimated to occupy 50% of the available logic resources in a commercially available mid-range FPGA and to be able to operate at 129 MHz. Simulation of the architecture at 140 MHz provides images with a dynamic range approaching 60 dB for an excitation frequency of 3 MHz.

  16. Areal density optimizations for heat-assisted magnetic recording of high-density media

    NASA Astrophysics Data System (ADS)

    Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk

    2016-06-01

    Heat-assisted magnetic recording (HAMR) is hoped to be the future recording technique for high-density storage devices. Nevertheless, there exist several realization strategies. With a coarse-grained Landau-Lifshitz-Bloch model, we investigate in detail the benefits and disadvantages of a continuous and pulsed laser spot recording of shingled and conventional bit-patterned media. Additionally, we compare single-phase grains and bits having a bilayer structure with graded Curie temperature, consisting of a hard magnetic layer with high TC and a soft magnetic one with low TC, respectively. To describe the whole write process as realistically as possible, a distribution of the grain sizes and Curie temperatures, a displacement jitter of the head, and the bit positions are considered. For all these cases, we calculate bit error rates of various grain patterns, temperatures, and write head positions to optimize the achievable areal storage density. Within our analysis, shingled HAMR with a continuous laser pulse moving over the medium reaches the best results and thus has the highest potential to become the next-generation storage device.

  17. HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Radtke; David Glowka; Man Mohan Rai

    2008-03-31

    Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight for delivering efficient power to the special high RPM drill bit for ensuring both high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill were developed to deliver efficient power, and the more durable drill bit employed high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center with Technology International, Inc (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International Inc. Houston, Texas to develop a higher power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable thermally stable diamond drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole coiled tube drilling offers the opportunity to dramatically cut producers' exploration risk to a level comparable to that of drilling development wells. Together, such efforts hold great promise for economically recovering a sizeable portion of the estimated remaining shallow (less than 5,000 feet subsurface) oil resource in the United States. The DOE estimates this U.S. targeted shallow resource at 218 billion barrels. Furthermore, the smaller 'footprint' of the lightweight rigs utilized for microhole drilling and the accompanying reduced drilling waste disposal volumes offer the bonus of added environmental benefits. DOE analysis shows that microhole technology has the potential to cut exploratory drilling costs by at least a third and to slash development drilling costs in half.

  18. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. In contrast to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error-rate performance improvement over the uncoded case is found.

  19. Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)

    1998-01-01

    The optical data storage capacity and raw bit-error rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and read-out of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error rate of a single hologram, and of multiple holograms stored within the film.

  20. 3  ×  3 optical switch by exploiting vortex beam emitters based on silicon microrings with superimposed gratings.

    PubMed

    Scaffardi, Mirco; Malik, Muhammad N; Lazzeri, Emma; Klitis, Charalambos; Meriggi, Laura; Zhang, Ning; Sorel, Marc; Bogoni, Antonella

    2017-10-01

    A silicon-on-insulator microring with three superimposed gratings is proposed and characterized as a device enabling 3×3 optical switching based on orbital angular momentum and wavelength as switching domains. Measurements show penalties with respect to the back-to-back of <1 dB at a bit error rate of 10^-9 for OOK traffic up to 20 Gbaud. Different switch configuration cases are implemented, with measured power penalty variations of less than 0.5 dB at bit error rates of 10^-9. An analysis is also carried out to highlight the dependence of the number of switch ports on the design parameters of the multigrating microring.

  1. SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The two-phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high (greater than 10,000 rpm) rotational speeds. The work includes a feasibility-of-concept research effort aimed at development and test results that will ultimately result in the ability to reliably drill "faster and deeper", possibly with rigs having a smaller footprint to be more mobile. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration rock cutting with substantially lower inputs of energy and loads. The project draws on TerraTek results submitted to NASA's "Drilling on Mars" program. The objective of that program was to demonstrate miniaturization of a robust and mobile drilling system that expends small amounts of energy. TerraTek successfully tested ultra-high speed (approximately 40,000 rpm) small-kerf diamond coring. Adaptation to the oilfield will require innovative bit designs for full hole drilling or continuous coring and the eventual development of downhole ultra-high speed drives. For domestic operations involving hard rock and deep oil and gas plays, improvements in penetration rates are an opportunity to reduce well costs and make certain field developments viable. An estimate of North American hard rock drilling costs is in excess of $1,200 MM. Thus potential savings of $200 MM to $600 MM are possible if drilling rates are doubled (assuming bit life is reasonable). The net result for operators is improved profit margin as well as an improved position on reserves. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled "SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING" for the period starting June 23, 2003 through September 30, 2004. TerraTek has reviewed applicable literature and documentation and has convened a project kick-off meeting with Industry Advisors in attendance. TerraTek has designed and planned Phase I bench-scale experiments. Some difficulties in obtaining ultra-high speed motors for this feasibility work were encountered, though they were sourced in mid-2004. TerraTek is progressing through Task 3, "Small-scale cutting performance tests". Some improvements over early NASA experiments have been identified.

  2. SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The two-phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high (greater than 10,000 rpm) rotational speeds. The work includes a feasibility-of-concept research effort aimed at development and test results that will ultimately result in the ability to reliably drill "faster and deeper", possibly with rigs having a smaller footprint to be more mobile. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration rock cutting with substantially lower inputs of energy and loads. The project draws on TerraTek results submitted to NASA's "Drilling on Mars" program. The objective of that program was to demonstrate miniaturization of a robust and mobile drilling system that expends small amounts of energy. TerraTek successfully tested ultra-high speed (approximately 40,000 rpm) small-kerf diamond coring. Adaptation to the oilfield will require innovative bit designs for full hole drilling or continuous coring and the eventual development of downhole ultra-high speed drives. For domestic operations involving hard rock and deep oil and gas plays, improvements in penetration rates are an opportunity to reduce well costs and make certain field developments viable. An estimate of North American hard rock drilling costs is in excess of $1,200 MM. Thus potential savings of $200 MM to $600 MM are possible if drilling rates are doubled (assuming bit life is reasonable). The net result for operators is improved profit margin as well as an improved position on reserves. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled "SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING" for the period starting June 23, 2003 through September 30, 2004. (1) TerraTek has reviewed applicable literature and documentation and has convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek has designed and planned Phase I bench-scale experiments. Some difficulties in obtaining ultra-high speed motors for this feasibility work were encountered, though they were sourced in mid-2004. (3) TerraTek is progressing through Task 3, "Small-scale cutting performance tests". Some improvements over early NASA experiments have been identified.

  3. Design of Intelligent Cross-Layer Routing Protocols for Airborne Wireless Networks Under Dynamic Spectrum Access Paradigm

    DTIC Science & Technology

    2011-05-01

    rate convolutional codes or the prioritized Rate-Compatible Punctured Convolutional (RCPC) codes. The RCPC codes achieve unequal error protection (UEP) by puncturing off different amounts of coded bits of the parent code.

  4. The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors

    NASA Technical Reports Server (NTRS)

    Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan

    1993-01-01

    Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.

  5. Practical steganalysis of digital images: state of the art

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2002-04-01

    Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.

  6. A 128K-bit CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K. H.; Wallace, R. W.; Robinson, C. R.

    1976-01-01

    A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage, and particularly to the problem of interfacing widely varying data rates. 8K-bit CCD shift register memories were used to construct a feasibility-model 128K-bit buffer memory system. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. Descriptions are provided of both the buffer memory system and a custom tester that was used to exercise the memory. The testing procedures and results are discussed. Suggestions are provided for further development regarding the use of advanced CCD memory devices in both simplified and expanded memory system applications.

  7. AESA diagnostics in operational environments

    NASA Astrophysics Data System (ADS)

    Hull, W. P.

    The author discusses some possible solutions to AESA (active electronically scanned array) diagnostics in the operational environment using built-in testing (BIT), which can play a key role in reducing life-cycle cost if properly implemented. He notes that it is highly desirable to detect and correct in the operational environment all degradation that impairs mission performance. This degradation must be detected with a low false alarm rate and the appropriate action initiated consistent with low life-cycle cost. Mutual coupling is considered as a BIT signal injection method and is shown to have potential. However, the limits of the diagnostic capability using this method clearly depend on its stability and on the level of multipath for a specific application. BIT using mutual coupling may need to be supplemented on the ground by an externally mounted passive antenna that interfaces with onboard avionics.

  8. Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.

    PubMed

    Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu

    2017-03-15

    The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk incurred when switching between the transmission and calibration modes. Here, we propose a single-photon-level, continuously working PBTS using only sifted key bits revealed during the error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system over a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% with a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.

  9. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.

  10. Efficient use of bit planes in the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1988-01-01

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
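    A minimal numerical sketch of the lookup-table animation step described above, assuming a drifting grating is synthesized from cosine- and sine-phase bit-plane components (the function and parameter names are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def lut_weights(contrast, temporal_freq_hz, frame_rate_hz, n_frames):
        """Per-frame contrast weights for the cosine- and sine-phase bit planes.

        Counterphase-modulating the two spatial-quadrature components in temporal
        quadrature yields a single drifting grating, since
        cos(w t) * cos(k x) + sin(w t) * sin(k x) = cos(k x - w t).
        """
        t = np.arange(n_frames) / frame_rate_hz
        w = 2.0 * np.pi * temporal_freq_hz
        return contrast * np.cos(w * t), contrast * np.sin(w * t)

    # Hypothetical example: a 2 Hz drift shown at 60 frames/s with 25% contrast.
    cos_weights, sin_weights = lut_weights(0.25, 2.0, 60.0, n_frames=120)
    ```

    Each frame of the animation would then write these two weights into the color lookup table entries associated with the cosine- and sine-phase bit planes.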

  11. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    In view of the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity, pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so there is no need for a large matrix pseudo-inverse, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator, and nearly optimal performance is achieved.
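    As a hedged illustration of the pilot-based frequency-domain least-squares step that such schemes reduce to (the paper's exact training design is not reproduced here), the per-subcarrier LS estimate is simply the received pilot divided by the transmitted pilot:

    ```python
    import numpy as np

    def ls_channel_estimate(rx_pilots, tx_pilots):
        """Frequency-domain least-squares estimate per pilot subcarrier: H_k = Y_k / X_k."""
        return rx_pilots / tx_pilots

    # Hypothetical toy example with 8 pilot subcarriers and unit-modulus pilots.
    rng = np.random.default_rng(0)
    h_true = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(2)
    x = np.exp(1j * 2 * np.pi * rng.integers(0, 4, size=8) / 4)
    y = h_true * x + 0.05 * (rng.normal(size=8) + 1j * rng.normal(size=8))
    h_ls = ls_channel_estimate(y, x)
    ```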

  12. The extraction and use of facial features in low bit-rate visual communication.

    PubMed

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  13. A broadband ASE light source-based full-duplex FTTX/ROF transport system.

    PubMed

    Chang, Ching-Hung; Lu, Hai-Han; Su, Heng-Sheng; Shih, Chien-Liang; Chen, Kai-Jen

    2009-11-23

    A full-duplex fiber-to-the-X (FTTX)/radio-over-fiber (ROF) transport system based on a broadband amplified spontaneous emission (ASE) light source is proposed and demonstrated for widely spread rural villages. Combining the concepts of long-haul transmission and ring topology, a long-haul single-mode fiber (SMF) trunk is shared among multiple rural villages. Externally modulated baseband (BB) (1.25 Gbps) and radio-frequency (RF) (622 Mbps/10 GHz) signals are successfully transmitted simultaneously. Good bit error rate (BER) performance was achieved, demonstrating the practicality of providing wired/wireless connections for long-haul, widely spread rural villages. Since our proposed system uses only a broadband ASE light source to achieve multi-wavelength transmission, it also stands out for its simplicity and economic advantages.

  14. LiFi: transforming fibre into wireless

    NASA Astrophysics Data System (ADS)

    Yin, Liang; Islim, Mohamed Sufyan; Haas, Harald

    2017-01-01

    Light-fidelity (LiFi) uses energy-efficient light-emitting diodes (LEDs) for high-speed wireless communication, and it has a great potential to be integrated with fibre communication for future gigabit networks. However, by making fibre communication wireless, multiuser interference arises. Traditional methods use orthogonal multiple access (OMA) for interference avoidance. In this paper, multiuser interference is exploited with the use of non-orthogonal multiple access (NOMA) relying on successive interference cancellation (SIC). The residual interference due to imperfect SIC in practical scenarios is characterized with a proportional model. Results show that NOMA offers 5-10 dB gain on the equivalent signal-to-interference-plus-noise ratio (SINR) over OMA. The bit error rate (BER) performance of direct current optical orthogonal frequency division multiplexing (DCO-OFDM) is shown to be significantly improved when SIC is used.
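    A minimal sketch of the two-user power-domain NOMA model with imperfect SIC that the abstract alludes to, where the residual interference is taken as a fraction beta of the cancelled user's power (the proportional-model idea; the value of beta and the channel gains below are assumptions):

    ```python
    def noma_sinrs(p_total, alpha_weak, g_weak, g_strong, noise, beta=0.05):
        """Two-user power-domain NOMA SINRs with imperfect SIC.

        alpha_weak: fraction of power allocated to the weak (far) user.
        beta: fraction of the cancelled signal left over after SIC (0 = perfect SIC).
        """
        p_weak = alpha_weak * p_total
        p_strong = (1.0 - alpha_weak) * p_total
        # The weak user decodes its own signal, treating the strong user's as interference.
        sinr_weak = p_weak * g_weak / (p_strong * g_weak + noise)
        # The strong user first cancels the weak user's signal (imperfectly), then decodes.
        sinr_strong = p_strong * g_strong / (beta * p_weak * g_strong + noise)
        return sinr_weak, sinr_strong

    # Hypothetical example: 80% of the power allocated to the weak user.
    print(noma_sinrs(p_total=1.0, alpha_weak=0.8, g_weak=0.1, g_strong=1.0, noise=0.01))
    ```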

  15. Gigahertz repetition rate, sub-femtosecond timing jitter optical pulse train directly generated from a mode-locked Yb:KYW laser.

    PubMed

    Yang, Heewon; Kim, Hyoji; Shin, Junho; Kim, Chur; Choi, Sun Young; Kim, Guang-Hoon; Rotermund, Fabian; Kim, Jungwon

    2014-01-01

    We show that a 1.13 GHz repetition rate optical pulse train with 0.70 fs high-frequency timing jitter (integration bandwidth of 17.5 kHz-10 MHz, where the measurement instrument-limited noise floor contributes 0.41 fs in 10 MHz bandwidth) can be directly generated from a free-running, single-mode diode-pumped Yb:KYW laser mode-locked by single-wall carbon nanotube-coated mirrors. To our knowledge, this is the lowest-timing-jitter optical pulse train with gigahertz repetition rate ever measured. If this pulse train is used for direct sampling of 565 MHz signals (Nyquist frequency of the pulse train), the jitter level demonstrated would correspond to the projected effective-number-of-bit of 17.8, which is much higher than the thermal noise limit of 50 Ω load resistance (~14 bits).

  16. Traffic Management in ATM Networks Over Satellite Links

    NASA Technical Reports Server (NTRS)

    Goyal, Rohit; Jain, Raj; Goyal, Mukul; Fahmy, Sonia; Vandalore, Bobby; vonDeak, Thomas

    1999-01-01

    This report presents a survey of the traffic management issues in the design and implementation of satellite Asynchronous Transfer Mode (ATM) networks. The report focuses on the efficient transport of Transmission Control Protocol (TCP) traffic over satellite ATM. First, a reference satellite ATM network architecture is presented along with an overview of the service categories available in ATM networks. A delay model for satellite networks and the major components of delay and delay variation are described. A survey of design options for TCP over the Unspecified Bit Rate (UBR), Guaranteed Frame Rate (GFR) and Available Bit Rate (ABR) services in ATM is presented. The main focus is on traffic management issues. Several recommendations on the design options for efficiently carrying data services over satellite ATM networks are presented. Most of the results are based on experiments performed at Geosynchronous (GEO) latencies. Some results for Low Earth Orbit (LEO) and Medium Earth Orbit (MEO) latencies are also provided.

  17. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
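    A small sketch of the Gray-mapping step before bit-plane extraction, which illustrates why neighbouring pixel values end up differing in fewer bit planes (the array values and bit depth are arbitrary examples):

    ```python
    import numpy as np

    def to_gray(x):
        """Binary-reflected Gray code: g = x XOR (x >> 1)."""
        return x ^ (x >> 1)

    def bit_planes(img, n_bits=8):
        """Extract bit planes (MSB first) from an integer image array."""
        return [(img >> b) & 1 for b in range(n_bits - 1, -1, -1)]

    # 127 and 128 differ in all 8 binary bit planes, but in only one Gray bit plane,
    # so Gray mapping keeps the bit planes of similar images far better correlated.
    img = np.array([[127, 128], [129, 126]], dtype=np.uint8)
    planes_gray = bit_planes(to_gray(img))
    ```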

  18. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently being used for communication. In the smart grid, power line communication (PLC) is used to support low-rate communication on the low-voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz by using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and the load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
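    A hedged sketch of the ABCD-parameter modelling idea: the transfer function of a cascade of two-port sections follows from multiplying their ABCD matrices and applying the standard source/load relation (the line parameters, impedances, and frequency point below are assumptions, not values from the paper):

    ```python
    import numpy as np

    def cascade(abcd_list):
        """Overall ABCD matrix of cascaded two-port sections (matrix product in order)."""
        total = np.eye(2, dtype=complex)
        for m in abcd_list:
            total = total @ m
        return total

    def transfer_function(abcd, z_source, z_load):
        """Voltage transfer function of a two-port fed from z_source and terminated by z_load:
           H = Z_L / (A*Z_L + B + C*Z_S*Z_L + D*Z_S)."""
        (a, b), (c, d) = abcd
        return z_load / (a * z_load + b + c * z_source * z_load + d * z_source)

    # Hypothetical transmission-line section at one frequency (Z0 and gamma*l assumed).
    z0, gl = 50.0 + 0j, 0.1 + 0.5j
    section = np.array([[np.cosh(gl), z0 * np.sinh(gl)],
                        [np.sinh(gl) / z0, np.cosh(gl)]])
    h = transfer_function(cascade([section, section]), z_source=50.0, z_load=100.0)
    ```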

  19. Self-optimization and auto-stabilization of receiver in DPSK transmission system.

    PubMed

    Jang, Y S

    2008-03-17

    We propose a self-optimization and auto-stabilization method for a 1-bit DMZI in DPSK transmission. Using the characteristics of eye patterns, the optical frequency transmittance of a 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.
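    A minimal sketch of the kind of dither/hill-climbing thermal control loop the abstract describes, maximizing the power difference between the constructive and destructive ports (the I/O hooks read_port_powers and set_heater are hypothetical placeholders, not an API from the paper):

    ```python
    def tune_dmzi(read_port_powers, set_heater, steps=200, delta=0.01, start=0.5):
        """Hill-climbing controller: step the heater drive and reverse direction
        whenever the constructive-minus-destructive power difference gets worse."""
        drive, direction = start, 1.0
        prev_metric = float("-inf")
        for _ in range(steps):
            set_heater(drive)
            p_con, p_des = read_port_powers()
            metric = p_con - p_des
            if metric < prev_metric:   # last step degraded the metric: go the other way
                direction = -direction
            prev_metric = metric
            drive += direction * delta
        return drive
    ```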

  20. Large-Constraint-Length, Fast Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.

    1990-01-01

    Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.

  1. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m_1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
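    A hedged sketch of the standard bounded-distance calculation behind such analyses: the probability that an inner codeword of length n sees at most t channel errors on a binary symmetric channel with bit-error rate p (the numbers in the example are arbitrary, and the paper's exact procedure also accounts for error detection and erasures):

    ```python
    from math import comb

    def p_at_most_t_errors(n, t, p):
        """Probability that a length-n word suffers at most t bit errors on a BSC(p)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

    # Hypothetical inner code: length 63, corrects 2 errors, raw bit-error rate 1e-2.
    print(p_at_most_t_errors(63, 2, 1e-2))
    ```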

  2. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays, quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy while incurring low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the most effect on material removal rate and surface roughness, followed by feed rate.
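    A small sketch of the grey relational analysis step used for the simultaneous optimization, assuming the usual larger-the-better/smaller-the-better normalizations and a distinguishing coefficient of 0.5 (the response values below are made up):

    ```python
    import numpy as np

    def grey_relational_grade(responses, larger_is_better, zeta=0.5):
        """Grey relational grade per experimental run.

        responses: (n_runs, n_responses) array of measured responses.
        larger_is_better: one bool per response (True for MRR, False for roughness).
        """
        x = np.empty_like(responses, dtype=float)
        for j, lib in enumerate(larger_is_better):
            col = responses[:, j].astype(float)
            span = col.max() - col.min()
            x[:, j] = (col - col.min()) / span if lib else (col.max() - col) / span
        delta = np.abs(1.0 - x)                                   # deviation from the ideal
        grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        return grc.mean(axis=1)                                   # average over responses

    # Hypothetical runs: columns are material removal rate and surface roughness.
    runs = np.array([[12.0, 3.2], [15.0, 2.9], [10.0, 2.5], [18.0, 3.8]])
    print(grey_relational_grade(runs, larger_is_better=[True, False]))
    ```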

  3. Chaos-on-a-chip secures data transmission in optical fiber links.

    PubMed

    Argyris, Apostolos; Grivas, Evangellos; Hamacher, Michael; Bogris, Adonis; Syvridis, Dimitris

    2010-03-01

    Security in information exchange plays a central role in the deployment of modern communication systems. Besides algorithms, chaos is exploited as a real-time high-speed data encryption technique which enhances security at the hardware level of optical networks. In this work, compact, fully controllable and stably operating monolithic photonic integrated circuits (PICs) that generate broadband chaotic optical signals are incorporated in chaos-encoded optical transmission systems. Data sequences with rates up to 2.5 Gb/s with small amplitudes are completely encrypted within these chaotic carriers. Only authorized counterparts, supplied with identical chaos-generating PICs that are able to synchronize and reproduce the same carriers, can benefit from data exchange with bit rates up to 2.5 Gb/s and error rates below 10^-12. Eavesdroppers with access to the communication link experience a 0.5 probability of correctly detecting each bit by direct signal detection, while eavesdroppers supplied with even slightly unmatched hardware receivers are restricted to data extraction error rates well above 10^-3.

  4. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, and a corresponding decoding scheme, are proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span equal to 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  5. The Buried in Treasures Workshop: waitlist control trial of facilitated support groups for hoarding.

    PubMed

    Frost, Randy O; Ruby, Dylan; Shuer, Lee J

    2012-11-01

    Hoarding is a serious form of psychopathology that has been associated with significant health and safety concerns, as well as the source of social and economic burden (Tolin, Frost, Steketee, & Fitch, 2008; Tolin, Frost, Steketee, Gray, & Fitch, 2008). Recent developments in the treatment of hoarding have met with some success for both individual and group treatments. Nevertheless, the cost and limited accessibility of these treatments leave many hoarding sufferers without options for help. One alternative is support groups that require relatively few resources. Frost, Pekareva-Kochergina, and Maxner (2011) reported significant declines in hoarding symptoms following a non-professionally run 13-week support group (The Buried in Treasures [BIT] Workshop). The BIT Workshop is a highly structured and short term support group. The present study extended these findings by reporting on the results of a waitlist control trial of the BIT Workshop. Significant declines in all hoarding symptom measures were observed compared to a waitlist control. The treatment response rate for the BIT Workshop was similar to that obtained by previous individual and group treatment studies, despite its shorter length and lack of a trained therapist. The BIT Workshop may be an effective adjunct to cognitive behavior therapy for hoarding disorder, or an alternative when cognitive behavior therapy is inaccessible. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Fault-Tolerant Coding for State Machines

    NASA Technical Reports Server (NTRS)

    Naegle, Stephanie Taft; Burke, Gary; Newell, Michael

    2008-01-01

    Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined; it is simply a binary count of the state number in sequence (for example, three bits encode an eight-state machine). In one-hot encoding, N bits are used to represent N states: all except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
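    A hedged sketch illustrating the one-hot property discussed above: flipping any single bit of a one-hot word leaves either zero or two bits set, so an SEU always lands in a detectable illegal state, whereas in plain binary encoding the flipped word is simply another (wrong but legal) state:

    ```python
    def one_hot(state_index):
        """One-hot encoding: a single 1 bit marks the state."""
        return 1 << state_index

    def is_valid_one_hot(word, n_states):
        """Valid iff the word has exactly one bit set within the n_states positions."""
        return 0 <= word < (1 << n_states) and bin(word).count("1") == 1

    n_states = 8
    word = one_hot(3)
    for bit in range(n_states):
        flipped = word ^ (1 << bit)        # simulate an SEU on one flip-flop
        assert not is_valid_one_hot(flipped, n_states)
    ```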

  7. Pulsed laser-based optical frequency comb generator for high capacity wavelength division multiplexed passive optical network supporting 1.2 Tbps

    NASA Astrophysics Data System (ADS)

    Ullah, Rahat; Liu, Bo; Zhang, Qi; Saad Khan, Muhammad; Ahmad, Ibrar; Ali, Amjad; Khan, Razaullah; Tian, Qinghua; Yan, Cheng; Xin, Xiangjun

    2016-09-01

    An architecture for flattened and broad-spectrum multicarrier generation is presented, generating 60 comb lines from a pulsed laser driven by a user-defined bit stream in cascade with three modulators. The proposed scheme is a cost-effective architecture for the optical line terminal (OLT) in a wavelength division multiplexed passive optical network (WDM-PON) system. The optical frequency comb generator consists of a pulsed laser in cascade with a phase modulator and two Mach-Zehnder modulators driven by an RF source, incorporating no phase shifter, filter, or electrical amplifier. Optical frequency comb generation is deployed in a simulation environment at the OLT of a WDM-PON system supporting a 1.2-Tbps data rate. With 10-GHz frequency spacing, each frequency tone carries a 20-Gbps data signal based on differential quadrature phase shift keying (DQPSK) in downlink transmission. We adopt a DQPSK-based modulation technique in the downlink transmission because it supports 2 bits per symbol, which increases the data rate of the WDM-PON system. Furthermore, the DQPSK format is tolerant to different types of dispersion and has a high spectral efficiency with a less complex configuration. Part of the downlink power is utilized in the uplink transmission; the uplink transmission is based on intensity-modulated on-off keying. Minimum power penalties have been observed, with excellent eye diagrams and other transmission performance at the specified bit error rates.
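    A quick arithmetic check of the aggregate rate quoted above, assuming each of the 60 comb lines is modulated at 10 Gbaud (consistent with the 10-GHz spacing) with 2 bits per DQPSK symbol:

    ```python
    comb_lines = 60            # comb lines generated at the OLT
    symbol_rate = 10e9         # 10 Gbaud per line (assumed from the 10 GHz spacing)
    bits_per_symbol = 2        # DQPSK carries 2 bits per symbol
    per_line_rate = symbol_rate * bits_per_symbol      # 20 Gbps per line
    print(comb_lines * per_line_rate)                  # 1.2e12 bit/s = 1.2 Tbps
    ```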

  8. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    NASA Astrophysics Data System (ADS)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.

  9. Epistemic lenses and virtues, beyond evidence-based medicine.

    PubMed

    Murphy, Mark E

    2018-06-01

    This editorial is based on the keynote by Dr Mark Murphy, Department of General Practice, Royal College of Surgeons, Ireland, at the Health Libraries Group conference, Keele University on 13-15 June 2018. https://bit.ly/2rubsIR#HLG2018. © 2018 Health Libraries Group.

  10. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  11. Optical Fiber Transmission In A Picture Archiving And Communication System For Medical Applications

    NASA Astrophysics Data System (ADS)

    Aaron, Gilles; Bonnard, Rene

    1984-03-01

    In a hospital, the need for an electronic communication network is increasing along with the digitization of pictures. This local area network is intended to link picture sources such as digital radiography, computed tomography, nuclear magnetic resonance, ultrasound, etc., with an archiving system. Interactive displays can be used in examination rooms, physicians' offices and clinics. In such a system, three major requirements must be considered: bit rate, cable length, and number of devices. The bit rate is very important because a maximum response time of a few seconds must be guaranteed for several-megabit pictures. The distance between nodes may be a few kilometers in some large hospitals. The number of devices connected to the network is never greater than a few tens, because picture sources and computers represent major hardware and simple displays can be concentrated. All these conditions are fulfilled by optical fiber transmission. Depending on the topology and the access protocol, two solutions are to be considered: an active ring, or an active or passive star. Finally, Thomson-CSF developments of optical transmission devices for large TV distribution networks bring technological support and mass production which will cut down hardware costs.

  12. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    PubMed

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimensional BMI control of the hand.

  13. Adaptive limited feedback for interference alignment in MIMO interference channels.

    PubMed

    Zhang, Yang; Zhao, Chenglin; Meng, Juan; Li, Shibao; Li, Li

    2016-01-01

    It is very important that a radar sensor network has autonomous capabilities such as self-management. Quite often, MIMO interference channels are applied to radar sensor networks, and for self-management purposes, interference management in MIMO interference channels is critical. Interference alignment (IA) has the potential to dramatically improve system throughput by effectively mitigating interference in multi-user networks at high signal-to-noise ratio (SNR). However, the implementation of IA predominantly relies on perfect and global channel state information (CSI) at all transceivers. A large amount of CSI has to be fed back to all transmitters, resulting in a proliferation of feedback bits. Thus, IA with limited feedback has been introduced to reduce the sum feedback overhead. In this paper, by exploiting the advantage of heterogeneous path loss, we first investigate the throughput of IA with limited feedback in interference channels while each user transmits multiple streams simultaneously; we then derive an upper bound on the sum rate in terms of the transmit power and feedback bits. Moreover, we propose a dynamic feedback scheme via bit allocation to reduce the throughput loss due to limited feedback. Simulation results demonstrate that the dynamic feedback scheme achieves better performance in terms of sum rate.

  14. Performance of the unique-word-reverse-modulation type demodulator for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Dohi, Tomohiro; Nitta, Kazumasa; Ueda, Takashi

    1993-01-01

    This paper proposes a new type of coherent demodulator, the unique-word (UW)-reverse-modulation type demodulator, for burst signals controlled by a voice-operated transmitter (VOX) in mobile satellite communication channels. The demodulator has three individual circuits: a pre-detection signal combiner, a pre-detection UW detector, and a UW-reverse-modulation type demodulator. The pre-detection signal combiner combines signal sequences received by two antennas and improves the bit energy-to-noise power density ratio (E_b/N_0) by 2.5 dB to yield a 10^-3 average bit error rate (BER) when the carrier power-to-multipath power ratio (CMR) is 15 dB. The pre-detection UW detector improves the UW detection probability when the frequency offset is large. The UW-reverse-modulation type demodulator realizes a maximum pull-in frequency of 3.9 kHz, a pull-in time of 2.4 seconds, and a frequency error of less than 20 Hz. The performance of this demodulator is confirmed through computer simulations, and its effectiveness is clarified in real-time experiments at a bit rate of 16.8 kbps using a digital signal processor (DSP).

  15. Optimal micro-mirror tilt angle and sync mark design for digital micro-mirror device based collinear holographic data storage system.

    PubMed

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Liu, Jinyan; Huang, Yong; Tan, Xiaodi

    2017-06-01

    The collinear holographic data storage system (CHDSS) is a very promising storage system due to its large storage capacity and high transfer rates in the era of big data. The digital micro-mirror device (DMD), as a spatial light modulator, is the key device of the CHDSS due to its high speed, high precision, and broadband working range. To improve system stability and performance, an optimal micro-mirror tilt angle was theoretically calculated and experimentally confirmed by analyzing the relationship between the tilt angle of the micro-mirrors on the DMD and the power profiles of the diffraction patterns of the DMD at the Fourier plane. In addition, we propose a novel chessboard sync mark design in the data page to reduce the system bit error rate under the reduced aperture required to decrease noise and the median exposure amount. It will provide practical guidance for future DMD-based CHDSS development.

  16. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.

    PubMed

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
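    A hedged sketch of the Fitts's-law throughput figure reported above, using the common Shannon formulation of the index of difficulty; the paper's exact task geometry and formulation may differ, and the numbers below are only illustrative:

    ```python
    from math import log2

    def fitts_bit_rate(distance, target_width, movement_time_s):
        """Fitts's-law throughput: index of difficulty (Shannon form) per unit time."""
        index_of_difficulty = log2(distance / target_width + 1.0)   # bits
        return index_of_difficulty / movement_time_s                 # bits per second

    # Hypothetical cursor trial: target 2 cm wide, 10 cm away, reached in 1.56 s.
    print(fitts_bit_rate(distance=10.0, target_width=2.0, movement_time_s=1.56))
    ```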

  17. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces

    PubMed Central

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170

  18. S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation

    PubMed Central

    2014-01-01

    Background: Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast-growing digital technology that brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data. Efficient transmission and/or storage of S-EMG signals is an active research issue and is the aim of this work. Methods: This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform for spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematical decreasing spectral shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shape methods were implemented and compared: decreasing exponential spectral shape, decreasing linear spectral shape, decreasing square-root spectral shape and rotated hyperbolic tangent spectral shape. Results: The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in the scientific literature are shown. Conclusions: The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure. The performance comparisons of the proposed S-EMG data compression algorithm with established techniques found in the scientific literature have shown promising results. PMID:24571620
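    A minimal sketch of one decreasing spectral-shape rule of the kind described, an exponentially decreasing word length from low- to high-frequency wavelet coefficients (the word-length bounds and decay constant are assumptions, not the paper's fitted models):

    ```python
    import numpy as np

    def exponential_bit_allocation(n_coeffs, max_bits=16, min_bits=2, decay=3.0):
        """Assign longer code words to low-frequency (early) coefficients and
        exponentially shorter words towards the high-frequency end."""
        position = np.arange(n_coeffs) / max(n_coeffs - 1, 1)          # 0 .. 1
        bits = min_bits + (max_bits - min_bits) * np.exp(-decay * position)
        return np.round(bits).astype(int)

    # Hypothetical frame of 512 wavelet-transformed coefficients.
    allocation = exponential_bit_allocation(512)
    print(allocation[:4], allocation[-4:])
    ```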

  19. Fast interactive elastic registration of 12-bit multi-spectral images with subvoxel accuracy using display hardware

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke Jan; de Roode, Rowland; Verdaasdonk, Rudolf

    2007-03-01

    Multi-spectral images of human tissue taken in vivo often contain image alignment problems, as patients have difficulty retaining their posture during the acquisition time of 20 seconds. Previous attempts to correct motion errors with image registration software developed for MR or CT data have proven too slow and error-prone for practical use with multi-spectral images. A new software package has been developed which allows the user to play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the graphics card. To show the feasibility of this new registration process, the software was applied in clinical practice to evaluate the dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step including zooming and slanting is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the potential of PC video card hardware greatly improves the speed of multi-spectral image registration.

  20. Analysis of practical backoff protocols for contention resolution with multiple servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; MacKenzie, P.D.

    Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. It is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers at such high request rates. Two special cases of our result are of interest. Hastad, Leighton, and Rogoff have shown that for a single-server system with a sub-unit client-server request rate any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton, and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
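
    For concreteness, a hedged sketch of a superlinear polynomial backoff rule follows: after the k-th consecutive collision, a client waits a number of slots drawn uniformly from a window that grows as (k + 1)^p with p > 1. The exponent, window size and retry loop are illustrative assumptions rather than the paper's exact model.

      # Hedged sketch of a superlinear polynomial backoff rule (p > 1).
      # The exponent and retry bound are illustrative assumptions.
      import random

      def backoff_slots(collisions, p=2.0):
          """Random wait (in slots) after `collisions` consecutive collisions."""
          window = int((collisions + 1) ** p)
          return random.randint(1, max(1, window))

      def send_with_backoff(try_send, max_attempts=16, p=2.0):
          """Call `try_send()` (returns True on success); back off polynomially on each failure."""
          waited = 0
          for k in range(max_attempts):
              if try_send():
                  return waited                     # total slots spent waiting
              waited += backoff_slots(k, p)
          raise RuntimeError("gave up after repeated collisions")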

  1. Passive Faraday-mirror attack in a practical two-way quantum-key-distribution system

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Jiang, Mu-Sheng; Liang, Lin-Mei

    2011-06-01

    The Faraday mirror (FM) plays a very important role in maintaining the stability of two-way plug-and-play quantum key distribution (QKD) systems. However, a practical FM is imperfect, which not only introduces an additional quantum bit error rate (QBER) but also leaves a loophole for Eve to spy on the secret key. In this paper we propose a passive Faraday-mirror attack on a two-way QKD system based on the imperfection of the FM. Our analysis shows that if the FM is imperfect, the dimension of the Hilbert space spanned by the four states sent by Alice is three instead of two. Thus Eve can distinguish these states with a set of positive operator valued measure (POVM) operators belonging to the three-dimensional space, which reduces the QBER induced by her attack. Furthermore, a relationship between the degree of the FM imperfection and the transmittance of the practical QKD system is obtained. The results show that the probability that Eve mounts her attack successfully depends strongly on the degree of the FM imperfection, whereas the QBER induced by her attack changes only slightly with it.

  2. Progress In Optical Memory Technology

    NASA Astrophysics Data System (ADS)

    Tsunoda, Yoshito

    1987-01-01

    More than 20 years have passed since the concept of optical memory was first proposed in 1966. Since then, considerable progress has been made in this area, together with the creation of completely new markets for optical memory in consumer and computer applications. The first generation of optical memory was developed mainly with holographic recording technology in the late 1960s and early 1970s. A considerable number of developments were made in both analog and digital memory applications; unfortunately, these technologies never became commercial products. The second generation of optical memory started at the beginning of the 1970s with bit-by-bit recording technology. Read-only optical memories such as video disks and compact audio disks have been extensively investigated. Since laser diodes were first applied to optical video disk readout in 1976, there have been extensive developments of laser-diode pick-ups for optical disk memory systems. The third generation of optical memory started in 1978 with bit-by-bit read/write technology using laser diodes. Development of recording materials, both write-once and erasable, has been actively pursued at several research institutes. These technologies are mainly focused on optical memory systems for computer applications. Such practical applications of optical memory technology have resulted in the creation of new products such as compact audio disks and computer file memories.

  3. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires an understanding of the noise characteristics of sensors as well as of the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal-variance space; context-dependent one- and two-dimensional predictors; a rationale for nonlinear DPCM encoding based upon an image quality model; context-dependent variable-length encoding of 2x2 data blocks; and feedback control for constant-output-rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes and 2D context-dependent predictors, and the prospect of sub-bit-per-pixel compression that maintains spatial resolution (information preserving), are discussed.
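
    A minimal one-dimensional DPCM sketch is given below, assuming a previous-pixel predictor and a fixed uniform quantizer step; the paper's context-dependent predictors, equal-variance mapping and variable-length coding are not modelled.

      # Hedged 1-D DPCM sketch: previous-pixel predictor plus uniform quantizer.
      # The fixed predictor and step size are simplifications of the scheme above.
      import numpy as np

      def dpcm_encode(row, step=4):
          recon_prev, codes = 0, []
          for x in row:
              q = int(round((int(x) - recon_prev) / step))   # quantized prediction error
              codes.append(q)
              recon_prev += q * step                          # track the decoder's reconstruction
          return codes

      def dpcm_decode(codes, step=4):
          recon_prev, out = 0, []
          for q in codes:
              recon_prev += q * step
              out.append(recon_prev)
          return np.array(out)

      row = np.array([120, 122, 121, 125, 130, 129, 128])
      print(dpcm_decode(dpcm_encode(row)))                    # reconstruction within +/- step/2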

  4. New LWD tools are just in time to probe for baby elephants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghiselin, D.

    Development of sophisticated formation evaluation instrumentation for use while drilling has led to a stratification of while-drilling services. Measurement while drilling (MWD) comprises measurements of mechanical parameters such as weight-on-bit, mud pressure, torque, vibration, hole angle and direction. Logging while drilling (LWD) describes resistivity, sonic, and radiation logging that rival wireline measurements in accuracy. A critical feature of LWD is the rate at which data can be telemetered to the surface. Early tools could only transmit 3 bits per second one way. In the last decade, the data rate has more than tripled. Despite these improvements, LWD tools can make many more measurements than can be telemetered in real time. The paper discusses the development of this technology and its applications.

  5. Security of counterfactual quantum cryptography

    NASA Astrophysics Data System (ADS)

    Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Han, Zheng-Fu; Guo, Guang-Can

    2010-10-01

    Recently, a “counterfactual” quantum-key-distribution scheme was proposed by T.-G. Noh [Phys. Rev. Lett. 103, 230501 (2009)]. In this scheme, two legitimate distant peers may share secret keys even when the information carriers do not travel through the quantum channel. We find that this protocol is equivalent to an entanglement distillation protocol. From this equivalence, a strict security proof and the asymptotic key bit rate are both obtained when a perfect single-photon source is applied and a Trojan horse attack can be detected. We also find that the security of this scheme is strongly related not only to the bit error rate but also to the yields of photons. Our security proof may also shed light on the security of other two-way protocols.

  6. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  7. Fronthaul evolution: From CPRI to Ethernet

    NASA Astrophysics Data System (ADS)

    Gomes, Nathan J.; Chanclou, Philippe; Turnbull, Peter; Magee, Anthony; Jungnickel, Volker

    2015-12-01

    It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages: the use of lower-cost equipment, shared use of infrastructure with fixed access networks, and statistical multiplexing and optimised performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high bit-rate requirements from the transport of increased-bandwidth radio streams for multiple antennas in future mobile networks, and the low latency and jitter needed to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transporting baseband signals instead of sampled radio waveforms, and which enables statistical multiplexing gains. Delay and synchronisation issues remain to be solved.
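
    To illustrate why transporting sampled radio waveforms is so demanding, the following is a hedged back-of-the-envelope rate calculation in the style of CPRI; the sample width, control-word overhead and 8b/10b line-coding factors are commonly quoted values assumed here for illustration, not figures taken from the paper.

      # Hedged arithmetic sketch: approximate line rate for CPRI-style fronthaul that
      # carries sampled I/Q waveforms. Sample width and overhead factors are assumptions.
      def cpri_like_rate(sample_rate_hz, bits_per_sample=15, antennas=1,
                         control_overhead=16 / 15, line_coding=10 / 8):
          iq_rate = sample_rate_hz * 2 * bits_per_sample * antennas   # I and Q samples
          return iq_rate * control_overhead * line_coding             # framing + 8b/10b coding

      # Example: one 20 MHz LTE carrier sampled at 30.72 MS/s with 2 antennas
      print(f"{cpri_like_rate(30.72e6, antennas=2) / 1e9:.2f} Gbit/s")   # about 2.46 Gbit/s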

  8. Noise tolerance in wavelength-selective switching of optical differential quadrature-phase-shift-keying pulse train by collinear acousto-optic devices.

    PubMed

    Goto, Nobuo; Miyazaki, Yasumitsu

    2014-06-01

    Optical switching of high-bit-rate quadrature-phase-shift-keying (QPSK) pulse trains using collinear acousto-optic (AO) devices is theoretically discussed. Since the collinear AO devices have wavelength selectivity, the switched optical pulse trains suffer from distortion when the bandwidth of the pulse train is comparable to the pass bandwidth of the AO device. As the AO device, a sidelobe-suppressed device with a tapered surface-acoustic-wave (SAW) waveguide and a Butterworth-type filter device with a lossy SAW directional coupler are considered. Phase distortion of optical pulse trains at 40 to 100  Gsymbols/s in QPSK format is numerically analyzed. Bit-error-rate performance with additive Gaussian noise is also evaluated by the Monte Carlo method.
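
    The bit-error-rate evaluation mentioned above can be illustrated with a hedged Monte Carlo sketch for Gray-coded QPSK in additive white Gaussian noise; the acousto-optic device's wavelength-selective filtering and the resulting phase distortion are deliberately not modelled.

      # Hedged Monte Carlo sketch: BER of Gray-coded QPSK in AWGN. This illustrates the
      # style of evaluation only; the AO device's filtering is not modelled here.
      import numpy as np

      def qpsk_ber_monte_carlo(ebn0_db, n_bits=1_000_000, seed=0):
          rng = np.random.default_rng(seed)
          bits = rng.integers(0, 2, size=n_bits)
          sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # Es = 1
          ebn0 = 10 ** (ebn0_db / 10)
          noise_std = np.sqrt(1 / (4 * ebn0))                  # per real dimension (2 bits/symbol)
          rx = sym + noise_std * (rng.standard_normal(sym.size)
                                  + 1j * rng.standard_normal(sym.size))
          bits_hat = np.empty(n_bits, dtype=int)
          bits_hat[0::2] = (rx.real < 0).astype(int)
          bits_hat[1::2] = (rx.imag < 0).astype(int)
          return np.mean(bits != bits_hat)

      print(qpsk_ber_monte_carlo(6.0))                          # roughly 2e-3 at Eb/N0 = 6 dB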

  9. Measurements of aperture averaging on bit-error-rate

    NASA Astrophysics Data System (ADS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-08-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
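
    The effect being measured can be related to the aperture-averaging factor commonly quoted for a plane wave in weak turbulence (see, e.g., Andrews and Phillips); the sketch below evaluates that textbook approximation for apertures up to 20 cm over a 1,000 m path and is illustrative, not a result of the experiment itself.

      # Hedged sketch of the commonly quoted weak-turbulence, plane-wave approximation
      # A = [1 + 1.062 * k * D^2 / (4 L)]^(-7/6); illustrative, not measured data.
      import numpy as np

      def aperture_averaging_factor(diameter_m, path_m, wavelength_m=1550e-9):
          k = 2 * np.pi / wavelength_m                  # optical wavenumber
          d2 = k * diameter_m ** 2 / (4 * path_m)
          return (1 + 1.062 * d2) ** (-7 / 6)

      for d_cm in (2.54, 5.0, 10.0, 20.0):
          a = aperture_averaging_factor(d_cm / 100, path_m=1000.0)
          print(f"D = {d_cm:5.2f} cm -> scintillation reduced to {a:.3f} of the point-receiver value")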

  10. Comparisons of single event vulnerability of GaAs SRAMS

    NASA Astrophysics Data System (ADS)

    Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.

    1986-12-01

    A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.

  11. The NEEDS Data Base Management and Archival Mass Memory System

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.

    1980-01-01

    A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.
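
    A hedged back-of-the-envelope check of these figures: how long streaming the on-line store takes at the stated rate, and how many bit errors the stated error rate implies over the full store. Purely illustrative arithmetic, not system specifications.

      # Hedged arithmetic: time to stream the 10^12-bit on-line store at 50 Mbps, and
      # the expected bit errors implied by a 10^-9 bit error rate. Illustrative only.
      online_bits = 1e12
      rate_bps = 50e6
      ber = 1e-9

      print(f"streaming the on-line store: {online_bits / rate_bps / 3600:.1f} hours")   # ~5.6 h
      print(f"expected bit errors at BER 1e-9: {online_bits * ber:.0f}")                  # ~1000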

  12. Experimental demonstration of real-time adaptively modulated DDO-OFDM systems with a high spectral efficiency up to 5.76bit/s/Hz transmission over SMF links.

    PubMed

    Chen, Ming; He, Jing; Tang, Jin; Wu, Xian; Chen, Lin

    2014-07-28

    In this paper, an FPGA-based real-time adaptively modulated 256/64/16QAM-encoded base-band OFDM transceiver with a high spectral efficiency of up to 5.76 bit/s/Hz is successfully developed and experimentally demonstrated in a simple intensity-modulated direct-detection optical communication system. Experimental results show that it is feasible to transmit a real-time adaptively modulated optical OFDM signal with a raw bit rate of 7.19 Gbps over 20 km and 50 km single-mode fibers (SMFs). A performance comparison between real-time and off-line digital signal processing is performed, and the results show a negligible power penalty. In addition, to obtain the best transmission performance, the direct-current (DC) bias voltage of the MZM and the launch power into the optical fiber links are explored in the real-time optical OFDM systems.
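
    The adaptive-modulation idea can be sketched as per-subcarrier bit loading: each subcarrier carries 16-, 64- or 256-QAM depending on its estimated SNR, and the aggregate raw rate and spectral efficiency follow. The SNR thresholds, subcarrier count and symbol rate below are illustrative assumptions, not the parameters of the demonstrated transceiver.

      # Hedged sketch of per-subcarrier adaptive bit loading (16/64/256-QAM). Thresholds,
      # subcarrier count and symbol rate are illustrative assumptions.
      import numpy as np

      def load_bits(snr_db):
          """Bits per symbol for one subcarrier from hypothetical SNR thresholds (dB)."""
          if snr_db >= 28:
              return 8                 # 256-QAM
          if snr_db >= 22:
              return 6                 # 64-QAM
          if snr_db >= 16:
              return 4                 # 16-QAM
          return 0                     # subcarrier left unloaded

      rng = np.random.default_rng(1)
      snrs = rng.uniform(14, 32, size=128)            # per-subcarrier SNR estimates
      bits_per_sym = np.array([load_bits(s) for s in snrs])
      symbol_rate = 10e6                               # OFDM symbols per second (assumed)
      raw_rate = bits_per_sym.sum() * symbol_rate
      bandwidth = snrs.size * symbol_rate              # ~ one subcarrier spacing each (assumed)
      print(f"raw rate ~ {raw_rate / 1e9:.2f} Gbps, efficiency ~ {raw_rate / bandwidth:.2f} bit/s/Hz")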

  13. High range free space optic transmission using new dual diffuser modulation technique

    NASA Astrophysics Data System (ADS)

    Rahman, A. K.; Julai, N.; Jusoh, M.; Rashidi, C. B. M.; Aljunid, S. A.; Anuar, M. S.; Talib, M. F.; Zamhari, Nurdiani; Sahari, S. k.; Tamrin, K. F.; Jong, Rudiyanto P.; Zaidel, D. N. A.; Mohtadzar, N. A. A.; Sharip, M. R. M.; Samat, Y. S.

    2017-11-01

    Free-space optical communication (FSOC) is vulnerable to atmospheric fluctuations. This paper analyzes a new dual diffuser modulation (DDM) technique for mitigating the atmospheric turbulence effect. Under atmospheric turbulence, the laser beam is prone to (a) beam wander, (b) beam spreading and (c) scintillation. Scintillation degrades the FSOC link the most: it distorts the wavefront, causing signal fluctuations that can drive the receiver into saturation or signal loss. The DDM approach improves the detection of bit `1' and bit `0' and increases the received power to combat the turbulence effect. The performance analysis focuses on signal-to-noise ratio (SNR) and bit error rate (BER); the numerical results show that the DDM technique improves the achievable range by approximately 40% under weak turbulence and 80% under strong turbulence.

  14. Physical layer one-time-pad data encryption through synchronized semiconductor laser networks

    NASA Astrophysics Data System (ADS)

    Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris

    2016-02-01

    Semiconductor lasers (SLs) have been proven to be a key device in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals under conditions with desirable statistics establishes them as a low-cost solution for needs ranging from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series into digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that seed true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber-optic network, random bit streams that are generated in real time and synchronized with the rest of the nodes makes it possible to implement a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error correction methods are used to reduce the errors in the TRBS and the final error rate at the data decoding level. An appropriate choice of the sampling methodology and properties, as well as of the physical properties of the chaotic seed signal through which the network locks into synchronization, allows error-free performance.
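
    The encryption step itself reduces to an exclusive-OR of the data with the synchronized random bit stream; the hedged sketch below simulates the shared key with a common PRNG seed, leaving key generation from synchronized chaotic lasers and the forward-error correction of the key stream outside its scope.

      # Hedged sketch of the one-time-pad step: data XORed with a synchronized random
      # bit stream. The shared key is simulated with a common PRNG seed; chaotic-laser
      # key generation and forward-error correction are outside this sketch.
      import numpy as np

      def xor_stream(data_bits, key_bits):
          assert len(key_bits) >= len(data_bits), "pad must be at least as long as the data"
          return np.bitwise_xor(data_bits, key_bits[: len(data_bits)])

      key_alice = np.random.default_rng(42).integers(0, 2, size=1024, dtype=np.uint8)
      key_bob = np.random.default_rng(42).integers(0, 2, size=1024, dtype=np.uint8)   # synchronized copy

      data = np.random.default_rng(7).integers(0, 2, size=1000, dtype=np.uint8)
      cipher = xor_stream(data, key_alice)          # encrypt at the transmitting node
      recovered = xor_stream(cipher, key_bob)       # decrypt at the receiving node
      print("error-free decode:", np.array_equal(recovered, data))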

  15. Suppressing flashes of items surrounding targets during calibration of a P300-based brain-computer interface improves performance

    NASA Astrophysics Data System (ADS)

    Frye, G. E.; Hauser, C. K.; Townsend, G.; Sellers, E. W.

    2011-04-01

    Since the introduction of the P300 brain-computer interface (BCI) speller by Farwell and Donchin in 1988, the speed and accuracy of the system have been significantly improved. Larger electrode montages and various signal processing techniques are responsible for most of the improvement in performance. New presentation paradigms have also led to improvements in bit rate and accuracy (e.g. Townsend et al (2010 Clin. Neurophysiol. 121 1109-20)). In particular, the checkerboard paradigm for online P300 BCI-based spelling performs well, has begun to clarify what makes a presentation paradigm successful, and is a good platform for further experimentation. The current paper further examines the checkerboard paradigm by suppressing flashes of items that surround the target during calibration (the suppression condition). In the online feedback mode the standard checkerboard paradigm is used with a stepwise linear discriminant classifier derived from the suppression condition and one classifier derived from the standard checkerboard condition, counter-balanced. The results demonstrate that using suppression during calibration produces significantly more character selections per minute (6.46, with time between selections included) than the standard checkerboard condition (5.55), and significantly fewer target flashes are needed per selection in the SUP condition (5.28) than in the RCP condition (6.17). Moreover, accuracy in the SUP and RCP conditions remained equivalent (~90%). Mean theoretical bit rate was 53.62 bits/min in the suppression condition and 46.36 bits/min in the standard checkerboard condition (ns). Waveform morphology also showed significant differences in amplitude and latency.
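
    For reference, the "theoretical bit rate" reported in P300 spellers is commonly computed with the Wolpaw information-transfer formula; the hedged sketch below evaluates that standard formula with illustrative values and is not claimed to reproduce the exact figures or formula variant used in this study.

      # Hedged sketch of the Wolpaw bit-rate formula often used for "theoretical bit rate":
      #   B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))   [bits per selection]
      # The symbol count, accuracy and selection rate are illustrative values.
      from math import log2

      def wolpaw_bits_per_selection(n_symbols, accuracy):
          b = log2(n_symbols) + accuracy * log2(accuracy)
          if accuracy < 1.0:
              b += (1 - accuracy) * log2((1 - accuracy) / (n_symbols - 1))
          return b

      n, p, selections_per_min = 36, 0.90, 6.0
      bps = wolpaw_bits_per_selection(n, p)
      print(f"{bps:.2f} bits/selection, {bps * selections_per_min:.1f} bits/min")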

  16. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  17. Computer Series, 65. Bits and Pieces, 26.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1985-01-01

    Describes: (1) a microcomputer-based system for filing test questions and assembling examinations; (2) microcomputer use in practical and simulated experiments on gamma-ray scattering by outer-shell electrons; (3) an interactive, screen-oriented, general linear regression program; and (4) graphics drill and game programs for benzene synthesis.…

  18. "Slow Down, You Move Too Fast:" Literature Circles as Reflective Practice

    ERIC Educational Resources Information Center

    Sanacore, Joseph

    2013-01-01

    Becoming an effective literacy learner requires a bit of slowing down and appreciating the reflective nature of reading and writing. Literature circles support this instructional direction because they provide opportunities for immersing students in discussions that encourage their personal responses. When students feel their personal responses…

  19. Learning may need only a few bits of synaptic precision

    NASA Astrophysics Data System (ADS)

    Baldassi, Carlo; Gerace, Federica; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo

    2016-05-01

    Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large-deviations analysis which unveiled the existence of peculiarly dense regions in the space of synaptic states that account for the possibility of learning efficiently in networks with binary synapses. We extend the analysis to synapses with multiple states and generally more plausible biological features. The results clearly indicate that the overall qualitative picture is unchanged with respect to the binary case and is very robust to variation of the details of the model. We also provide quantitative results which suggest that the advantages of increasing the synaptic precision (i.e., the number of internal synaptic states) rapidly vanish after the first few bits, and therefore that, for practical applications, only a few bits may be needed for near-optimal performance, consistent with recent biological findings. Finally, we demonstrate how the theoretical analysis can be exploited to design efficient algorithmic search strategies.
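
    The qualitative claim that a few bits suffice can be illustrated with a hedged toy experiment: train a perceptron with continuous weights on synthetic linearly separable data, quantize the weights to b bits, and compare accuracies. This is only a toy demonstration of the trend, not the large-deviations analysis of the paper.

      # Hedged toy illustration of "a few bits of synaptic precision suffice": train a
      # perceptron with continuous weights, quantize them to b bits, compare accuracies.
      # Data, training loop and quantizer are illustrative, unrelated to the paper's analysis.
      import numpy as np

      rng = np.random.default_rng(0)
      n, d = 2000, 100
      teacher = rng.standard_normal(d)
      X = rng.standard_normal((n, d))
      y = np.sign(X @ teacher)

      w = np.zeros(d)                                  # classic perceptron training
      for _ in range(20):
          for xi, yi in zip(X, y):
              if yi * (xi @ w) <= 0:
                  w += yi * xi

      def quantize(w, bits):
          """Symmetric uniform quantizer with 2**(bits-1) - 1 positive levels (bits >= 2)."""
          steps = 2 ** (bits - 1) - 1
          scale = np.abs(w).max()
          return np.round(w / scale * steps) / steps * scale

      print(f"sign-only (1 bit): accuracy {np.mean(np.sign(X @ np.sign(w)) == y):.3f}")
      for bits in (2, 3, 4, 8):
          acc = np.mean(np.sign(X @ quantize(w, bits)) == y)
          print(f"{bits} bits: accuracy {acc:.3f}")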

  20. Robust relativistic bit commitment

    NASA Astrophysics Data System (ADS)

    Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony

    2016-12-01

    Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.
