Acceptable bit-rates for human face identification from CCTV imagery
NASA Astrophysics Data System (ADS)
Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker
2013-01-01
The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal `average' bit-rates.
Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Gavito, D.; Azar, J.J.
1994-09-01
During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROPs) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.
Modulation and synchronization technique for MF-TDMA system
NASA Technical Reports Server (NTRS)
Faris, Faris; Inukai, Thomas; Sayegh, Soheil
1994-01-01
This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.
Next generation PET data acquisition architectures
NASA Astrophysics Data System (ADS)
Jones, W. F.; Reed, J. H.; Everman, J. L.; Young, J. W.; Seese, R. D.
1997-06-01
New architectures for higher performance data acquisition in PET are proposed. Improvements are demanded primarily by three areas of advancing PET state of the art. First, larger detector arrays such as the Hammersmith ECAT® EXACT HR++ exceed the addressing capacity of 32 bit coincidence event words. Second, better scintillators (LSO) make depth-of-interaction (DOI) and time-of-flight (TOF) operation more practical. Third, fully optimized single photon attenuation correction requires higher rates of data collection. New technologies which enable the proposed third generation Real Time Sorter (RTS III) include: (1) 80 Mbyte/sec Fibre Channel RAID disk systems, (2) PowerPC on both VMEbus and PCI Local bus, and (3) quadruple interleaved DRAM controller designs. Data acquisition flexibility is enhanced through a wider 64 bit coincidence event word. PET methodology support includes DOI (6 bits), TOF (6 bits), multiple energy windows (6 bits), 512×512 sinogram indexes (18 bits), and 256 crystal rings (16 bits). Throughput of 10 M events/sec is expected for list-mode data collection as well as both on-line and replay histogramming. Fully efficient list-mode storage for each PET application is provided by real-time bit packing of only the active event word bits. Real-time circuits provide DOI rebinning.
Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation
NASA Technical Reports Server (NTRS)
Swift, G.
2002-01-01
JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.
Technology Development and Field Trials of EGS Drilling Systems at Chocolate Mountain
Steven Knudsen
2012-01-01
Polycrystalline diamond compact (PDC) bits are routinely used in the oil and gas industry for drilling medium to hard rock but have not been adopted for geothermal drilling, largely due to past reliability issues and higher purchase costs. The Sandia Geothermal Research Department has recently completed a field demonstration of the applicability of advanced synthetic diamond drill bits for production geothermal drilling. Two commercially available PDC bits were tested in a geothermal drilling program in the Chocolate Mountains in Southern California. These bits drilled the granitic formations with significantly better Rate of Penetration (ROP) and bit life than the roller-cone bit with which they were compared. Drilling records and bit performance data, along with the associated drilling cost savings, are presented herein. The drilling trials have demonstrated that PDC bit technology has matured and is applicable to geothermal drilling. This will be especially beneficial for the development of Enhanced Geothermal Systems, whereby resources can be accessed anywhere within the continental US by drilling to deep, hot resources in hard, basement rock formations.
A new optical post-equalization based on self-imaging
NASA Astrophysics Data System (ADS)
Guizani, S.; Cheriti, A.; Razzak, M.; Boulslimani, Y.; Hamam, H.
2005-09-01
Driven by the world's growing need for communication bandwidth, progress is constantly being reported in building newer fibers that are capable of handling the rapid increase in traffic. However, building an optical fiber link is a major investment, one that is very expensive to replace. A major impairment that restricts the achievement of higher bit rates with standard single mode fiber is chromatic dispersion. This is particularly problematic for systems operating in the 1550 nm band, where the chromatic dispersion limit decreases rapidly in inverse proportion to the square of the bit rate. For the first time, to the best of our knowledge, this paper presents a new optical technique to post-compensate chromatic dispersion in fiber optically, using the temporal Talbot effect, at bit rates exceeding 40 Gbit/s. We propose a new optical post-equalization solution based on Talbot self-imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasner, Evan; Bearden, Sean; Žutić, Igor, E-mail: zigor@buffalo.edu
Digital operation of lasers with injected spin-polarized carriers provides an improved operation over their conventional counterparts with spin-unpolarized carriers. Such spin-lasers can attain much higher bit rates, crucial for optical communication systems. The overall quality of a digital signal in these two types of lasers is compared using eye diagrams and quantified by improved Q-factors and bit-error-rates in spin-lasers. Surprisingly, an optimal performance of spin-lasers requires finite, not infinite, spin-relaxation times, giving a guidance for the design of future spin-lasers.
Study on the Effect of Diamond Grain Size on Wear of Polycrystalline Diamond Compact Cutter
NASA Astrophysics Data System (ADS)
Abdul-Rani, A. M.; Che Sidid, Adib Akmal Bin; Adzis, Azri Hamim Ab
2018-03-01
Drilling is one of the most crucial operations in the oil and gas industry, as it confirms the presence of oil and gas underground. The Polycrystalline Diamond Compact (PDC) bit is a type of bit that is gaining popularity due to its high Rate of Penetration (ROP). However, a PDC bit can wear quickly, especially when drilling hard rock. The purpose of this study is to identify the relationship between diamond grain size and the wear rate of the PDC cutter using a simulation-based study with FEA software (ABAQUS). The wear rates of PDC cutters with different diamond grain sizes were calculated from simulated cutting of the cutters against granite. The results of this study show that the smaller the diamond grain size, the higher the wear resistance of the PDC cutter.
Hamming and Accumulator Codes Concatenated with MPSK or QAM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel
2009-01-01
In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra-prediction process of the H.264 video coding standard is used to code the first frame of a video (the intra frame) and obtains better coding efficiency than earlier video coding standards. Intra-frame coding reduces spatial pixel redundancy within the current frame, lowers computational complexity, and provides good rate-distortion performance. Intra frames are conventionally coded with rate-distortion optimization (RDO), which increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode-decision algorithms for intra-frame coding. Previous fast mode-decision intra-prediction algorithms for H.264, based on a variety of techniques, suffered from increased bit rate and degraded picture quality (PSNR) at different quantization parameters. Most of these approaches reduced computational complexity or saved encoding time, but at the cost of higher bit rate and lower picture quality. To avoid this increase in bit rate and loss of picture quality, a better approach is developed in this paper: a Gaussian pulse is applied to intra-frame coding with the diagonal-down-left intra-prediction mode to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, the Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 block of integer-transform coefficients by the Gaussian pulse scales the coefficients in a reversible manner; the frequency samples are modified in a known and controllable way without intermixing of coefficients, which prevents the picture from degrading badly at higher quantization parameters. The proposed scheme was implemented in MATLAB and the JM 18.6 reference software, and the PSNR, bit rate, and compression of intra frames of YUV video sequences at QCIF resolution were measured for different quantization parameters using the Gaussian pulse with the diagonal-down-left intra-prediction mode. The simulation results are tabulated and compared with the previous algorithm of Tian et al.; the proposed algorithm reduces the bit rate by 30.98% on average while maintaining consistent picture quality for QCIF sequences.
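A minimal NumPy sketch of the coefficient-scaling step described above, assuming a separable Gaussian weighting and a single quantization step size (the function names, sigma, and step size are illustrative and not taken from the paper):

    import numpy as np

    def gaussian_weight(n=4, sigma=1.5):
        # Separable Gaussian "pulse" over a 4x4 coefficient block (assumed shape).
        idx = np.arange(n)
        g = np.exp(-(idx ** 2) / (2.0 * sigma ** 2))
        return np.outer(g, g)          # all weights non-zero, so the scaling is reversible

    def encode_block(coeffs, q_step, w):
        # Scale the 4x4 integer-transform coefficients by the Gaussian pulse, then quantize.
        return np.round((coeffs * w) / q_step)

    def decode_block(levels, q_step, w):
        # Dequantize and undo the scaling element-wise; coefficients are never intermixed.
        return (levels * q_step) / w

    coeffs = np.array([[52, 10, -3, 1],
                       [ 8,  4, -1, 0],
                       [-2,  1,  0, 0],
                       [ 1,  0,  0, 0]], dtype=float)
    w = gaussian_weight()
    levels = encode_block(coeffs, q_step=8.0, w=w)
    recon = decode_block(levels, q_step=8.0, w=w)
    print(np.abs(recon - coeffs).max())   # residual error comes from quantization only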
Iterative decoding of SOVA and LDPC product code for bit-patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve areal densities of 1 Tbit/in² and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach, exchanging extrinsic information and log-likelihood ratio values between an iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in², respectively, at a bit error rate of 10⁻⁶.
A comparison of orthogonal transformations for digital speech processing.
NASA Technical Reports Server (NTRS)
Campanella, S. J.; Robinson, G. S.
1971-01-01
Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals. To rate their effectiveness in accomplishing this goal the quantizing error (or noise) resulting for each transformation method at various bit rates is computed and compared with that for conventional companded PCM processing. Based on this comparison, it is found that Karhunen-Loeve provides a reduction in bit rate of 13.5 kbits/s, Fourier 10 kbits/s, and Hadamard 7.5 kbits/s as compared with the bit rate required for companded PCM. These bit-rate reductions are shown to be somewhat independent of the transmission bit rate.
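A small NumPy/SciPy sketch of the comparison described above, quantizing the coefficients of a stand-in correlated frame after different orthogonal transforms and measuring the reconstruction error; the frame model, quantizer, and per-coefficient bit allocation are illustrative assumptions (a fair rate comparison and the Karhunen-Loeve transform, which needs the signal covariance, are not shown):

    import numpy as np
    from numpy.fft import fft, ifft
    from scipy.linalg import hadamard

    def quantize(x, bits):
        # Uniform quantizer over the array's own range (illustrative only).
        levels = 2 ** bits
        lo, hi = float(x.min()), float(x.max())
        step = (hi - lo) / levels or 1.0
        q = np.clip(np.round((x - lo) / step), 0, levels - 1)
        return q * step + lo + step / 2

    rng = np.random.default_rng(0)
    frame = np.cumsum(rng.normal(size=64))          # correlated stand-in for a speech frame
    H = hadamard(64) / np.sqrt(64)                  # orthonormal Hadamard matrix
    bits = 6

    err_pcm = np.mean((frame - quantize(frame, bits)) ** 2)

    c = fft(frame)                                   # Fourier: quantize real and imaginary parts
    rec_f = ifft(quantize(c.real, bits) + 1j * quantize(c.imag, bits)).real
    err_fourier = np.mean((frame - rec_f) ** 2)

    rec_h = H.T @ quantize(H @ frame, bits)          # Hadamard
    err_hadamard = np.mean((frame - rec_h) ** 2)

    print(err_pcm, err_fourier, err_hadamard)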
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for baseline profile in H.264/AVC standard to fit the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find the models of video bit-rate conversion. The transcoded video will conform to the target bit-rate by re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only keeps the coding quality but improves the efficiency of the video transcoding for low target bit-rates and makes the real-time transcoding more practical. Experimental results show that the proposed framework gets significantly better visual quality.
Shuttle bit rate synchronizer. [signal to noise ratios and error analysis
NASA Technical Reports Server (NTRS)
Huey, D. C.; Fultz, G. L.
1974-01-01
A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.
Implications of scaling on static RAM bit cell stability and reliability
NASA Astrophysics Data System (ADS)
Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael
1993-01-01
In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques which are presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability which can be accomplished during the early stages of product development.
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increases the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnis Judzis
2006-03-01
Operators continue to look for ways to improve hard rock drilling performance through emerging technologies. A consortium of Department of Energy, operator and industry participants put together an effort to test and optimize mud driven fluid hammers as one emerging technology that has shown promise to increase penetration rates in hard rock. The thrust of this program has been to test and record the performance of fluid hammers in full scale test conditions including hard formations at simulated depth, high density/high solids drilling muds, and realistic fluid power levels. This paper details the testing and results of testing two 7 3/4 inch diameter mud hammers with 8 1/2 inch hammer bits. A Novatek MHN5 and an SDS Digger FH185 mud hammer were tested with several bit types, with performance being compared to a conventional (IADC Code 537) tricone bit. These tools functionally operated in all of the simulated downhole environments. The performance was in the range of the baseline tricone or better at lower borehole pressures, but at higher borehole pressures the performance was in the lower range or below that of the baseline tricone bit. A new drilling mode was observed while operating the MHN5 mud hammer. This mode was noticed as the weight on bit (WOB) was in transition from low to high applied load. During this new "transition drilling mode", performance was substantially improved and in some cases outperformed the tricone bit. Improvements were noted for the SDS tool while drilling with a more aggressive bit design. Future work includes the optimization of these or the next generation tools for operating in higher density and higher borehole pressure conditions and improving bit design and technology based on the knowledge gained from this test program.
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to those of a 16 bit HEVC codec.
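A minimal NumPy sketch of the byte-splitting step described above (the subsequent 8-bit encoding of each plane and the choice of codec parameters are not shown):

    import numpy as np

    def split_bytes(img16):
        # Map a 16-bit image to two 8-bit planes: most- and least-significant bytes.
        msb = (img16 >> 8).astype(np.uint8)
        lsb = (img16 & 0xFF).astype(np.uint8)
        return msb, lsb

    def merge_bytes(msb, lsb):
        # Recombine the two 8-bit planes into a 16-bit image after decoding.
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

    img16 = np.random.default_rng(1).integers(0, 2 ** 16, size=(4, 4), dtype=np.uint16)
    msb, lsb = split_bytes(img16)        # each plane would be fed to its own 8-bit codec
    assert np.array_equal(merge_bytes(msb, lsb), img16)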
Security of quantum key distribution with multiphoton components
Yin, Hua-Lei; Fu, Yao; Mao, Yingqiu; Chen, Zeng-Bing
2016-01-01
Most qubit-based quantum key distribution (QKD) protocols extract the secure key merely from single-photon component of the attenuated lasers. However, with the Scarani-Acin-Ribordy-Gisin 2004 (SARG04) QKD protocol, the unconditionally secure key can be extracted from the two-photon component by modifying the classical post-processing procedure in the BB84 protocol. Employing the merits of SARG04 QKD protocol and six-state preparation, one can extract secure key from the components of single photon up to four photons. In this paper, we provide the exact relations between the secure key rate and the bit error rate in a six-state SARG04 protocol with single-photon, two-photon, three-photon, and four-photon sources. By restricting the mutual information between the phase error and bit error, we obtain a higher secure bit error rate threshold of the multiphoton components than previous works. Besides, we compare the performances of the six-state SARG04 with other prepare-and-measure QKD protocols using decoy states. PMID:27383014
APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study
NASA Astrophysics Data System (ADS)
Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak
2017-04-01
In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence it may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in the transmitted packet, in which two received copies are XORed to obtain the locations of the erroneous bits. Thereafter, the packet is corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC does not correct double bit errors if they occur in the same bit location of both erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of the combined APC-PC scheme to the Gilbert two-state channel model is studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
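A small Python sketch of the basic packet-combining step described above, assuming the receiver has a validity check such as a CRC (here replaced by a direct comparison purely for the demonstration):

    from itertools import product

    def packet_combine(copy_a, copy_b, check):
        # XOR the two erroneous copies to locate the bit positions where they differ,
        # then search bit-inversion patterns at those positions until the check passes.
        diff = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
        for pattern in product((0, 1), repeat=len(diff)):
            candidate = list(copy_a)
            for pos, flip in zip(diff, pattern):
                if flip:
                    candidate[pos] ^= 1
            if check(candidate):
                return candidate
        return None   # e.g. both copies corrupted in the same position, the PC limitation noted above

    original = [1, 0, 1, 1, 0, 0, 1, 0]
    copy_a   = [1, 1, 1, 1, 0, 0, 1, 0]    # bit 1 received in error
    copy_b   = [1, 0, 1, 1, 0, 1, 1, 0]    # bit 5 received in error
    print(packet_combine(copy_a, copy_b, check=lambda c: c == original) == original)   # True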
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
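A short Python sketch of the moment-based idea for coded links, assuming a normal approximation for the sampling distribution of the BER estimate (the exact interval construction in the article may differ):

    import numpy as np
    from scipy.stats import norm

    def ber_confidence_interval(errors_per_codeword, n_bits, conf=0.95):
        # First sample moment of the per-codeword error counts gives the BER estimate;
        # the second moment captures the within-codeword error correlation that an
        # independence assumption would ignore.
        x = np.asarray(errors_per_codeword, dtype=float)
        n_cw = x.size
        ber_hat = x.mean() / n_bits
        var_hat = x.var(ddof=1) / (n_cw * n_bits ** 2)   # variance of the BER estimate
        half = norm.ppf(0.5 + conf / 2) * np.sqrt(var_hat)
        return ber_hat, max(ber_hat - half, 0.0), ber_hat + half

    rng = np.random.default_rng(7)
    counts = rng.choice([0, 0, 0, 0, 12, 30], size=10_000)   # bursty: bit errors cluster per codeword
    print(ber_confidence_interval(counts, n_bits=4096))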
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been investigated that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on the cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm can show better performance at low bit rates and maintain the same performance at high bit rates.
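A rough Python/Pillow sketch of the underlying trade-off (direct JPEG coding versus downsampling before coding and interpolating after decoding); bicubic resampling stands in for the cubic-spline interpolation, the test image and quality settings are arbitrary, and the paper's ρ-domain mode selection is not implemented:

    import io
    import numpy as np
    from PIL import Image

    RESAMPLE = getattr(Image, "Resampling", Image).BICUBIC

    def jpeg_roundtrip(img, quality, scale=1.0):
        # Optionally downsample, JPEG-encode, decode, and upsample back to full size.
        w, h = img.size
        small = img if scale >= 1.0 else img.resize((int(w * scale), int(h * scale)), RESAMPLE)
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        rec = Image.open(buf).resize((w, h), RESAMPLE)
        mse = np.mean((np.asarray(img, dtype=float) - np.asarray(rec, dtype=float)) ** 2)
        return len(buf.getvalue()), mse

    img = Image.effect_noise((128, 128), 25)          # stand-in 8-bit grayscale test image
    for q in (10, 80):                                # low- versus high-bit-rate regime
        print(q, "direct:", jpeg_roundtrip(img, q), "downsampled:", jpeg_roundtrip(img, q, scale=0.5))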
NASA Astrophysics Data System (ADS)
Sana, Ajaz; Saddawi, Samir; Moghaddassi, Jalil; Hussain, Shahab; Zaidi, Syed R.
2010-01-01
In this paper we propose a novel passive optical network (PON) based Mobile Worldwide Interoperability for Microwave Access (WiMAX) access network architecture to provide high-capacity, high-performance multimedia services to mobile WiMAX users. Passive optical networks do not require powered equipment; hence they cost less and need less network management. WiMAX technology emerges as a viable candidate for the last-mile solution. In conventional WiMAX access networks, the base stations and Multiple Input Multiple Output (MIMO) antennas are connected by point-to-point lines. In theory, the maximum WiMAX bandwidth is 70 Mbit/s over 31 miles. In reality, WiMAX can only provide one or the other: when operating at maximum range the bit error rate increases, which forces the use of a lower bit rate, while lowering the range allows a device to operate at higher bit rates. Our focus in this paper is to increase both range and bit rate by utilizing distributed clusters of MIMO antennas connected to WiMAX base stations through PON-based topologies. A novel quality of service (QoS) algorithm is also proposed to provide admission control and scheduling for classified traffic. The proposed architecture presents a flexible and scalable system design accommodating different performance requirements and complexity.
Fly Photoreceptors Demonstrate Energy-Information Trade-Offs in Neural Coding
Niven, Jeremy E; Anderson, John C; Laughlin, Simon B
2007-01-01
Trade-offs between energy consumption and neuronal performance must shape the design and evolution of nervous systems, but we lack empirical data showing how neuronal energy costs vary according to performance. Using intracellular recordings from the intact retinas of four flies, Drosophila melanogaster, D. virilis, Calliphora vicina, and Sarcophaga carnaria, we measured the rates at which homologous R1–6 photoreceptors of these species transmit information from the same stimuli and estimated the energy they consumed. In all species, both information rate and energy consumption increase with light intensity. Energy consumption rises from a baseline, the energy required to maintain the dark resting potential. This substantial fixed cost, ∼20% of a photoreceptor's maximum consumption, causes the unit cost of information (ATP molecules hydrolysed per bit) to fall as information rate increases. The highest information rates, achieved at bright daylight levels, differed according to species, from ∼200 bits s⁻¹ in D. melanogaster to ∼1,000 bits s⁻¹ in S. carnaria. Comparing species, the fixed cost, the total cost of signalling, and the unit cost (cost per bit) all increase with a photoreceptor's highest information rate to make information more expensive in higher performance cells. This law of diminishing returns promotes the evolution of economical structures by severely penalising overcapacity. Similar relationships could influence the function and design of many neurons because they are subject to similar biophysical constraints on information throughput. PMID:17373859
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.
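The quoted spectral efficiency follows directly from the modulation order and the ensemble code rate; a one-line Python check:

    from math import log2

    def spectral_efficiency(mod_order, code_rate):
        # Information bits per channel symbol, approximately bps/Hz for Nyquist signalling.
        return log2(mod_order) * code_rate

    print(spectral_efficiency(8, 8 / 9))   # 8PSK at ensemble rate 8/9 -> 2.666... bps/Hz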
Two-dimensional signal processing using a morphological filter for holographic memory
NASA Astrophysics Data System (ADS)
Kondo, Yo; Shigaki, Yusuke; Yamamoto, Manabu
2012-03-01
Today, along with the wider use of high-speed information networks and multimedia, it is increasingly necessary to have higher-density and higher-transfer-rate storage devices. Therefore, research and development into holographic memories with three-dimensional storage areas is being carried out to realize next-generation large-capacity memories. However, in holographic memories, interference between bits, which affects the detection characteristics, occurs as a result of aberrations such as the deviation of a wavefront in the optical system. In this study, we pay particular attention to the nonlinear factors that cause bit errors, and filters based on a Volterra equalizer and on morphological operations are investigated as a means of signal processing.
Blue Laser Diode Enables Underwater Communication at 12.4 Gbps
Wu, Tsai-Chen; Chi, Yu-Chieh; Wang, Huai-Yung; Tsai, Cheng-Ting; Lin, Gong-Ru
2017-01-01
To enable high-speed underwater wireless optical communication (UWOC) in tap-water and seawater environments over long distances, a 450-nm blue GaN laser diode (LD) directly modulated by pre-leveled 16-quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) data was employed to implement its maximal transmission capacity of up to 10 Gbps. The proposed UWOC in tap water provided a maximal allowable communication bit rate increase from 5.2 to 12.4 Gbps as the underwater transmission distance was reduced from 10.2 to 1.7 m, exhibiting a bit rate/distance decaying slope of −0.847 Gbps/m. When conducting the same type of UWOC in seawater, light scattering induced by impurities attenuated the blue laser power, thereby degrading the transmission with a slightly higher decay ratio of 0.941 Gbps/m. The blue LD based UWOC enables a 16-QAM OFDM bit rate of up to 7.2 Gbps for transmission in seawater over more than 6.8 m. PMID:28094309
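The decay slope quoted for the tap-water case can be reproduced from the two reported operating points; a quick check in Python:

    # Tap-water operating points reported above: 12.4 Gbps at 1.7 m and 5.2 Gbps at 10.2 m.
    slope = (5.2 - 12.4) / (10.2 - 1.7)
    print(round(slope, 3))    # -0.847 Gbps/m, matching the reported bit rate/distance decay slope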
NASA Astrophysics Data System (ADS)
Sugihara, Kenkoh
2009-10-01
A low-cost ADC (analogue-to-digital converter) with embedded shaping for the undergraduate physics laboratory has been developed using a home-made circuit and a PC sound card. Even though an ADC is an essential part of an experimental set-up, commercially available ones are very expensive and are scarce in undergraduate laboratory experiments. The system developed in the present work is designed for a gamma-ray spectroscopy laboratory with NaI(Tl) counters, but is not limited to it. For this purpose, the system performance is set to a sampling rate of 1 kHz with 10-bit resolution, using a typical PC sound card with a 41-kHz or higher sampling rate and a 16-bit resolution ADC together with an added shaping circuit. Details of the system and the status of its development will be presented.
Method and apparatus for high speed data acquisition and processing
Ferron, J.R.
1997-02-11
A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
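A NumPy sketch of the mask-bit idea described in the patent abstract, using the common float-bias trick as an assumed choice of mask bits (prestoring the bit pattern of 2^23 so that OR-ing in a 14-bit sample yields a valid float32 equal to 2^23 plus the sample); the actual mask values used by the apparatus may differ:

    import numpy as np

    MASK = np.uint32(0x4B000000)        # float32 bit pattern of 2**23 (assumed mask bits)

    def pack_14bit(samples14):
        # Concatenating the prestored mask bits with each 14-bit value makes every
        # 32-bit memory word a valid float32 without an explicit int-to-float conversion.
        words = MASK | np.asarray(samples14, dtype=np.uint32)
        return words.view(np.float32)

    def unpack(floats):
        return floats - np.float32(2 ** 23)    # recover the original 14-bit sample values

    raw = np.array([0, 1, 1234, 16383], dtype=np.uint32)   # example 14-bit ADC codes
    print(unpack(pack_14bit(raw)))                         # [0. 1. 1234. 16383.]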
Method and apparatus for high speed data acquisition and processing
Ferron, John R.
1997-01-01
A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion- optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
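A toy Python sketch of two quantities discussed above: the tier-1 coding-pass count and a greedy rate-control step that selects pass contributions by distortion reduction per byte; the real JPEG 2000 procedure additionally enforces per-block pass ordering, convex-hull slopes, and quality layers, and the numbers below are made up:

    def num_coding_passes(num_bitplanes):
        # Three passes per bit plane except the MSB plane, which uses only the clean-up pass.
        return 3 * num_bitplanes - 2

    def greedy_rate_control(pass_stats, byte_budget):
        # Include coding-pass contributions with the steepest distortion reduction per byte
        # until the target byte budget is reached.
        order = sorted(pass_stats, key=lambda p: p["dD"] / p["dR"], reverse=True)
        chosen, spent = [], 0
        for p in order:
            if spent + p["dR"] <= byte_budget:
                chosen.append(p["id"])
                spent += p["dR"]
        return chosen, spent

    print(num_coding_passes(10))        # 28 coding passes for a block with 10 bit planes
    passes = [{"id": "blk0/pass0", "dD": 900.0, "dR": 40},
              {"id": "blk0/pass1", "dD": 300.0, "dR": 35},
              {"id": "blk1/pass0", "dD": 500.0, "dR": 20}]
    print(greedy_rate_control(passes, byte_budget=60))   # -> (['blk1/pass0', 'blk0/pass0'], 60)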
Binary CMOS image sensor with a gate/body-tied MOSFET-type photodetector for high-speed operation
NASA Astrophysics Data System (ADS)
Choi, Byoung-Soo; Jo, Sung-Hyun; Bae, Myunghan; Kim, Sang-Hwan; Shin, Jang-Kyoo
2016-05-01
In this paper, a binary complementary metal oxide semiconductor (CMOS) image sensor with a gate/body-tied (GBT) metal oxide semiconductor field effect transistor (MOSFET)-type photodetector is presented. The sensitivity of the GBT MOSFET-type photodetector, which was fabricated using the standard CMOS 0.35-μm process, is higher than the sensitivity of the p-n junction photodiode, because the output signal of the photodetector is amplified by the MOSFET. A binary image sensor becomes more efficient when using this photodetector. Lower power consumption and higher operating speed are possible, compared to conventional image sensors using multi-bit analog to digital converters (ADCs). The frame rate of the proposed image sensor is over 2000 frames per second, which is higher than those of conventional CMOS image sensors. The output signal of an active pixel sensor is applied to a comparator and compared with a reference level. The 1-bit output data of the binary process is determined by this level. To obtain a video signal, the 1-bit output data is stored in the memory and read out by horizontal scanning. The proposed chip is composed of a GBT pixel array (144 × 100), a binary-process circuit, a vertical scanner, a horizontal scanner, and a readout circuit. The operation mode can be selected between binary mode and multi-bit mode.
Approximation of Bit Error Rates in Digital Communications
2007-06-01
This report (DSTO-TN-0761) investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase-shift keying.
2008-12-01
The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in...of 1800 bits per second. With the use of FEC coding, the channel data rate is 2250 bits per second; however, the information data rate is still the...Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps
NASA Astrophysics Data System (ADS)
Kota, Sriharsha; Patel, Jigesh; Ghillino, Enrico; Richards, Dwight
2011-01-01
In this paper, we demonstrate a computer model for simulating a dual-rate burst mode receiver that can readily distinguish bit rates of 1.25Gbit/s and 10.3Gbit/s and demodulate the data bursts with large power variations of above 5dB. To our knowledge, this is the first such model to demodulate data bursts of different bit rates without using any external control signal such as a reset signal or a bit rate select signal. The model is based on a burst-mode bit rate discrimination circuit (B-BDC) and makes use of a unique preamble sequence attached to each burst to separate out the data bursts with different bit rates. Here, the model is implemented using a combination of the optical system simulation suite OptSimTM, and the electrical simulation engine SPICE. The reaction time of the burst mode receiver model is about 7ns, which corresponds to less than 8 preamble bits for the bit rate of 1.25Gbps. We believe, having an accurate and robust simulation model for high speed burst mode transmission in GE-PON systems, is indispensable and tremendously speeds up the ongoing research in the area, saving a lot of time and effort involved in carrying out the laboratory experiments, while providing flexibility in the optimization of various system parameters for better performance of the receiver as a whole. Furthermore, we also study the effects of burst specifications like the length of preamble sequence, and other receiver design parameters on the reaction time of the receiver.
Bit-rate transparent DPSK demodulation scheme based on injection locking FP-LD
NASA Astrophysics Data System (ADS)
Feng, Hanlin; Xiao, Shilin; Yi, Lilin; Zhou, Zhao; Yang, Pei; Shi, Jie
2013-05-01
We propose and demonstrate a bit-rate transparent differential phase shift-keying (DPSK) demodulation scheme based on injection locking multiple-quantum-well (MQW) strained InGaAsP FP-LD. By utilizing frequency deviation generated by phase modulation and unstable injection locking state with Fabry-Perot laser diode (FP-LD), DPSK to polarization shift-keying (PolSK) and PolSK to intensity modulation (IM) format conversions are realized. We analyze bit error rate (BER) performance of this demodulation scheme. Experimental results show that different longitude modes, bit rates and seeding power have influences on demodulation performance. We achieve error free DPSK signal demodulation under various bit rates of 10 Gbit/s, 5 Gbit/s, 2.5 Gbit/s and 1.25 Gbit/s with the same demodulation setting.
Efficient and robust quantum random number generation by photon number detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.
2015-08-17
We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate that we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99% corresponding to 1.97 bits per detected photon number yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
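A short NumPy sketch of the entropy budget behind the quoted extraction efficiency, assuming Poissonian illumination and a detector that resolves 0 to 4 photons (the mean photon number below is illustrative, not the paper's operating point):

    import numpy as np
    from math import exp, factorial

    def detection_probabilities(mean_photons=2.0, max_resolved=4):
        # Poisson photon-number statistics with all counts >= max_resolved piled into the top bin.
        p = np.array([exp(-mean_photons) * mean_photons ** k / factorial(k)
                      for k in range(max_resolved)])
        return np.append(p, 1.0 - p.sum())

    def extractable_bits_per_detection(p):
        # Shannon entropy of the detected photon number: the upper bound on random bits
        # per detection that an ideal extractor could reach.
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    print(extractable_bits_per_detection(detection_probabilities()))
    # ~2.3 bits for this illustrative flux; the paper extracts 1.97 bits per detected
    # photon number at 99% efficiency under its own optimized illumination.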
Sleep stage classification with low complexity and low bit rate.
Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti
2009-01-01
Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, using multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also at least 12 bit analog to digital conversion is recommended (with 16 bit storage) resulting in total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but in the case of online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed earlier single channel facial EMG/EOG/EEG-based automatic sleep stage classification. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, reduced number of quantization steps and reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (having a low pass filter at 90 Hz), and the dynamic range to 244 microV, with an 8 bit resolution resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enables the use of smaller devices for sleep stage classification in home environments.
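A minimal Python sketch of two quantities described above: the reduced storage bit rate and a threshold-based epoch classification (the thresholds and their ordering are invented for illustration; the study derives its own rules from the 18-45 Hz beta power and 0.5-6 Hz amplitude features):

    def storage_bit_rate(fs_hz=50, bits_per_sample=8):
        # Single-channel rate after the reductions described above: 50 Hz x 8 bit = 0.4 kbit/s.
        return fs_hz * bits_per_sample / 1000.0

    def classify_epoch(beta_power, slow_amplitude,
                       beta_wake=5.0, amp_sws=75.0, beta_light=2.0):
        # Toy decision tree over one 30 s epoch (thresholds are hypothetical).
        if beta_power > beta_wake:
            return "Wakefulness"
        if slow_amplitude > amp_sws:
            return "SWS"
        if beta_power > beta_light:
            return "S1/S2"
        return "SREM"

    print(storage_bit_rate())                                   # 0.4 kbit/s
    print(classify_epoch(beta_power=1.2, slow_amplitude=30.0))  # SREM under these toy thresholds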
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
Reducing temperature elevation of robotic bone drilling.
Feldmann, Arne; Wandel, Jasmin; Zysset, Philippe
2016-12-01
This research work aims at reducing temperature elevation of bone drilling. An extensive experimental study was conducted which focused on the investigation of three main measures to reduce the temperature elevation as used in industry: irrigation, interval drilling and drill bit designs. Different external irrigation rates (0 ml/min, 15 ml/min, 30 ml/min), continuously drilled interval lengths (2 mm, 1 mm, 0.5 mm) as well as two drill bit designs were tested. A custom single flute drill bit was designed with a higher rake angle and smaller chisel edge to generate less heat compared to a standard surgical drill bit. A new experimental setup was developed to measure drilling forces and torques as well as the 2D temperature field at any depth using a high resolution thermal camera. The results show that external irrigation is a main factor to reduce temperature elevation due not primarily to its effect on cooling but rather due to the prevention of drill bit clogging. During drilling, the build up of bone material in the drill bit flutes result in excessive temperatures due to an increase in thrust forces and torques. Drilling in intervals allows the removal of bone chips and cleaning of flutes when the drill bit is extracted as well as cooling of the bone in-between intervals which limits the accumulation of heat. However, reducing the length of the drilled interval was found only to be beneficial for temperature reduction using the newly designed drill bit due to the improved cutting geometry. To evaluate possible tissue damage caused by the generated heat increase, cumulative equivalent minutes (CEM43) were calculated and it was found that the combination of small interval length (0.5 mm), high irrigation rate (30 ml/min) and the newly designed drill bit was the only parameter combination which allowed drilling below the time-thermal threshold for tissue damage. In conclusion, an optimized drilling method has been found which might also enable drilling in more delicate procedures such as that performed during minimally invasive robotic cochlear implantation. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
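A small Python sketch of the CEM43 thermal-dose metric mentioned above, using the standard Sapareto-Dewey formulation (the drilling study's exact thresholds and temperature traces are not reproduced here):

    def cem43(temperatures_c, dt_minutes):
        # Cumulative equivalent minutes at 43 degC: sum of dt * R**(43 - T),
        # with R = 0.5 for T >= 43 degC and R = 0.25 below.
        total = 0.0
        for t in temperatures_c:
            r = 0.5 if t >= 43.0 else 0.25
            total += dt_minutes * r ** (43.0 - t)
        return total

    # Example: a trace sampled once per second during a 30 s drilling interval.
    trace = [37.0] * 10 + [45.0] * 10 + [41.0] * 10
    print(cem43(trace, dt_minutes=1 / 60))   # compare against a tissue-damage threshold in minutes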
Room temperature single-photon detectors for high bit rate quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comandar, L. C.; Patel, K. A.; Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA
We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.
Field-Deployable Video Cloud Solution
2016-03-01
File size is dependent on both bit rate and content length. Bit rate is a value measured in bits per second (bps) ...
Modular high speed counter employing edge-triggered code
Vanstraelen, Guy F.
1993-06-29
A high speed modular counter (100) utilizing a novel counting method in which the first bit changes with the frequency of the driving clock, and changes in the higher order bits are initiated one clock pulse after a "0" to "1" transition of the next lower order bit. This allows all carries to be known one clock period in advance of a bit change. The present counter is modular and utilizes two types of standard counter cells. A first counter cell determines the zero bit. The second counter cell determines any other higher order bit. Additional second counter cells are added to the counter to accommodate any count length without affecting speed.
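A small behavioural model of the counting rule described above, written as an illustrative simulation rather than the patented circuit, confirms that the code visits all 2^n states once per period and that each higher-order toggle is scheduled one clock in advance (the "carry known early" property).

```python
# Behavioural model of the edge-triggered counting code: bit 0 toggles every clock,
# bit k (k >= 1) toggles one clock pulse after bit k-1 makes a 0 -> 1 transition.
def simulate(n_bits, n_clocks):
    bits = [0] * n_bits
    pending = [False] * n_bits       # toggle scheduled for the *next* clock (carry known early)
    states = []
    for _ in range(n_clocks):
        states.append(tuple(bits))
        prev = bits[:]
        bits[0] ^= 1                 # bit 0 follows the clock
        for k in range(1, n_bits):
            if pending[k]:
                bits[k] ^= 1
                pending[k] = False
        for k in range(1, n_bits):   # detect 0 -> 1 transitions and schedule the next-higher bit
            if prev[k - 1] == 0 and bits[k - 1] == 1:
                pending[k] = True
    return states

n = 4
states = simulate(n, 2 ** n)
assert len(set(states)) == 2 ** n    # every state visited exactly once per period
print(states[:6], "...")
```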
Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun
2015-09-01
A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
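A schematic version of the post-processing chain described (periodic sampling, self-differencing of the digitized samples, then keeping a few least-significant bits per sample) is sketched below; the input samples are synthetic stand-ins for the chaotic intensity, and the delay value is arbitrary.

```python
# Sketch of the post-processing described: sample the intensity, self-difference
# the digitized samples, and keep a few least-significant bits per sample.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sampled chaotic intensity (8-bit ADC codes).
samples = rng.integers(0, 256, size=100_000, dtype=np.uint8)

delay = 7                                  # self-differencing delay, in samples (arbitrary here)
diff = (samples.astype(np.int16) - np.roll(samples, delay).astype(np.int16)) & 0xFF

n_lsb = 5                                  # keep 5 least-significant bits per sample, as in the paper
bits = ((diff[:, None] >> np.arange(n_lsb)) & 1).astype(np.uint8).ravel()

print(f"{bits.size} bits from {samples.size} samples "
      f"({n_lsb} LSBs/sample); ones fraction = {bits.mean():.4f}")
```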
NASA Astrophysics Data System (ADS)
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error count or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction with a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without a cost penalty. Compared with the conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated from the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
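The codeword-transition policy can be pictured as a simple threshold lookup on the monitored raw BER, as in the sketch below; the codeword ladder follows the sizes quoted in the abstract, but the BER thresholds are hypothetical placeholders rather than the paper's measured values.

```python
# Illustrative policy for the dynamic codeword transition: pick the smallest ECC
# codeword whose correction strength still covers the monitored raw BER.
# The BER thresholds are hypothetical placeholders, not the paper's measured values.
CODEWORD_LADDER = [
    (512,        1e-4),   # (codeword size in bytes, max raw BER it is assumed to handle)
    (1024,       3e-4),
    (2048,       6e-4),
    (4096,       1e-3),
    (32 * 1024,  2e-3),
]

def select_codeword(raw_ber):
    for size, max_ber in CODEWORD_LADDER:
        if raw_ber <= max_ber:
            return size
    return CODEWORD_LADDER[-1][0]          # worst case: strongest (longest) codeword

for ber in (5e-5, 4e-4, 1.5e-3):
    print(f"raw BER {ber:.1e} -> {select_codeword(ber)} byte codeword")
```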
Laboratory Equipment for Investigation of Coring Under Mars-like Conditions
NASA Astrophysics Data System (ADS)
Zacny, K.; Cooper, G.
2004-12-01
To develop a suitable drill bit and set of operating conditions for Mars sample coring applications, it is essential to make tests under conditions that match those of the mission. The goal of the laboratory test program was to determine the drilling performance of diamond-impregnated bits under simulated Martian conditions, particularly those of low pressure and low temperature in a carbon dioxide atmosphere. For this purpose, drilling tests were performed in a vacuum chamber kept at a pressure of 5 torr. Prior to drilling, a rock, soil, or clay sample was cooled down to minus 80 degrees Celsius (Zacny et al., 2004). Thus, all Martian conditions, except the low gravity, were simulated in the controlled environment. Input drilling parameters of interest included the weight on bit and the rotational speed. These two independent variables were controlled from a PC station. The dependent variables included the bit reaction torque, the depth of the bit inside the drilled hole, and the temperatures at various positions inside the drilled sample, in the center of the core as it was being cut, and at the bit itself. These were acquired every second by a data acquisition system. Additional information such as the rate of penetration and the drill power were calculated after the test was completed. The weights of the rock and the bit prior to and after the test were measured to aid in evaluating the bit performance. In addition, the water saturation of the rock was measured prior to the test. Finally, the bit was viewed under the Scanning Electron Microscope and the Stereo Optical Microscope. The extent of the bit wear and its salient features were captured photographically. The results revealed that drilling or coring under Martian conditions in a water-saturated rock is different in many respects from drilling on Earth. This is mainly because the Martian atmospheric pressure is in the vicinity of the pressure at the triple point of water. Thus ice, heated by contact with the rotating bit, sublimed and released water vapor. The volumetric expansion of ice turning into a vapor was over 150,000 times. This continuously generated volume of gas effectively cleared the freeze-dried rock cuttings from the bottom of the hole. In addition, the subliming ice provided a powerful cooling effect that kept the bit cold and preserved the core in its original state. Keeping the rock core below freezing also drastically reduced the chances of cross contamination. To keep the bit cool in near-vacuum conditions where convective cooling is poor, some intermittent stops would have to be made. Under virtually the same drilling conditions, coring under Martian low temperature and pressure conditions consumed only half the power while doubling the rate of penetration as compared to drilling under Earth atmospheric conditions. However, the rate of bit wear was much higher under Martian conditions (Zacny and Cooper, 2004). References: Zacny, K. A., M. C. Quayle, and G. A. Cooper (2004), Laboratory drilling under Martian conditions yields unexpected results, J. Geophys. Res., 109, E07S16, doi:10.1029/2003JE002203. Zacny, K. A., and G. A. Cooper (2004), Investigation of diamond-impregnated drill bit wear while drilling under Earth and Mars conditions, J. Geophys. Res., 109, E07S10, doi:10.1029/2003JE002204. Acknowledgments: The research was supported by the NASA Astrobiology, Science and Technology Instrument Development (ASTID) program.
Meteor burst communications for LPI applications
NASA Astrophysics Data System (ADS)
Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.
A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered. A simple RAKE receiver structure is considered. Based on the bit energy distribution, this approach gives accurate results with a low computational load compared to other computation methods existing in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations which point out the accuracy of our approach.
Yang, Heewon; Kim, Hyoji; Shin, Junho; Kim, Chur; Choi, Sun Young; Kim, Guang-Hoon; Rotermund, Fabian; Kim, Jungwon
2014-01-01
We show that a 1.13 GHz repetition rate optical pulse train with 0.70 fs high-frequency timing jitter (integration bandwidth of 17.5 kHz-10 MHz, where the measurement instrument-limited noise floor contributes 0.41 fs in 10 MHz bandwidth) can be directly generated from a free-running, single-mode diode-pumped Yb:KYW laser mode-locked by single-wall carbon nanotube-coated mirrors. To our knowledge, this is the lowest-timing-jitter optical pulse train with gigahertz repetition rate ever measured. If this pulse train is used for direct sampling of 565 MHz signals (the Nyquist frequency of the pulse train), the jitter level demonstrated would correspond to a projected effective number of bits (ENOB) of 17.8, which is much higher than the thermal noise limit of a 50 Ω load resistance (~14 bits).
Purpose-built PDC bit successfully drills 7-in liner equipment and formation: An integrated solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puennel, J.G.A.; Huppertz, A.; Huizing, J.
1996-12-31
Historically, drilling out the 7-in. liner equipment has been a time-consuming operation with a limited success ratio. The success of the operation is highly dependent on the type of drill bit employed. Tungsten carbide mills and mill tooth rock bits required from 7.5 to 11.5 hours, respectively, to drill the pack-off bushings, landing collar, shoe track, and shoe. Rates of penetration dropped dramatically when drilling the float equipment. While conventional PDC bits have drilled the liner equipment successfully (averaging 9.7 hours), severe bit damage invariably prevented them from continuing to drill the formation at cost-effective penetration rates. This paper describes the integrated development and application of an IADC M433 Class PDC bit, which was designed specifically to drill out the 7-in. liner equipment and continue drilling the formation at satisfactory penetration rates. The development was the result of a joint investigation in which the operator and bit/liner manufacturers shared their expertise in solving a drilling problem. The heavy-set bit was developed following drill-off tests conducted to investigate the drillability of the 7-in. liner equipment. Key features of the new bit and its application onshore The Netherlands will be presented and analyzed.
Pires, Gabriel; Nunes, Urbano; Castelo-Branco, Miguel
2012-06-01
Non-invasive brain-computer interface (BCI) based on electroencephalography (EEG) offers a new communication channel for people suffering from severe motor disorders. This paper presents a novel P300-based speller called lateral single-character (LSC). The LSC performance is compared to that of the standard row-column (RC) speller. We developed LSC, a single-character paradigm comprising all letters of the alphabet following an event strategy that significantly reduces the time for symbol selection, and explores the intrinsic hemispheric asymmetries in visual perception to improve the performance of the BCI. RC and LSC paradigms were tested by 10 able-bodied participants, seven participants with amyotrophic lateral sclerosis (ALS), five participants with cerebral palsy (CP), one participant with Duchenne muscular dystrophy (DMD), and one participant with spinal cord injury (SCI). The averaged results, taking into account all participants who were able to control the BCI online, were significantly higher for LSC, 26.11 bit/min and 89.90% accuracy, than for RC, 21.91 bit/min and 88.36% accuracy. The two paradigms produced different waveforms and the signal-to-noise ratio was significantly higher for LSC. Finally, the novel LSC also showed new discriminative features. The results suggest that LSC is an effective alternative to RC, and that LSC still has a margin for potential improvement in bit rate and accuracy. The high bit rates and accuracy of LSC are a step forward for the effective use of BCI in clinical applications. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
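For context, P300-speller bit rates are commonly derived from accuracy and symbol-set size with the Wolpaw information-transfer-rate formula; the sketch below uses that formula with an assumed symbol count and selection speed, since the abstract reports bit/min directly rather than these inputs.

```python
# Wolpaw information transfer rate: bits per selection for an N-symbol speller
# with selection accuracy P, scaled by the number of selections per minute.
import math

def bits_per_selection(n_symbols, accuracy):
    p = accuracy
    if p >= 1.0:
        return math.log2(n_symbols)
    return (math.log2(n_symbols)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_symbols - 1)))

# Assumed values for illustration only (the abstract reports bit/min directly).
n_symbols = 28            # hypothetical symbol-set size
selections_per_min = 6.0  # hypothetical selection speed
for name, acc in [("LSC", 0.8990), ("RC", 0.8836)]:
    b = bits_per_selection(n_symbols, acc)
    print(f"{name}: {b:.2f} bit/selection -> {b * selections_per_min:.1f} bit/min")
```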
Communication system analysis for manned space flight
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1977-01-01
One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
Steganography based on pixel intensity value decomposition
NASA Astrophysics Data System (ADS)
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
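As a baseline for the schemes being compared, plain binary decomposition with embedding in the 1st LSB can be sketched as follows; the proposed 16-virtual-bit-plane representation itself is not reproduced here.

```python
# Baseline binary decomposition and 1st-LSB embedding (for comparison only;
# the paper's 16 virtual-bit-plane decomposition is not reproduced here).
import numpy as np

def to_bitplanes(img):
    """Decompose an 8-bit image into 8 binary bit-planes (LSB first)."""
    return [((img >> k) & 1).astype(np.uint8) for k in range(8)]

def embed_lsb(img, secret_bits):
    """Embed secret bits into the 1st LSB of the flattened cover image."""
    flat = img.flatten().astype(np.uint8)
    n = min(len(secret_bits), flat.size)
    flat[:n] = (flat[:n] & 0xFE) | secret_bits[:n]
    return flat.reshape(img.shape)

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = rng.integers(0, 2, size=1000, dtype=np.uint8)
stego = embed_lsb(cover, secret)

psnr = 10 * np.log10(255.0**2 / np.mean((cover.astype(float) - stego.astype(float))**2))
print(f"payload = {secret.size} bits, PSNR of stego vs. cover = {psnr:.1f} dB")
```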
Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.
Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai
2017-02-20
A chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos generated by optical heterodyning of two ECLs as an entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
NASA Technical Reports Server (NTRS)
Safren, H. G.
1987-01-01
The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one- or two-tenths of a dB.
Pan, Huapu; Assefa, Solomon; Green, William M J; Kuchta, Daniel M; Schow, Clint L; Rylyakov, Alexander V; Lee, Benjamin G; Baks, Christian W; Shank, Steven M; Vlasov, Yurii A
2012-07-30
The performance of a receiver based on a CMOS amplifier circuit designed with 90 nm ground rules wire-bonded to a waveguide germanium photodetector is characterized at data rates up to 40 Gbps. Both chips were fabricated through the IBM Silicon CMOS Integrated Nanophotonics process on specialty photonics-enabled SOI wafers. At the data rate of 28 Gbps, which is relevant to the new generation of optical interconnects, a sensitivity of -7.3 dBm average optical power is demonstrated with 3.4 pJ/bit power efficiency and 0.6 UI horizontal eye opening at a bit-error rate of 10^(-12). The receiver operates error-free (bit-error rate < 10^(-12)) up to 40 Gbps with optimized power supply settings, demonstrating an energy efficiency of 1.4 pJ/bit and 4 pJ/bit at data rates of 32 Gbps and 40 Gbps, respectively, with an average optical power of -0.8 dBm.
A fast rise-rate, adjustable-mass-bit gas puff valve for energetic pulsed plasma experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebner, Keith T. K., E-mail: kloebner@stanford.edu; Underwood, Thomas C.; Cappelli, Mark A.
2015-06-15
A fast rise-rate, variable mass-bit gas puff valve based on the diamagnetic repulsion principle was designed, built, and experimentally characterized. The ability to hold the pressure rise-rate nearly constant while varying the total overall mass bit was achieved via a movable mechanical restrictor that is accessible while the valve is assembled and pressurized. The rise-rates and mass-bits were measured via piezoelectric pressure transducers for plenum pressures between 10 and 40 psig and restrictor positions of 0.02-1.33 cm from the bottom of the linear restrictor travel. The mass-bits were found to vary linearly with the restrictor position at a given plenum pressure, while rise-rates varied linearly with plenum pressure but exhibited low variation over the range of possible restrictor positions. The ability to change the operating regime of a pulsed coaxial plasma deflagration accelerator by means of altering the valve parameters is demonstrated.
Residual Highway Convolutional Neural Networks for in-loop Filtering in HEVC.
Zhang, Yongbing; Shen, Tao; Ji, Xiangyang; Zhang, Yun; Xiong, Ruiqin; Dai, Qionghai
2018-08-01
The High Efficiency Video Coding (HEVC) standard achieves about half the bit rate of AVC at the same quality. However, it still cannot satisfy the demand for higher quality in real applications, especially at low bit rates. To further improve the quality of the reconstructed frames while reducing the bit rate, a residual highway convolutional neural network (RHCNN) is proposed in this paper for in-loop filtering in HEVC. The RHCNN is composed of several residual highway units and convolutional layers. In the highway units, some paths allow information to pass unimpeded across several layers. Moreover, there is also one identity skip connection (shortcut) from the beginning to the end, which is followed by one small convolutional layer. Without conflicting with the deblocking filter (DF) and sample adaptive offset (SAO) filter in HEVC, the RHCNN is employed as a high-dimension filter following DF and SAO to enhance the quality of reconstructed frames. To facilitate real applications, we apply the proposed method to I frames, P frames, and B frames, respectively. To obtain better performance, the entire quantization parameter (QP) range is divided into several QP bands, and a dedicated RHCNN is trained for each QP band. Furthermore, we adopt a progressive training scheme in which the QP band with lower values is used for early training and its weights are used as initial weights for QP bands with higher values, in a progressive manner. Experimental results demonstrate that the proposed method not only raises the PSNR of the reconstructed frames but also prominently reduces the bit rate compared with the HEVC reference software.
2006-06-01
... called packet binary convolutional code (PBCC), was included as an option for performance at a rate of either 5.5 or 11 Mbps. The second offshoot ... The code rate is r = k/n. A general convolutional encoder can be implemented with k shift registers and n modulo-2 adders. Higher rates can be derived from lower-rate codes by employing "puncturing." Puncturing is a procedure for omitting some of the encoded bits in the transmitter ...
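To make the fragment above concrete, a rate-1/2 convolutional encoder built from a shift register and modulo-2 adders, punctured up to rate 2/3, is sketched below; the generator polynomials and puncture pattern are generic textbook choices, not necessarily those of the PBCC option.

```python
# Rate-1/2 convolutional encoder (generators 7, 5 in octal) followed by puncturing
# to rate 2/3. Generic textbook example, not necessarily the PBCC code itself.
def conv_encode(bits, gens=(0b111, 0b101), k=3):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # shift register of length k
        for g in gens:                                # one modulo-2 adder per generator
            out.append(bin(state & g).count("1") % 2)
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    # Keep coded bits where the (repeating) pattern is 1: 4 in -> 3 out => rate 2/3.
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)          # rate 1/2: two output bits per input bit
sent = puncture(coded)            # rate 2/3 after omitting every fourth coded bit
print(f"r = k/n = {len(msg)}/{len(coded)} before puncturing, "
      f"{len(msg)}/{len(sent)} after")
```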
Wear and performance: An experimental study on PDC bits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, O.; Azar, J.J.
1997-07-01
Real-time drilling data, gathered under full-scale conditions, was analyzed to determine the influence of cutter dullness on PDC-bit rate of penetration. It was found that while drilling in shale, the cutters' wearflat area was not a controlling factor on rate of penetration; however, when drilling in limestone, wearflat area significantly influenced PDC bit penetration performance. Similarly, the presence of diamond lips on PDC cutters was found to be unimportant while drilling in shale, but it greatly enhanced bit performance when drilling in limestone.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Radtke; David Glowka; Man Mohan Rai
2008-03-31
Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight, delivering efficient power to the special high-RPM drill bit to ensure both a high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill was developed to deliver efficient power, and the more durable drill bit employed high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center with Technology International, Inc. (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International Inc., Houston, Texas, to develop a higher-power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher-power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable thermally stable diamond drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole coiled tube drilling offers the opportunity to dramatically cut producers' exploration risk to a level comparable to that of drilling development wells. Together, such efforts hold great promise for economically recovering a sizeable portion of the estimated remaining shallow (less than 5,000 feet subsurface) oil resource in the United States. The DOE estimates this U.S. targeted shallow resource at 218 billion barrels. Furthermore, the smaller 'footprint' of the lightweight rigs utilized for microhole drilling and the accompanying reduced drilling waste disposal volumes offer the bonus of added environmental benefits. DOE analysis shows that microhole technology has the potential to cut exploratory drilling costs by at least a third and to slash development drilling costs in half.
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.
Park, Keunyeol; Song, Minkyu; Kim, Soo Youn
2018-02-24
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V of supply voltage and 520 frame/s of the maximum frame rates. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
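The XOR-based edge detection step can be illustrated on binarized data as in the sketch below; this is a software model of the data-processing idea, not the sensor's on-chip circuit, and the threshold and test frame are arbitrary.

```python
# Schematic model of single-bit capture followed by XOR edge detection:
# a pixel is marked as an edge when its binary value differs from its right/bottom neighbour.
import numpy as np

rng = np.random.default_rng(7)
gray = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # stand-in for a captured frame

threshold = 128
single_bit = (gray >= threshold).astype(np.uint8)            # 1-bit image data

edge_h = single_bit[:, :-1] ^ single_bit[:, 1:]              # XOR with right neighbour
edge_v = single_bit[:-1, :] ^ single_bit[1:, :]              # XOR with bottom neighbour
edges = np.zeros_like(single_bit)
edges[:, :-1] |= edge_h
edges[:-1, :] |= edge_v

print(f"edge pixels: {int(edges.sum())} of {edges.size}")
```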
Conditions for the optical wireless links bit error ratio determination
NASA Astrophysics Data System (ADS)
Kvíčala, Radek
2017-11-01
To determine the quality of optical wireless links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be defined by the optical beam bit error rate (BER). The BER expresses the fraction of transmitted bits that are received in error. In practice, determining the BER runs into the problem of choosing the integration (measuring) time. For measuring and recording the BER of an OWL, a bit error ratio tester (BERT) has been developed. A 1-second integration time for 64 kbps radio links is mentioned in the accessible literature. However, this integration time cannot be used here because of the singular nature of coherent beam propagation.
Antiwhirl PDC bits increased penetration rates in Alberta drilling. [Polycrystalline Diamond Compact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobrosky, D.; Osmak, G.
1993-07-05
The antiwhirl PDC bits and an inhibitive mud system contributed to the quicker drilling of the time-sensitive shales. The hole washouts in the intermediate section were dramatically reduced, resulting in better intermediate casing cement jobs. Also, the use of antirotation PDC-drillable cementing plugs eliminated the need to drill out plugs and float equipment with a steel tooth bit and then trip for the PDC bit. By using an antiwhirl PDC bit, at least one trip was eliminated in the intermediate section. Offset data indicated that two to six conventional bits would have been required to drill the intermediate hole interval. The PDC bit was rebuildable and therefore rerunnable even after being used on five wells. In each instance, the cost of replacing chipped cutters was less than the cost of a new insert roller cone bit. The paper describes the antiwhirl bits; the development of the bits; and their application in a clastic sequence, a carbonate sequence, and the Shekilie oil field; the improvement in the rate of penetration; the selection of bottom hole assemblies; washout problems; and drill-out characteristics.
UWB channel estimation using new generating TR transceivers
Nekoogar, Faranak [San Ramon, CA; Dowla, Farid U [Castro Valley, CA; Spiridon, Alex [Palo Alto, CA; Haugen, Peter C [Livermore, CA; Benzel, Dave M [Livermore, CA
2011-06-28
The present invention presents a simple and novel channel estimation scheme for UWB communication systems. As disclosed herein, the present invention maximizes the extraction of information by incorporating a new generation of transmitted-reference (TR) transceivers that utilize a single reference pulse or a preamble of reference pulses to provide improved channel estimation while offering better bit error rate (BER) performance and data rates without diluting the transmitter power.
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (for each color plane or monochrome, 0.16 bpp; CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage results in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits/pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit-rate reductions.
Traffic management mechanism for intranets with available-bit-rate access to the Internet
NASA Astrophysics Data System (ADS)
Hassan, Mahbub; Sirisena, Harsha R.; Atiquzzaman, Mohammed
1997-10-01
The design of a traffic management mechanism for intranets connected to the Internet via an available-bit-rate access-link is presented. Selection of control parameters for this mechanism for optimum performance is shown through analysis. An estimate for packet loss probability at the access-gateway is derived for random fluctuation of the available bit rate of the access-link. Some implementation strategies of this mechanism in the standard intranet protocol stack are also suggested.
On the relationships between higher and lower bit-depth system measurements
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Haefner, David P.; Doe, Joshua M.
2018-04-01
The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the number of bits available, relative to a fixed dynamic range, determines the quantization artifacts that will be exhibited. From a measurement standpoint, it is desirable to perform measurements at the highest bit-depth available. In this correspondence, we describe the relationship between higher and lower bit-depth measurements. The limits to which quantization alters the observed measurements will be presented. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines for how systems of lower bit-depth should be characterized and the corresponding experimental methods.
Long-distance entanglement-based quantum key distribution experiment using practical detectors.
Takesue, Hiroki; Harada, Ken-Ichi; Tamaki, Kiyoshi; Fukuda, Hiroshi; Tsuchizawa, Tai; Watanabe, Toshifumi; Yamada, Koji; Itabashi, Sei-Ichi
2010-08-02
We report an entanglement-based quantum key distribution experiment that we performed over 100 km of optical fiber using a practical source and detectors. We used a silicon-based photon-pair source that generated high-purity time-bin entangled photons, and high-speed single photon detectors based on InGaAs/InP avalanche photodiodes with the sinusoidal gating technique. To calculate the secure key rate, we employed a security proof that validated the use of practical detectors. As a result, we confirmed the successful generation of sifted keys over 100 km of optical fiber with a key rate of 4.8 bit/s and an error rate of 9.1%, with which we can distill secure keys with a key rate of 0.15 bit/s.
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream by numerical importance, and thus a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
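The operation whose matrix DCTune optimizes, quantization of DCT coefficients by a quantization matrix, is sketched below; the matrix used here is a rough inverted-Gaussian-shaped placeholder loosely following the amplitude/width description in the abstract, not an actual DCTune-optimized matrix.

```python
# Quantizing an 8x8 DCT block by a quantization matrix (the step whose matrix
# DCTune optimizes). The matrix below is a generic placeholder, not a DCTune result.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def quant_matrix(amplitude=20.0, width=3.0, n=8):
    """Inverted-Gaussian shape: small steps at low frequencies, larger at high frequencies."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    radial = np.sqrt(u**2 + v**2)
    return 1.0 + amplitude * (1.0 - np.exp(-(radial / width) ** 2))

rng = np.random.default_rng(5)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0

C = dct_matrix()
coeffs = C @ block @ C.T                     # forward 2-D DCT
Q = quant_matrix()
quantized = np.round(coeffs / Q)             # quantization controls quality and bit rate
recon = C.T @ (quantized * Q) @ C            # dequantize + inverse DCT
print(f"max reconstruction error: {np.max(np.abs(recon - block)):.1f} gray levels")
```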
NASA Astrophysics Data System (ADS)
Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter
1987-03-01
The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs. Shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore, access is provided to the primary data for stability control, statistical tests, and comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses but not the number of overflows determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one; the power spectrum needs a pulse rate about three times higher for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantages of the present system are the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be done with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
PDC bits: What's needed to meet tomorrow's challenge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, T.M.; Sinor, L.A.
1994-12-31
When polycrystalline diamond compact (PDC) bits were introduced in the mid-1970s they showed tantalizingly high penetration rates in laboratory drilling tests. Single cutter tests indicated that they had the potential to drill very hard rocks. Unfortunately, 20 years later we're still striving to reach the potential that these bits seem to have. Many problems have been overcome, and PDC bits have offered capabilities not possible with roller cone bits. PDC bits provide the most economical bit choice in many areas, but their limited durability has hampered their application in many other areas.
Efficient and universal quantum key distribution based on chaos and middleware
NASA Astrophysics Data System (ADS)
Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun
2017-01-01
Quantum key distribution (QKD) promises unconditionally secure communications; however, the low bit rate of QKD cannot meet the requirements of high-speed applications. Despite the many solutions proposed in recent years, they are neither efficient at generating the secret keys nor compatible with other QKD systems. This paper, based on chaotic cryptography and middleware technology, proposes an efficient and universal QKD protocol that can be directly deployed on top of any existing QKD system without modifying the underlying QKD protocol and optical platform. It initially takes the bit string generated by the QKD system as input, periodically updates the chaotic system, and efficiently outputs the bit sequences. Theoretical analysis and simulation results demonstrate that our protocol can efficiently increase the bit rate of the QKD system as well as securely generate bit sequences with perfect statistical properties. Compared with the existing methods, our protocol is more efficient and universal; it can be rapidly deployed on the QKD system to increase the bit rate when the QKD system becomes the bottleneck of its communication system.
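A toy illustration of the general idea (seeding a chaotic map with QKD-generated bits and streaming out expanded bit sequences) is given below; the logistic map and the threshold extraction rule are arbitrary stand-ins, not the protocol's actual chaotic system or middleware interface, and no security claim is implied.

```python
# Toy illustration: seed a chaotic map from QKD-generated bits, iterate it, and
# stream out expanded bit sequences. The logistic map and extraction rule are
# arbitrary stand-ins, not the paper's actual chaotic system or middleware.
import secrets

def seed_from_qkd_bits(qkd_bits):
    """Map a QKD bit string to an initial condition in (0, 1)."""
    x = int("".join(map(str, qkd_bits)), 2)
    return (x + 0.5) / (1 << len(qkd_bits))

def chaotic_stream(x0, n_bits, r=3.99):
    x, out = x0, []
    for _ in range(n_bits):
        x = r * x * (1.0 - x)               # logistic map iteration
        out.append(1 if x >= 0.5 else 0)    # threshold extraction
    return out

qkd_bits = [secrets.randbits(1) for _ in range(64)]   # stand-in for the QKD output
expanded = chaotic_stream(seed_from_qkd_bits(qkd_bits), 4096)
print(f"{len(qkd_bits)} QKD bits expanded to {len(expanded)} stream bits; "
      f"ones fraction = {sum(expanded) / len(expanded):.3f}")
```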
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alan Black; Arnis Judzis
2005-09-30
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.
Yoo, Sun K; Kim, D K; Jung, S M; Kim, E-K; Lim, J S; Kim, J H
2004-01-01
A Web-based, realtime, tele-ultrasound consultation system was designed. The system employed ActiveX control, MPEG-4 coding of full-resolution ultrasound video (640 x 480 pixels at 30 frames/s) and H.320 videoconferencing. It could be used via a Web browser. The system was evaluated over three types of commercial line: a cable connection, ADSL and VDSL. Three radiologists assessed the quality of compressed and uncompressed ultrasound video-sequences from 16 cases (10 abnormal livers, four abnormal kidneys and two abnormal gallbladders). The radiologists' scores showed that, at a given frame rate, increasing the bit rate was associated with increasing quality; however, at a certain threshold bit rate the quality did not increase significantly. The peak signal to noise ratio (PSNR) was also measured between the compressed and uncompressed images. In most cases, the PSNR increased as the bit rate increased, and increased as the number of dropped frames increased. There was a threshold bit rate, at a given frame rate, at which the PSNR did not improve significantly. Taking into account both sets of threshold values, a bit rate of more than 0.6 Mbit/s, at 30 frames/s, is suggested as the threshold for the maintenance of diagnostic image quality.
All of the WASP Installers are listed below. There is a 64 Bit Windows Installer, a 64 Bit Mac OS X Installer (Yosemite or Higher), and a 64 Bit Linux Installer (Built on Ubuntu). You will need to know how to install software on your target operating system.
Testability Design Rating System: Testability Handbook. Volume 1
1992-02-01
4.7.6 Smart BIT (reference: RADC-TR-85-198). "Smart" BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory ...
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
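The rate-distortion-optimal allocation idea can be sketched with a standard Lagrangian (equal-slope) search over per-slice operating points, as below; the rate-distortion points are synthetic and the bisection on the multiplier is a generic device, not the paper's post-compression optimization.

```python
# Lagrangian (equal-slope) bit allocation across slices: for a trial multiplier
# lam, each slice picks the operating point minimizing D + lam * R; lam is
# bisected until the total rate meets the budget. Synthetic R-D points for illustration.
import numpy as np

rng = np.random.default_rng(11)
n_slices = 8
rates = np.linspace(0.1, 2.0, 20)                       # candidate rates (bits/sample)
# Each slice has a convex-ish curve D = a * 2^(-2R) with a slice-dependent scale.
curves = [a * 2.0 ** (-2.0 * rates) for a in rng.uniform(50, 200, n_slices)]

def allocate(lam):
    picks = [int(np.argmin(d + lam * rates)) for d in curves]
    total_rate = sum(rates[i] for i in picks)
    total_dist = sum(d[i] for d, i in zip(curves, picks))
    return picks, total_rate, total_dist

budget = 8.0                                            # total bits/sample across all slices
lo, hi = 1e-3, 1e4                                      # lo: too much rate, hi: feasible
for _ in range(60):                                     # bisect lam on a log scale
    mid = (lo * hi) ** 0.5
    _, r, _ = allocate(mid)
    if r <= budget:
        hi = mid                                        # feasible: try a smaller multiplier
    else:
        lo = mid

_, r, d = allocate(hi)
print(f"lambda = {hi:.3g}, total rate = {r:.2f} (budget {budget}), total distortion = {d:.2f}")
```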
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game-theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results demonstrate the algorithm's ability to achieve accurate bit rates with good perceptual quality and to maintain a stable buffer level.
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strengths of temperature and salinity fluctuations, the rate of dissipation of the mean squared temperature, and the rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
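The final step mentioned, averaging a conditional error probability over a log-normal intensity distribution, can be evaluated numerically as in the sketch below; the conditional-BER convention and the scintillation-index values are generic assumptions, not the expressions derived in the paper.

```python
# Averaging a conditional BER over a log-normal irradiance PDF (generic OOK-style
# convention; the paper's exact conditional-BER expression and parameters differ).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def ber_lognormal(snr, scint_index):
    """Mean BER for unit-mean log-normal fading with the given scintillation index."""
    sigma2 = np.log(1.0 + scint_index)          # log-irradiance variance
    mu = -sigma2 / 2.0                          # unit-mean normalization

    def integrand(i):
        pdf = np.exp(-(np.log(i) - mu) ** 2 / (2 * sigma2)) / (i * np.sqrt(2 * np.pi * sigma2))
        cond_ber = 0.5 * erfc(np.sqrt(snr) * i / (2 * np.sqrt(2)))   # one common OOK convention
        return pdf * cond_ber

    value, _ = quad(integrand, 1e-6, 50.0)
    return value

for si in (0.05, 0.2, 0.5):                     # assumed scintillation-index values
    print(f"scintillation index {si:.2f}: BER = {ber_lognormal(snr=100.0, scint_index=si):.3e}")
```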
Photonic-Assisted mm-Wave and THz Wireless Transmission towards 100 Gbit/s Data Rate
NASA Astrophysics Data System (ADS)
Freire Hermelo, Maria; Chuenchom, Rattana; Rymanov, Vitaly; Kaiser, Thomas; Sheikh, Fawad; Czylwik, Andreas; Stöhr, Andreas
2017-09-01
This paper presents photonic-assisted 60 GHz mm-wave and 325 GHz system approaches that enable the transmission of spectrally efficient and high data rate signals over fiber and over the air. First, we focus on generic channel characteristics within the mm-wave 60 GHz band and at the terahertz (THz) band around 325 GHz. Next, for generating the high data rate baseband signals, we present a technical solution for constructing an extreme-bandwidth arbitrary waveform generator (AWG). We then report the development of a novel coherent photonic mixer (CPX) module for direct optic-to-RF conversion of extreme wideband optical signals, with a >5 dB higher conversion gain compared to conventional photodiodes. Finally, we experimentally demonstrate record spectrally efficient wireless transmission for both bands. The achieved spectral efficiencies reach 10 bit/s/Hz for the 60 GHz band and 6 bit/s/Hz for the 325 GHz band. The maximum data rate transmitted at THz frequencies in the 325 GHz band is 59 Gbit/s using a 64-QAM-OFDM modulation format and a 10 GHz wide data signal.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
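The patent's matrix is derived from luminance and contrast masking; the sketch below only shows the generic mechanics of matrix quantization on an 8x8 DCT block, so the flat matrix values, the level shift, and the block handling are illustrative assumptions rather than the invention itself.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Forward 8x8 DCT followed by matrix quantization (illustrative only)."""
    coeffs = dctn(block - 128.0, norm="ortho")      # level-shift then 2-D DCT
    return np.round(coeffs / qmatrix).astype(int)    # coarser where qmatrix is large

def dequantize_block(qcoeffs, qmatrix):
    """Inverse of quantize_block: rescale the coefficients and inverse DCT."""
    return idctn(qcoeffs * qmatrix, norm="ortho") + 128.0

# Flat example matrix; a perceptual matrix would grow toward high frequencies.
Q = np.full((8, 8), 16.0)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
rec = dequantize_block(quantize_block(block, Q), Q)
print(float(np.abs(rec - block).max()))   # worst-case reconstruction error
```

Larger matrix entries discard more coefficient precision, which is exactly the lever that trades perceptual error against bit rate.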
Multiple speed expandable bit synchronizer
NASA Technical Reports Server (NTRS)
Bundinger, J. M.
1979-01-01
A multiple speed bit synchronizer was designed for installation in an inertial navigation system data decoder to extract non-return-to-zero level data and clock signal from biphase level data. The circuit automatically senses one of four pre-determined biphase data rates and synchronizes the proper clock rate to the data. Through a simple expansion of the basic design, synchronization of more than four binarily related data rates can be accomplished. The design provides an easily adaptable, low cost, low power alternative to external bit synchronizers with additional savings in size and weight.
The Effects of Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA
2009-10-01
In our previous work, we compared the theoretical bit error rates of multi-carrier direct sequence code division multiple access (MC-DS-CDMA) and ... consider only those cases where MC-CDMA has higher frequency diversity than MC-DS-CDMA. Since increases in diversity yield diminishing gains, we conclude
Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.
Hu, Sudeng; Wang, Hanli; Kwong, Sam
2012-04-01
In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (QP) clip scheme is proposed to optimize the quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal QP clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed QP clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
Least reliable bits coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Wagner, Paul
1992-01-01
LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra versus the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
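The pseudo-random cover sequence idea can be illustrated with a short generic sketch (not the shuttle design): XOR the data with the output of a linear-feedback shift register so that long runs of identical bits become transition-rich on the link, then XOR with the same sequence at the receiver to recover the data. The 15-bit polynomial and seed below are assumptions.

```python
def lfsr_prbs(length, taps=(15, 14), seed=0x7FFF, nbits=15):
    """Generate `length` pseudo-random bits from a simple Fibonacci LFSR."""
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

def scramble(bits, cover):
    """XOR the data with the cover sequence (the same call descrambles)."""
    return [b ^ c for b, c in zip(bits, cover)]

data = [0] * 16 + [1] * 16                 # low transition density
cover = lfsr_prbs(len(data))
tx = scramble(data, cover)                 # transition-rich on the link
print(sum(a != b for a, b in zip(tx, tx[1:])), scramble(tx, cover) == data)
```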
A New Interpretation of the Shannon Entropy Measure
2015-01-01
[Report front matter and figure omitted; the figure maps symbolic identifiers m1-m8 to 1-, 2- and 3-bit binary codewords (000)-(111) representing probabilistic messages or states.] ... for higher-order and hybrid uncertainty forms which occur across many application domains, e.g. [25]. As a preliminary step towards developing higher
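Only fragments of this record survive, but the recoverable figure is the textbook picture behind the Shannon measure: eight messages labeled with 1- to 3-bit codewords. As a quick, hedged aside (not part of the report), the snippet below evaluates H = -sum(p * log2 p) for eight messages under two assumed distributions.

```python
import math

def shannon_entropy_bits(probabilities):
    """H(X) = -sum p*log2(p): average number of bits needed per message."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

# Eight equiprobable messages (m1..m8) need log2(8) = 3 bits on average;
# a skewed distribution needs fewer bits per message on average.
print(shannon_entropy_bits([1 / 8] * 8))
print(shannon_entropy_bits(
    [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625, 0.0078125, 0.0078125]))
```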
Compression performance of HEVC and its format range and screen content coding extensions
NASA Astrophysics Data System (ADS)
Li, Bin; Xu, Jizheng; Sullivan, Gary J.
2015-09-01
This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
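The reported percentages are bit-rate savings at equal quality. As a rough illustration of how such a number can be read off two rate-distortion curves, the sketch below interpolates log-rate versus PSNR for an anchor and a test codec; it is a simplified stand-in for the Bjontegaard-delta calculation normally used in codec comparisons, and all R-D points are hypothetical.

```python
import numpy as np

def rate_saving_at_equal_psnr(rates_ref, psnr_ref, rates_test, psnr_test, psnr):
    """Percent bit-rate saving of `test` vs `ref` at a single PSNR value.

    Interpolates log-rate as a function of PSNR for each codec, a simplified
    stand-in for the full Bjontegaard-delta metric.
    """
    log_r_ref = np.interp(psnr, psnr_ref, np.log(rates_ref))
    log_r_test = np.interp(psnr, psnr_test, np.log(rates_test))
    return 100.0 * (1.0 - np.exp(log_r_test - log_r_ref))

# Hypothetical anchor (H.264/AVC-like) and test (HEVC-like) R-D points.
ref_rates, ref_psnr = [1000, 2000, 4000, 8000], [33.0, 36.0, 39.0, 42.0]
test_rates, test_psnr = [600, 1250, 2500, 5200], [33.2, 36.1, 39.2, 42.1]
print(round(rate_saving_at_equal_psnr(ref_rates, ref_psnr,
                                      test_rates, test_psnr, 38.0), 1))
```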
NASA Technical Reports Server (NTRS)
Davarian, F.
1994-01-01
The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted to the newest H.264/AVC encoder directly because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated by using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all testing sequences.
Adaptive P300 based control system
Jin, Jing; Allison, Brendan Z.; Sellers, Eric W.; Brunner, Clemens; Horki, Petar; Wang, Xingyu; Neuper, Christa
2015-01-01
An adaptive P300 brain-computer interface (BCI) using a 12 × 7 matrix explored new paradigms to improve bit rate and accuracy. During online use, the system adaptively selects the number of flashes to average. Five different flash patterns were tested. The 19-flash paradigm represents the typical row/column presentation (i.e., 12 columns and 7 rows). The 9- and 14-flash A & B paradigms present all items of the 12 × 7 matrix three times using either nine or 14 flashes (instead of 19), decreasing the amount of time to present stimuli. Compared to 9-flash A, 9-flash B decreased the likelihood that neighboring items would flash when the target was not flashing, thereby reducing interference from items adjacent to targets. 14-flash A also reduced adjacent item interference and 14-flash B additionally eliminated successive (double) flashes of the same item. Results showed that accuracy and bit rate of the adaptive system were higher than the non-adaptive system. In addition, 9- and 14-flash B produced significantly higher performance than their respective A conditions. The results also show the trend that the 14-flash B paradigm was better than the 19-flash pattern for naïve users. PMID:21474877
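Matrix-speller bit rates like those reported here are often computed with the Wolpaw formula, which combines the number of selectable items, the selection accuracy, and the selection rate. The generic implementation below uses that convention with illustrative numbers; this study's exact accounting (e.g., correction for error-driven extra selections) may differ.

```python
import math

def wolpaw_bits_per_selection(n_choices, accuracy):
    """Bits conveyed per selection under the Wolpaw definition."""
    if accuracy <= 0.0 or accuracy >= 1.0:
        return math.log2(n_choices) if accuracy == 1.0 else 0.0
    return (math.log2(n_choices)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_choices - 1)))

def bit_rate_per_minute(n_choices, accuracy, selections_per_minute):
    return wolpaw_bits_per_selection(n_choices, accuracy) * selections_per_minute

# 12 x 7 matrix (84 items), 90% accuracy, 4 selections per minute (illustrative).
print(round(bit_rate_per_minute(84, 0.90, 4.0), 1))
```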
Effects of size on three-cone bit performance in laboratory drilled shale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, A.D.; DiBona, B.G.; Sandstrom, J.L.
1982-09-01
The effects of size on the performance of 3-cone bits were measured during laboratory drilling tests in shale at simulated downhole conditions. Four Reed HP-SM 3-cone bits with diameters of 6 1/2, 7 7/8, 9 1/2 and 11 inches were used to drill Mancos shale with water-based mud. The tests were conducted at constant borehole pressure, two conditions of hydraulic horsepower per square inch of bit area, three conditions of rotary speed and four conditions of weight-on-bit per inch of bit diameter. The resulting penetration rates and torques were measured. Statistical techniques were used to analyze the data.
Space shuttle data handling and communications considerations.
NASA Technical Reports Server (NTRS)
Stoker, C. J.; Minor, R. G.
1971-01-01
Operational and development flight instrumentation, data handling subsystems, and communication requirements of the space shuttle orbiter are discussed. Emphasis is placed on data gathering methods, crew display data, computer processing, recording, and telemetry by means of a digital data bus. Also considered are overall communication conceptual system aspects and design features allowing a proper specification of telemetry encoders and instrumentation recorders. An adaptive bit rate concept is proposed to handle the telemetry bit rates, which vary with the amount of operational and experimental data to be transmitted. A split-phase encoding technique is proposed for telemetry to cope with the excessive bit jitter and low bit transition density which may affect television performance.
Single photon quantum cryptography.
Beveratos, Alexios; Brouri, Rosa; Gacoin, Thierry; Villing, André; Poizat, Jean-Philippe; Grangier, Philippe
2002-10-28
We report the full implementation of a quantum cryptography protocol using a stream of single photon pulses generated by a stable and efficient source operating at room temperature. The single photon pulses are emitted on demand by a single nitrogen-vacancy color center in a diamond nanocrystal. The quantum bit error rate is less than 4.6% and the secure bit rate is 7700 bits/s. The overall performance of our system reaches a domain where single photons have a measurable advantage over an equivalent system based on attenuated light pulses.
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
Bissias, George; Levine, Brian; Liberatore, Marc; Lynn, Brian; Moore, Juston; Wallach, Hanna; Wolak, Janis
2016-02-01
We provide detailed measurement of the illegal trade in child exploitation material (CEM, also known as child pornography) from mid-2011 through 2014 on five popular peer-to-peer (P2P) file sharing networks. We characterize several observations: counts of peers trafficking in CEM; the proportion of arrested traffickers that were identified during the investigation as committing contact sexual offenses against children; trends in the trafficking of sexual images of sadistic acts and infants or toddlers; the relationship between such content and contact offenders; and survival rates of CEM. In the 5 P2P networks we examined, we estimate there were recently about 840,000 unique installations per month of P2P programs sharing CEM worldwide. We estimate that about 3 in 10,000 Internet users worldwide were sharing CEM in a given month; rates vary per country. We found an overall month-to-month decline in trafficking of CEM during our study. By surveying law enforcement we determined that 9.5% of persons arrested for P2P-based CEM trafficking on the studied networks were identified during the investigation as having sexually offended against children offline. Rates per network varied, ranging from 8% of arrests for CEM trafficking on Gnutella to 21% on BitTorrent. Within BitTorrent, where law enforcement applied their own measure of content severity, the rate of contact offenses among peers sharing the most-severe CEM (29%) was higher than those sharing the least-severe CEM (15%). Although the persistence of CEM on the networks varied, it generally survived for long periods of time; e.g., BitTorrent CEM had a survival rate near 100%. Copyright © 2015 Elsevier Ltd. All rights reserved.
An efficient CU partition algorithm for HEVC based on improved Sobel operator
NASA Astrophysics Data System (ADS)
Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Sun, Gang; Yang, Yunsheng
2018-04-01
As the latest video coding standard, High Efficiency Video Coding (HEVC) achieves over 50% bit rate reduction with similar video quality compared with the previous standard H.264/AVC. However, the higher compression efficiency is attained at the cost of a significantly increased computational load. In order to reduce the complexity, this paper proposes a fast coding unit (CU) partition technique to speed up the process. To detect the edge features of each CU, a more accurate improved Sobel filtering is developed and performed. By analyzing the textural features of a CU, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs or not. Compared with the reference software HM16.7, experimental results indicate the proposed algorithm can reduce the encoding time by up to 44.09% on average, with a negligible bit rate increase of 0.24% and a quality loss below 0.03 dB. In addition, the proposed algorithm achieves a better trade-off between complexity and rate-distortion than other published approaches.
Quadrature-quadrature phase-shift keying
NASA Astrophysics Data System (ADS)
Saha, Debabrata; Birdsall, Theodore G.
1989-05-01
Quadrature-quadrature phase-shift keying (Q2PSK) is a spectrally efficient modulation scheme which utilizes available signal space dimensions in a more efficient way than two-dimensional schemes such as QPSK and MSK (minimum-shift keying). It uses two data shaping pulses and two carriers, which are pairwise quadrature in phase, to create a four-dimensional signal space and increases the transmission rate by a factor of two over QPSK and MSK. However, the bit error rate performance depends on the choice of pulse pair. With simple sinusoidal and cosinusoidal data pulses, the Eb/N0 requirement for Pb(E) = 10^-5 is approximately 1.6 dB higher than that of MSK. Without additional constraints, Q2PSK does not maintain a constant envelope. However, a simple block coding provides a constant envelope. This coded signal substantially outperforms MSK and TFM (tamed frequency modulation) in bandwidth efficiency. Like MSK, Q2PSK also has self-clocking and self-synchronizing ability. An optimum class of pulse shapes for use in the Q2PSK format is presented. One suboptimum realization achieves the Nyquist rate of 2 bits/s/Hz using binary detection.
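As a hedged illustration of the four-dimensional construction (two shaping pulses times two quadrature carriers, four bits per symbol interval), the sketch below synthesizes one Q2PSK symbol. The half-cosine/half-sine pulse pair, carrier frequency, and sample rate are assumptions chosen for readability, not the optimum pulse class discussed in the paper.

```python
import numpy as np

def q2psk_symbol(bits, fc=4.0, rb=1.0, fs=256.0):
    """Build one Q2PSK symbol interval from four data bits (+1/-1 mapping).

    Uses a simple half-cosine / half-sine pulse pair on [0, 2T) and two
    quadrature carriers; illustrative parameters only.
    """
    a = [1.0 if b else -1.0 for b in bits]         # four parallel bits
    T = 1.0 / rb
    t = np.arange(0.0, 2.0 * T, 1.0 / fs)
    p1 = np.cos(np.pi * t / (2.0 * T))             # data shaping pulse 1
    p2 = np.sin(np.pi * t / (2.0 * T))             # data shaping pulse 2 (orthogonal to p1)
    c, s = np.cos(2 * np.pi * fc * t), np.sin(2 * np.pi * fc * t)
    return (a[0] * p1 * c + a[1] * p2 * c          # in-phase branch
            + a[2] * p1 * s + a[3] * p2 * s)       # quadrature branch

sig = q2psk_symbol([1, 0, 1, 1])
print(sig.shape, float(np.max(np.abs(sig))))
```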
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
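The closed-loop comparison reports a Fitts's-law bit rate. One common convention, the Shannon formulation of the index of difficulty divided by movement time, is sketched below with made-up target geometry; the printed numbers are illustrative, not the study's cursor-task values.

```python
import math

def fitts_bit_rate(distance, width, movement_time_s):
    """Fitts's-law throughput in bit/s using the Shannon formulation.

    Index of difficulty ID = log2(distance / width + 1); throughput = ID / time.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time_s

# A hypothetical target 8 cm away and 4 cm wide acquired in 1.56 s vs. 2.05 s.
print(round(fitts_bit_rate(8.0, 4.0, 1.56), 3),
      round(fitts_bit_rate(8.0, 4.0, 2.05), 3))
```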
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
NASA Astrophysics Data System (ADS)
Perez, Santiago; Karakus, Murat; Pellet, Frederic
2017-05-01
The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit that ultimately leads to a less than optimal drilling performance. For this reason, this paper aims at investigating the applicability of artificial intelligence-based techniques in order to monitor the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches in order to predict the wear state of the bits. One approach is based on the combination of the aforementioned variables and another on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, although the subset composed of AErms and tob excels due to its high classification performance and fewer input variables.
A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP).
Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong
2017-01-01
SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimuli encoding method which encodes the SSVEP signal using the Frequency Shift-Keying (FSK) method is developed. In this method, each stimulus is controlled by an FSK signal which contains three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2" respectively. Unlike common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit stream form, while "Bit 2" indicates the ending of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n - 1 (n is the length of the bit command) stimuli while keeping the ITR the same. This method is suitable for implementing stimuli on a monitor where the frequencies and phases available to code stimuli are limited, as well as for implementing portable BCI devices which are not capable of performing complex calculations.
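A hedged sketch of the trinary encoding idea: the stimulus identifier is transmitted as a sequence of "Bit 0"/"Bit 1" flicker frequencies and terminated by a "Bit 2" frequency. The specific frequencies and the 4-bit identifier length below are assumptions, not the paper's parameters.

```python
def tfsk_encode(stimulus_id, n_bits=4, freqs=(8.0, 10.0, 12.0)):
    """Encode a stimulus identifier as a trinary FSK frequency sequence.

    'Bit 0' and 'Bit 1' carry the identifier in binary; 'Bit 2' marks the end
    of one encoding cycle. The three flicker frequencies are assumptions.
    """
    f0, f1, f2 = freqs
    bits = [(stimulus_id >> k) & 1 for k in range(n_bits - 1, -1, -1)]
    return [f1 if b else f0 for b in bits] + [f2]

def tfsk_decode(freq_sequence, freqs=(8.0, 10.0, 12.0)):
    """Recover the identifier from a frequency sequence ending in 'Bit 2'."""
    f0, f1, f2 = freqs
    value = 0
    for f in freq_sequence:
        if f == f2:
            break
        value = (value << 1) | (1 if f == f1 else 0)
    return value

seq = tfsk_encode(11)
print(seq, tfsk_decode(seq))   # [10.0, 8.0, 10.0, 10.0, 12.0] 11
```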
New PDC bit optimizes drilling performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besson, A.; Gudulec, P. le; Delwiche, R.
1996-05-01
The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new generation PDC bit tripled rate of penetration (ROP) and increased by five times the potential footage per bit. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control the PDC cutter quality; use of an advanced cutter layout defined by 3D software; using cutter face design code for optimized cleaning and cooling; and mastering vibration reduction features, including spiraled blades.
Ka-Band, Multi-Gigabit-Per-Second Transceiver
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Wintucky, Edwin G.; Smith, Francis J.; Harris, Johnny M.; Landon, David G.; Haddadin, Osama S.; McIntire, William K.; Sun, June Y.
2011-01-01
A document discusses a multi-Gigabit-per-second, Ka-band transceiver with a software-defined modem (SDM) capable of digitally encoding/decoding data and compensating for linear and nonlinear distortions in the end-to-end system, including the traveling-wave tube amplifier (TWTA). This innovation can increase data rates of space-to-ground communication links, and has potential application to NASA's future space-based Earth observation system. The SDM incorporates an extended version of the industry-standard DVB-S2, and an LDPC rate 9/10 FEC codec. The SDM supports a suite of waveforms, including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The Ka-band TWTA delivers an output power on the order of 200 W with efficiency greater than 60%, and a passband of at least 3 GHz. The modem and the TWTA together enable a data rate of 20 Gbps with a low bit error rate (BER). The payload data rates for spacecraft in NASA's integrated space communications network can be increased by an order of magnitude (>10x) over the current state of practice. This innovation enhances the data rate by using bandwidth-efficient modulation techniques, which transmit a higher number of bits per Hertz of bandwidth than the currently used quadrature phase shift keying (QPSK) waveforms.
Tactile information transfer: A comparison of two stimulation sites
NASA Astrophysics Data System (ADS)
Summers, Ian R.; Whybrow, Jon J.; Gratton, Denise A.; Milnes, Peter; Brown, Brian H.; Stevens, John C.
2005-10-01
Two experiments on the discrimination of time-varying tactile stimuli were performed, with comparison of stimulus delivery to the distal pad of the right index finger and to the right wrist (palmar surface). Subjects were required to perceive differences in short sequences of computer-generated stimulus elements (experiment 1) or differences in short tactile stimuli derived from a speech signal (experiment 2). The pulse-train stimuli were distinguished by differences in frequency (i.e., pulse repetition rate) and amplitude, and by the presence/absence of gaps (~100-ms duration). Stimulation levels were 10 dB higher at the wrist than at the fingertip, to compensate for the lower vibration sensitivity at the wrist. Results indicate similar gap detection at wrist and fingertip and similar perception of frequency differences. However, perception of amplitude differences was found to be better at the wrist than at the fingertip. Maximum information transfer rates for the stimuli in experiment 1 were estimated at 7 bits s⁻¹ at the wrist and 5 bits s⁻¹ at the fingertip.
Blue phase-change recording at high data densities and data rates
NASA Astrophysics Data System (ADS)
Dekker, Martijn K.; Pfeffer, Nicola; Kuijper, Maarten; Ubbens, Igolt P.; Coene, Wim M. J.; Meinders, E. R.; Borg, Herman J.
2000-09-01
For the DVR system using a blue laser diode (wavelength 405 nm), we developed 12-cm discs with a total capacity of 22.4 GB. The land/groove track pitch is 0.30 micrometers and the channel bit length is 87 nm. The DVR system uses a d = 1 code. These phase change discs can be recorded at constant angular velocity at a maximum of 50 Mbps user data rate (including all format and ECC overhead) and meet the system specifications. Fast growth determined phase change materials (FGM) are used for the active layer. In order to apply these FGM discs at small track pitch, special attention has been paid to the issue of thermal cross-write. Finally, routes towards higher capacities such as advanced bit detection schemes and the use of a smaller track pitch are considered. These show the feasibility in the near future of at least 26.0 GB on a disc for the DVR system with a blue laser diode.
Tactile information transfer: a comparison of two stimulation sites.
Summers, Ian R; Whybrow, Jon J; Gratton, Denise A; Milnes, Peter; Brown, Brian H; Stevens, John C
2005-10-01
Two experiments on the discrimination of time-varying tactile stimuli were performed, with comparison of stimulus delivery to the distal pad of the right index finger and to the right wrist (palmar surface). Subjects were required to perceive differences in short sequences of computer-generated stimulus elements (experiment 1) or differences in short tactile stimuli derived from a speech signal (experiment 2). The pulse-train stimuli were distinguished by differences in frequency (i.e., pulse repetition rate) and amplitude, and by the presence/absence of gaps (approximately 100-ms duration). Stimulation levels were 10 dB higher at the wrist than at the fingertip, to compensate for the lower vibration sensitivity at the wrist. Results indicate similar gap detection at wrist and fingertip and similar perception of frequency differences. However, perception of amplitude differences was found to be better at the wrist than at the fingertip. Maximum information transfer rates for the stimuli in experiment 1 were estimated at 7 bits s(-1) at the wrist and 5 bits s(-1) at the fingertip.
2001-09-01
"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE ... In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with ... In this dissertation, the bit error rates of serially concatenated convolutional codes
Speech Intelligibility and Prosody Production in Children with Cochlear Implants
Chin, Steven B.; Bergeson, Tonya R.; Phan, Jennifer
2012-01-01
Objectives The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants. Methods The Beginner's Intelligibility Test (BIT) and Prosodic Utterance Production (PUP) task were administered to 15 children who use cochlear implants and 10 children with normal hearing. Adult listeners with normal hearing judged the intelligibility of the words in the BIT sentences, identified the PUP sentences as one of four grammatical or emotional moods (i.e., declarative, interrogative, happy, or sad), and rated the PUP sentences according to how well they thought the child conveyed the designated mood. Results Percent correct scores were higher for intelligibility than for prosody and higher for children with normal hearing than for children with cochlear implants. Declarative sentences were most readily identified and received the highest ratings by adult listeners; interrogative sentences were least readily identified and received the lowest ratings. Correlations between intelligibility and all mood identification and rating scores except declarative were not significant. Discussion The findings suggest that the development of speech intelligibility progresses ahead of prosody in both children with cochlear implants and children with normal hearing; however, children with normal hearing still perform better than children with cochlear implants on measures of intelligibility and prosody even after accounting for hearing age. Problems with interrogative intonation may be related to more general restrictions on rising intonation, and the correlation results indicate that intelligibility and sentence intonation may be relatively dissociated at these ages. PMID:22717120
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Direct-phase and amplitude digitalization based on free-space interferometry
NASA Astrophysics Data System (ADS)
Kleiner, Vladimir; Rudnitsky, Arkady; Zalevsky, Zeev
2017-12-01
A novel ADC configuration that can be characterized as a photonic-domain flash analog-to-digital convertor operating based upon free-space interferometry is proposed and analysed. The structure can be used as the front-end of a coherent receiver as well as for other applications. Two configurations are considered: the first, ‘direct free-space interference’, allows simultaneous measuring of the optical phase and amplitude; the second, ‘extraction of the ac component of interference by means of pixel-by-pixel balanced photodetection’, allows only phase digitization but with significantly higher sensitivity. For both proposed configurations, we present Monte Carlo estimations of the performance limitations, due to optical noise and photo-current noise, at sampling rates of 60 giga-samples per second. In terms of bit resolution, we simulated multiple cases with growing complexity of up to 4 bits for the amplitude and up to 6 bits for the phase. The simulations show that the digitization errors in the optical domain can be reduced to levels close to the quantization noise limits. Preliminary experimental results validate the fundamentals of the proposed idea.
NASA Astrophysics Data System (ADS)
Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei
2018-03-01
The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. So, the study of PDC bit failure based on stick-slip vibration analysis is crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drill string system plus PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit fails more easily than other drilling tools due to the more severe stick-slip vibration. Moreover, reducing WOB (weight on bit) and improving driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed by use of an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
Some Processing and Dynamic-Range Issues in Side-Scan Sonar Work
NASA Astrophysics Data System (ADS)
Asper, V. L.; Caruthers, J. W.
2007-05-01
Often side-scan sonar data are collected in such a way that they afford little opportunity to do more than simply display them as images. These images are often limited in dynamic range and stored only in an 8-bit tiff format of numbers representing less than true intensity values. Furthermore, there is little prior knowledge during a survey of the best range in which to set those eight bits. This can result in clipped strong targets and/or the depth of shadows so that the bits that can be recovered from the image are not fully representative of target or bottom backscatter strengths. Several top-of-the-line sonars do have a means of logging high-bit-rate digital data (sometimes only as an option), but only dedicated specialists pay much attention to such data, if they record them at all. Most users of side-scan sonars are interested only in the images. Discussed in this paper are issues related to storing and processing of high-bit-rate digital data to preserve their integrity for future enhanced, after-the-fact use and the ability to recover actual backscatter strengths. This paper discusses issues in the use of high-bit-rate digital side-scan sonar data. This work was supported by the Office of Naval Research, Code 321OA, and the Naval Oceanographic Office, Mine Warfare Program.
On the Mutual Information of Multi-hop Acoustic Sensors Network in Underwater Wireless Communication
2014-05-01
... received bits are in error, and then compute the bit-error-rate as the number of bit errors divided by the total number of bits in the transmitted signal
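The surviving fragment describes the standard empirical bit-error-rate estimate. A minimal sketch of that calculation, with an assumed 1% flip probability standing in for a real underwater channel, is:

```python
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    """Empirical BER: number of bit errors divided by total transmitted bits."""
    tx = np.asarray(tx_bits)
    rx = np.asarray(rx_bits)
    return np.count_nonzero(tx != rx) / tx.size

rng = np.random.default_rng(1)
tx = rng.integers(0, 2, 10_000)
rx = tx ^ (rng.random(tx.size) < 1e-2).astype(int)   # flip bits with probability 0.01
print(bit_error_rate(tx, rx))
```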
Townsend, G.; LaPallo, B.K.; Boulay, C.B.; Krusienski, D.J.; Frye, G.E.; Hauser, C.K.; Schwartz, N.E.; Vaughan, T.M.; Wolpaw, J.R.; Sellers, E.W.
2010-01-01
Objective An electroencephalographic brain-computer interface (BCI) can provide a non-muscular means of communication for people with amyotrophic lateral sclerosis (ALS) or other neuromuscular disorders. We present a novel P300-based BCI stimulus presentation – the checkerboard paradigm (CBP). CBP performance is compared to that of the standard row/column paradigm (RCP) introduced by Farwell and Donchin (1988). Methods Using an 8×9 matrix of alphanumeric characters and keyboard commands, 18 participants used the CBP and RCP in counter-balanced fashion. With approximately 9 – 12 minutes of calibration data, we used a stepwise linear discriminant analysis for online classification of subsequent data. Results Mean online accuracy was significantly higher for the CBP, 92%, than for the RCP, 77%. Correcting for extra selections due to errors, mean bit rate was also significantly higher for the CBP, 23 bits/min, than for the RCP, 17 bits/min. Moreover, the two paradigms produced significantly different waveforms. Initial tests with three advanced ALS participants produced similar results. Furthermore, these individuals preferred the CBP to the RCP. Conclusions These results suggest that the CBP is markedly superior to the RCP in performance and user acceptability. Significance The CBP has the potential to provide a substantially more effective BCI than the RCP. This is especially important for people with severe neuromuscular disabilities. PMID:20347387
Bit-Wise Arithmetic Coding For Compression Of Data
NASA Technical Reports Server (NTRS)
Kiely, Aaron
1996-01-01
Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
TerraTek
2007-06-30
A deep drilling research program titled 'An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration' was conducted at TerraTek's Drilling and Completions Laboratory. Drilling tests were run to simulate deep drilling by using high bore pressures and high confining and overburden stresses. The purpose of this testing was to gain insight into practices that would improve rates of penetration and mechanical specific energy while drilling under high pressure conditions. Thirty-seven test series were run utilizing a variety of drilling parameters which allowed analysis of the performance of drill bits and drilling fluids. Five different drill bit types or styles were tested: four-bladed polycrystalline diamond compact (PDC), 7-bladed PDC in regular and long profile, roller-cone, and impregnated. There were three different rock types used to simulate deep formations: Mancos shale, Carthage marble, and Crab Orchard sandstone. The testing also analyzed various drilling fluids and the extent to which they improved drilling. The PDC drill bits provided the best performance overall. The impregnated and tungsten carbide insert roller-cone drill bits performed poorly under the conditions chosen. The cesium formate drilling fluid outperformed all other drilling muds when drilling in the Carthage marble and Mancos shale with PDC drill bits. The oil base drilling fluid with manganese tetroxide weighting material provided the best performance when drilling the Crab Orchard sandstone.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
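To make the cascaded-code argument concrete under simple assumptions (independent errors, hard-decision decoding, 8-bit symbols, a Reed-Solomon code correcting up to t symbol errors per block), the sketch below propagates a residual bit error rate left by an inner decoder into an outer-block failure probability. The specific codes and numbers are illustrative, not the schemes analyzed in the paper.

```python
from math import comb

def symbol_error_from_bits(p_bit, bits_per_symbol=8):
    """A symbol is wrong if any of its bits is wrong (independent-error model)."""
    return 1.0 - (1.0 - p_bit) ** bits_per_symbol

def block_error_prob(p_symbol, n, t):
    """Probability an outer block of n symbols has more than t symbol errors."""
    ok = sum(comb(n, i) * p_symbol**i * (1.0 - p_symbol) ** (n - i)
             for i in range(t + 1))
    return 1.0 - ok

# Suppose the inner decoder leaves a residual bit error rate of 1e-3, and an
# RS(255, 223) outer code corrects t = 16 symbol errors per 255-symbol block.
p_sym = symbol_error_from_bits(1e-3)
print(p_sym, block_error_prob(p_sym, 255, 16))
```

The same routine shows how quickly reliability improves as the inner code drives the residual bit error rate down, which is the qualitative point of the cascaded construction.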
Invariance of the bit error rate in the ancilla-assisted homodyne detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide
2010-11-15
We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
A compact presentation of DSN array telemetry performance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1982-01-01
The telemetry performance of an arrayed receiver system, including radio losses, is often given by a family of curves giving bit error rate vs bit SNR, with tracking loop SNR at one receiver held constant along each curve. This study shows how to process this information into a more compact, useful format in which the minimal total signal power and optimal carrier suppression, for a given fixed bit error rate, are plotted vs data rate. Examples for baseband-only combining are given. When appropriate dimensionless variables are used for plotting, receiver arrays with different numbers of antennas and different threshold tracking loop bandwidths look much alike, and a universal curve for optimal carrier suppression emerges.
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
... punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. [Table omitted: data rate (Mbps), modulation, coding rate, coded bits per subcarrier] ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP)
Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong
2017-01-01
SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimuli encoding method which encodes the SSVEP signal using the Frequency Shift-Keying (FSK) method is developed. In this method, each stimulus is controlled by an FSK signal which contains three different frequencies that represent “Bit 0,” “Bit 1” and “Bit 2” respectively. Unlike common BFSK in digital communication, “Bit 0” and “Bit 1” compose the unique identifier of a stimulus in binary bit stream form, while “Bit 2” indicates the ending of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n − 1 (n is the length of the bit command) stimuli while keeping the ITR the same. This method is suitable for implementing stimuli on a monitor where the frequencies and phases available to code stimuli are limited, as well as for implementing portable BCI devices which are not capable of performing complex calculations. PMID:28626393
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
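The bit allocation idea can be illustrated with the classic high-rate log-variance rule, which is a textbook baseline rather than the dissertation's algorithm: each transform coefficient gets the average rate plus half the log-ratio of its variance to the geometric mean, with negative allocations clipped and the budget redistributed. The variances and budget below are assumptions.

```python
import numpy as np

def log_variance_bit_allocation(variances, avg_bits):
    """Classic high-rate rule: b_i = avg + 0.5*log2(var_i / geometric_mean),
    iteratively dropping coefficients that would receive negative bits."""
    var = np.asarray(variances, dtype=float)
    active = np.ones(var.size, dtype=bool)
    bits = np.zeros(var.size)
    budget = avg_bits * var.size
    while True:
        v = var[active]
        gm = np.exp(np.mean(np.log(v)))                      # geometric mean
        b = budget / active.sum() + 0.5 * np.log2(v / gm)
        if np.all(b >= 0):
            bits[active] = b
            return bits
        drop = np.where(active)[0][b < 0]                    # clip and retry
        active[drop] = False

# Four coefficients with very different variances sharing 2 bits each on average.
print(log_variance_bit_allocation([100.0, 25.0, 4.0, 0.01], avg_bits=2.0))
```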
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alan Black; Arnis Judzis
2004-10-01
The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.
Scalable modulation technology and the tradeoff of reach, spectral efficiency, and complexity
NASA Astrophysics Data System (ADS)
Bosco, Gabriella; Pilori, Dario; Poggiolini, Pierluigi; Carena, Andrea; Guiomar, Fernando
2017-01-01
Bandwidth and capacity demand in metro, regional, and long-haul networks is increasing at several tens of percent per year, driven by video streaming, cloud computing, social media and mobile applications. To sustain this traffic growth, an upgrade of the widely deployed 100-Gbit/s long-haul optical systems, based on polarization multiplexed quadrature phase-shift keying (PM-QPSK) modulation format associated with coherent detection and digital signal processing (DSP), is mandatory. In fact, optical transport techniques enabling a per-channel bit rate beyond 100 Gbit/s have recently been the object of intensive R and D activities, aimed at both improving the spectral efficiency and lowering the cost per bit in fiber transmission systems. In this invited contribution, we review the different available options to scale the per-channel bit-rate to 400 Gbit/s and beyond, i.e. symbol-rate increase, use of higher-order quadrature amplitude modulation (QAM) modulation formats and use of super-channels with DSP-enabled spectral shaping and advanced multiplexing technologies. In this analysis, trade-offs of system reach, spectral efficiency and transceiver complexity are addressed. Besides scalability, next generation optical networks will require a high degree of flexibility in the transponders, which should be able to dynamically adapt the transmission rate and bandwidth occupancy to the light path characteristics. In order to increase the flexibility of these transponders (often referred to as "flexponders"), several advanced modulation techniques have recently been proposed, among which sub-carrier multiplexing, hybrid formats (over time, frequency and polarization), and constellation shaping. We review these techniques, highlighting their limits and potential in terms of performance, complexity and flexibility.
Coding gains and error rates from the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal-to-noise ratio Eb/N0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolution Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^-8 and a BER of 1.4 x 10^-9. The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Since these codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
SSVEP-BCI implementation for 37-40 Hz frequency range.
Müller, Sandra Mara Torres; Diez, Pablo F; Bastos-Filho, Teodiano Freire; Sarcinelli-Filho, Mário; Mut, Vicente; Laciar, Eric
2011-01-01
This work presents a Brain-Computer Interface (BCI) based on Steady State Visual Evoked Potentials (SSVEP), using higher stimulus frequencies (>30 Hz). Using a statistical test and a decision tree, the real-time EEG registers of six volunteers are analyzed, with the classification result updated each second. The BCI developed does not need any kind of settings or adjustments, which makes it more general. Offline results are presented, which correspond to a correct classification rate of up to 99% and an Information Transfer Rate (ITR) of up to 114.2 bits/min.
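The ITR figures quoted for SSVEP BCIs such as this one are normally computed with the Wolpaw formula. The snippet below implements that formula; the six-target, one-selection-per-second example values are assumptions for illustration, not necessarily the paper's exact protocol.

```python
# Wolpaw information-transfer-rate formula commonly used to report BCI ITR.
# The example target count and one-selection-per-second timing are assumptions
# for illustration.

import math


def itr_bits_per_min(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """ITR = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), scaled by selection rate."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min


# Example: a classifier updated each second (60 selections/min) at 99% accuracy.
print(round(itr_bits_per_min(n_targets=6, accuracy=0.99, selections_per_min=60), 1))
```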
Steganography on quantum pixel images using Shannon entropy
NASA Astrophysics Data System (ADS)
Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.
2016-07-01
This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits information to be embedded secretly in quantum pixel images for secure transmission through insecure channels. This algorithm offers higher security since it exploits the Shannon entropy of the image.
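For readers unfamiliar with LSB embedding, the classical (non-quantum) version of the operation is shown below as a minimal sketch; it illustrates only the bit-plane substitution idea that the quantum scheme builds on.

```python
# Classical least-significant-bit embedding sketch, illustrating the LSB idea
# that the quantum-pixel-image scheme above builds on (not a quantum
# implementation).

import numpy as np


def embed_lsb(cover: np.ndarray, bits: str) -> np.ndarray:
    """Write one message bit into the LSB of each of the first len(bits) pixels."""
    stego = cover.flatten().copy()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(b)
    return stego.reshape(cover.shape)


def extract_lsb(stego: np.ndarray, n_bits: int) -> str:
    return "".join(str(p & 1) for p in stego.flatten()[:n_bits])


cover = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
message = "1011001110001111"
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
```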
PDC Bit Testing at Sandia Reveals Influence of Chatter in Hard-Rock Drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
RAYMOND,DAVID W.
1999-10-14
Polycrystalline diamond compact (PDC) bits have yet to be routinely applied to drilling the hard-rock formations characteristic of geothermal reservoirs. Most geothermal production wells are currently drilled with tungsten-carbide-insert roller-cone bits. PDC bits have significantly improved penetration rates and bit life beyond roller-cone bits in the oil and gas industry, where soft to medium-hard rock types are encountered. If PDC bits could be used to double current penetration rates in hard rock, geothermal well-drilling costs could be reduced by 15 percent or more. PDC bits exhibit reasonable life in hard-rock wear testing using the relatively rigid setups typical of laboratory testing. Unfortunately, field experience indicates otherwise. The prevailing mode of failure encountered by PDC bits returning from hard-rock formations in the field is catastrophic, presumably due to impact loading. These failures usually occur in advance of any appreciable wear that might dictate cutter replacement. Self-induced bit vibration, or ''chatter'', is one of the mechanisms that may be responsible for impact damage to PDC cutters in hard-rock drilling. Chatter is more severe in hard-rock formations since they induce significant dynamic loading on the cutter elements. Chatter is a phenomenon whereby the drillstring becomes dynamically unstable and excessive sustained vibrations occur. Unlike forced vibration, the force (i.e., weight on bit) that drives self-induced vibration is coupled with the response it produces. Many of the chatter principles derived in the machine tool industry are applicable to drilling. It is a simple matter to make changes to a machine tool to study the chatter phenomenon. This is not the case with drilling. Chatter occurs in field drilling due to the flexibility of the drillstring. Hence, laboratory setups must be made compliant to observe chatter.
NASA Astrophysics Data System (ADS)
R. Horche, Paloma; del Rio Campos, Carmina
2004-10-01
The proliferation of high-bandwidth applications has created a growing interest among network providers in upgrading networks to deliver broadband services to homes and small businesses. There must be a good balance between the total cost of the infrastructure and the services that can be offered to end users. Coarse Wavelength Division Multiplexing (CWDM) is an ideal solution to the tradeoff between cost and capacity. This technology uses all or part of the 1270 to 1610 nm wavelength range of the fiber, with an optical channel separation of about 20 nm. The problem in CWDM systems is that, for a given reach, the performance is not equal for all transmitted channels because of the very different fiber attenuation and dispersion characteristics of each channel. In this work, by means of an optical communication system design software package, we study a CWDM network configuration for lengths of up to 100 km in order to achieve low Bit Error Rate (BER) performance for all optical channels. We show that the type of fiber used has an impact both on the performance of the system and on the bit rate of each optical channel. In the study, we use the already laid and widely deployed single-mode ITU-T G.652 optical fiber, the latest "water-peak-suppressed" versions of the same fiber, and G.655 fibers. We have used two types of DML: one strongly dominated by adiabatic chirp and another strongly dominated by transient chirp. The analysis demonstrates that all the studied fibers have similar performance when the adiabatic-chirp-dominated laser is used for lengths of up to 40 km, and that fibers with a negative sign of dispersion have higher performance over long distances, at high bit rates, and throughout the spectral range analyzed. An important contribution of this work is the demonstration that, when DMLs are used, a dispersion accommodation occurs that is a function of fiber length, wavelength, and bit rate. This could endanger the quality of a CWDM system if it is not designed carefully.
Efficient bit sifting scheme of post-processing in quantum key distribution
NASA Astrophysics Data System (ADS)
Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong
2015-10-01
Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, of which the core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme is approaching the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means the proposed scheme can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement on the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
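The abstract does not spell out the source-coding algorithm here, but the underlying idea, announcing which time slots produced detections in a compressed form rather than as a full indicator string, can be sketched with simple gap (run-length) coding, assuming sparse detections.

```python
# Illustrative sketch of the bit-sifting idea: instead of announcing the full
# detection-indicator string, losslessly compress it. Run-length coding of the
# gaps between detections is a simple stand-in for the paper's source-coding
# algorithm (not specified here) and works well when detections are sparse.

import numpy as np


def encode_gaps(detections: np.ndarray) -> list[int]:
    """Gap lengths between successive detected slots."""
    idx = np.flatnonzero(detections)
    return np.diff(np.concatenate(([-1], idx))).tolist()


def decode_gaps(gaps: list[int], n_slots: int) -> np.ndarray:
    detections = np.zeros(n_slots, dtype=np.uint8)
    pos = -1
    for g in gaps:
        pos += g
        detections[pos] = 1
    return detections


rng = np.random.default_rng(0)
det = (rng.random(10_000) < 0.01).astype(np.uint8)   # ~1% detection probability
gaps = encode_gaps(det)
assert np.array_equal(decode_gaps(gaps, det.size), det)
print(f"{det.size} indicator bits -> {len(gaps)} gap symbols")
```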
Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator
NASA Technical Reports Server (NTRS)
Ranadive, V.; Sheridan, T. B.
1981-01-01
The product of Frame Rate (F) in frames per second, Resolution (R) in total pixels, and grayscale in bits (G) equals the transmission rate in bits per second. Thus, for a fixed channel capacity, there are tradeoffs between F, R, and G in the actual sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in the MASTER/SLAVE mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels and 16 levels (4 bits) grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution and grayscale could be independently reduced without preventing the operator from accomplishing his/her task. Threshold points were found beyond which degradation would prevent any successful performance. A general conclusion is that a well trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbits/sec.
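The stated relation, frame rate × resolution × grayscale depth = transmission rate in bits per second, can be explored directly. The snippet below scans example F, R, G combinations against the roughly 50 kbit/s threshold mentioned above; the candidate values are assumptions for illustration.

```python
# The abstract's relation: frame rate (frames/s) x resolution (pixels/frame)
# x grayscale depth (bits/pixel) = transmission rate (bits/s). A quick scan of
# example combinations that fit within roughly the 50 kbit/s threshold noted
# above; the candidate values below are illustrative assumptions.

BUDGET = 50_000  # bits per second

frame_rates = [1, 2, 4, 7, 14, 28]           # frames/s
resolutions = [32 * 32, 64 * 64, 128 * 128]  # pixels/frame
grayscales = [1, 2, 3, 4]                    # bits/pixel

for f in frame_rates:
    for r in resolutions:
        for g in grayscales:
            if f * r * g <= BUDGET:
                print(f"F={f:2d} fps, R={r:5d} px, G={g} bit -> {f * r * g:6d} bit/s")
```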
Note: optical receiver system for 152-channel magnetoencephalography.
Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong
2014-11-01
An optical receiver system comprising 13 serial-data restore/synchronizer modules and a single module combiner converted optical 32-bit serial data into 32-bit synchronous parallel data for a computer to acquire 152-channel magnetoencephalography (MEG) signals. Each serial-data restore/synchronizer module identified the 32 channel-voltage bits from 48-bit streaming serial data and then consecutively reproduced the 32-bit serial data 13 times, acting on a synchronous clock. After selecting a single one of the 13 reproduced data streams in each module, the module combiner converted it into 32-bit parallel data, which were carried to a 32-port digital input board in a computer. When the receiver system, together with optical transmitters, was applied to 152-channel superconducting quantum interference device sensors, this MEG system maintained a field noise level of 3 fT/√Hz @ 100 Hz at a sample rate of 1 kSample/s per channel.
Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception; for key focus regions, the bit allocation of the LCU is not ideal and the subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the bit allocation weight at the LCU level is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bit-rate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.
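A weight-driven LCU bit allocation of the kind described can be sketched as follows. The luminance and motion cues and the weight formula below are illustrative assumptions, not the paper's model; only the proportional split of the frame budget is the general mechanism.

```python
# Sketch of weight-driven LCU bit allocation: each LCU gets a perceptual weight
# from simple luminance- and motion-based cues, and the frame's bit budget is
# split in proportion to the weights. The weight formula is an illustrative
# assumption, not the paper's exact model.

import numpy as np


def lcu_weights(luma_mean: np.ndarray, motion_mag: np.ndarray) -> np.ndarray:
    # Illustrative cues: emphasize luminance extremes, de-emphasize strong motion.
    luminance_term = 1.0 + np.abs(luma_mean - 128.0) / 128.0
    motion_term = 1.0 / (1.0 + motion_mag)
    return luminance_term * motion_term


def allocate_bits(frame_budget: int, weights: np.ndarray) -> np.ndarray:
    return np.round(frame_budget * weights / weights.sum()).astype(int)


luma = np.array([40.0, 128.0, 200.0, 90.0])     # per-LCU mean luminance (assumed)
motion = np.array([0.2, 2.0, 0.1, 0.5])         # per-LCU motion magnitude (assumed)
print(allocate_bits(frame_budget=120_000, weights=lcu_weights(luma, motion)))
```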
Audiovisual signal compression: the 64/P codecs
NASA Astrophysics Data System (ADS)
Jayant, Nikil S.
1996-02-01
Video codecs operating at integral multiples of 64 kbps are well-known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform -- quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voice band and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely-sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps), and voiceband modems that represent high (32 kbps), medium (16 kbps) and low- end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non- submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for sub-multiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance. Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality even in the very best of these systems. In a related part of our talk, we discuss the role of preprocessing and postprocessing subsystems which serve to enhance the performance of an otherwise standard codec. Examples of these (sometimes proprietary) subsystems are automatic face-tracking prior to the coding of a head-and-shoulders scene, and adaptive postfiltering after conventional decoding, to reduce generic classes of artifacts in low bit rate video. The talk concludes with a summary of technology targets and research directions. We discuss targets in terms of four fundamental parameters of coder performance: quality, bit rate, delay and complexity; and we emphasize the need for measuring and maximizing the composite quality of the audiovisual signal. In discussing research directions, we examine progress and opportunities in two fundamental approaches for bit rate reduction: removal of statistical redundancy and reduction of perceptual irrelevancy; we speculate on the value of techniques such as analysis-by-synthesis that have proved to be quite valuable in speech coding, and we examine the prospect of integrating speech and image processing for developing next-generation technology for audiovisual communications.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
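Transfer-function bounds of this kind reduce, after truncation, to a union bound over the code's distance spectrum, Pb ≤ Σ_d B_d Q(√(2dR·Eb/N0)). The sketch below evaluates such a truncated bound; the spectrum values used are the commonly quoted ones for the NASA-standard (7,1/2) code but are included only as an assumption to be checked against published tables.

```python
# Union-bound sketch of the kind of BER bound described above, for a rate-1/n
# convolutional code with coherent BPSK on the AWGN channel:
#     Pb <= sum_d  B_d * Q( sqrt(2 * d * R * Eb/N0) )
# The truncated distance spectrum below (d_free = 10) is the commonly quoted
# one for the NASA-standard (7,1/2) code, included here as an illustrative
# assumption to be verified against published tables.

import math


def q_func(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))


def ber_union_bound(spectrum: dict[int, float], rate: float, ebno_db: float) -> float:
    ebno = 10 ** (ebno_db / 10.0)
    return sum(b_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, b_d in spectrum.items())


SPECTRUM_K7_R12 = {10: 36, 12: 211, 14: 1404, 16: 11633, 18: 77433}  # assumed B_d

for ebno_db in (3.0, 4.0, 5.0, 6.0):
    print(f"Eb/N0 = {ebno_db} dB -> Pb <= {ber_union_bound(SPECTRUM_K7_R12, 0.5, ebno_db):.2e}")
```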
NASA Technical Reports Server (NTRS)
Carts, M. A.; Marshall, P. W.; Reed, R.; Curie, S.; Randall, B.; LaBel, K.; Gilbert, B.; Daniel, E.
2006-01-01
Serial Bit Error Rate Testing under radiation to characterize single particle induced errors in high-speed IC technologies generally involves specialized test equipment common to the telecommunications industry. As bit rates increase, testing is complicated by the rapidly increasing cost of equipment able to test at-speed. Furthermore as rates extend into the tens of billions of bits per second test equipment ceases to be broadband, a distinct disadvantage for exploring SEE mechanisms in the target technologies. In this presentation the authors detail the testing accomplished in the CREST project and apply the knowledge gained to establish a set of guidelines suitable for designing arbitrarily high speed radiation effects tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alan Black; Arnis Judzis
2003-10-01
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commence planning for Phase 1 testing--recommendations for bits and fluids.
A novel image encryption algorithm using chaos and reversible cellular automata
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Luan, Dapeng
2013-11-01
In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. In this algorithm, an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata are used. We split each pixel of the image into units of 4 bits, then adopt a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are applied over many iterations to achieve diffusion at the bit level; only the higher 4 bits of a pixel are considered because they carry almost all the information of the image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. This algorithm belongs to the class of symmetric systems.
Characteristics of Single-Event Upsets in a Fabric Switch (ADS151)
NASA Technical Reports Server (NTRS)
Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.
2003-01-01
Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts with the average number of bits in a burst being dependent on both the ion LET and on the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second rank latch containing the data specifying the state of each switch in the 33x17 matrix.
High speed, very large (8 megabyte) first in/first out buffer memory (FIFO)
Baumbaugh, Alan E.; Knickerbocker, Kelly L.
1989-01-01
A fast FIFO (First In First Out) memory buffer capable of storing data at rates of 100 megabytes per second. The invention includes a data packer which concatenates small bit data words into large bit data words, a memory array having individual data storage addresses adapted to store the large bit data words, a data unpacker into which large bit data words from the array can be read and reconstructed into small bit data words, and a controller to control and keep track of the individual data storage addresses in the memory array into which data from the packer is being written and data to the unpacker is being read.
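The packer/unpacker pair described in the abstract can be modeled in a few lines. The sketch below assumes 8-bit input words packed into 32-bit memory words, little-end first; the widths and ordering are assumptions for illustration.

```python
# Sketch of the packer/unpacker idea from the abstract: small words are
# concatenated into wide memory words for storage and later reconstructed.
# The 8-bit-in / 32-bit-out widths and little-end-first ordering are assumed.

SMALL_BITS, WIDE_BITS = 8, 32
PER_WORD = WIDE_BITS // SMALL_BITS


def pack(small_words: list[int]) -> list[int]:
    wide = []
    for i in range(0, len(small_words), PER_WORD):
        word = 0
        for j, s in enumerate(small_words[i:i + PER_WORD]):
            word |= (s & ((1 << SMALL_BITS) - 1)) << (j * SMALL_BITS)
        wide.append(word)
    return wide


def unpack(wide_words: list[int], count: int) -> list[int]:
    small = []
    for word in wide_words:
        for j in range(PER_WORD):
            small.append((word >> (j * SMALL_BITS)) & ((1 << SMALL_BITS) - 1))
    return small[:count]


data = [0x11, 0x22, 0x33, 0x44, 0x55]
packed = pack(data)                    # [0x44332211, 0x00000055]
assert unpack(packed, len(data)) == data
```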
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into coarse phase and fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on the split-capacitor array with an attenuation capacitor. Analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has average power consumption of 128 μW and a conventional rate 6 times higher than the conventional SS ADC. A high-quality image, captured at the line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
NASA Technical Reports Server (NTRS)
Gunawardena, J. A.
1992-01-01
This cache mechanism is transparent but does not contain associative circuits. It does not rely on locality of reference of instructions or data. No redundant instructions or data are encached. Items in the cache are accessed without address arithmetic. A cache miss is detected by the simplest test; compare two bits. These features would result in faster access, higher hit rate, reduced chip area, and less power dissipation in comparison with associative systems of similar size.
High Temperature 300°C Directional Drilling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Kamalesh; Aaron, Dick; Macpherson, John
2015-07-31
Many countries around the world, including the USA, have untapped geothermal energy potential. Enhanced Geothermal Systems (EGS) technology is needed to economically utilize this resource. Temperatures in some EGS reservoirs can exceed 300°C. To effectively utilize EGS resources, an array of injector and production wells must be accurately placed in the formation fracture network. This requires a high temperature directional drilling system. Most commercial services for directional drilling systems are rated for 175°C while geothermal wells require operation at much higher temperatures. Two U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) projects have been initiated to develop a 300°C capable directional drilling system, the first developing a drill bit, directional motor, and drilling fluid, and the second adding navigation and telemetry systems. This report is for the first project, “High Temperature 300°C Directional Drilling System, including drill bit, directional motor and drilling fluid, for enhanced geothermal systems,” award number DE-EE0002782. The drilling system consists of a drill bit, a directional motor, and drilling fluid. The DOE deliverables are three prototype drilling systems. We have developed three drilling motors; we have developed four roller-cone and five Kymera® bits; and finally, we have developed a 300°C stable drilling fluid, along with a lubricant additive for the metal-to-metal motor. Metal-to-metal directional motors require coatings to the rotor and stator for wear and corrosion resistance, and this coating research has been a significant part of the project. The drill bits performed well in the drill bit simulator test, and the complete drilling system has been tested drilling granite at Baker Hughes’ Experimental Test Facility in Oklahoma. The metal-to-metal motor was additionally subjected to a flow loop test in Baker Hughes’ Celle Technology Center in Germany, where it ran for more than 100 hours.
Security proof of a three-state quantum-key-distribution protocol without rotational symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fung, C.-H.F.; Lo, H.-K.
2006-10-15
Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states |0_z⟩ and |1_z⟩ can contribute to key generation, and the third state |+⟩ = (|0_z⟩ + |1_z⟩)/√2 is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by building on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result on the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modeste Nguimdo, Romain, E-mail: Romain.Nguimdo@vub.ac.be; Tchitnga, Robert; Woafo, Paul
We numerically investigate the possibility of using a coupling to increase the complexity in the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that the complex behaviors generated in such coupled systems, together with the post-processing, are suitable for generating bit-streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) from such chaotic signals, each point being simultaneously converted into 16 bits (or 8 bits), we find that the binary sequence constructed by including the 10 (or 2) least significant bits passes the statistical tests of randomness, meaning that bit-streams with random properties can be achieved with an overall bit rate of up to 10 × 100 Mb/s = 1 Gbit/s (or 2 × 100 Mb/s = 200 Mbit/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap as compared to optical and electro-optical systems.
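The post-processing step described above, keeping only the least significant bits of each digitized sample, is easy to sketch. The snippet below uses a logistic map as a stand-in for the coupled oscillators; the 16-bit quantization and the 10 retained bits mirror the abstract's numbers, but the signal model itself is only an assumption.

```python
# Sketch of the post-processing step described above: sample a chaotic signal
# at fixed intervals, digitize each sample to 16 bits, and keep only the least
# significant bits of every sample as the random bit stream. A logistic map is
# used here as a stand-in for the coupled electronic oscillators.

import numpy as np


def chaotic_samples(n: int, x0: float = 0.345, r: float = 3.99) -> np.ndarray:
    x = np.empty(n)
    val = x0
    for i in range(n):
        val = r * val * (1.0 - val)      # logistic map as chaos surrogate
        x[i] = val
    return x


def extract_bits(samples: np.ndarray, adc_bits: int = 16, keep_lsb: int = 10) -> str:
    codes = np.floor(samples * (2 ** adc_bits)).astype(np.int64) & (2 ** adc_bits - 1)
    return "".join(format(int(c) & (2 ** keep_lsb - 1), f"0{keep_lsb}b") for c in codes)


stream = extract_bits(chaotic_samples(1000))
print(len(stream), "bits; ones fraction:", stream.count("1") / len(stream))
```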
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG formatted color images may allow for more compressed images while maintaining equivalent quality at a smaller file size or bitrate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually, and objectively, as recorded in the computed PSNR values.
Effects of pre-conditioning on behavior and physiology of horses during a standardised learning task
Webb, Holly; Starling, Melissa J.; Freire, Rafael; Buckley, Petra; McGreevy, Paul D.
2017-01-01
Rein tension is used to apply pressure to control both ridden and unridden horses. The pressure is delivered by equipment such as the bit, which may restrict voluntary movement and cause changes in behavior and physiology. Managing the effects of such pressure on arousal level and behavioral indicators will optimise horse learning outcomes. This study examined the effect of training horses to turn away from bit pressure on cardiac outcomes and behavior (including responsiveness) over the course of eight trials in a standardised learning task. The experimental procedure consisted of a resting phase, treatment/control phase, standardised learning trials requiring the horses (n = 68) to step backwards in response to bit pressure and a recovery phase. As expected, heart rate increased (P = 0.028) when the handler applied rein tension during the treatment phase. The amount of rein tension required to elicit a response during treatment was higher on the left than the right rein (P = 0.009). Total rein tension required for trials reduced (P < 0.001) as they progressed, as did time taken (P < 0.001) and steps taken (P < 0.001). The incidence of head tossing decreased (P = 0.015) with the progression of the trials and was higher (P = 0.018) for the control horses than the treated horses. These results suggest that preparing the horses for the lesson and slightly raising their arousal levels, improved learning outcomes. PMID:28358892
Uokawa, Y; Yonezawa, Y; Caldwell, W M; Hahn, A W
2000-01-01
A data acquisition system employing a low power 8 bit microcomputer has been developed for heart rate variability monitoring before, during and after bathing. The system consists of three integral chest electrodes, two temperature sensors, an instrumentation amplifier, a low power 8-bit single chip microcomputer (SMC) and a 4 MB compact flash memory (CFM). The ECG from the electrodes is converted to an 8-bit digital format at a 1 ms rate by an A/D converter in the SMC. Both signals from the body and ambient temperature sensors are converted to an 8-bit digital format every 1 second. These data are stored by the CFM. The system is powered by a rechargeable 3.6 V lithium battery. The 4 x 11 x 1 cm system is encapsulated in epoxy and silicone, yielding a total volume of 44 cc. The weight is 100 g.
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh K.; Schlegelmilch, Richard F.; Petrik, Edward J.; Walters, Jerry L.
1991-01-01
A software application to assist end-users of the link evaluation terminal (LET) for satellite communications is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving (220/110 Mbps) capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit-pattern as a modulated signal to the satellite. The HBR LET can determine the bit error rate (BER) under various atmospheric conditions by comparing the transmitted bit pattern with the received bit pattern. An algorithm for power augmentation will be applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions.
A wide bandwidth CCD buffer memory system
NASA Technical Reports Server (NTRS)
Siemens, K.; Wallace, R. W.; Robinson, C. R.
1978-01-01
A prototype system was implemented to demonstrate that CCD's can be applied advantageously to the problem of low power digital storage and particularly to the problem of interfacing widely varying data rates. CCD shift register memories (8K bit) were used to construct a feasibility model 128 K-bit buffer memory system. Serial data that can have rates between 150 kHz and 4.0 MHz can be stored in 4K-bit, randomly-accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not increase proportionally to increase in memory capacity, the power requirements per bit of storage can be reduced significantly in a larger system.
Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang
2010-08-16
Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: a high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information-theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
Single and Multi-Pulse Low-Energy Conical Theta Pinch Inductive Pulsed Plasma Thruster Performance
NASA Technical Reports Server (NTRS)
Hallock, Ashley K.; Martin, Adam; Polzin, Kurt; Kimberlin, Adam; Eskridge, Richard
2013-01-01
Fabricated and tested CTP IPPTs at cone angles of 20deg, 38deg, and 60deg, and performed direct single-pulse impulse bit measurements with continuous gas flow. Single pulse performance highest for 38deg angle with impulse bit of approx.1 mN-s for both argon and xenon. Estimated efficiencies low, but not unexpectedly so based on historical data trends and the direction of the force vector in the CTP. Capacitor charging system assembled to provide rapid recharging of capacitor bank, permitting repetition-rate operation. IPPT operated at repetition-rate of 5 Hz, at maximum average power of 2.5 kW, representing to our knowledge the highest average power for a repetitively-pulsed thruster. Average thrust in repetition-rate mode (at 5 kV, 75 sccm argon) was greater than simply multiplying the single-pulse impulse bit and the repetition rate.
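The repetition-rate figures above are consistent with the simple relation average thrust ≈ impulse bit × pulse rate (the abstract notes the measured value was actually somewhat higher). A quick check with the quoted numbers:

```python
# The average thrust quoted for repetition-rate operation follows from the
# simple relation  thrust ~ impulse_bit * pulse_rate; the abstract notes the
# measured average thrust actually exceeded this product.

impulse_bit = 1.0e-3   # N*s, ~1 mN-s single-pulse impulse bit (from the abstract)
rep_rate = 5.0         # Hz, repetition rate
avg_power = 2.5e3      # W, quoted maximum average power

avg_thrust = impulse_bit * rep_rate                          # N
print(f"lower-bound average thrust ~ {avg_thrust * 1e3:.1f} mN")
print(f"thrust-to-power ~ {avg_thrust * 1e3 / (avg_power / 1e3):.1f} mN/kW")
```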
High Rate Digital Demodulator ASIC
NASA Technical Reports Server (NTRS)
Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew
1998-01-01
The architecture of a High Rate (600 Mega-bits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates necessitating a hardware implementation in other than CMOS technology, such as Gallium Arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Mega-bits per second (Mbps) per channel. This paper will provide an overview of the parallel architecture and features of the HRDD ASIC. In addition, this paper will provide an overview of the implementation of the hardware architectures used to create flexibility over conventional high rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion it will be shown how this high rate digital demodulator can be used with an off-the-shelf A/D and a flexible analog front end, both of which are numerically computer controlled, to produce a very flexible, low cost high rate digital receiver.
Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications
2010-09-01
Bit error rate performance of the four proposed CPM modes, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver with both inner... [Figure/table residue: Figure 1, bit error rate performance of various CPM modes with coherent (crosses) and noncoherent (diamonds) detection; Figure 3, the corresponding relationship... symbols; Table 2 summarises the parameters.]
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.
2011-03-01
We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. In agreement with previously reported results, we observe in our system that the real-world 8-bit image shows more artifacts than the image acquired by numerically truncating the raw 14-bit image data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range, which can manifest itself as an increase in image artefacts due to strong Fresnel reflection.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. This technique has concentrated on the spatial uncompressed domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal solution of the bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
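For context, the BTC (absolute-moment variant) representation that such schemes embed into, a bitmap plus two block reconstruction levels, can be sketched as below; the LSB-substitution step itself is omitted, and this is not the authors' algorithm.

```python
# Minimal (A)BTC compression sketch for one grayscale block: each 4x4 block is
# represented by a bitmap plus two reconstruction levels (low/high means).
# This shows the quantities the data-hiding scheme above embeds into; the LSB
# embedding step itself is omitted here.

import numpy as np


def btc_encode_block(block: np.ndarray):
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, int(round(low)), int(round(high))


def btc_decode_block(bitmap: np.ndarray, low: int, high: int) -> np.ndarray:
    return np.where(bitmap, high, low).astype(np.uint8)


block = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
bitmap, low, high = btc_encode_block(block)
recon = btc_decode_block(bitmap, low, high)
print(low, high, np.abs(block.astype(int) - recon.astype(int)).mean())
```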
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at 10^-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
Subjective quality evaluation of low-bit-rate video
NASA Astrophysics Data System (ADS)
Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.
2001-06-01
A subjective quality evaluation was performed to quantify viewer responses to visual defects that appear in low bit rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion compensated encoders - Sorenson Video, H.263+ and a Wavelet based coder - operating at five bit/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers were collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.
A Dynamic Model for C3 Information Incorporating the Effects of Counter C3
1980-12-01
birth and death rates exactly cancel one another and H = 0. Although this simple first-order linear system is not very sophisticated, we see... per hour and refer to the average behavior of the entire system ensemble, much as species birth and death rates are typically measured in births (or... per unit time); iii) uncertainty death rates resulting from data inputs (bits/bit per unit time); iv) counter-C3...
A bandwidth efficient coding scheme for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J., Jr.
1991-01-01
As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3, ν = 6 (with 2^ν = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme that has as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16 state rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
Dimitriadis, Stavros I; Marimpis, Avraam D
2018-01-01
A brain-computer interface (BCI) is a channel of communication that transforms brain activity into specific commands for manipulating a personal computer or other home or electrical devices. In other words, a BCI is an alternative way of interacting with the environment by using brain activity instead of muscles and nerves. For that reason, BCI systems are of high clinical value for targeted populations suffering from neurological disorders. In this paper, we present a new processing approach in three publicly available BCI data sets: (a) a well-known multi-class ( N = 6) coded-modulated Visual Evoked potential (c-VEP)-based BCI system for able-bodied and disabled subjects; (b) a multi-class ( N = 32) c-VEP with slow and fast stimulus representation; and (c) a steady-state Visual Evoked potential (SSVEP) multi-class ( N = 5) flickering BCI system. Estimating cross-frequency coupling (CFC) and namely δ-θ [δ: (0.5-4 Hz), θ: (4-8 Hz)] phase-to-amplitude coupling (PAC) within sensor and across experimental time, we succeeded in achieving high classification accuracy and Information Transfer Rates (ITR) in the three data sets. Our approach outperformed the originally presented ITR on the three data sets. The bit rates obtained for both the disabled and able-bodied subjects reached the fastest reported level of 324 bits/min with the PAC estimator. Additionally, our approach outperformed alternative signal features such as the relative power (29.73 bits/min) and raw time series analysis (24.93 bits/min) and also the original reported bit rates of 10-25 bits/min . In the second data set, we succeeded in achieving an average ITR of 124.40 ± 11.68 for the slow 60 Hz and an average ITR of 233.99 ± 15.75 for the fast 120 Hz. In the third data set, we succeeded in achieving an average ITR of 106.44 ± 8.94. Current methodology outperforms any previous methodologies applied to each of the three free available BCI datasets.
Low latency counter event indication
Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY
2008-09-16
A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
Low latency counter event indication
Gara, Alan G.; Salapura, Valentina
2010-08-24
A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
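A behavioral software model of the hybrid counter described in these two abstracts might look like the following sketch; the 12-bit low-order width is an assumption, and the model only mirrors the arm-then-fire-on-next-rollover behavior, not the patented hardware.

```python
# Behavioral software model (a sketch, not the patented hardware) of the hybrid
# counter described above: a narrow low-order counter with an overflow event,
# a memory word holding the higher-order bits, and an "interrupt arm" bit that
# causes the interrupt to fire on the next roll-over of the low-order bits.

LOW_BITS = 12  # width of the low-order hardware counter (assumed)


class HybridCounter:
    def __init__(self, threshold: int):
        self.low = 0                 # low-order counter bits
        self.high = 0                # higher-order bits kept in memory
        self.armed = False           # "interrupt arm" bit
        self.threshold = threshold   # interrupt threshold on the high word
        self.interrupted = False

    def count(self, events: int = 1):
        for _ in range(events):
            self.low += 1
            if self.low == (1 << LOW_BITS):      # roll-over of low-order bits
                self.low = 0
                if self.armed:
                    self.interrupted = True      # fast interrupt indication
                self.high += 1                   # increment high-order word
                if self.high == self.threshold:
                    self.armed = True


c = HybridCounter(threshold=2)
c.count(3 * (1 << LOW_BITS))
print(c.high, c.armed, c.interrupted)            # 3 True True
```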
NASA Astrophysics Data System (ADS)
Grois, Dan; Marpe, Detlev; Nguyen, Tung; Hadar, Ofer
2014-09-01
The popularity of low-delay video applications dramatically increased over the last years due to a rising demand for realtime video content (such as video conferencing or video surveillance), and also due to the increasing availability of relatively inexpensive heterogeneous devices (such as smartphones and tablets). To this end, this work presents a comparative assessment of the two latest video coding standards: H.265/MPEG-HEVC (High-Efficiency Video Coding), H.264/MPEG-AVC (Advanced Video Coding), and also of the VP9 proprietary video coding scheme. For evaluating H.264/MPEG-AVC, an open-source x264 encoder was selected, which has a multi-pass encoding mode, similarly to VP9. According to experimental results, which were obtained by using similar low-delay configurations for all three examined representative encoders, it was observed that H.265/MPEG-HEVC provides significant average bit-rate savings of 32.5%, and 40.8%, relative to VP9 and x264 for the 1-pass encoding, and average bit-rate savings of 32.6%, and 42.2% for the 2-pass encoding, respectively. On the other hand, compared to the x264 encoder, typical low-delay encoding times of the VP9 encoder, are about 2,000 times higher for the 1-pass encoding, and are about 400 times higher for the 2-pass encoding.
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.
Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W
2014-01-27
We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in an 8% reduction in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
An online hybrid BCI system based on SSVEP and EMG
NASA Astrophysics Data System (ADS)
Lin, Ke; Cinetto, Andrea; Wang, Yijun; Chen, Xiaogang; Gao, Shangkai; Gao, Xiaorong
2016-04-01
Objective. A hybrid brain-computer interface (BCI) is a device combined with at least one other communication system that takes advantage of both parts to build a link between humans and machines. To increase the number of targets and the information transfer rate (ITR), electromyogram (EMG) and steady-state visual evoked potential (SSVEP) signals were combined to implement a hybrid BCI. A multi-choice selection method based on EMG was developed to enhance the system performance. Approach. A 60-target hybrid BCI speller was built in this study. A single trial was divided into two stages: a stimulation stage and an output selection stage. In the stimulation stage, SSVEP and EMG were used together. Every stimulus flickered at its given frequency to elicit SSVEP. All of the stimuli were divided equally into four sections with the same frequency set, so the frequency of each stimulus within a section was different. SSVEPs were used to discriminate targets within the same section, while the sections themselves were classified using EMG signals from the forearm. Subjects were asked to make a different number of fists according to the target section. Canonical correlation analysis (CCA) and mean filtering were used to classify the SSVEP and EMG signals, respectively. In the output selection stage, the top two candidate outputs were given. The first choice, with the highest classification probability, was the default output of the system. Subjects were required to make a fist to select the second choice only if the second choice was correct. Main results. The online results obtained from ten subjects showed that the mean classification accuracy and ITR were 81.0% and 83.6 bits min-1, respectively, using only the first-choice selection. The ITR of the hybrid system was significantly higher than that of either single modality (EMG: 30.7 bits min-1, SSVEP: 60.2 bits min-1). After the addition of the second-choice selection and the correction task, the classification accuracy and ITR were enhanced to 85.8% and 90.9 bits min-1. Significance. These results suggest that the hybrid system proposed here is suitable for practical use.
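For readers unfamiliar with the CCA step mentioned above, the following Python sketch shows one conventional way to score an SSVEP epoch against sinusoidal reference templates and pick the most likely stimulus frequency; the function name, harmonic count and use of scikit-learn's CCA are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of CCA-based SSVEP frequency classification.
# eeg: samples x channels; references per frequency: sin/cos harmonics.
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_classify(eeg, freqs, fs, n_harmonics=2):
    """Return the index of the stimulus frequency with the largest canonical correlation."""
    n = eeg.shape[0]
    t = np.arange(n) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * (h + 1) * f * t)
             for h in range(n_harmonics) for fn in (np.sin, np.cos)]
        )
        cca = CCA(n_components=1)
        cca.fit(eeg, ref)
        u, v = cca.transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))
```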
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian
2018-06-01
In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, the quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security, since the 5D hyper-chaotic system has more complex dynamic behavior and better randomness and unpredictability than low-dimensional chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.
Methodology and method and apparatus for signaling with capacity optimized constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2011-01-01
A communication system having a transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence a compression scheme is needed, and one as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. The FBI therefore chose in 1993 a compression scheme based on a wavelet transform followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published specification defines a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that is better grounded in theory. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
High-speed reconstruction of compressed images
NASA Astrophysics Data System (ADS)
Cox, Jerome R., Jr.; Moore, Stephen M.
1990-07-01
A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described, showing that error-free reconstruction of the original 10-bit CR images can be achieved.
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals received by coherent demodulation. Based on SR theory, a nonlinear receiver model is established and used to receive 2DPSK signals under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB), and it is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
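The paper's receiver model is not reproduced here, but a generic overdamped double-well (bistable) element is the standard building block of such SR receivers. The sketch below, with illustrative parameters a, b and dt that are not taken from the paper, shows how a weak noisy baseband waveform can be passed through such a nonlinearity before a threshold decision.

```python
# Generic sketch of a bistable stochastic-resonance element
# (dx/dt = a*x - b*x**3 + input); parameters are illustrative, not from the paper.
import numpy as np

def sr_filter(signal, a=1.0, b=1.0, dt=1e-3):
    """Pass a noisy baseband waveform through an overdamped double-well system."""
    x = np.zeros(len(signal))
    for k in range(1, len(signal)):
        drift = a * x[k - 1] - b * x[k - 1] ** 3
        x[k] = x[k - 1] + dt * (drift + signal[k - 1])
    return x

# Example: a weak antipodal baseband waveform (stand-in for demodulated 2DPSK)
# buried in noise, followed by a per-bit threshold decision.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 50)
baseband = np.repeat(2 * bits - 1, 200) * 0.3 + rng.normal(0, 1.0, 50 * 200)
decision = sr_filter(baseband).reshape(50, 200).mean(axis=1) > 0
```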
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
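To make the fixed-rate idea concrete, the toy Python fragment below quantizes independent 4x4 blocks of a 2-D array to a fixed per-block bit budget so that any block can be addressed by index. It deliberately omits the paper's lifted block transform and embedded coding, and it ignores the per-block header overhead, so it should be read as a conceptual sketch only.

```python
# Toy illustration of fixed-rate, block-granular access (not the paper's codec):
# each 4x4 block is uniformly quantized to a user-specified number of bits.
import numpy as np

def compress_block(block, bits_per_block):
    bits_per_value = bits_per_block // block.size
    lo, hi = block.min(), block.max()
    levels = (1 << bits_per_value) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((block - lo) / scale).astype(np.uint32)
    return lo, scale, q                      # fixed-size payload per block (headers ignored)

def decompress_block(lo, scale, q):
    return lo + q * scale

data = np.random.default_rng(1).standard_normal((64, 64))
blocks = data.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 4, 4)
recon = np.stack([decompress_block(*compress_block(b, 128)) for b in blocks])
```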
Fast and memory efficient text image compression with JBIG2.
Ye, Yan; Cosman, Pamela
2003-01-01
In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
Traditional encryption techniques require packet overhead, introduce processing delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit error rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit error and the throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared to those of the conventional Advanced Encryption Standard (AES).
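A minimal sketch of the frame-splitting idea follows, under the assumption of a toy XOR keystream standing in for a real cipher and a textbook Hamming(7,4) code: only the small selected slice is "encrypted" and FEC-protected, while the remainder of the frame is passed on for the signaling transformation (not modeled here). All names and sizes are illustrative.

```python
# Conceptual sketch: encrypt and Hamming(7,4)-protect only a small slice of each frame.
# The XOR "cipher" is a stand-in for a real cipher such as AES.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # Hamming(7,4) generator matrix [I | P]

def hamming74_encode(bits):
    nibbles = np.asarray(bits).reshape(-1, 4)
    return (nibbles @ G) % 2

def protect_frame(frame_bits, enc_len, keystream):
    enc_part = frame_bits[:enc_len] ^ keystream[:enc_len]   # "encrypted" slice (placeholder cipher)
    rest = frame_bits[enc_len:]                              # signaled, not encrypted
    return np.concatenate([hamming74_encode(enc_part).ravel(), rest])

rng = np.random.default_rng(2)
frame = rng.integers(0, 2, 256)
tx = protect_frame(frame, enc_len=32, keystream=rng.integers(0, 2, 32))
```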
Analog Correlator Based on One Bit Digital Correlator
NASA Technical Reports Server (NTRS)
Prokop, Norman (Inventor); Krasowski, Michael (Inventor)
2017-01-01
A two input time domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them. The resulting Hamming distance approximates the correlation time delay with high accuracy.
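The following Python sketch mimics the described processing chain in software: both inputs are hard-limited to 1-bit streams by an adaptive threshold (the median is used here as an assumption), and the delay is estimated as the lag that minimizes the Hamming distance.

```python
# Software sketch of the 1-bit correlator: hard-limit both inputs and sweep the
# Hamming distance over lags to estimate the delay between them.
import numpy as np

def one_bit_delay(x, y, max_lag):
    """Return the lag (in samples) that minimizes the Hamming distance."""
    bx = (x > np.median(x)).astype(np.uint8)   # adaptive threshold (median here)
    by = (y > np.median(y)).astype(np.uint8)
    n = min(len(bx), len(by)) - max_lag
    dists = [np.count_nonzero(bx[:n] ^ by[lag:lag + n]) for lag in range(max_lag)]
    return int(np.argmin(dists))

rng = np.random.default_rng(3)
sig = rng.standard_normal(5000)
delayed = np.roll(sig, 37) + 0.5 * rng.standard_normal(5000)
print(one_bit_delay(sig, delayed, max_lag=100))   # expected: approximately 37
```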
Parallel efficient rate control methods for JPEG 2000
NASA Astrophysics Data System (ADS)
Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko
2017-09-01
Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently to optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them was designed with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gives an extra 40% speedup in the situations where it is actually employed.
High resolution A/D conversion based on piecewise conversion at lower resolution
Terwilliger, Steve [Albuquerque, NM]
2012-06-05
Piecewise conversion of an analog input signal is performed utilizing a plurality of relatively lower bit resolution A/D conversions. The results of this piecewise conversion are interpreted to achieve a relatively higher bit resolution A/D conversion without sampling frequency penalty.
Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann
2013-06-01
Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary outputs of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we find that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights these drawbacks and improves upon DROBA with a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture misdetected features and to optimize the bit allocation of underdiscretized features, and 2) a genuine interval concealment technique to alleviate the crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
Hard-rock jetting. Part 2. Rock type decides jetting economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pols, A.C.
1977-02-07
In Part 2, Koninklijke Shell Exploratie en Produktie Laboratorium presents the results of jet-drilling laminated formations. Shell concludes that (1) hard, laminated rock cannot be jet-drilled satisfactorily without additional mechanical cutting aids, (2) the increase in penetration rate with bit-pressure drop is much lower for impermeable rock than it is for permeable rock, (3) drilling mud can have either a positive or a negative effect on penetration rate in comparison with water, depending on the material drilled, and (4) hard, isotropic, sedimentary, impermeable rock can be drilled using jets at higher rates than with conventional means. However, jetting becomes profitable only in the case of expensive rigs.
Quantum key distribution in a multi-user network at gigahertz clock rates
NASA Astrophysics Data System (ADS)
Fernandez, Veronica; Gordon, Karen J.; Collins, Robert J.; Townsend, Paul D.; Cova, Sergio D.; Rech, Ivan; Buller, Gerald S.
2005-07-01
In recent years quantum information research has led to the discovery of a number of remarkable new paradigms for information processing and communication. These developments include quantum cryptography schemes that offer unconditionally secure information transport guaranteed by quantum-mechanical laws. Such potentially disruptive security technologies could be of high strategic and economic value in the future. Two major issues confronting researchers in this field are the transmission range (typically <100 km) and the key exchange rate, which can be as low as a few bits per second at long optical fiber distances. This paper describes further research into an approach to significantly enhance the key exchange rate in an optical fiber system at distances in the range of 1-20 km. We present results for a number of application scenarios, including point-to-point links and multi-user networks. Quantum key distribution systems have been developed which use standard telecommunications optical fiber and which are capable of operating at clock rates of up to 2 GHz. They implement a polarization-encoded version of the B92 protocol and employ vertical-cavity surface-emitting lasers with emission wavelengths of 850 nm as weak coherent light sources, as well as silicon single-photon avalanche diodes as the single-photon detectors. The point-to-point quantum key distribution system exhibited a quantum bit error rate of 1.4% and an estimated net bit rate greater than 100,000 bit/s for a 4.2 km transmission range.
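As a rough companion to the reported figures, the snippet below shows how a disclosed sample of sifted key bits yields a QBER estimate and, via a simple binary-entropy bound, a crude secure-rate estimate. The bound and function names are illustrative assumptions, not the error-correction and privacy-amplification procedure actually used in the system.

```python
# Back-of-envelope sketch: QBER from a disclosed key subset and a rough
# secure-rate estimate using a binary-entropy bound (illustrative only).
import numpy as np

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def qber_and_secure_rate(alice_bits, bob_bits, raw_rate_bps):
    qber = np.mean(alice_bits != bob_bits)
    secure_fraction = max(0.0, 1.0 - 2.0 * binary_entropy(qber))  # crude bound
    return qber, raw_rate_bps * secure_fraction
```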
Strategies for the Evolution of Sex
NASA Astrophysics Data System (ADS)
Erzan, Ayse
2002-03-01
Using a bit-string model of evolution we find a successful route to diploidy and sex in simple organisms, for a step-like fitness function. Assuming that an excess of deleterious mutations triggers the conversion of haploids to diploidy and sex, we find that only one pair of sexual organisms can take over a finite population, if they engage in sexual reproduction under unfavorable conditions, and otherwise perform mitosis. Then, a haploid-diploid (HD) cycle is established, with an abbreviated haploid phase, as in present day sexual reproduction. If crossover is allowed during meiosis, HD cycles of arbitrary duration can be maintained. We find that the sexual population has a higher mortality rate than asexual diploids, but also a relaxation rate that is an order of magnitude higher. As a result, sexuals have a higher adaptability and lower mutational load on the average, since they can select out the undesirable genes much faster.
Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojtanowicz, A.K.; Kuru, E.
1993-12-01
An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin
2018-01-01
We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.
Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien
2016-01-01
A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints. PMID:27579033
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic; Field, Christopher
1990-01-01
A 50 Mbps direct detection optical communication system for use in an intersatellite link was constructed with an AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector. The system used a Q = 4 PPM format. The receiver consisted of a maximum likelihood PPM detector and a timing recovery subsystem. The PPM slot clock was recovered at the receiver by using a transition detector followed by a PLL. The PPM word clock was recovered by using a second PLL whose input was derived from the presence of back-to-back PPM pulses contained in the received random PPM pulse sequences. The system achieved a bit error rate of 0.000001 at less than 50 detected signal photons/information bit. The receiver was capable of acquiring and maintaining slot and word synchronization for received signal levels greater than 20 photons/information bit, at which the receiver bit error rate was about 0.01.
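A software sketch of the maximum-likelihood PPM word decision described above (choose the slot with the largest photon count, then map the slot index to two bits for Q = 4); the array layout and function name are assumptions for illustration.

```python
# Sketch of ML detection for Q = 4 PPM: pick the slot with the largest photon
# count in each word, then map the slot index to two bits.
import numpy as np

def ppm_detect(slot_counts):
    """slot_counts: array of shape (n_words, 4) of detected photons per slot."""
    slots = np.argmax(slot_counts, axis=1)             # ML decision for Poisson counts
    bits = (slots[:, None] >> np.array([1, 0])) & 1    # slot index -> 2 bits/word (MSB first)
    return bits.reshape(-1)

counts = np.array([[12, 1, 0, 2],    # word 0 -> slot 0 -> bits 00
                   [0, 3, 15, 1]])   # word 1 -> slot 2 -> bits 10
print(ppm_detect(counts))            # [0 0 1 0]
```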
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aswad, Z.A.R.; Al-Hadad, S.M.S.
1983-03-01
The powerful Rosenbrock search technique, which optimizes both the search directions using the Gram-Schmidt procedure and the step size using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in the Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model shows the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as the mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit with a small increase or decrease in rotary speed resulted in a significant decrease in the drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour saving in the total drilling time were possible under certain conditions.
Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan
2009-03-01
Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space that is nevertheless made adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled prefiltered image remains a conventional square sample grid and, thus, can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach seems to suggest that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality under a tight bit budget.
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
QoS mapping algorithm for ETE QoS provisioning
NASA Astrophysics Data System (ADS)
Wu, Jian J.; Foster, Gerry
2002-08-01
End-to-End (ETE) Quality of Service (QoS) is critical for next-generation wireless multimedia communication systems. To meet ETE QoS requirements, the Universal Mobile Telecommunication System (UMTS) must not only meet the 3GPP QoS requirements [1-2] but also map external network QoS classes to UMTS QoS classes. There are four QoS classes in UMTS: Conversational, Streaming, Interactive and Background. There are eight QoS classes for LAN in IEEE 802.1 (one reserved). ATM has four QoS categories: Constant Bit Rate (CBR), with the highest priority and short queues for strict Cell Delay Variation (CDV); Variable Bit Rate (VBR), with the second highest priority, short queues for real-time traffic and longer queues for non-real-time traffic; Guaranteed Frame Rate (GFR)/Unspecified Bit Rate (UBR) with Minimum Desired Cell Rate (MDCR), with intermediate priority dependent on the service provider; and UBR/Available Bit Rate (ABR), with the lowest priority, long queues and large delay variation. DiffServ (DS) has a six-bit DS codepoint (DSCP) available to determine a datagram's priority relative to other datagrams; therefore, up to 64 QoS classes are available from the IPv4 and IPv6 DSCP. Different organisations have tried to solve the QoS issues from their own perspectives. However, none of them has a full picture of end-to-end QoS classes and of how to map among all QoS classes. Therefore, a universal QoS framework and a new set of QoS classes enabling end-to-end (ETE) QoS provisioning are required. In this paper, a new set of ETE QoS classes is proposed, and a mapping algorithm for the QoS classes defined by the different organisations is given. With our proposal, ETE QoS mapping and control can be implemented.
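Purely as a hypothetical illustration of the kind of class mapping the paper argues for, the fragment below maps a few DiffServ code points onto UMTS classes and then onto ATM service categories; the specific table entries are examples, not the mapping proposed by the authors.

```python
# Hypothetical example of ETE QoS class mapping tables (entries are illustrative).
DSCP_TO_UMTS = {
    "EF":   "Conversational",   # expedited forwarding -> delay-critical traffic
    "AF41": "Streaming",
    "AF21": "Interactive",
    "BE":   "Background",       # best effort
}

UMTS_TO_ATM = {
    "Conversational": "CBR",
    "Streaming":      "VBR",
    "Interactive":    "GFR/UBR+MDCR",
    "Background":     "UBR/ABR",
}

def end_to_end_class(dscp: str) -> tuple[str, str]:
    """Map a DiffServ code point to (UMTS class, ATM category)."""
    umts = DSCP_TO_UMTS.get(dscp, "Background")
    return umts, UMTS_TO_ATM[umts]

print(end_to_end_class("EF"))   # ('Conversational', 'CBR')
```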
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. The BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics.
Generation and transmission of DPSK signals using a directly modulated passive feedback laser.
Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C
2012-12-10
The generation of differential phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709 Gb/s, 14 Gb/s and 16 Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection for a bit rate of 10.709 Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single-mode fiber at a bit rate of 10.709 Gb/s is achieved with a received optical power of -45 dBm at a bit error ratio of 3.8 × 10^-3 and a 49 dB loss margin.
The transmission of low frequency medical data using delta modulation techniques.
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Dawson, C. T.
1972-01-01
The transmission of low-frequency medical data using delta modulation techniques is described. The delta modulators are used to distribute the low-frequency data into the passband of the telephone lines. Both adaptive and linear delta modulators are considered. Optimum bit rates to minimize distortion and intersymbol interference are discussed. Vibrocardiographic waves are analyzed as a function of bit rate and delta modulator configuration to determine their reproducibility for medical evaluation.
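A minimal linear delta modulator/demodulator of the kind discussed above is sketched below; the fixed step size and the 5 Hz test tone are arbitrary choices for illustration (an adaptive variant would additionally scale the step on slope overload).

```python
# Minimal linear delta modulator/demodulator (fixed step size).
import numpy as np

def delta_modulate(x, step):
    bits, est = [], 0.0
    for sample in x:
        bit = 1 if sample >= est else 0    # 1-bit decision against the tracking estimate
        est += step if bit else -step
        bits.append(bit)
    return np.array(bits, dtype=np.uint8)

def delta_demodulate(bits, step):
    steps = np.where(bits == 1, step, -step)
    return np.cumsum(steps)                # follow with a low-pass filter in practice

t = np.linspace(0, 1, 4000)                # 4 kbit/s on a 5 Hz "physiological" tone
recovered = delta_demodulate(delta_modulate(np.sin(2 * np.pi * 5 * t), 0.02), 0.02)
```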
Present state of HDTV coding in Japan and future prospect
NASA Astrophysics Data System (ADS)
Murakami, Hitomi
The development status of HDTV digital codecs in Japan is evaluated; several bit-rate-reduction codecs have been developed for 1125-line/60-field HDTV, and performance trials have been conducted over satellite and optical fiber links. Prospective development efforts will attempt to achieve more efficient coding schemes able to reduce the bit rate to as little as 45 Mbps, as well as to apply coding schemes to asynchronous transfer mode (ATM) networks.
NASA Astrophysics Data System (ADS)
Makouei, Somayeh; Koozekanani, Z. D.
2014-12-01
In this paper, with a sophisticated modification of the modal-field distribution and a new design procedure, a single-mode fiber with ultra-low bending loss and pseudo-symmetric high bit rates for uplink and downlink, appropriate for fiber-to-the-home (FTTH) operation, is presented. The bending-loss reduction and dispersion management are carried out by means of a genetic algorithm. The remarkable feature of this methodology is that it designs a bend-insensitive fiber without reducing the core radius or the MFD. Simulation results show a bending loss of 1.27×10^-2 dB/turn at 1.55 μm for a 5 mm curvature radius. The MFD and Aeff are 9.03 μm and 59.11 μm^2, respectively. Moreover, the upstream and downstream bit rates are approximately 2.38 Gbit/s-km and 3.05 Gbit/s-km.
Transmission of 2.5 Gbit/s Spectrum-sliced WDM System for 50 km Single-mode Fiber
NASA Astrophysics Data System (ADS)
Ahmed, Nasim; Aljunid, Sayed Alwee; Ahmad, R. Badlisha; Fadil, Hilal Adnan; Rashid, Mohd Abdur
2011-06-01
The transmission of a spectrum-sliced WDM channel at 2.5 Gbit/s over 50 km of single-mode fiber using a channel spacing of only 0.4 nm is reported. We investigated the system performance using the NRZ modulation format, and the proposed system is compared with a conventional system. The system performance is characterized by the bit error rate (BER) as a function of the system bit rate. Simulation results show that the NRZ modulation format performs well at a 2.5 Gbit/s bit rate. Using this narrow-channel spectrum-slicing technique, the total number of multiplexed channels can be increased greatly in a WDM system. Therefore, a 0.4 nm channel-spacing spectrum-sliced WDM system is highly recommended for long-distance optical access networks, such as the Metro Area Network (MAN), Fiber-to-the-Building (FTTB) and Fiber-to-the-Home (FTTH).
Entangled quantum key distribution over two free-space optical links.
Erven, C; Couteau, C; Laflamme, R; Weihs, G
2008-10-13
We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover a distance of 435 m and 1,325 m respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom written software to perform the complete QKD protocol including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.
Network device interface for digitally interfacing data channels to a controller via a network
NASA Technical Reports Server (NTRS)
Konz, Daniel W. (Inventor); Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor); Winkelmann, Joseph P. (Inventor)
2006-01-01
The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is then converted into digital signals and transmitted back to the controller. In one embodiment, the bus controller sends commands and data at a defined bit rate, and the network device interface senses this bit rate and sends data back to the bus controller using the defined bit rate.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
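As a sketch of the first optimization stage, the fragment below uses the classical high-rate (reverse water-filling) closed form to distribute a bit budget across sub-bands according to their variances. The paper's actual Lagrange/KKT formulation may differ in detail, so treat this as illustrative.

```python
# Sketch of sub-band bit allocation under a standard high-rate quantization model:
# b_i = R/N + 0.5*log2(var_i / geometric_mean(var)), clipped and renormalized.
import numpy as np

def allocate_bits(subband_variances, total_bits):
    v = np.asarray(subband_variances, dtype=float)
    n = len(v)
    b = total_bits / n + 0.5 * np.log2(v / np.exp(np.mean(np.log(v))))
    b = np.maximum(b, 0)                       # no negative bit counts
    return np.round(b * total_bits / b.sum())  # renormalize to the budget

print(allocate_bits([4.0, 1.0, 0.25, 0.0625], total_bits=32))
```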
NASA Astrophysics Data System (ADS)
Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.
2010-04-01
A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates, of around 10^-18. This technique has potential for high-speed, high accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
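A software analogue of the code-mapping and matching idea follows (the optical system performs the comparison coherently on the spectral phase of the comb lines, which is not modeled here): each bit-sequence identifier is mapped to a row of an 8x8 Walsh-Hadamard matrix, and two codes are declared matched when their correlation is maximal.

```python
# Sketch: map bit-sequence identifiers to Walsh-Hadamard codes and compare by correlation.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                        # eight orthogonal length-8 Walsh codes

def code_for(sequence_id):
    return H[sequence_id % 8]

def match(code_a, code_b):
    return int(abs(code_a @ code_b)) == len(code_a)   # full correlation <=> same code

print(match(code_for(3), code_for(3)), match(code_for(3), code_for(5)))  # True False
```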
640-Gbit/s fast physical random number generation using a broadband chaotic semiconductor laser
NASA Astrophysics Data System (ADS)
Zhang, Limeng; Pan, Biwei; Chen, Guangcan; Guo, Lu; Lu, Dan; Zhao, Lingjuan; Wang, Wei
2017-04-01
An ultra-fast physical random number generator is demonstrated utilizing a broadband chaotic source based on a photonic integrated device together with a simple post-processing method. The compact chaotic source is implemented using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, where a robust chaotic signal with RF frequency coverage above 50 GHz and flatness of ±3.6 dB is generated. By retaining the 4 least significant bits (LSBs) of the 8-bit digitization of the chaotic waveform, random sequences with a bit rate of up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits have passed all fifteen NIST statistical tests (NIST SP800-22), indicating their randomness for practical applications.
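The post-processing step is simple enough to sketch directly: digitize the chaotic waveform to 8 bits, keep only the 4 least significant bits of each sample, and serialize them; the helper name and sample values below are illustrative.

```python
# Sketch of the LSB-retention step: keep the 4 LSBs of each 8-bit sample.
import numpy as np

def lsb_extract(samples_8bit, keep_bits=4):
    lsbs = np.asarray(samples_8bit, dtype=np.uint8) & ((1 << keep_bits) - 1)
    return np.unpackbits(lsbs[:, None], axis=1)[:, -keep_bits:].reshape(-1)

# 160 GS/s x 4 bits -> 640 Gbit/s in the paper; here just a handful of samples.
print(lsb_extract(np.array([0x3A, 0xF1, 0x07])))
```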
A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip
NASA Technical Reports Server (NTRS)
Timoc, C.; Tran, T.; Wongso, J.
1992-01-01
This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
Real-time implementation of second generation of audio multilevel information coding
NASA Astrophysics Data System (ADS)
Ali, Murtaza; Tewfik, Ahmed H.; Viswanathan, V.
1994-03-01
This paper describes a real-time implementation of a novel wavelet-based audio compression method. The method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure is used to allocate bits to the transform coefficients in an adaptive fashion. The bit allocation procedure has been designed to take advantage of the masking effect in human hearing, and it minimizes the number of bits required to represent each frame of the audio signal at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD-quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 kbits/s. Our implementation uses two ASPI Elf boards, each of which is built around a TI TMS320C31 DSP chip. The time required for encoding a mono CD signal is about 92 percent of real time, and that for decoding about 61 percent.
Direct bit detection receiver noise performance analysis for 32-PSK and 64-PSK modulated signals
NASA Astrophysics Data System (ADS)
Ahmed, Iftikhar
1987-12-01
Simple two-channel receivers for 32-PSK and 64-PSK modulated signals have been proposed which allow digital data (namely bits) to be recovered directly, instead of the traditional approach of symbol detection followed by symbol-to-bit mapping. This allows binary rather than M-ary receiver decisions, reduces the amount of signal processing, and permits parallel recovery of the bits. The noise performance of these receivers, quantified by the bit error rate (BER) assuming an additive white Gaussian noise interference model, is evaluated as a function of Eb/No, the signal-to-noise ratio, and the transmitted phase angles of the signals. The performance results of the direct bit detection receivers (DBDRs), when compared to those of conventional phase measurement receivers, demonstrate that DBDRs are optimum in the BER sense. The simplicity of the receiver implementations and the BER of the delivered data make DBDRs attractive for high-speed, spectrally efficient digital communication systems.
NASA Astrophysics Data System (ADS)
Xue, Wei; Wang, Qi; Wang, Tianyu
2018-04-01
This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system based on a method of double information matching (DIM). Compared with the conventional PC/SS system, the new model inherits the advantages of high transmission speed, large information capacity and high security. However, the traditional system suffers from a high bit error rate (BER) owing to its data-sequence mapping algorithm. The presented model achieves a lower BER and higher efficiency through its optimized mapping algorithm.
A new thermal model for bone drilling with applications to orthopaedic surgery.
Lee, JuEun; Rabin, Yoed; Ozdoganlar, O Burak
2011-12-01
This paper presents a new thermal model for bone drilling with applications to orthopaedic surgery. The new model combines a unique heat-balance equation for the system of the drill bit and the chip stream, an ordinary heat diffusion equation for the bone, and heat generation at the drill tip, arising from the cutting process and friction. Modeling of the drill bit-chip stream system assumes an axial temperature distribution and a lumped heat capacity effect in the transverse cross-section. The new model is solved numerically using a tailor-made finite-difference scheme for the drill bit-chip stream system, coupled with a classic finite-difference method for the bone. The theoretical investigation addresses the significance of heat transfer between the drill bit and the bone, heat convection from the drill bit to the surroundings, and the effect of the initial temperature of the drill bit on the developing thermal field. Using the new model, a parametric study on the effects of machining conditions and drill-bit geometries on the resulting temperature field in the bone and the drill bit is presented. Results of this study indicate that: (1) the maximum temperature in the bone decreases with increased chip flow; (2) the transient temperature distribution is strongly influenced by the initial temperature; (3) the continued cooling (irrigation) of the drill bit reduces the maximum temperature even when the tip is distant from the cooled portion of the drill bit; and (4) the maximum temperature increases with increasing spindle speed, increasing feed rate, decreasing drill-bit diameter, increasing point angle, and decreasing helix angle. The model is expected to be useful in determination of optimum drilling conditions and drill-bit geometries. Copyright © 2011. Published by Elsevier Ltd.
Design and testing of coring bits on drilling lunar rock simulant
NASA Astrophysics Data System (ADS)
Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan
2017-02-01
Coring bits are widely utilized in the sampling of celestial bodies, and their drilling behavior directly affects the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), which is a key component of sampling tools for lunar rock breaking during the lunar soil sampling process. We establish the interaction model between the drill bit and rock at a small cutting depth, and the two main parameters of the LRCB that influence drilling loads (the forward and outward rake angles) are determined. We perform parameter screening for the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling-load performance of the LRCB after optimization; the higher the penetration per revolution (PPR), the larger the drilling loads. We also perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. The simulation and test results are basically consistent in terms of coring efficiency, and in simulation the chip-removal efficiency of the LRCB is slightly lower than that of the HIT-H bit. This work proposes a method for the design of coring bits for subsequent extraterrestrial explorations.
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
NASA Astrophysics Data System (ADS)
Wang, Yao; Vijaya Kumar, B. V. K.
2017-05-01
The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on the performance of a linear equalizer based on the minimum mean-squared error (MMSE) criterion. Negentropy includes higher-order statistical information, and its minimization provides improved convergence and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the alternative (non-MMSE) solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
Security of six-state quantum key distribution protocol with threshold detectors
Kato, Go; Tamaki, Kiyoshi
2016-01-01
The security of quantum key distribution (QKD) is established by a security proof, and the security proof puts some assumptions on the devices constituting a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of a photon-number-resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is demanding in terms of technological level compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of implementing the six-state protocol. Here, we develop a security proof for the six-state protocol and show that the threshold detector can be used for the six-state protocol. Importantly, the bit error rate threshold for key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates the feasibility of the six-state protocol with practical devices. PMID:27443610
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
Improving TCP Network Performance by Detecting and Reacting to Packet Reordering
NASA Technical Reports Server (NTRS)
Kruse, Hans; Ostermann, Shawn; Allman, Mark
2003-01-01
There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has largely been based on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under loss conditions as high as a bit error rate of 10^-5.
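To put a bit error rate of 10^-5 in context, the expected fraction of corrupted packets can be estimated under an independent-bit-error assumption; the 1500-byte packet size below is an assumption for illustration, not a value from the report.

```python
# Sketch: fraction of packets hit by at least one bit error, assuming
# independent bit errors and a 1500-byte (12000-bit) packet.
ber = 1e-5
packet_bits = 1500 * 8
p_packet_error = 1 - (1 - ber) ** packet_bits
print(f"P(packet corrupted) ~ {p_packet_error:.1%}")   # roughly 11%
```

A loss rate of this magnitude is interpreted by standard TCP as congestion, which is why corruption-induced loss reduces throughput.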
NASA Astrophysics Data System (ADS)
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate for the power fading and chromatic dispersion affecting the high-frequency part of the multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
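As a generic illustration of the bit-loading idea (not the specific algorithm of this paper), each OFDM subcarrier can be assigned a constellation size from its estimated SNR using a gap approximation; the per-subcarrier SNRs and the SNR gap below are assumptions for the sketch.

```python
# Sketch of SNR-based bit loading for OFDM subcarriers (gap approximation).
# Not the paper's algorithm; SNR values and the SNR gap are assumptions.
import math

snr_db = [22.0, 18.5, 14.0, 9.0, 5.5]   # estimated per-subcarrier SNRs (assumed)
gap_db = 6.0                             # SNR gap for the target BER (assumed)

gap_lin = 10 ** (gap_db / 10)
for k, s in enumerate(snr_db):
    snr_lin = 10 ** (s / 10)
    bits = int(math.floor(math.log2(1 + snr_lin / gap_lin)))
    print(f"subcarrier {k}: {bits} bits/symbol")
```

Subcarriers suffering the strongest power fading receive smaller constellations (or none), which is the mechanism by which bit loading counteracts the frequency-selective response of the fiber link.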
Energy-efficient human body communication receiver chipset using wideband signaling scheme.
Song, Seong-Jun; Cho, Namjun; Kim, Sunyoung; Yoo, Hoi-Jun
2007-01-01
This paper presents an energy-efficient wideband signaling receiver for communication channels using the human body as a data transmission medium. The wideband signaling scheme with a direct-coupled interface provides energy-efficient transmission of multimedia data around the human body. The wideband signaling receiver incorporates a receiver analog front end (AFE) exploiting a wideband symmetric triggering technique and an all-digital CDR circuit with a quadratic sampling technique. The AFE operates at a 10-Mb/s data rate with an input sensitivity of -27 dBm and an operational bandwidth of 200 MHz. The CDR recovers clock and data at 2 Mb/s with a bit error rate of 10^-7. The receiver chipset consumes only 5 mW from a 1-V supply, thereby achieving a bit energy of 2.5 nJ/bit.
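The quoted bit energy follows directly from the reported power and data rate figures: 5 mW divided by 2 Mb/s gives 2.5 nJ per bit.

```python
# Energy per bit from the reported figures: power / data rate.
power_w = 5e-3          # 5 mW total receiver power
bit_rate = 2e6          # 2 Mb/s recovered by the CDR
print(power_w / bit_rate)   # 2.5e-09 J/bit = 2.5 nJ/bit
```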
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting condition; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for embedding. Experimental results demonstrate that the quality of the cover image has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
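A minimal sketch of writing a message bit into a chosen higher bit-plane of a pixel block is shown below; the block selection and entropy tests described above are omitted, and the plane index is an assumption for illustration.

```python
# Sketch: embed one bit into bit-plane `plane` of an 8-bit grayscale block.
# Block/plane selection logic from the paper (lighting, entropy) is omitted.
import numpy as np

def embed_bit(block: np.ndarray, bit: int, plane: int = 3) -> np.ndarray:
    """Set or clear the chosen bit-plane of every pixel in an 8-bit block."""
    cleared = block & np.uint8(0xFF ^ (1 << plane))   # zero out the target plane
    return cleared | np.uint8(bit << plane)           # write the message bit

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
stego = embed_bit(block, bit=1, plane=3)
assert np.all((stego >> 3) & 1 == 1)
```

Using a plane above the LSB changes pixel values by a larger step, which is what buys robustness against filtering attacks at the cost of a slightly larger (though still visually unnoticeable) distortion.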
NASA Astrophysics Data System (ADS)
Schmitz, Arne; Schinnenburg, Marc; Gross, James; Aguiar, Ana
For any communication system the Signal-to-Interference-plus-Noise Ratio (SINR) of the link is a fundamental metric. Recall (cf. Chapter 9) that the SINR is defined as the ratio between the received power of the signal of interest and the sum of all "disturbing" power sources (i.e. interference and noise). From information theory it is known that a higher SINR increases the maximum possible error-free transmission rate (referred to as the Shannon capacity [417]) of any communication system, and vice versa. Correspondingly, the higher the SINR, the lower the bit error rate in practical systems. While one aspect of the SINR is the sum of all disturbing power sources, another is the received power. This depends on the transmitted power, the antennas used, possibly on signal processing techniques, and ultimately on the channel gain between transmitter and receiver.
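The two relations invoked here can be made concrete with a short calculation; the power levels and bandwidth are illustrative assumptions, not values from the chapter.

```python
# Sketch: SINR and the corresponding Shannon capacity bound.
# Power, interference, noise, and bandwidth values are assumed for illustration.
import math

p_rx = 1e-9                       # received signal power [W]
interference = [2e-10, 1e-10]     # powers of interfering links [W]
noise = 4e-10                     # thermal noise power [W]
bandwidth = 20e6                  # channel bandwidth [Hz]

sinr = p_rx / (sum(interference) + noise)
capacity = bandwidth * math.log2(1 + sinr)      # upper bound on error-free rate
print(f"SINR = {10*math.log10(sinr):.1f} dB, capacity <= {capacity/1e6:.1f} Mb/s")
```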
Measurement-Device-Independent Quantum Key Distribution over 200 km
NASA Astrophysics Data System (ADS)
Tang, Yan-Lin; Yin, Hua-Lei; Chen, Si-Jing; Liu, Yang; Zhang, Wei-Jun; Jiang, Xiao; Zhang, Lu; Wang, Jian; You, Li-Xing; Guan, Jian-Yu; Yang, Dong-Xu; Wang, Zhen; Liang, Hao; Zhang, Zhen; Zhou, Nan; Ma, Xiongfeng; Chen, Teng-Yun; Zhang, Qiang; Pan, Jian-Wei
2014-11-01
The measurement-device-independent quantum key distribution (MDIQKD) protocol is immune to all attacks on detection and guarantees information-theoretical security even with imperfect single-photon detectors. Recently, several proof-of-principle demonstrations of MDIQKD have been achieved. Those experiments, although novel, were implemented over limited distances with key rates of less than 0.1 bit/s. Here, by developing a 75 MHz clock rate, fully automatic and highly stable system, and superconducting nanowire single-photon detectors with detection efficiencies of more than 40%, we extend the secure transmission distance of MDIQKD to 200 km and achieve a secure key rate 3 orders of magnitude higher. These results pave the way towards a quantum network with measurement-device-independent security.
Node synchronization schemes for the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Swanson, L.; Arnold, S.
1992-01-01
The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, along with a performance comparison.
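A generic sketch of the second technique, locating a known frame marker in the decoded bit stream by correlation, is given below; the marker pattern and mismatch threshold are assumptions for illustration, not the actual DSN values.

```python
# Sketch: bit-correlation frame sync -- slide a known frame marker over the
# decoded bit stream and flag offsets where the disagreement count is low.
# The marker pattern and threshold are assumed, not the actual DSN values.

def find_marker(decoded_bits, marker, max_mismatches=2):
    hits = []
    for offset in range(len(decoded_bits) - len(marker) + 1):
        window = decoded_bits[offset:offset + len(marker)]
        mismatches = sum(a != b for a, b in zip(window, marker))
        if mismatches <= max_mismatches:
            hits.append(offset)
    return hits

marker = [1, 1, 0, 1, 0, 0, 0]                 # assumed sync pattern
stream = [0, 1] * 10 + marker + [1, 0] * 10    # toy decoded bit stream
print(find_marker(stream, marker))
```

Allowing a small number of mismatches makes the search tolerant of residual decoding errors, at the cost of an occasional false hit that the frame-level logic must reject.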
Effects of pore pressure and mud filtration on drilling rates in a permeable sandstone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, A.D.; DiBona, B.; Sandstrom, J.
1983-10-01
During laboratory drilling tests in a permeable sandstone, the effects of pore pressure and mud filtration on penetration rates were measured. Four water-base muds were used to drill four saturated sandstone samples. The drilling tests were conducted at constant borehole pressure with different back pressures maintained on the filtrate flowing from the bottom of the sandstone samples. Bit weight was also varied. Filtration rates were measured while drilling and with the bit off bottom and mud circulating. Penetration rates were found to be related to the difference between the filtration rates measured while drilling and circulating. There was no observed correlation between standard API filtration measurements and penetration rate.
Effects of pore pressure and mud filtration on drilling rates in a permeable sandstone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, A.D.; Dearing, H.L.; DiBona, B.G.
1985-09-01
During laboratory drilling tests in a permeable sandstone, the effects of pore pressure and mud filtration on penetration rates were measured. Four water-based muds were used to drill four saturated sandstone samples. The drilling tests were conducted at constant borehole pressure while different backpressures were maintained on the filtrate flowing from the bottom of the sandstone samples. Bit weight was varied also. Filtration rates were measured while circulating mud during drilling and with the bit off bottom. Penetration rates were found to be related qualitatively to the difference between the filtration rates measured while drilling and circulating. There was no observed correlation between standard API filtration measurements and penetration rate.
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate 1/2 convolutional coding is given. The performance loss due to Rician fading was shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is also described; this scheme exhibits performance similar to coherent detection at high bit error rates, while it is superior at lower bit error rates.
Digital Signal Processing For Low Bit Rate TV Image Codecs
NASA Astrophysics Data System (ADS)
Rao, K. R.
1987-06-01
In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real-time full-motion color video are under various stages of development. Some companies have already brought such codecs to market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing of audio and data, high-resolution graphics, encryption, error detection and correction, self-diagnostics, freeze-frame, split video, text overlay, etc. To transmit the original color video on a 56 KBPS network requires a bit rate reduction on the order of 1400:1. Such large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation, and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction, coupled with motion estimation and strengthened by adaptive features, are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.
Autosophy: an alternative vision for satellite communication, compression, and archiving
NASA Astrophysics Data System (ADS)
Holtz, Klaus; Holtz, Eric; Kalienky, Diana
2006-08-01
Satellite communication and archiving systems are now designed according to an outdated Shannon information theory where all data are transmitted as meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant, so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any medium - satellite, radio, Internet, cable - without needing to be reformatted. This may result in orders-of-magnitude improvements for all communication and archiving systems.
VLSI for High-Speed Digital Signal Processing
1994-09-30
In particular, the design, layout, and fabrication of integrated circuits. The primary project for this grant has been the design and implementation of subband image coding hardware, evaluated with the FDSBC algorithm, targeted at a PSNR of 33.36 dB, and the FRSBC algorithm, targeted at 0.5 bits/pixel. The filter bank shown in Fig. 6 is used, yielding a total of 16 subbands. Results are reported as rate, in bits per pixel (bpp), and peak signal-to-noise ratio (PSNR, in dB, computed from the mean square error with a peak value of 255).
Quantum cryptography with entangled photons
Jennewein; Simon; Weihs; Weinfurter; Zeilinger
2000-05-15
By realizing a quantum cryptography system based on polarization entangled photon pairs we establish highly secure keys, because a single photon source is approximated and the inherent randomness of quantum measurements is exploited. We implement a novel key distribution scheme using Wigner's inequality to test the security of the quantum channel, and, alternatively, realize a variant of the BB84 protocol. Our system has two completely independent users separated by 360 m, and generates raw keys at rates of 400-800 bits/s with bit error rates around 3%.
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
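Under a Gaussian-noise assumption, a mean-to-standard-deviation ratio maps to a bit error rate through the Gaussian tail (Q) function; the sketch below illustrates that mapping with assumed values, not the paper's measured S-parameters.

```python
# Sketch: BER from a Gaussian noise model, BER = Q(mu/sigma),
# using Q(x) = 0.5*erfc(x/sqrt(2)). Values are assumed for illustration.
import math

def q_function(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2))

mu, sigma = 0.5, 0.08   # assumed mean and std. dev. of the decision variable
print(f"BER ~ {q_function(mu / sigma):.2e}")
```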
Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos
2007-01-01
Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region-of-interest-based ultrasound video compression study which shows that a significant reduction of the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.
Link Performance Analysis and monitoring - A unified approach to divergent requirements
NASA Astrophysics Data System (ADS)
Thom, G. A.
Link performance analysis and real-time monitoring are generally covered by a wide range of equipment. Bit error rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft-decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general-purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame sync errors, and data quality analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.
Johansson, Anders J
2004-01-01
Modern medical implants are of increasing complexity and, with that, the need for fast and flexible communication with them grows. A wireless system is preferable, and an inductive link is the most commonly used. But it has the drawback of a very short range, essentially limited to having the external transceiver touching the patient. The Medical Implant Communication System, MICS, is a standard aimed at improving the communication distance. It operates in a higher frequency band between 402 MHz and 405 MHz. We have, by simulations and measurements, investigated the channel properties of this band and calculated the link performance for a typical implant application. The result is a link speed between a base station and a bedridden patient of 600 kilobits per second, with a bit error rate of 2% in the downlink to the implant and 1% in the uplink to the base station. Conclusions on the necessary complexity of the base station are also given.
NASA Astrophysics Data System (ADS)
Lange, Christoph; Hülsermann, Ralf; Kosiankowski, Dirk; Geilhardt, Frank; Gladisch, Andreas
2010-01-01
The increasing demand for higher bit rates in access networks requires fiber deployment closer to the subscriber, resulting in fiber-to-the-home (FTTH) access networks. Besides higher access bit rates, optical access network infrastructure and related technologies enable the network operator to establish larger service areas, resulting in a simplified network structure with a lower number of network nodes. By changing the network structure, network operators want to benefit from a changed network cost structure: in the short and mid term by decreasing the upfront investments for network equipment due to concentration effects, as well as by reducing energy costs due to the higher energy efficiency of large network sites housing a large amount of network equipment. In the long term, savings in operational expenditures (OpEx) due to the closing of central office (CO) sites are also expected. In this paper, different architectures for optical access networks based on state-of-the-art technology are analyzed with respect to network installation costs and power consumption in the context of access node consolidation. Network planning and dimensioning results are calculated for a realistic network scenario of Germany. All node consolidation scenarios are compared against a gigabit-capable passive optical network (GPON) based FTTH access network operated from the conventional CO sites. The results show that a moderate reduction of the number of access nodes may be beneficial, since in that case the capital expenditures (CapEx) do not rise extraordinarily and savings in OpEx related to the access nodes are expected. The total power consumption does not change significantly with decreasing number of access nodes, but clustering effects enable more energy-efficient network operation and optimized power purchase order quantities, leading to benefits in energy costs.
Pseudo-color coding method for high-dynamic single-polarization SAR images
NASA Astrophysics Data System (ADS)
Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi
2018-04-01
A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic-range single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation and intensity) color space, the method carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously in order to avoid loss of detail and to improve object identifiability. It is a highly efficient global algorithm.
Breaking the news on mobile TV: user requirements of a popular mobile content
NASA Astrophysics Data System (ADS)
Knoche, Hendrik O.; Sasse, M. Angela
2006-02-01
This paper presents the results from three lab-based studies that investigated different ways of delivering Mobile TV News by measuring user responses to different encoding bitrates, image resolutions and text quality. All studies were carried out with participants watching News content on mobile devices, with a total of 216 participants rating the acceptability of the viewing experience. Study 1 compared the acceptability of a 15-second video clip at different video and audio encoding bit rates on a 3G phone at a resolution of 176x144 and an iPAQ PDA (240x180). Study 2 measured the acceptability of video quality of full feature news clips of 2.5 minutes which were recorded from broadcast TV, encoded at resolutions ranging from 120x90 to 240x180, and combined with different encoding bit rates and audio qualities presented on an iPAQ. Study 3 improved the legibility of the text included in the video simulating a separate text delivery. The acceptability of News' video quality was greatly reduced at a resolution of 120x90. The legibility of text was a decisive factor in the participants' assessment of the video quality. Resolutions of 168x126 and higher were substantially more acceptable when they were accompanied by optimized high quality text compared to proportionally scaled inline text. When accompanied by high quality text TV news clips were acceptable to the vast majority of participants at resolutions as small as 168x126 for video encoding bitrates of 160kbps and higher. Service designers and operators can apply this knowledge to design a cost-effective mobile TV experience.
Effect of monitor display on detection of approximal caries lesions in digital radiographs.
Isidor, S; Faaborg-Andersen, M; Hintze, H; Kirkevang, L-L; Frydenberg, M; Haiter-Neto, F; Wenzel, A
2009-12-01
The aim was to compare the accuracy of five flat panel monitors for detection of approximal caries lesions. Five flat panel monitors, Mermaid Ventura (15 inch, colour flat panel, 1024 x 768, 32 bit, analogue), Olórin VistaLine (19 inch, colour, 1280 x 1024, 32 bit, digital), Samsung SyncMaster 203B (20 inch, colour, 1024 x 768, 32 bit, analogue), Totoku ME251i (21 inch, greyscale, 1400 x 1024, 32 bit, digital) and Eizo FlexScan MX190 (19 inch, colour, 1280 x 1024, 32 bit, digital), were assessed. 160 approximal surfaces of human teeth were examined with a storage phosphor plate system (Digora FMX, Soredex) and assessed by seven observers for the presence of caries lesions. Microscopy of the teeth served as validation for the presence/absence of a lesion. The sensitivities varied between observers (range 7-25%) but the variation between the monitors was not large. The Samsung monitor obtained a significantly higher sensitivity than the Mermaid and Olórin monitors (P<0.02) and a lower specificity than the Eizo and Totoku monitors (P<0.05). There were no significant differences between any other monitors. The percentage of correct scores was highest for the Eizo monitor and significantly higher than for the Mermaid and Olórin monitors (P<0.03). There was no clear relationship between the diagnostic accuracy and the resolution or price of the monitor. The Eizo monitor was associated with the overall highest percentage of correct scores. The standard analogue flat panel monitor, Samsung, had higher sensitivity and lower specificity than some of the other monitors, but did not differ in overall accuracy for detection of carious lesions.
NASA Astrophysics Data System (ADS)
Hamidine, Mahamadou; Yuan, Xiuhua
2011-11-01
In this article, a numerical simulation is carried out on a single-channel optical transmission system with a channel bit rate greater than 40 Gb/s to investigate optical signal degradation due to the impact of the dispersion and dispersion slope of both the transmitting and dispersion compensating fibers. By independently varying the input signal power and the dispersion slope of both the transmitting and dispersion compensating fibers of an optical link operating at a channel bit rate of 86 Gb/s, a good quality factor (Q factor) is obtained with a dispersion slope compensation ratio change of +/-10% for a faithful transmission. With this ratio change, a minimum Q factor of 16 dB is obtained in the presence of an amplifier noise figure of 5 dB and fiber nonlinearity effects, at an input signal power of 5 dBm and 3 spans of 100 km standard single mode fiber with a dispersion (D) value of 17 ps/(nm·km).
Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression
NASA Astrophysics Data System (ADS)
Daly, Scott J.
1989-08-01
The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be further improved to obtain a much lower bit rate for the LASIS interference pattern, given the special optical characteristics arising from the pushing and sweeping in the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the current block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
An adaptive P300-based online brain-computer interface.
Lenhardt, Alexander; Kaper, Matthias; Ritter, Helge J
2008-04-01
The P300 component of an event-related potential is widely used in conjunction with brain-computer interfaces (BCIs) to translate the subject's intent, by mere thoughts, into commands to control artificial devices. A well-known application is the spelling of words, where selection of the letters is carried out by focusing attention on the target letter. In this paper, we present a P300-based online BCI which reaches very competitive performance in terms of information transfer rates. In addition, we propose an online method that optimizes information transfer rates and/or accuracies. This is achieved by an algorithm which dynamically limits the number of subtrial presentations according to the subject's current online performance in real time. We present results of two studies based on 19 different healthy subjects in total who participated in our experiments (seven subjects in the first and 12 subjects in the second). In the first study, peak information transfer rates of up to 92 bits/min with an accuracy of 100% were achieved by one subject, with a mean of 32 bits/min at about 80% accuracy. The second experiment employed a dynamic classifier which enables the user to optimize bit rates and/or accuracies by limiting the number of subtrial presentations according to the current online performance of the subject. At the fastest setting, mean information transfer rates could be improved to 50.61 bits/min (i.e., 13.13 symbols/min). The most accurate results, with 87.5% accuracy, showed a transfer rate of 29.35 bits/min.
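Information transfer rates of this kind are commonly computed with the Wolpaw formula, which depends on the number of selectable symbols, the classification accuracy, and the selection rate; the sketch below uses that standard formula with assumed values, not the exact settings of this study.

```python
# Sketch: Wolpaw information transfer rate for an N-class speller.
# B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)) bits per selection.
import math

def itr_bits_per_min(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    p = accuracy
    bits = math.log2(n_classes)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * selections_per_min

# Assumed example: 36-character speller, 87.5% accuracy, 6 selections/min.
print(f"{itr_bits_per_min(36, 0.875, 6):.1f} bits/min")
```

The formula shows why dynamically shortening subtrial presentations raises the transfer rate: the selections-per-minute factor grows, provided accuracy does not drop too far.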
Petabyte mass memory system using the Newell Opticel(TM)
NASA Technical Reports Server (NTRS)
Newell, Chester W.
1994-01-01
A random access system is proposed for digital storage and retrieval of up to a Petabyte of user data. The system is comprised of stacked memory modules using laser heads writing to an optical medium, in a new shirt-pocket-sized optical storage device called the Opticel. The Opticel described is a completely sealed 'black box' in which an optical medium is accelerated and driven at very high rates to accommodate the desired transfer rates, yet in such a manner that wear is virtually eliminated. It essentially emulates a disk, but with storage area up to several orders of magnitude higher. Access time to the first bit can range from a few milliseconds to a fraction of a second, with time to the last bit within a fraction of a second to a few seconds. The actual times are dependent on the capacity of each Opticel, which ranges from 72 Gigabytes to 1.25 Terabytes. Data transfer rate is limited strictly by the head and electronics, and is 15 Megabits per second in the first version. Independent parallel write/read access to each Opticel is provided using dedicated drives and heads. A Petabyte based on the present Opticel and drive design would occupy 120 cubic feet on a footprint of 45 square feet; with further development, it could occupy as little as 9 cubic feet.
NASA Astrophysics Data System (ADS)
Benkler, Erik; Telle, Harald R.
2007-06-01
An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details which cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Single and Multi-Pulse Low-Energy Conical Theta Pinch Inductive Pulsed Plasma Thruster Performance
NASA Technical Reports Server (NTRS)
Hallock, A. K.; Martin, A. K.; Polzin, K. A.; Kimberlin, A. C.; Eskridge, R. H.
2013-01-01
Impulse bits produced by conical theta-pinch inductive pulsed plasma thrusters possessing cone angles of 20°, 38°, and 60° were quantified for 500 J/pulse operation by direct measurement using a hanging-pendulum thrust stand. All three cone angles were tested in single-pulse mode, with the 38° model producing the highest impulse bits at roughly 1 mN-s operating on both argon and xenon propellants. A capacitor charging system, assembled to support repetitively pulsed thruster operation, permitted testing of the 38° thruster at a repetition rate of 5 Hz at power levels of 0.9, 1.6, and 2.5 kW. The average thrust measured during multiple-pulse operation exceeded the value obtained when the single-pulse impulse bit is multiplied by the repetition rate.
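The comparison in the last sentence rests on one line of arithmetic: a single-pulse impulse bit of roughly 1 mN-s repeated at 5 Hz corresponds to an average thrust of about 5 mN, which the measured multi-pulse thrust exceeded.

```python
# Average thrust estimate = single-pulse impulse bit x repetition rate.
impulse_bit = 1e-3      # N*s (roughly 1 mN-s, from the single-pulse tests)
rep_rate = 5.0          # pulses per second
print(impulse_bit * rep_rate)   # ~5e-3 N = 5 mN average thrust
```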
Inexpensive programmable clock for a 12-bit computer
NASA Technical Reports Server (NTRS)
Vrancik, J. E.
1972-01-01
An inexpensive programmable clock was built for a digital PDP-12 computer. The instruction list includes skip on flag; clear the flag, clear the clock, and stop the clock; and preset the counter with the contents of the accumulator and start the clock. The clock counts at a rate determined by an external oscillator and causes an interrupt and sets a flag when a 12-bit overflow occurs. An overflow can occur after 1 to 4096 counts. The clock can be built for a total parts cost of less than $100 including power supply and I/O connector. Slight modification can be made to permit its use on larger machines (16 bit, 24 bit, etc.) and logic level shifting can be made to make it compatible with any computer.
Servo-integrated patterned media by hybrid directed self-assembly.
Xiao, Shuaigang; Yang, Xiaomin; Steiner, Philip; Hsu, Yautzong; Lee, Kim; Wago, Koichi; Kuo, David
2014-11-25
A hybrid directed self-assembly approach is developed to fabricate unprecedented servo-integrated bit-patterned media templates, by combining sphere-forming block copolymers with 5 teradot/in.² resolution capability, nanoimprint, and optical lithography with overlay control. Nanoimprint generates prepatterns with different dimensions in the data field and servo field, respectively, and optical lithography controls the selective self-assembly process in either field. Two distinct directed self-assembly techniques, low-topography graphoepitaxy and high-topography graphoepitaxy, are elegantly integrated to create bit-patterned templates with flexible embedded servo information. Spinstand magnetic testing at 1 teradot/in.² shows a low bit error rate of 10^-2.43, indicating fully functioning bit-patterned media and the great potential of this approach for fabricating future ultra-high-density magnetic storage media.
Coherent detection and digital signal processing for fiber optic communications
NASA Astrophysics Data System (ADS)
Ip, Ezra
The drive towards higher spectral efficiency in optical fiber systems has generated renewed interest in coherent detection. We review different detection methods, including noncoherent, differentially coherent, and coherent detection, as well as hybrid detection methods. We compare the modulation methods that are enabled and their respective performances in a linear regime. An important system parameter is the number of degrees of freedom (DOF) utilized in transmission. Polarization-multiplexed quadrature-amplitude modulation maximizes spectral efficiency and power efficiency as it uses all four available DOF contained in the two field quadratures in the two polarizations. Dual-polarization homodyne or heterodyne downconversion are linear processes that can fully recover the received signal field in these four DOF. When downconverted signals are sampled at the Nyquist rate, compensation of transmission impairments can be performed using digital signal processing (DSP). Software based receivers benefit from the robustness of DSP, flexibility in design, and ease of adaptation to time-varying channels. Linear impairments, including chromatic dispersion (CD) and polarization-mode dispersion (PMD), can be compensated quasi-exactly using finite impulse response filters. In practical systems, sampling the received signal at 3/2 times the symbol rate is sufficient to enable an arbitrary amount of CD and PMD to be compensated for a sufficiently long equalizer whose tap length scales linearly with transmission distance. Depending on the transmitted constellation and the target bit error rate, the analog-to-digital converter (ADC) should have around 5 to 6 bits of resolution. Digital coherent receivers are naturally suited for the implementation of feedforward carrier recovery, which has superior linewidth tolerance than phase-locked loops, and does not suffer from feedback delay constraints. Differential bit encoding can be used to prevent catastrophic receiver failure due to cycle slips. In systems where nonlinear effects are concentrated mostly at fiber locations with small accumulated dispersion, nonlinear phase de-rotation is a low-complexity algorithm that can partially mitigate nonlinear effects. For systems with arbitrary dispersion maps, however, backpropagation is the only universal technique that can jointly compensate dispersion and fiber nonlinearity. Backpropagation requires solving the nonlinear Schrodinger equation at the receiver, and has high computational cost. Backpropagation is most effective when dispersion compensation fibers are removed, and when signal processing is performed at three times oversampling. Backpropagation can improve system performance and increase transmission distance. With anticipated advances in analog-to-digital converters and integrated circuit technology, DSP-based coherent receivers at bit rates up to 100 Gb/s should become practical in the near future.
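As an example of the quasi-exact linear compensation mentioned above, chromatic dispersion can be undone with a frequency-domain all-pass filter that applies the inverse of the fiber's quadratic phase; the sign convention, fiber parameters, and sampling rate below are assumptions for a minimal sketch, not values from the text.

```python
# Sketch: frequency-domain chromatic dispersion compensation.
# Assumed channel model (sign convention): H(w) = exp(+j*(beta2/2)*w^2*L);
# the compensator applies the inverse phase. Parameters are illustrative.
import numpy as np

beta2 = -21.7e-27        # s^2/m, typical SMF group-velocity dispersion (assumed)
length = 80e3            # m of fiber (assumed)
fs = 56e9                # receiver sampling rate in samples/s (assumed)

def compensate_cd(samples: np.ndarray) -> np.ndarray:
    n = samples.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)          # angular frequency grid
    h_inv = np.exp(-1j * (beta2 / 2) * w**2 * length)    # inverse of channel phase
    return np.fft.ifft(np.fft.fft(samples) * h_inv)

rx = np.random.randn(4096) + 1j * np.random.randn(4096)  # stand-in for field samples
print(compensate_cd(rx).shape)
```

Because the filter is all-pass, it removes the dispersive phase without touching amplitude noise, which is why time-domain FIR or frequency-domain equivalents of this operation can compensate an essentially arbitrary amount of accumulated dispersion.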
Optical communication beyond orbital angular momentum
Trichili, Abderrahmen; Rosales-Guzmán, Carmelo; Dudley, Angela; Ndagano, Bienvenu; Ben Salem, Amine; Zghal, Mourad; Forbes, Andrew
2016-01-01
Mode division multiplexing (MDM) is mooted as a technology to address future bandwidth issues, and has been successfully demonstrated in free space using spatial modes with orbital angular momentum (OAM). To further increase the data transmission rate, more degrees of freedom are required to form a densely packed mode space. Here we move beyond OAM and demonstrate multiplexing and demultiplexing using both the radial and azimuthal degrees of freedom. We achieve this with a holographic approach that allows over 100 modes to be encoded on a single hologram, across a wide wavelength range, in a wavelength independent manner. Our results offer a new tool that will prove useful in realizing higher bit rates for next generation optical networks. PMID:27283799
Design of a Multi-Level/Analog Ferroelectric Memory Device
NASA Technical Reports Server (NTRS)
MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.
2006-01-01
Increasing the memory density and utilizing the novel characteristics of ferroelectric devices is important in making ferroelectric memory devices more desirable to the consumer. This paper describes a design that allows multiple levels to be stored in a ferroelectric-based memory cell. It can be used to store multiple bits or analog values in a high-speed nonvolatile memory. The design utilizes the hysteresis characteristic of ferroelectric transistors to store an analog value in the memory cell. The design also compensates for the decay of the polarization of the ferroelectric material over time. This is done by utilizing a pair of ferroelectric transistors to store the data. One transistor is used as a reference to determine the amount of decay that has occurred since the pair was programmed. The second transistor stores the analog value as a polarization value between zero and saturated. The design allows digital data to be stored as multiple bits in each memory cell. The number of bits per cell that can be stored will vary with the decay rate of the ferroelectric transistors and the repeatability of polarization between transistors. It is predicted that each memory cell may be able to store 8 bits or more. The design is based on data taken from actual ferroelectric transistors. Although the circuit has not been fabricated, a prototype circuit is now under construction. The design of this circuit is different from that of multi-level FLASH or silicon transistor circuits; the differences between these types of circuits are described in this paper. This memory design will be useful because it allows higher memory density, compensates for the environmental and ferroelectric aging processes, allows analog values to be directly stored in memory, compensates for the thermal and radiation environments associated with space operations, and relies only on existing technologies.
New PDC cutters improve drilling efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mensa-Wilmot, G.
1997-10-27
New polycrystalline diamond compact (PDC) cutters increase penetration rates and cumulative footage through improved abrasion, impact, interface strength, thermal stability, and fatigue characteristics. Studies of formation characterization, vibration analysis, hydraulic layouts, and bit selection continue to improve and expand PDC bit applications. The paper discusses development philosophy, performance characteristics and requirements, Types A, B, and C cutters, and combinations.
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal-to-Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian random variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings resulting from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding the MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
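The kind of mutual-information comparison described here can be reproduced in outline with a Monte Carlo estimate: generate correlated Gaussian pairs at the stated SNR, quantize each to two bits, and estimate the mutual information of the resulting symbols from their joint histogram. The thresholds and sample count below are assumptions, and the paper's exact figures are not reproduced.

```python
# Sketch: Monte Carlo estimate of mutual information between 2-bit quantized
# Gaussian observations at Alice and Bob for an SNR of -3 dB.
# Quantization thresholds and sample size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
snr = 10 ** (-3 / 10)                            # -3 dB
x = rng.standard_normal(n)                       # Alice's Gaussian variable
y = x + rng.standard_normal(n) / np.sqrt(snr)    # Bob's noisy observation

def quantize_2bit(v, t):
    """MSB = sign bit, LSB = magnitude above/below threshold t."""
    return 2 * (v > 0).astype(int) + (np.abs(v) > t).astype(int)

qa = quantize_2bit(x, 0.6)                       # threshold choices are assumed
qb = quantize_2bit(y, 0.6 * np.sqrt(1 + 1 / snr))

joint, _, _ = np.histogram2d(qa, qb, bins=[4, 4])
p = joint / n
px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
mask = p > 0
mi = np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask]))
print(f"I(Qa;Qb) ~ {mi:.3f} bits per sample")
```

Sweeping the threshold in such a simulation is one way to see the maximization argument made in the abstract: the MSB threshold that maximizes the joint mutual information is what determines how much of the theoretical limit two-bit labels can capture.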
New coding advances for deep space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.
1987-01-01
Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10^-6 at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas
NASA Technical Reports Server (NTRS)
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-01-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
2014-01-01
Background The Rapid Bioconversion with Integrated recycle Technology (RaBIT) process reduces capital costs, processing times, and biocatalyst cost for biochemical conversion of cellulosic biomass to biofuels by reducing total bioprocessing time (enzymatic hydrolysis plus fermentation) to 48 h, increasing biofuel productivity (g/L/h) twofold, and recycling biocatalysts (enzymes and microbes) to the next cycle. To achieve these results, RaBIT utilizes 24-h high cell density fermentations along with cell recycling to solve the slow/incomplete xylose fermentation issue, which is critical for lignocellulosic biofuel fermentations. Previous studies utilizing similar fermentation conditions showed a decrease in xylose consumption when recycling cells into the next fermentation cycle. Eliminating this decrease is critical for RaBIT process effectiveness for high cycle counts. Results Nine different engineered microbial strains (including Saccharomyces cerevisiae strains, Scheffersomyces (Pichia) stipitis strains, Zymomonas mobilis 8b, and Escherichia coli KO11) were tested under RaBIT platform fermentations to determine their suitability for this platform. Fermentation conditions were then optimized for S. cerevisiae GLBRCY128. Three different nutrient sources (corn steep liquor, yeast extract, and wheat germ) were evaluated to improve xylose consumption by recycled cells. Capacitance readings were used to accurately measure viable cell mass profiles over five cycles. Conclusion The results showed that not all strains are capable of effectively performing the RaBIT process. Acceptable performance is largely correlated to the specific xylose consumption rate. Corn steep liquor was found to reduce the deleterious impacts of cell recycle and improve specific xylose consumption rates. The viable cell mass profiles indicated that reduction in specific xylose consumption rate, not a drop in viable cell mass, was the main cause for decreasing xylose consumption. PMID:24847379
Outer planet Pioneer imaging communications system study. [data compression
NASA Technical Reports Server (NTRS)
1974-01-01
The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.
Bandwidth reduction for video-on-demand broadcasting using secondary content insertion
NASA Astrophysics Data System (ADS)
Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy
2005-01-01
An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.
1990-01-01
Satellite communications links are subject to distortions which result in an amplitude-versus-frequency response that deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error-rate (BER) degradation resulting from several types of amplitude response distortions was measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted, laboratory-simulated satellite channel. The results of these experiments are presented.
Cepstral domain modification of audio signals for data embedding: preliminary results
NASA Astrophysics Data System (ADS)
Gopalan, Kaliappan
2004-06-01
A method of embedding data in an audio signal using cepstral domain modification is described. Based on successful embedding in the spectral points of perceptually masked regions in each frame of speech, the technique was first extended to embedding in the log spectral domain. This extension resulted in approximately 62 bits/s of embedding with less than a 2 percent bit error rate (BER) for a clean cover speech (from the TIMIT database), and about 2.5 percent for a noisy speech (from an air traffic controller database), when all frames - including silence and transitions between voiced and unvoiced segments - were used. The bit error rate increased significantly when the log spectrum in the vicinity of a formant was modified. In the next procedure, embedding by altering the mean cepstral values of two ranges of indices was studied. Tests on both a noisy utterance and a clean utterance indicated barely noticeable perceptual change in speech quality when the lower range of cepstral indices - corresponding to the vocal tract region - was modified in accordance with the data. With an embedding capacity of approximately 62 bits/s - using one bit per frame regardless of frame energy or type of speech - initial results showed a BER of less than 1.5 percent for a payload of 208 embedded bits using the clean cover speech. A BER of less than 1.3 percent resulted for the noisy host with a capacity of 316 bits. When the cepstrum was modified in the region of excitation, the BER increased to over 10 percent. With quantization causing no significant problem, the technique warrants further studies with different cepstral ranges and sizes. Pitch-synchronous cepstrum modification, for example, may be more robust to attacks. In addition, cepstrum modification in regions of speech that are perceptually masked - analogous to embedding in frequency-masked regions - may yield imperceptible stego audio with low BER.
NASA Astrophysics Data System (ADS)
Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al
2018-03-01
One of the most significant constraints of the Von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This significantly impacts Big Data and data-intensive applications such as DNA sequence comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in both a sequential and a parallel method. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better total execution time than the sequential mapping but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
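For reference, the Boolean function being mapped is simply a word-wide XNOR followed by an AND reduction. A small software model of that function (illustrative only; it says nothing about the crossbar mapping itself):

```python
def equality_comparator(a: int, b: int, width: int = 64) -> int:
    """Return 1 if two unsigned 'width'-bit words are equal, else 0."""
    mask = (1 << width) - 1
    xnor = ~(a ^ b) & mask          # per-bit equality (XNOR)
    return int(xnor == mask)        # AND reduction over all bit positions

# Example: 64-bit compare
assert equality_comparator(0xDEADBEEF, 0xDEADBEEF) == 1
assert equality_comparator(0xDEADBEEF, 0xDEADBEEE) == 0
```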
An interlaboratory study of TEX86 and BIT analysis of sediments, extracts, and standard mixtures
NASA Astrophysics Data System (ADS)
Schouten, Stefan; Hopmans, Ellen C.; Rosell-Melé, Antoni; Pearson, Ann; Adam, Pierre; Bauersachs, Thorsten; Bard, Edouard; Bernasconi, Stefano M.; Bianchi, Thomas S.; Brocks, Jochen J.; Carlson, Laura Truxal; Castañeda, Isla S.; Derenne, Sylvie; Selver, Ayça. Doǧrul; Dutta, Koushik; Eglinton, Timothy; Fosse, Celine; Galy, Valier; Grice, Kliti; Hinrichs, Kai-Uwe; Huang, Yongsong; Huguet, Arnaud; Huguet, Carme; Hurley, Sarah; Ingalls, Anitra; Jia, Guodong; Keely, Brendan; Knappy, Chris; Kondo, Miyuki; Krishnan, Srinath; Lincoln, Sara; Lipp, Julius; Mangelsdorf, Kai; Martínez-García, Alfredo; Ménot, Guillemette; Mets, Anchelique; Mollenhauer, Gesine; Ohkouchi, Naohiko; Ossebaar, Jort; Pagani, Mark; Pancost, Richard D.; Pearson, Emma J.; Peterse, Francien; Reichart, Gert-Jan; Schaeffer, Philippe; Schmitt, Gaby; Schwark, Lorenz; Shah, Sunita R.; Smith, Richard W.; Smittenberg, Rienk H.; Summons, Roger E.; Takano, Yoshinori; Talbot, Helen M.; Taylor, Kyle W. R.; Tarozo, Rafael; Uchida, Masao; van Dongen, Bart E.; Van Mooy, Benjamin A. S.; Wang, Jinxiang; Warren, Courtney; Weijers, Johan W. H.; Werne, Josef P.; Woltering, Martijn; Xie, Shucheng; Yamamoto, Masanobu; Yang, Huan; Zhang, Chuanlun L.; Zhang, Yige; Zhao, Meixun; Damsté, Jaap S. Sinninghe
2013-12-01
Two commonly used proxies based on the distribution of glycerol dialkyl glycerol tetraethers (GDGTs) are the TEX86 (TetraEther indeX of 86 carbon atoms) paleothermometer for sea surface temperature reconstructions and the BIT (Branched Isoprenoid Tetraether) index for reconstructing soil organic matter input to the ocean. An initial round-robin study of two sediment extracts, in which 15 laboratories participated, showed relatively consistent TEX86 values (reproducibility ±3-4°C when translated to temperature) but a large spread in BIT measurements (reproducibility ±0.41 on a scale of 0-1). Here we report results of a second round-robin study with 35 laboratories in which three sediments, one sediment extract, and two mixtures of pure, isolated GDGTs were analyzed. The results for TEX86 and BIT index showed improvement compared to the previous round-robin study. The reproducibility, indicating interlaboratory variation, of TEX86 values ranged from 1.3 to 3.0°C when translated to temperature. These results are similar to those of other temperature proxies used in paleoceanography. Comparison of the results obtained from one of the three sediments showed that TEX86 and BIT indices are not significantly affected by interlaboratory differences in sediment extraction techniques. BIT values of the sediments and extracts were at the extremes of the index with values close to 0 or 1, and showed good reproducibility (ranging from 0.013 to 0.042). However, the measured BIT values for the two GDGT mixtures, with known molar ratios of crenarchaeol and branched GDGTs, had intermediate BIT values and showed poor reproducibility and a large overestimation of the "true" (i.e., molar-based) BIT index. The latter is likely due to, among other factors, the higher mass spectrometric response of branched GDGTs compared to crenarchaeol, which also varies among mass spectrometers. Correction for this different mass spectrometric response showed a considerable improvement in the reproducibility of BIT index measurements among laboratories, as well as a substantially improved estimation of molar-based BIT values. This suggests that standard mixtures should be used in order to obtain consistent, and molar-based, BIT values.
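For readers unfamiliar with the index, the BIT index is commonly defined from the (molar or peak-area) amounts of the three main branched GDGTs and crenarchaeol, and the response-factor correction discussed above has the general form sketched below. This is a generic formulation; RF denotes an instrument-specific relative response factor of branched GDGTs versus crenarchaeol, not a value reported in this study.

```latex
\mathrm{BIT} = \frac{[\mathrm{brGDGT\text{-}I}] + [\mathrm{brGDGT\text{-}II}] + [\mathrm{brGDGT\text{-}III}]}
                    {[\mathrm{brGDGT\text{-}I}] + [\mathrm{brGDGT\text{-}II}] + [\mathrm{brGDGT\text{-}III}] + [\mathrm{Crenarchaeol}]},
\qquad
\mathrm{BIT}_{\mathrm{corr}} = \frac{\sum_i A_{\mathrm{br},i}/\mathrm{RF}}
                                    {\sum_i A_{\mathrm{br},i}/\mathrm{RF} + A_{\mathrm{cren}}}
```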
Design of high-speed burst mode clock and data recovery IC for passive optical network
NASA Astrophysics Data System (ADS)
Yan, Minhui; Hong, Xiaobin; Huang, Wei-Ping; Hong, Jin
2005-09-01
The design of a high-bit-rate burst mode clock and data recovery (BMCDR) circuit for gigabit passive optical networks (GPON) is described. A top-down design flow is established, and some of the key issues related to behavioural-level modeling are addressed in consideration of the complexity of the BMCDR integrated circuit (IC). A precise Simulink behavioural model accounting for the saturation of the frequency control voltage is developed for the BMCDR, so that the parameters of the circuit blocks can be readily adjusted and optimized based on the behavioural model. The newly designed BMCDR is implemented in a standard 0.18 μm CMOS technology and is shown in simulation to operate at a bit rate of 2.5 Gb/s with a recovery time of one bit period. The developed behavioural model is verified by comparison with detailed circuit simulation.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
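As a point of reference for the bound mentioned above: for a binary source X with decoder side information Y related by a bit-flip probability p, the Slepian-Wolf limit on the (syndrome) rate is the conditional entropy H(X|Y) = H_b(p). A small numeric illustration for a generic binary-symmetric correlation, not the paper's block-candidate model:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Minimum syndrome rate (bits per source bit) needed to recover X given side
# information Y, when X and Y differ with crossover probability p.
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}  ->  H(X|Y) = {binary_entropy(p):.3f} bit/source bit")
```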
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for the classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems with extended functionality in the future.
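A rough picture of the hierarchical constellation idea: a coarse QPSK symbol carries the classical bits, and a small four-state offset rides on top of it for the CV-QKD layer. The amplitudes below are made up for illustration and are not the protocol's actual modulation variance.

```python
import numpy as np

QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))         # coarse classical layer
FINE = 0.05 * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # weak quantum layer

def hierarchical_symbol(classical_bits: int, quantum_state: int) -> complex:
    """classical_bits: 2-bit integer 0..3 (QPSK point), quantum_state: integer 0..3."""
    return QPSK[classical_bits] + FINE[quantum_state]

rng = np.random.default_rng(1)
symbols = [hierarchical_symbol(rng.integers(4), rng.integers(4)) for _ in range(8)]
print(np.round(symbols, 3))
```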
Experimental study of entanglement evolution in the presence of bit-flip and phase-shift noises
NASA Astrophysics Data System (ADS)
Liu, Xia; Cao, Lian-Zhen; Zhao, Jia-Qiang; Yang, Yang; Lu, Huai-Xin
2017-10-01
Because of its important role both in fundamental theory and in applications in quantum information, the evolution of entanglement in a quantum system under decoherence has attracted wide attention in recent years. In this paper, we experimentally generate a high-fidelity maximally entangled two-qubit state and present an experimental study of the decoherence properties of an entangled pair of qubits under collective (non-collective) bit-flip and phase-shift noises. The results show that the decrease in entanglement depends on the type of noise (collective or non-collective, bit-flip or phase-shift) and on the number of qubits subject to the noise. When the two qubits are depolarized by passing through a non-collective noisy channel, the decay rate is larger than for the collective noise. When both qubits pass through the depolarizing noisy channel, the decay rate is larger than when only one qubit does.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves, and these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit rates for lossy compression.
Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamrick, Todd
2011-01-01
Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
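For reference, the conventional MSE relation referred to above is commonly written (after Teale) in the generic form below, in consistent units; this is the textbook form, not necessarily the exact expression rewritten in the thesis:

```latex
\mathrm{MSE} \;=\; \frac{\mathrm{WOB}}{A_{\mathrm{bit}}} \;+\; \frac{2\pi \, N \, T}{A_{\mathrm{bit}} \, \mathrm{ROP}}
```

where WOB is the weight on bit, A_bit the bit cross-sectional area, N the rotary speed, T the torque, and ROP the rate of penetration.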
A Parametric Study for the Design of an Optimized Ultrasonic Percussive Planetary Drill Tool.
Li, Xuan; Harkness, Patrick; Worrall, Kevin; Timoney, Ryan; Lucas, Margaret
2017-03-01
Traditional rotary drilling for planetary rock sampling, in situ analysis, and sample return are challenging because the axial force and holding torque requirements are not necessarily compatible with lightweight spacecraft architectures in low-gravity environments. This paper seeks to optimize an ultrasonic percussive drill tool to achieve rock penetration with lower reacted force requirements, with a strategic view toward building an ultrasonic planetary core drill (UPCD) device. The UPCD is a descendant of the ultrasonic/sonic driller/corer technique. In these concepts, a transducer and horn (typically resonant at around 20 kHz) are used to excite a toroidal free mass that oscillates chaotically between the horn tip and drill base at lower frequencies (generally between 10 Hz and 1 kHz). This creates a series of stress pulses that is transferred through the drill bit to the rock surface, and while the stress at the drill-bit tip/rock interface exceeds the compressive strength of the rock, it causes fractures that result in fragmentation of the rock. This facilitates augering and downward progress. In order to ensure that the drill-bit tip delivers the greatest effective impulse (the time integral of the drill-bit tip/rock pressure curve exceeding the strength of the rock), parameters such as the spring rates and the mass of the free mass, the drill bit and transducer have been varied and compared in both computer simulation and practical experiment. The most interesting findings and those of particular relevance to deep drilling indicate that increasing the mass of the drill bit has a limited (or even positive) influence on the rate of effective impulse delivered.
Estimating Hardness from the USDC Tool-Bit Temperature Rise
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph; Sherrit, Stewart
2008-01-01
A method of real-time quantification of the hardness of a rock or similar material involves measurement of the temperature, as a function of time, of the tool bit of an ultrasonic/sonic drill corer (USDC) that is being used to drill into the material. The method is based on the idea that, other things being about equal, the rate of rise of temperature and the maximum temperature reached during drilling increase with the hardness of the drilled material. In this method, the temperature is measured by means of a thermocouple embedded in the USDC tool bit near the drilling tip. The hardness of the drilled material can then be determined through correlation of the temperature-rise-versus-time data with time-dependent temperature rises determined in finite-element simulations of, and/or experiments on, drilling at various known rates of advance or known power levels through materials of known hardness. The figure presents an example of empirical temperature-versus-time data for a particular 3.6-mm USDC bit, driven at an average power somewhat below 40 W, drilling through materials of various hardness levels. The temperature readings from within a USDC tool bit can also be used for purposes other than estimating the hardness of the drilled material. For example, they can be especially useful as feedback to control the driving power to prevent thermal damage to the drilled material, the drill bit, or both. In the case of drilling through ice, the temperature readings could be used as a guide to maintaining sufficient drive power to prevent jamming of the drill by preventing refreezing of melted ice in contact with the drill.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end, intended for use with soft-decision low-density parity-check (LDPC) decoders, is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
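The role of a 2-bit soft-decision front-end can be pictured as a three-threshold flash quantizer whose four output levels feed the LDPC decoder as confidence values. A generic sketch with made-up thresholds and LLR magnitudes (not the circuit's actual decision levels):

```python
import numpy as np

def soft_quantize_2bit(samples, t=0.25):
    """Map received samples to 2-bit soft decisions: strong/weak 0 or 1.
    The thresholds (-t, 0, +t) and LLR magnitudes are illustrative only."""
    llr_table = {0: -4.0, 1: -1.0, 2: +1.0, 3: +4.0}   # strong 0, weak 0, weak 1, strong 1
    idx = np.digitize(samples, bins=[-t, 0.0, t])      # quantizer index 0..3
    return idx, np.vectorize(llr_table.get)(idx)

rx = np.array([-0.9, -0.1, 0.05, 0.8])
levels, llrs = soft_quantize_2bit(rx)
print(levels, llrs)
```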
Optimization of Wireless Transceivers under Processing Energy Constraints
NASA Astrophysics Data System (ADS)
Wang, Gaojian; Ascheid, Gerd; Wang, Yanlu; Hanay, Oner; Negra, Renato; Herrmann, Matthias; Wehn, Norbert
2017-09-01
The focus of this article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power that can be dissipated without forced cooling. To achieve a maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, suitable processing-efficient transmission schemes together with energy-efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short-range wireless transmitters working at carrier frequencies around 60 GHz and bandwidths between 1 GHz and 10 GHz.
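The constraint can be made concrete with a one-line estimate (the numbers are illustrative, not taken from the article): the dissipated power is the processing energy per information bit times the data rate,

```latex
P = E_{\mathrm{bit}} \cdot R, \qquad \text{e.g.}\ E_{\mathrm{bit}} = 10\ \mathrm{pJ/bit},\ R = 100\ \mathrm{Gb/s} \;\Rightarrow\; P = 1\ \mathrm{W}.
```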
NASA Technical Reports Server (NTRS)
Besser, P. J.
1977-01-01
Several versions of the 100K bit chip, which is configured as a single serial loop, were designed, fabricated and evaluated. Design and process modifications were introduced into each succeeding version to increase device performance and yield. At an intrinsic field rate of 150 kHz the final design operates from -10 °C to +60 °C with typical bias margins of 12 and 8 percent, respectively, for continuous operation. Asynchronous operation with first-bit detection on start-up produces essentially the same margins over the temperature range. Cost projections made from fabrication yield runs on the 100K bit devices indicate that the memory element cost will be less than 10 millicents/bit in volume production.
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) Encoder/Decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
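A minimal sketch of the general technique: the data stream is XORed with a maximal-length PN cover sequence so that even long runs of identical bits still produce transitions, and the same XOR at the receiver recovers the data. The LFSR polynomial and lengths below are illustrative choices, not the sequence specified for the HRM design.

```python
def lfsr_pn_sequence(length, taps=(7, 6), state=0x7F, width=7):
    """Fibonacci LFSR; taps (7, 6) on a 7-bit register give a 127-bit m-sequence."""
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out

def cover(data_bits, pn_bits):
    """XOR cover: applied identically at encoder and decoder (self-inverting)."""
    return [d ^ p for d, p in zip(data_bits, pn_bits)]

data = [0] * 32                          # worst case: no transitions at all
pn = lfsr_pn_sequence(len(data))
covered = cover(data, pn)
assert cover(covered, pn) == data        # decoder recovers the original data
print(sum(covered[i] != covered[i + 1] for i in range(len(covered) - 1)), "transitions")
```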
Heat-assisted magnetic recording of bit-patterned media beyond 10 Tb/in2
NASA Astrophysics Data System (ADS)
Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk
2016-03-01
The limits of areal storage density that is achievable with heat-assisted magnetic recording are unknown. We addressed this central question and investigated the areal density of bit-patterned media. We analyzed the detailed switching behavior of a recording bit under various external conditions, allowing us to compute the bit error rate of a write process (shingled and conventional) for various grain spacings, write head positions, and write temperatures. Hence, we were able to optimize the areal density yielding values beyond 10 Tb/in2. Our model is based on the Landau-Lifshitz-Bloch equation and uses hard magnetic recording grains with a 5-nm diameter and 10-nm height. It assumes a realistic distribution of the Curie temperature of the underlying material, grain size, as well as grain and head position.
Effects of drilling parameters in numerical simulation to the bone temperature elevation
NASA Astrophysics Data System (ADS)
Akhbar, Mohd Faizal Ali; Malik, Mukhtar; Yusoff, Ahmad Razlan
2018-04-01
Drilling into bone can produce a significant amount of heat, which can cause bone necrosis. Understanding the influence of drilling parameters on heat generation is necessary to prevent thermal necrosis of the bone. The aim of this study is to investigate the influence of drilling parameters on bone temperature elevation. Drilling simulations with various combinations of drill bit diameter, rotational speed and feed rate were performed using the finite element software DEFORM-3D. A full-factorial design of experiments (DOE) and two-way analysis of variance (ANOVA) were utilised to examine the effect of the drilling parameters and their interactions on the bone temperature. A maximum bone temperature elevation of 58% was observed within the parameter range of this study. Feed rate was found to be the main parameter influencing the bone temperature elevation during the drilling process, followed by drill bit diameter and rotational speed. The interaction between drill bit diameter and feed rate was found to significantly influence the bone temperature. The use of a low rotational speed, a small drill bit diameter and a high feed rate is found to minimize the elevation of bone temperature for safer surgical operations.
Schwensen, J F; Menné Bonefeld, C; Zachariae, C; Agerbeck, C; Petersen, T H; Geisler, C; Bollmann, U E; Bester, K; Johansen, J D
2017-01-01
In the light of the exceptionally high rates of contact allergy to the preservative methylisothiazolinone (MI), information about cross-reactivity between MI, octylisothiazolinone (OIT) and benzisothiazolinone (BIT) is needed. To study cross-reactivity between MI and OIT, and between MI and BIT. Immune responses to MI, OIT and BIT were studied in vehicle- and MI-sensitized female CBA mice by a modified local lymph node assay. The inflammatory response was measured by ear thickness, cell proliferation of CD4+ and CD8+ T cells, and CD19+ B cells in the auricular draining lymph nodes. MI induced significant, strong, concentration-dependent immune responses in the draining lymph nodes following a sensitization phase of three consecutive days. Groups of MI-sensitized mice were challenged on day 23 with 0.4% MI, 0.7% OIT and 1.9% BIT - concentrations corresponding to their individual EC3 values. No statistically significant difference in proliferation of CD4+ and CD8+ T cells was observed between mice challenged with MI and mice challenged with BIT and OIT. The data indicate cross-reactivity between MI, OIT and BIT when the potency of the chemical was taken into account in the choice of challenge concentration. This means that MI-sensitized individuals may react to OIT and BIT if exposed to sufficient concentrations. © 2016 British Association of Dermatologists.
Speier, William; Fried, Itzhak; Pouratian, Nader
2013-07-01
The P300 speller is a system designed to restore communication to patients with advanced neuromuscular disorders. This study was designed to explore the potential improvement from using electrocorticography (ECoG) compared with the more traditional use of electroencephalography (EEG). We tested the P300 speller on two epilepsy patients with temporary subdural electrode arrays over the occipital and temporal lobes, respectively. We then performed offline analysis to determine the accuracy and bit rate of the system, integrated spectral features into the classifier, and used a natural language processing (NLP) algorithm to further improve the results. The subject with the occipital grid achieved an accuracy of 82.77% and a bit rate of 41.02, which improved to 96.31% and 49.47, respectively, using a language model and spectral features. The temporal grid patient achieved an accuracy of 59.03% and a bit rate of 18.26, with an improvement to 75.81% and 27.05, respectively, using a language model and spectral features. Spatial analysis of the individual electrodes showed the best performance using signals generated and recorded near the occipital pole. Using ECoG and integrating language information and spectral features can improve the bit rate of a P300 speller system. This improvement is sensitive to the electrode placement and likely depends on visually evoked potentials. This study shows that there can be an improvement in BCI performance when using ECoG, but that it is sensitive to the electrode location. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet
NASA Astrophysics Data System (ADS)
Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay
1999-11-01
The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate within the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of the Real Time Streaming Protocol (RTSP) over the Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a pair of low-bit-rate bit streams (real-time speech/audio and pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of the bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
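For orientation, the fixed RTP header that such a multiplexed stream would be carried in can be packed in a few lines. This is a generic RFC 3550 header sketch (no CSRC list, no extension), not the specific MPEG-4 payload format that was under debate in the IETF; the payload type 96 is an arbitrary dynamic value.

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0, version=2):
    """Build the 12-byte fixed RTP header defined in RFC 3550."""
    byte0 = (version << 6) | (0 << 5) | (0 << 4) | 0        # V, P, X, CC
    byte1 = (marker << 7) | (payload_type & 0x7F)           # M, PT
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

hdr = rtp_header(seq=1, timestamp=90000, ssrc=0x12345678)
print(hdr.hex())   # 80 60 0001 00015f90 12345678
```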
Percussive Augmenter of Rotary Drills (PARoD)
NASA Technical Reports Server (NTRS)
Badescu, Mircea; Hasenoehrl, Jennifer; Bar-Cohen, Yoseph; Sherrit, Stewart; Bao, Xiaoqi; Chang, Zensheu; Ostlund, Patrick; Aldrich, Jack
2013-01-01
Increasingly, NASA exploration mission objectives include sample acquisition tasks for in-situ analysis or for potential sample return to Earth. To address the requirements for samplers that could be operated at the conditions of the various bodies in the solar system, a piezoelectrically actuated percussive sampling device was developed that requires a low preload (as low as 10 N), which is important for operation at low gravity. This device can be made as light as 400 g, can be operated using low average power, and can drill rocks as hard as basalt. Significant improvement of the penetration rate was achieved by augmenting the hammering action with rotation and use of a fluted bit to provide effective cuttings removal. Generally, hammering is effective in fracturing drilled media, while rotation of fluted bits is effective in cuttings removal. To benefit from these two actions, a novel configuration of a percussive mechanism was developed to produce an augmenter of rotary drills, called the Percussive Augmenter of Rotary Drills (PARoD). A breadboard PARoD was developed with a 6.4 mm (0.25 in) diameter bit and was demonstrated to increase the drilling rate of rotation alone by 1.5 to over 10 times. The test results of this configuration were reported in a previous publication. Further, a larger PARoD breadboard with a 50.8 mm (2.0 in) diameter bit was developed and tested. This paper presents the design, analysis and test results of the large-diameter-bit percussive augmenter.
Preliminary design for a standard 10^7 bit Solid State Memory (SSM)
NASA Technical Reports Server (NTRS)
Hayes, P. J.; Howle, W. M., Jr.; Stermer, R. L., Jr.
1978-01-01
A modular concept with three separate modules, roughly separating bubble domain technology, control logic technology, and power supply technology, was employed. These modules were, respectively, the standard memory module (SMM), the data control unit (DCU), and the power supply module (PSM). The storage medium was provided by bubble domain chips organized into memory cells. These cells and the circuitry for parallel data access to the cells make up the SMM. The DCU provides a flexible serial data interface to the SMM. The PSM provides adequate power to enable one DCU and one SMM to operate simultaneously at the maximum data rate. The SSM was designed to handle asynchronous data rates from dc to 1.024 Mb/s with a bit error rate of less than 1 error in 10^8 bits. Two versions of the SSM, a serial data memory and a dual parallel data memory, were specified using the standard modules. The SSM specification includes requirements for radiation hardness, temperature and mechanical environments, dc magnetic field emission and susceptibility, electromagnetic compatibility, and reliability.
The selection of Lorenz laser parameters for transmission in the SMF 3rd transmission window
NASA Astrophysics Data System (ADS)
Gajda, Jerzy K.; Niesterowicz, Andrzej; Zeglinski, Grzegorz
2003-10-01
This work presents simulation results for a transmission line using the standard ITU-T G.652 fiber. The parameters of the Lorenz laser determine electrical signal parameters such as the eye pattern, jitter, BER, S/N, Q-factor, and scattering diagram. For a short line, lasers with a linewidth larger than 100 MHz can be used. In the paper, cases for 10 Gbit/s and 40 Gbit/s transmission and fiber lengths of 30 km, 50 km, and 70 km are calculated. The average eye openings were 1×10^-5 to 120×10^-5. The Q-factor was 10-23 dB. In the calculations the bit error rate (BER) was 10^-40 to 10^-4. If the linewidth of the Lorenz laser increases from 10 MHz to 500 MHz, the transmission distance decreases from 70 km to 30 km. The transmitter bit rate is also very important for the transmission distance: if the bit rate increases from 10 Gbit/s to 40 Gbit/s, the transmission distance for the single-mode fiber G.652 will decrease from 70 km to 5 km.
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
Liu, Mao Tong; Lim, Han Chuen
2014-09-22
When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.
An ablative pulsed plasma thruster with a segmented anode
NASA Astrophysics Data System (ADS)
Zhang, Zhe; Ren, Junxue; Tang, Haibin; Ling, William Yeong Liang; York, Thomas M.
2018-01-01
An ablative pulsed plasma thruster (APPT) design with a ‘segmented anode’ is proposed in this paper. We aim to examine the effect that this asymmetric electrode configuration (a normal cathode and a segmented anode) has on the performance of an APPT. The magnetic field of the discharge arc, plasma density in the exit plume, impulse bit, and thrust efficiency were studied using a magnetic probe, Langmuir probe, thrust stand, and mass bit measurements, respectively. When compared with conventional symmetric parallel electrodes, the segmented anode APPT shows an improvement in the impulse bit of up to 28%. The thrust efficiency is also improved by 49% (from 5.3% to 7.9% for conventional and segmented designs, respectively). Long-exposure broadband emission images of the discharge morphology show that compared with a normal anode, a segmented anode results in clear differences in the luminous discharge morphology and better collimation of the plasma. The magnetic probe data indicate that the segmented anode APPT exhibits a higher current density in the discharge arc. Furthermore, Langmuir probe data collected from the central exit plane show that the peak electron density is 75% higher than with conventional parallel electrodes. These results are believed to be fundamental to the physical mechanisms behind the increased impulse bit of an APPT with a segmented electrode.
Spin-Valve and Spin-Tunneling Devices: Read Heads, MRAMs, Field Sensors
NASA Astrophysics Data System (ADS)
Freitas, P. P.
Hard disk magnetic data storage is increasing at a steady rate in terms of units sold, with 144 million drives sold in 1998 (107 million for desktops, 18 million for portables, and 19 million for enterprise drives), corresponding to a total business of US $34 billion [1]. The growing need for storage coming from new PC operating systems, Internet applications, and a foreseen explosion of applications connected to consumer electronics (digital TV, video, digital cameras, GPS systems, etc.) keeps the magnetics community actively looking for new solutions concerning media, heads, tribology, and system electronics. Current state-of-the-art disk drives (January 2000), using dual inductive-write, magnetoresistive-read (MR) integrated heads, reach areal densities of 15 to 23 bit/μm2, capable of putting a full 20 GB on one platter (a 2 hour film occupies 10 GB). Densities beyond 80 bit/μm2 have already been demonstrated in the laboratory (Fujitsu 87 bit/μm2 at Intermag 2000, Hitachi 81 bit/μm2, Read-Rite 78 bit/μm2, Seagate 70 bit/μm2 - the last three demos all done in the first 6 months of 2000, with IBM having demonstrated 56 bit/μm2 already at the end of 1999). At densities near 60 bit/μm2, the linear bit size is about 43 nm, and the width of the written tracks is about 0.23 μm. Areal density in commercial drives is increasing steadily at a rate of nearly 100% per year [1], and consumer products above 60 bit/μm2 are expected by 2002. These remarkable achievements are only possible through a stream of technological innovations in media [2], write heads [3], read heads [4], and system electronics [5]. In this chapter, recent advances in spin valve materials and spin valve sensor architectures, low-resistance tunnel junctions, and tunnel junction head architectures are addressed.
Counter-Rotating Tandem Motor Drilling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent Perry
2009-04-30
Gas Technology Institute (GTI), in partnership with Dennis Tool Company (DTC), has worked to develop an advanced drill bit system to be used with microhole drilling assemblies. One of the main objectives of this project was to utilize new and existing coiled tubing and slimhole drilling technologies to develop Microhole Technology (MHT) so as to make significant reductions in the cost of E&P down to 5000 feet in wellbores as small as 3.5 inches in diameter. This new technology was developed to work toward the DOE's goal of enabling domestic shallow oil and gas wells to be drilled inexpensively compared to wells drilled utilizing conventional drilling practices. Overall drilling costs can be lowered by drilling a well as quickly as possible. For this reason, a high drilling rate of penetration is always desired. In general, high drilling rates of penetration (ROP) can be achieved by increasing the weight on bit and increasing the rotary speed of the bit. As the weight on bit is increased, the cutting inserts penetrate deeper into the rock, resulting in a deeper depth of cut. As the depth of cut increases, the amount of torque required to turn the bit also increases. The Counter-Rotating Tandem Motor Drilling System (CRTMDS) was planned to achieve a high rate of penetration (ROP), resulting in a reduction of the drilling cost. The system includes two counter-rotating cutter systems to reduce or eliminate the reactive torque the drillpipe or coiled tubing must resist. This would allow the application of the maximum weight-on-bit and rotational velocities that a coiled tubing drilling unit is capable of delivering. Several variations of the CRTMDS were designed, manufactured and tested. The original tests failed, leading to design modifications. Two versions of the modified system were tested and showed that the concept is both positive and practical; however, the tests showed that for the system to be robust and durable, the borehole diameter should be substantially larger than that of slim holes. As a result, the research team decided to complete the project, document the tested designs and seek further support for the concept outside of the DOE.
Experimental realization of the analogy of quantum dense coding in classical optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhenwei; Sun, Yifan; Li, Pengyun
2016-06-15
We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding that uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
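For orientation, the capacities quoted above are just base-2 logarithms of the number of distinguishable messages per transmitted symbol (a reminder of the arithmetic, not a result of the paper):

```latex
C_{\text{4 distinguishable messages}} = \log_2 4 = 2\ \text{bits}, \qquad
C_{\text{3 distinguishable messages}} = \log_2 3 \approx 1.585\ \text{bits},
```

so 12 quaternary digits carry 12 × log2 4 = 24 bits, i.e. the same information in half the number of transmitted symbols.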
NASA Astrophysics Data System (ADS)
Riera-Palou, Felip; den Brinker, Albertus C.
2007-12-01
This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for modeling broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered, bit-stream-scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit-stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).
Real-time fast physical random number generator with a photonic integrated circuit.
Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu
2017-03-20
Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
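As an illustration of the kind of statistical check applied, the first and simplest test in NIST SP 800-22 (the frequency or "monobit" test) can be written directly from its definition; the sample size and acceptance level below follow common practice and are not specific to this experiment.

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test; p >= 0.01 is usually taken as a pass."""
    n = len(bits)
    s_n = sum(1 if b else -1 for b in bits)      # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s_n) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

sample = [random.getrandbits(1) for _ in range(10**6)]
print(f"p-value = {monobit_p_value(sample):.4f}")
```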
Adaptive bit plane quadtree-based block truncation coding for image compression
NASA Astrophysics Data System (ADS)
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, in this paper a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with some other state-of-the-art BTC variants. It is therefore desirable for real-time image compression applications.
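For readers unfamiliar with the AMBTC baseline mentioned above, a compact sketch of encoding and decoding one block with standard AMBTC (not the proposed quadtree/bit-plane extension); the 4×4 test block is made up:

```python
import numpy as np

def ambtc_encode(block):
    """Absolute moment BTC: keep a bitmap plus a high and a low reconstruction level."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, float(high), float(low)

def ambtc_decode(bitmap, high, low):
    return np.where(bitmap, high, low)

block = np.array([[12, 200, 14, 198],
                  [16, 205, 10, 190],
                  [11, 199, 13, 202],
                  [15, 201, 12, 195]], dtype=float)
bitmap, hi, lo = ambtc_encode(block)
print(ambtc_decode(bitmap, hi, lo).astype(int))
```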
LDPC product coding scheme with extrinsic information for bit patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2017-05-01
Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
A high-speed digital signal processor for atmospheric radar, part 7.3A
NASA Technical Reports Server (NTRS)
Brosnahan, J. W.; Woodard, D. M.
1984-01-01
The Model SP-320 device is a monolithic realization of a complex general-purpose signal processor, incorporating such features as a 32-bit ALU, a 16-bit x 16-bit combinatorial multiplier, and a 16-bit barrel shifter. The SP-320 is designed to operate as a slave processor to a host general-purpose computer in applications such as coherent integration of a radar return signal in multiple ranges, or dedicated FFT processing. Presently available is an I/O module conforming to the Intel Multichannel interface standard; other I/O modules will be designed to meet specific user requirements. The main processor board includes input and output FIFO (First In First Out) memories, both with depths of 4096 words, to permit asynchronous operation between the source of data and the host computer. This design permits burst data rates in excess of 5 Mwords/s.
Optical transmission modules for multi-channel superconducting quantum interference device readouts.
Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong
2013-12-01
We developed an optical transmission module consisting of a 16-channel analog-to-digital converter (ADC), a digital-noise filter, and a one-line serial transmitter, which transferred Superconducting Quantum Interference Device (SQUID) readout data to a computer over a single optical cable. The 16-channel ADC sent out the SQUID readout data as 32-bit serial words, each comprising an 8-bit channel index and 24-bit voltage data, at a sample rate of 1.5 kSample/s. The digital-noise filter suppressed digital noise generated by the digital clocks in order to obtain as large a SQUID modulation as possible. The one-line serial transmitter reformatted the 32-bit serial data into a modulated stream that contained both data and clock, and sent it through a single optical cable. When the optical transmission modules were applied to a 152-channel SQUID magnetoencephalography system, the system maintained a field noise level of 3 fT/√Hz @ 100 Hz.
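The 32-bit framing described above (an 8-bit channel index plus a 24-bit voltage sample) can be modeled directly. This is a sketch under two assumptions not stated in the abstract: the channel byte occupies the most significant byte and the voltage code is 24-bit two's complement.

```python
def pack_word(channel: int, voltage_code: int) -> int:
    """8-bit channel index in the top byte, 24-bit two's-complement voltage code below it."""
    return ((channel & 0xFF) << 24) | (voltage_code & 0xFFFFFF)

def unpack_word(word: int):
    channel = (word >> 24) & 0xFF
    code = word & 0xFFFFFF
    if code & 0x800000:          # sign-extend the 24-bit value
        code -= 1 << 24
    return channel, code

w = pack_word(channel=5, voltage_code=-12345)
print(hex(w), unpack_word(w))    # channel 5 and -12345 are recovered
```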
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks
NASA Astrophysics Data System (ADS)
Shimizu, Kaoru; Imoto, Nobuyuki
2002-03-01
This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and more tolerant of bit-flip errors.
NASA Astrophysics Data System (ADS)
Ferrandiz, Ana; Scallan, Gavin
1995-10-01
The available bit rate (ABR) service allows connections to exceed their negotiated data rates during the life of the connections when excess capacity is available in the network. These connections are subject to flow control from the network in the event of network congestion. The ability to dynamically adjust the data rate of a connection can provide improved utilization of the network and be a valuable service to end users. The ABR-type service is therefore appropriate for the transmission of bursty LAN traffic over a wide area network in a manner that is more efficient and cost-effective than allocating bandwidth at the peak cell rate. This paper describes the ABR service and discusses whether it is realistic to operate a LAN-like service over a wide area using ABR.
Real-time motion-based H.263+ frame rate control
NASA Astrophysics Data System (ADS)
Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay
1998-12-01
Most existing H.263+ rate control algorithms, e.g. the one adopted in the near-term test model (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports the variable bit rate (VBR) channel through the adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
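A toy version of a motion-driven frame-rate decision over a sliding window; this is purely illustrative, the thresholds, window length, and candidate rates are invented, and the actual TMN8-style macroblock rate control and quantizer adjustment are not modeled.

```python
from collections import deque

def choose_frame_rate(motion_history, low=2.0, high=8.0, rates=(7.5, 15.0, 30.0)):
    """Pick an encoding frame rate from recent motion activity (e.g. mean MV magnitude)."""
    avg = sum(motion_history) / len(motion_history)
    if avg < low:
        return rates[0]          # near-static scene: spend bits on spatial quality
    if avg < high:
        return rates[1]
    return rates[2]              # high motion: favor temporal smoothness

window = deque(maxlen=10)        # sliding window over recent frames
for mv in [1.2, 0.8, 3.5, 9.1, 10.4, 2.2]:
    window.append(mv)
    print(f"motion {mv:4.1f} -> frame rate {choose_frame_rate(window)} fps")
```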
NASA Astrophysics Data System (ADS)
Keller, P. E.; Gmitro, A. F.
1993-07-01
A prototype neural network system of multifaceted, planar interconnection holograms and opto-electronic neurons is analyzed. This analysis shows that a hologram fabricated with electron-beam lithography has the capacity to connect 6700 neuron outputs to 6700 neuron inputs, and that the encoded synaptic weights have a precision of approximately 5 bits. Higher interconnection densities can be achieved by accepting a lower synaptic weight accuracy. For systems employing laser diodes at the outputs of the neurons, processing rates in the range of 45 to 720 trillion connections per second can potentially be achieved.
Finite-key analysis for the 1-decoy state QKD protocol
NASA Astrophysics Data System (ADS)
Rusca, Davide; Boaron, Alberto; Grünenfelder, Fadri; Martin, Anthony; Zbinden, Hugo
2018-04-01
It has been shown that in the asymptotic case of infinite key length, the 2-decoy state Quantum Key Distribution (QKD) protocol outperforms the 1-decoy state protocol. Here, we present a finite-key analysis of the 1-decoy method. Interestingly, we find that for practical block sizes of up to 10^8 bits, the 1-decoy protocol achieves higher secret key rates than the 2-decoy protocol for almost all experimental settings. Since using only one decoy is also easier to implement, we conclude that it is the best choice for QKD in most common practical scenarios.
Communication using VCSEL laser array
NASA Technical Reports Server (NTRS)
Goorjian, Peter M. (Inventor)
2008-01-01
Ultrafast directional beam switching, using coupled vertical cavity surface emitting lasers (VCSELs) is combined with a light modulator to provide information transfer at bit rates of tens of GHz. This approach is demonstrated to achieve beam switching frequencies of 32-50 GHz in some embodiments and directional beam switching with angular differences of about eight degrees. This switching scheme is likely to be useful for ultrafast optical networks at frequencies much higher than achievable with other approaches. A Mach-Zehnder interferometer, a Fabry-Perot etalon, or a semiconductor-based electro-absorption transmission channel, among others, can be used as a light modulator.
A 10-bit 200-kS/s SAR ADC IP core for a touch screen SoC
NASA Astrophysics Data System (ADS)
Xingyuan, Tong; Yintang, Yang; Zhangming, Zhu; Wenfang, Sheng
2010-10-01
Based on a 5 MSBs (most-significant-bits)-plus-5 LSBs (least-significant-bits) C-R hybrid D/A conversion and low-offset pseudo-differential comparison approach, with capacitor array axially symmetric layout topology and resistor string low gradient mismatch placement method, an 8-channel 10-bit 200-kS/s SAR ADC (successive-approximation-register analog-to-digital converter) IP core for a touch screen SoC (system-on-chip) is implemented in a 0.18 μm 1P5M CMOS logic process. Design considerations for the touch screen SAR ADC are included. With a 1.8 V power supply, the DNL (differential non-linearity) and INL (integral non-linearity) of this converter are measured to be about 0.32 LSB and 0.81 LSB respectively. With an input frequency of 91 kHz at 200-kS/s sampling rate, the spurious-free dynamic range and effective-number-of-bits are measured to be 63.2 dB and 9.15 bits respectively, and the power is about 136 μW. This converter occupies an area of about 0.08 mm2. The design results show that it is very suitable for touch screen SoC applications.
Line-of-Sight Data Link Test Set
1976-06-01
spheric layer model for layer refraction or a surface reflectivity model for ground reflection paths. Measurement of the channel impulse response...the model is exercised over a path consisting of only a constant direct component. The test would consist of measuring the modem demodulator bit...direct and a fading direct component. The test typically would consist of measuring the bit error-rate over a range of average signal-to-noise
The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results
NASA Technical Reports Server (NTRS)
Katz, J.
1982-01-01
The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. It is indicated that the throughput efficiency can be achieved not only with a Reed-Solomon code as originally predicted, but with a less complex code as well.
Classical and quantum communication without a shared reference frame.
Bartlett, Stephen D; Rudolph, Terry; Spekkens, Robert W
2003-07-11
We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.
Bit error rate performance of Image Processing Facility high density tape recorders
NASA Technical Reports Server (NTRS)
Heffner, P.
1981-01-01
The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 x 10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 x 10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2 x 10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.
Bit error rate tester using fast parallel generation of linear recurring sequences
Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.
2003-05-06
A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
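The decimation idea can be illustrated in software. The sketch below is a functional model, not the patented XOR feedback circuit: it builds the companion matrix of a small primitive polynomial over GF(2), raises it to the (n*k)th power by square-and-multiply to obtain the jump-ahead (decimation) matrix, staggers k generator seeds by n bits, and checks that the interleaved sections reproduce the serial LFSR stream. The polynomial, k, and n are illustrative choices.

```python
import numpy as np

def companion_matrix(c):
    """Companion matrix over GF(2) of p(x) = x^m + c[m-1]x^(m-1) + ... + c[0]."""
    m = len(c)
    C = np.zeros((m, m), dtype=np.uint8)
    C[:-1, 1:] = np.eye(m - 1, dtype=np.uint8)   # new[i] = old[i+1] (shift)
    C[-1, :] = c                                 # feedback row from the polynomial
    return C

def gf2_matpow(M, e):
    """M**e over GF(2) by square-and-multiply."""
    R = np.eye(len(M), dtype=np.uint8)
    while e:
        if e & 1:
            R = (R @ M) & 1
        M = (M @ M) & 1
        e >>= 1
    return R

def parallel_lrsg(c, seed, k, n, sections):
    """Emit the LFSR stream with k generators, each producing n contiguous bits
    per 'clock', by jumping with the decimation matrix C**(n*k)."""
    C = companion_matrix(c)
    D = gf2_matpow(C, n * k)                     # decimation (jump-ahead) matrix
    states = [(gf2_matpow(C, j * n) @ np.array(seed, dtype=np.uint8)) & 1
              for j in range(k)]                 # stagger the k seeds by n bits
    out = []
    for _ in range(sections):
        for j in range(k):
            s = states[j].copy()
            for _ in range(n):                   # n serial steps inside one generator
                out.append(int(s[0]))
                s = (C @ s) & 1
            states[j] = (D @ states[j]) & 1      # jump to this generator's next section
    return out

# x^4 + x + 1 (primitive), seed 1000: the interleaved output equals the serial stream.
print(parallel_lrsg(c=[1, 1, 0, 0], seed=[1, 0, 0, 0], k=2, n=3, sections=4))
```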
NASA Technical Reports Server (NTRS)
Laeser, R. P.; Textor, G. P.; Kelly, L. B.; Kelly, M.
1972-01-01
The DSN command system provided the capability to enter commands in a computer at the deep space stations for transmission to the spacecraft. The high-rate telemetry system operated at 16,200 bits/sec. This system will permit return to DSS 14 of full-resolution television pictures from the spacecraft tape recorder, plus the other science experiment data, during the two playback periods of each Goldstone pass planned for each corresponding orbit. Other features included 4800 bits/sec modem high-speed data lines from all deep space stations to Space Flight Operations Facility (SFOF) and the Goddard Space Flight Center, as well as 50,000 bits/sec wideband data lines from DSS 14 to the SFOF, thus providing the capability for data flow of two 16,200 bits/sec high-rate telemetry data streams in real time. The TDS performed prelaunch training and testing and provided support for the Mariner Mars 1971/Mission Operations System training and testing. The facilities of the ETR, DSS 71, and stations of the MSFN provided flight support coverage at launch and during the near-earth phase. The DSSs 12, 14, 41, and 51 of the DSN provided the deep space phase support from 30 May 1971 through 4 June 1971.
Methods to ensure optimal off-bottom and drill bit distance under pellet impact drilling
NASA Astrophysics Data System (ADS)
Kovalyov, A. V.; Isaev, Ye D.; Vagapov, A. R.; Urnish, V. V.; Ulyanova, O. S.
2016-09-01
The paper describes pellet impact drilling, which could be used to increase the drilling speed and the rate of penetration when drilling hard rock for various purposes. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The paper presents a survey of methods ensuring an optimal off-bottom and drill bit distance. The analysis of methods shows that the issue is topical and requires further research.
Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Kim, Kap-Joong; Kim, Jong-Hoi; Youn, Chun Ju
2017-08-21
Most polarization-based BB84 quantum key distribution (QKD) systems utilize multiple lasers to generate one of four polarization quantum states randomly. However, random bit generation with multiple lasers can potentially open critical side channels that significantly endanger the security of QKD systems. In this paper, we show unnoticed side channels of temporal disparity and intensity fluctuation, which possibly exist in the operation of multiple semiconductor laser diodes. Experimental results show that the side channels can enormously degrade the security performance of QKD systems. An important system issue for the improvement of the quantum bit error rate (QBER), related to the laser driving conditions, is further addressed with experimental results.
Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate
NASA Astrophysics Data System (ADS)
Chau, H. F.
2002-12-01
A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.
2 GHz clock quantum key distribution over 260 km of standard telecom fiber.
Wang, Shuang; Chen, Wei; Guo, Jun-Fu; Yin, Zhen-Qiang; Li, Hong-Wei; Zhou, Zheng; Guo, Guang-Can; Han, Zheng-Fu
2012-03-15
We report a demonstration of quantum key distribution (QKD) over a standard telecom fiber exceeding 50 dB in loss and 250 km in length. The differential phase shift QKD protocol was chosen and implemented with a 2 GHz system clock rate. By careful optimization of the 1 bit delayed Faraday-Michelson interferometer and the use of the superconducting single photon detector (SSPD), we achieved a quantum bit error rate below 2% when the fiber length was no more than 205 km, and of 3.45% for a 260 km fiber with 52.9 dB loss. We also improved the quantum efficiency of SSPD to obtain a high key rate for 50 km length.
NASA Astrophysics Data System (ADS)
Lazarev, Grigory; Bonifer, Stefanie; Engel, Philip; Höhne, Daniel; Notni, Gunther
2017-06-01
We report on the implementation of a liquid crystal on silicon (LCOS) microdisplay with 1920 by 1080 resolution and a 720 Hz frame rate. The driving solution is FPGA-based. The input signal is converted from an ultrahigh-resolution HDMI 2.0 signal into HD frames, which follow at the specified 720 Hz frame rate. Alternatively, the signal is generated directly on the FPGA with a built-in pattern generator. The display shows switching times below 1.5 ms for the selected working temperature. The bit depth of the addressed image reaches 8 bits within each frame. The microdisplay is used in a fringe projection-based 3D sensing system implemented by Fraunhofer IOF.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
Ultralow-Power Digital Correlator for Microwave Polarimetry
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; Hass, K. Joseph
2004-01-01
A recently developed high-speed digital correlator is especially well suited for processing readings of a passive microwave polarimeter. This circuit computes the autocorrelations of, and the cross-correlations among, data in four digital input streams representing samples of in-phase (I) and quadrature (Q) components of two intermediate-frequency (IF) signals, denoted A and B, that are generated in heterodyne reception of two microwave signals. The IF signals arriving at the correlator input terminals have been digitized to three levels (-1, 0, 1) at a sampling rate up to 500 MHz. Two bits (representing sign and magnitude) are needed to represent the instantaneous datum in each input channel; hence, eight bits are needed to represent the four input signals during any given cycle of the sampling clock. The accumulation (integration) time for the correlation is programmable in increments of 2^8 cycles of the sampling clock, up to a maximum of 2^24 cycles. The basic functionality of the correlator is embodied in 16 correlation slices, each of which contains identical logic circuits and counters. The first stage of each correlation slice is a logic gate that computes one of the desired correlations (for example, the autocorrelation of the I component of A or the negative of the cross-correlation of the I component of A and the Q component of B). The sampling of the logic-gate output is controlled by the sampling-clock signal, and an 8-bit counter increments in every clock cycle in which the logic gate generates output. The most significant bit of the 8-bit counter is sampled by a 16-bit counter with a clock signal at 1/2^8 of the sampling-clock frequency. The 16-bit counter is incremented every time the 8-bit counter rolls over.
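As a purely software illustration of what one correlation slice accumulates, not the flight hardware's gate-and-counter implementation, the sketch below multiplies two 3-level streams sample by sample and sums the products over blocks of 2^8 samples.

```python
import numpy as np

def correlate_slice(x, y, integration_len=2**8):
    """x, y: arrays of 3-level samples in {-1, 0, +1}; returns one sum per block."""
    n_blocks = len(x) // integration_len
    sums = []
    for b in range(n_blocks):
        sl = slice(b * integration_len, (b + 1) * integration_len)
        sums.append(int(np.sum(x[sl] * y[sl])))   # product of sign-magnitude samples
    return sums

rng = np.random.default_rng(0)
ia = rng.integers(-1, 2, size=2**12)              # stand-ins for the I_A and Q_B streams
qb = rng.integers(-1, 2, size=2**12)
print(correlate_slice(ia, qb))                    # cross-correlation accumulations
```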
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to their limits. The efficient use of upcoming transmission technology at 100 Gbit/s and beyond requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
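For reference, a bit-serial CRC-16 of the kind used for error detection can be sketched as below. The generator x^16 + x^12 + x^5 + 1 (0x1021) and all-ones initialization follow the common CRC-16/CCITT convention; the exact bit ordering and preset for a given CCSDS link should be taken from the applicable recommendation rather than from this sketch.

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bit-serial CRC-16 with generator x^16 + x^12 + x^5 + 1 (0x1021)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                     # MSB set: shift and subtract generator
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))            # 0x29b1 for this init and bit ordering
```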
Selectively Encrypted Pull-Up Based Watermarking of Biometric data
NASA Astrophysics Data System (ADS)
Shinde, S. A.; Patel, Kushal S.
2012-10-01
Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to provide more security against attack. As wireless devices are equipped with batteries as their power supply, they have limited computational capabilities; therefore, to reduce energy consumption we use the method of selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism we expect greater time efficiency during both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.
Automatic speech recognition research at NASA-Ames Research Center
NASA Technical Reports Server (NTRS)
Coler, Clayton R.; Plummer, Robert P.; Huff, Edward M.; Hitchcock, Myron H.
1977-01-01
A trainable acoustic pattern recognizer manufactured by Scope Electronics is presented. The voice command system VCS encodes speech by sampling 16 bandpass filters with center frequencies in the range from 200 to 5000 Hz. Variations in speaking rate are compensated for by a compression algorithm that subdivides each utterance into eight subintervals in such a way that the amount of spectral change within each subinterval is the same. The recorded filter values within each subinterval are then reduced to a 15-bit representation, giving a 120-bit encoding for each utterance. The VCS incorporates a simple recognition algorithm that utilizes five training samples of each word in a vocabulary of up to 24 words. The recognition rate of approximately 85 percent correct for untrained speakers and 94 percent correct for trained speakers was not considered adequate for flight systems use. Therefore, the built-in recognition algorithm was disabled, and the VCS was modified to transmit 120-bit encodings to an external computer for recognition.
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics such that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
NASA Astrophysics Data System (ADS)
Shinmoto, Y.; Wada, K.; Miyazaki, E.; Sanada, Y.; Sawada, I.; Yamao, M.
2010-12-01
The Nankai-Trough Seismogenic Zone Experiment (NanTroSEIZE) has carried out several drilling expeditions in the Kumano Basin off the Kii Peninsula of Japan with the deep-sea scientific drilling vessel Chikyu. Core sampling runs were carried out during the expeditions using an advanced multiple wireline coring system which can continuously core into sections of undersea formations. The core recovery rate with the Rotary Core Barrel (RCB) system was rather low compared with other methods such as the Hydraulic Piston Coring System (HPCS) and the Extended Shoe Coring System (ESCS). Drilling conditions such as hole collapse and sea conditions such as high ship-heave motions need to be analyzed along with differences in lithology, formation hardness, water depth and coring depth in order to develop coring tools, such as the core barrel or core bit, that will yield the highest core recovery and quality. The core bit is especially important for good recovery of high quality cores; however, the PDC cutters were severely damaged during the NanTroSEIZE Stage 1 and 2 expeditions due to severe drilling conditions. In Stage 1 (riserless coring), the average core recovery was rather low at 38% with the RCB, and many difficulties such as borehole collapse, stick-slip and stuck pipe occurred, damaging several of the PDC cutters. In Stage 2, a new core bit design was deployed and core recovery improved to 67% for the riserless system and 85% with the riser. However, due to harsh drilling conditions, the PDC core bit and all of the PDC cutters were completely worn down. Another original core bit was also deployed; however, core recovery performance was low even for plate boundary core samples. This study aims to identify the influence of the RCB system specifically on the recovery rates at each of the holes drilled in the NanTroSEIZE coring expeditions. Drilling parameters such as weight-on-bit, torque, rotary speed and flow rate were analyzed, and conditions such as formation, tools, and sea conditions which directly affect core recovery have been categorized. Also discussed is the further development of coring equipment such as the core bit and core barrel for the NanTroSEIZE Stage 3 expeditions, which aim to reach a depth of 7000 m below the sea floor into harder formations under extreme drilling conditions.
NASA Astrophysics Data System (ADS)
Ujiie, K.; Inoue, T.; Ishiwata, J.
2015-12-01
Frictional strength at seismic slip rates is a key to evaluating fault weakening and rupture propagation during earthquakes. The Japan Trench Fast Drilling Project (JFAST) drilled through the shallow plate-boundary thrust, where huge displacements of ~50 m occurred during the 2011 Tohoku-Oki earthquake. To determine the downhole frictional strength at the drilled site (Site C0019), we analyzed surface drilling data. The equivalent slip rate estimated from the rotation rate and the inner and outer radii of the drill bit ranges from 0.8 to 1.3 m/s. The measured torque includes the frictional torque between the drill string and the borehole wall, the viscous torque between the drill string and the seawater/drilling fluid, and the drilling torque between the drill bit and the sediments. We subtracted the former two from the measured torque using torque data recorded during bottom-up rotating operations at several depths. Then, the shear stress was calculated from the drilling torque, taking the configuration of the drill bit into consideration. The normal stress was estimated from the weight-on-bit data and the projected area of the drill bit. Assuming negligible cohesion, the frictional strength was obtained by dividing the shear stress by the normal stress. The results show a clear contrast in high-velocity frictional strength across the plate-boundary thrust: the friction coefficient of the frontal prism sediments (hemipelagic mudstones) in the hanging wall is 0.1-0.2, while that of the subducting sediments (hemipelagic to pelagic mudstones and chert) in the footwall increases to 0.2-0.4. The friction coefficient of the smectite-rich pelagic clay in the plate-boundary thrust is ~0.1, which is consistent with that obtained from high-velocity (1.3 m/s) friction experiments and temperature measurements. We conclude that surface drilling torque provides useful data for obtaining a continuous downhole frictional strength profile.
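The reduction from surface drilling data to a friction coefficient can be sketched as follows, under simplifying assumptions that are not necessarily the authors' exact relations: an annular, flat-faced bit with uniform contact stress, so the effective radius is 2(ro^3 - ri^3)/[3(ro^2 - ri^2)], the shear stress follows from the bit-face torque as 3T/[2*pi*(ro^3 - ri^3)], and the normal stress is the weight on bit divided by the projected annular area. The numbers in the example are illustrative, not JFAST values, and the torque is assumed to be the drilling torque after the string and viscous contributions have been subtracted.

```python
import math

def drilling_friction(torque_Nm, wob_N, rpm, r_outer_m, r_inner_m):
    """Reduce drilling data to (slip rate, friction coefficient), assuming a
    uniform contact stress on an annular bit face."""
    area = math.pi * (r_outer_m**2 - r_inner_m**2)                 # projected bit area
    r_eff = (2.0 / 3.0) * (r_outer_m**3 - r_inner_m**3) / (r_outer_m**2 - r_inner_m**2)
    slip_rate = 2.0 * math.pi * (rpm / 60.0) * r_eff               # m/s at effective radius
    shear = 3.0 * torque_Nm / (2.0 * math.pi * (r_outer_m**3 - r_inner_m**3))
    normal = wob_N / area
    return slip_rate, shear / normal

# Illustrative numbers only: bit-face torque already corrected for string friction.
print(drilling_friction(torque_Nm=1.5e3, wob_N=5e4, rpm=100,
                        r_outer_m=0.108, r_inner_m=0.05))          # ~(0.86 m/s, 0.36)
```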
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can resolve the competing frequency requirements of wireless energy transmission and data communication. To achieve reliable wireless data communication based on high-order modulation for visual prostheses, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex for hardware implementation, an improved phase soft demodulation algorithm is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which a combination of the Chase algorithm and hard-decision decoding achieves soft decoding. To meet the requirements of an implantable visual prosthesis, a method for calculating symbol-level reliability as the product of bit reliabilities is derived, which reduces the number of test vectors required by the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a model of the attenuation properties of the biological channel is added to the ECC circuit; the data rate is 8 Mbps in both the simulation and the FPGA experiments. Simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that the system can correct demodulation errors occurring with the wireless coils 3 cm apart; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil distance. The RS ECC circuit therefore provides more reliable communication in the system. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
Fabrication of Fe-Based Diamond Composites by Pressureless Infiltration
Li, Meng; Sun, Youhong; Meng, Qingnan; Wu, Haidong; Gao, Ke; Liu, Baochang
2016-01-01
A metal-based matrix is usually used for the fabrication of diamond bits in order to achieve favorable properties and easy processing. In an effort to reduce cost and attain the desired bit properties, researchers have paid increasing attention to diamond composites. In this paper, Fe-based impregnated diamond composites for drill bits were fabricated by using a pressureless infiltration sintering method at 970 °C for 5 min. In addition, boron was introduced into the Fe-based diamond composites. The influence of boron on the density, hardness, bending strength, grinding ratio, and microstructure was investigated. The Fe-based diamond composite with 1 wt % B has the best overall performance, with the grinding ratio in particular improving by 80%. A comparison with tungsten carbide (WC)-based diamond composites with and without 1 wt % B showed that the Fe-based diamond composite with 1 wt % B exhibits higher bending strength and wear resistance, satisfying drill-bit requirements. PMID:28774124
Applying EVM to Satellite on Ground and In-Orbit Testing - Better Data in Less Time
NASA Technical Reports Server (NTRS)
Peters, Robert; Lebbink, Elizabeth-Klein; Lee, Victor; Model, Josh; Wezalis, Robert; Taylor, John
2008-01-01
Using Error Vector Magnitude (EVM) in satellite integration and test allows rapid verification of the Bit Error Rate (BER) performance of a satellite link. It is particularly well suited to the measurement of low bit rate satellite links, where it can result in a major reduction in test time (about 3 weeks per satellite for the Geosynchronous Operational Environmental Satellite [GOES] satellites during ground test) and can provide diagnostic information. Empirical techniques developed to predict BER performance from EVM measurements, and lessons learned from applying these techniques during GOES N, O, and P integration testing and post-launch testing, are discussed.
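One common way to relate EVM to BER, a generic textbook approximation rather than the empirical technique developed for GOES, treats the impairment as additive Gaussian noise, so that RMS EVM normalized to the constellation power acts like 1/sqrt(SNR); for Gray-coded QPSK the bit error probability is then roughly Q(1/EVM_rms).

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_from_evm_qpsk(evm_rms):
    """Approximate BER for Gray-coded QPSK when the impairment behaves like AWGN
    and EVM is normalized to the RMS constellation power: BER ~ Q(1/EVM)."""
    return q_function(1.0 / evm_rms)

for evm_pct in (10, 15, 20, 30):
    print(f"EVM {evm_pct:2d}%  ->  BER ~ {ber_from_evm_qpsk(evm_pct / 100):.2e}")
```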
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Kwok, R.; Curlander, J. C.
1987-01-01
Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
NASA Technical Reports Server (NTRS)
Brand, J.
1972-01-01
The fabrication, test, and delivery of an optical modulator system which will operate with a mode-locked Nd:YAG laser emitting at either 1.06 or 0.53 micrometers is discussed. The delivered hardware operates at data rates up to 400 Mbps and includes a 0.53 micrometer electrooptic modulator, a 1.06 micrometer electrooptic modulator with power supply, and signal processing electronics with power supply. The modulators contain solid state drivers which accept digital signals with MECL logic levels, temperature controllers to maintain a stable thermal environment for the modulator crystals, and automatic electronic compensation to maximize the extinction ratio. The modulators use two lithium tantalate crystals cascaded in a double pass configuration. The signal processing electronics include encoding electronics which are capable of digitizing analog signals between the limits of +/-0.75 volts at a maximum rate of 80 megasamples per second with 5-bit resolution. The digital samples are serialized and made available as a 400 Mbps serial NRZ data source for the modulators. A pseudorandom (PN) generator is also included in the signal processing electronics. This data source generates PN sequences with lengths between 31 bits and 32,767 bits in a serial NRZ format at rates up to 400 Mbps.
Sequenced subjective accents for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Vlek, R. J.; Schaefer, R. S.; Gielen, C. C. A. M.; Farquhar, J. D. R.; Desain, P.
2011-06-01
Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented from non-accented beats on a single trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits/min over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.
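A standard way to convert classification accuracy into a bit rate in the BCI literature is the Wolpaw information-transfer formula; the sketch below applies it to a hypothetical binary decision at the accuracy reported above, purely as an illustration (the paper's own bit-rate computation may differ).

```python
import math

def wolpaw_bits_per_trial(n_classes, accuracy):
    """Information transferred per selection for N equiprobable classes (Wolpaw):
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    b = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        b += (1 - p) * math.log2((1 - p) / (n - 1))
    return b

# Hypothetical case: binary accent/non-accent decisions at 60.4% on 0.5 s trials.
b = wolpaw_bits_per_trial(2, 0.604)
print(b, "bits/trial ->", b * 120, "bits/min")   # ~0.03 bits/trial, ~3.7 bits/min here
```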
Two-dimensional optoelectronic interconnect-processor and its operational bit error rate
NASA Astrophysics Data System (ADS)
Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.
2004-10-01
A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of a Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken with a standard BER tester for verification. We found that the results of the two methods agreed within the same order of magnitude and to within 50%. The integrated interconnect was investigated in an optoelectronic processing architecture for a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
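Under the Gaussian-noise assumption mentioned above, the eye-diagram statistics determine BER through the familiar Q-factor relation BER = 0.5*erfc(Q/sqrt(2)) with Q = (mu1 - mu0)/(sigma1 + sigma0); the sketch below uses illustrative eye statistics, not the measured VCSEL data.

```python
import math

def ber_from_eye(mu1, mu0, sigma1, sigma0):
    """Gaussian-noise estimate of BER from eye-diagram statistics:
    Q = (mu1 - mu0)/(sigma1 + sigma0), BER = 0.5*erfc(Q/sqrt(2))."""
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return q, 0.5 * math.erfc(q / math.sqrt(2.0))

# Illustrative one/zero rail levels and noise sigmas (arbitrary units):
q, ber = ber_from_eye(mu1=1.0, mu0=0.1, sigma1=0.08, sigma0=0.05)
print(f"Q = {q:.1f}, estimated BER = {ber:.2e}")
```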
Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W
2016-11-14
Two orthogonal-modulation optical label switching (OLS) schemes, in which a polarization-multiplexed differential quadrature phase shift keying (POLMUX-DQPSK, or PDQ) payload carries either a duobinary (DB) label or a pulse position modulation (PPM) label, are investigated for high-bit-rate OLS networks. The BER performance of the hybrid payload and label modulation is discussed and evaluated in theory and simulation. Theoretical BER expressions for PDQ, PDQ-DB and PDQ-PPM are derived using an analysis of hybrid modulation encoding at different payload-to-label bit-rate ratios. The theoretical results show that the payload of the hybrid modulation has a certain receiver-sensitivity gain over a payload without a label, and the size of this payload BER gain depends on the label type. The simulation results are consistent with the theoretical conclusions. The extinction ratio (ER) conflict between hybrid encoding of intensity and phase formats can be balanced and optimized in an OLS system with hybrid modulation. The BER analysis method for hybrid modulation encoding in OLS systems can be applied to other n-ary hybrid or combined modulation systems.
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
NASA Astrophysics Data System (ADS)
Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.
2011-06-01
To compare PIN photodetectors and avalanche photodiodes (APDs) in a system using the double-weight (DW) code, the performance of spectral-amplitude-coding optical CDMA in an FTTH network with a point-to-multipoint (P2MP) application is evaluated. The performance of the PIN detector against the APD is compared through simulation using OptiSystem software version 7. Two networks were designed: one using a PIN photodetector and the other an APD, both with and without an erbium-doped fiber amplifier (EDFA). The APD is found to outperform the PIN photodetector in all simulation results. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. We also study a detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs encode and decode the spectral amplitude coding, namely the double-weight (DW) code, in optical code division multiple access (OCDMA). The performance is characterized through the bit error rate (BER) and bit rate (BR), as well as the received power at various bit rates.
Subjective evaluation of HEVC in mobile devices
NASA Astrophysics Data System (ADS)
Garcia, Ray; Kalva, Hari
2013-03-01
Mobile compute environments provide a unique set of user needs and expectations that designers must consider. With increased multimedia use in mobile environments, video encoding methods within the smart phone market segment are key factors that contribute to a positive user experience. Currently available display resolutions and expected cellular bandwidth are major factors the designer must consider when determining which encoding methods should be supported. The desired goal is to maximize the consumer experience, reduce cost, and reduce time to market. This paper presents a comparative evaluation of the quality of user experience when the HEVC and AVC/H.264 video coding standards were used. The goal of the study was to evaluate any improvements in user experience when using HEVC. Subjective comparisons were made between the H.264/AVC and HEVC encoding standards in accordance with the double-stimulus impairment scale (DSIS) as defined by ITU-R BT.500-13. Test environments are based on smart phone LCD resolutions and expected cellular bit rates, such as 200 kbps and 400 kbps. Subjective feedback shows both encoding methods are adequate at a 400 kbps constant bit rate. However, a noticeable consumer experience gap was observed at 200 kbps. Significantly lower H.264 subjective quality is noticed with video sequences that have multiple moving objects and no single point of visual attraction. Video sequences with single points of visual attraction or few moving objects tended to have higher H.264 subjective quality.
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. Furthermore, this article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
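A small, self-contained illustration of the point (not taken from the article): the expression (exp(x) - 1 - x)/x^2 tends to 1/2 as x -> 0, but for x = 1e-9 IEEE 64-bit arithmetic loses essentially all significant digits to cancellation, while 50-digit arithmetic, here via the mpmath library (an assumed dependency), recovers the correct value.

```python
import math
import mpmath

x = 1e-9
double_result = (math.exp(x) - 1.0 - x) / x**2    # catastrophic cancellation

mpmath.mp.dps = 50                                # 50 decimal digits of working precision
xm = mpmath.mpf("1e-9")
mp_result = (mpmath.exp(xm) - 1 - xm) / xm**2

print("64-bit float :", double_result)            # badly wrong (far from 0.5)
print("50-digit     :", mpmath.nstr(mp_result, 20))   # ~0.50000000016666666667
```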
NASA Astrophysics Data System (ADS)
Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero
2016-10-01
In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios of spectral non-flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level for any digital and analog pulse shaping, and the bit-data level with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit-error-rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.
Zou, Ding; Djordjevic, Ivan B
2016-09-05
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate of the corresponding LDPC code.
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
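The cubicle idea can be sketched in a few lines: sum the binary jot values over an x-y-t neighborhood to obtain one output pixel whose value approximates a photon count. The array sizes, cubicle dimensions, and photon occupancy below are illustrative, not QIS design parameters.

```python
import numpy as np

def cubicle_to_image(bit_planes, cx, cy, ct):
    """Sum binary jot data over (cx, cy, ct) cubicles to form output pixels.
    bit_planes: array of shape (T, H, W) with values 0/1."""
    t, h, w = bit_planes.shape
    trimmed = bit_planes[:t - t % ct, :h - h % cy, :w - w % cx]
    # Reshape so each cubicle gets its own set of axes, then sum the photon counts.
    cub = trimmed.reshape(t // ct, ct, h // cy, cy, w // cx, cx)
    return cub.sum(axis=(1, 3, 5))               # shape: (T/ct, H/cy, W/cx)

rng = np.random.default_rng(1)
jots = (rng.random((64, 256, 256)) < 0.2).astype(np.uint8)   # 20% occupancy bit planes
image = cubicle_to_image(jots, cx=8, cy=8, ct=16)            # one 8x8x16 cubicle per pixel
print(image.shape, image.max())                  # max possible count is 8*8*16 = 1024
```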
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnis Judzis; Alan Black; Homer Robertson
2006-03-01
The two-phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill "faster and deeper", possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run at less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled "Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling" for the period starting 1 October 2004 through 30 September 2005. Additionally, research activity from 1 October 2005 through 28 February 2006 is included in this report: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek designed and planned Phase I bench scale experiments. Some difficulties continue in obtaining ultra-high speed motors. Improvements have been made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs have been provided to vendors for production. A more consistent product is required to minimize the differences in bit performance. A test matrix for the final core bit testing program has been completed. (3) TerraTek is progressing through Task 3 "Small-scale cutting performance tests". (4) Significant testing has been performed on nine different rocks. (5) Bit balling has been observed on some rocks and seems to be more pronounced at higher rotational speeds. (6) Preliminary analysis of data has been completed and indicates that decreased specific energy is required as the rotational speed increases (Task 4). This data analysis has been used to direct the efforts of the final testing for Phase I (Task 5). (7) Technology transfer (Task 6) has begun with technical presentations to the industry (see Judzis).
Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification
NASA Astrophysics Data System (ADS)
Fallahpour, Mehdi; Megías, David
This paper proposes a novel robust audio watermarking algorithm to embed data and extract it in a bit-exact manner based on changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on the comparison between the original and the MP3 compressed/decompressed signal, and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps), without significant perceptual distortion (ODG about -0.25), and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.
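To make the "embed bits in FFT magnitudes" idea concrete, the sketch below uses generic quantization-index modulation on individual magnitude bins; this is a stand-in technique, not the authors' band-selection and scaling-factor scheme, and the bin indices and step size are arbitrary.

```python
import numpy as np

def embed_bits_fft(signal, bits, k0=2000, step=0.5):
    """QIM sketch: force the magnitude of FFT bin k0+i to an even (bit=0) or
    odd (bit=1) multiple of `step`, keeping the phase unchanged."""
    spec = np.fft.rfft(signal)
    for i, bit in enumerate(bits):
        k = k0 + i
        mag, phase = np.abs(spec[k]), np.angle(spec[k])
        q = 2 * np.round((mag - bit * step) / (2 * step)) + bit  # nearest level, right parity
        spec[k] = max(q, bit) * step * np.exp(1j * phase)
    return np.fft.irfft(spec, n=len(signal))

def extract_bits_fft(signal, n_bits, k0=2000, step=0.5):
    spec = np.fft.rfft(signal)
    return [int(np.round(np.abs(spec[k0 + i]) / step)) % 2 for i in range(n_bits)]

rng = np.random.default_rng(2)
host = rng.standard_normal(44100)                # 1 s of noise as a stand-in host signal
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits_fft(host, payload)
print(extract_bits_fft(marked, len(payload)) == payload)   # True
```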
Ultrasonic/Sonic Rotary-Hammer Drills
NASA Technical Reports Server (NTRS)
Badescu, Mircea; Sherrit, Stewart; Bar-Cohen, Yoseph; Bao, Xiaoqi; Kassab, Steve
2010-01-01
Ultrasonic/sonic rotary-hammer drill (USRoHD) is a recent addition to the collection of apparatuses based on the ultrasonic/sonic drill corer (USDC). As described below, the USRoHD has several features, not present in a basic USDC, that increase efficiency and provide some redundancy against partial failure. USDCs and related apparatuses were conceived for boring into, and/or acquiring samples of, rock or other hard, brittle materials of geological interest. They have been described in numerous previous NASA Tech Briefs articles. To recapitulate: A USDC can be characterized as a lightweight, low-power, piezoelectrically driven jackhammer in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. A basic USDC includes a piezoelectric stack, an ultrasonic transducer horn connected to the stack, a free mass (free in the sense that it can bounce axially a short distance between hard stops on the horn and the bit), and a tool bit. The piezoelectric stack creates ultrasonic vibrations that are mechanically amplified by the horn. The bouncing of the free mass between the hard stops generates the sonic vibrations. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that, unlike in conventional twist drilling, little applied axial force is needed to make the apparatus advance into the material of interest. There are numerous potential applications for USDCs and related apparatuses in geological exploration on Earth and on remote planets. In early USDC experiments, it was observed that accumulation of cuttings in a drilled hole causes the rate of penetration of the USDC to decrease steeply with depth, and that the rate of penetration can be increased by removing the cuttings. The USRoHD concept provides for removal of cuttings in the same manner as that of a twist drill: a USRoHD includes a USDC and a motor with gearhead. The USDC provides the bit hammering and the motor provides the bit rotation. Like a twist drill bit, the shank of the tool bit of the USRoHD is fluted. As in the operation of a twist drill, the rotation of the fluted drill bit removes cuttings from the drilled hole. The USRoHD tool bit is tipped with a replaceable crown having cutting teeth on its front surface. The teeth are shaped to promote fracturing of the rock face through a combination of hammering and rotation of the tool bit. Helical channels on the outer cylindrical surface of the crown serve as a continuation of the fluted surface of the shank, helping to remove cuttings. In the event of a failure of the USDC, the USRoHD can continue to operate with reduced efficiency as a twist drill. Similarly, in the event of a failure of the gearmotor, the USRoHD can continue to operate with reduced efficiency as a USDC.
Ka-Band Phased Array System Characterization
NASA Technical Reports Server (NTRS)
Acosta, R.; Johnson, S.; Sands, O.; Lambert, K.
2001-01-01
Phased Array Antennas (PAAs) using patch-radiating elements are projected to transmit data at rates several orders of magnitude higher than currently offered with reflector-based systems. However, there are a number of potential sources of degradation in the Bit Error Rate (BER) performance of the communications link that are unique to PAA-based links. Short spacing of radiating elements can induce mutual coupling between radiating elements, long spacing can induce grating lobes, modulo-2-pi phase errors can add to Inter Symbol Interference (ISI), and the phase shifters and power-divider network introduce losses into the system. This paper describes efforts underway to test and evaluate the effects of the performance-degrading features of phased-array antennas when used in a high data rate modulation link. The tests and evaluations described here uncover the interaction between the electrical characteristics of a PAA and the BER performance of a communication link.
Lu, Hai-Han; Li, Chung-Yi; Chu, Chien-An; Lu, Ting-Chien; Chen, Bo-Rui; Wu, Chang-Jen; Lin, Dai-Hua
2015-10-01
A 10 m/25 Gbps light-based WiFi (LiFi) transmission system based on a two-stage injection-locked 680 nm vertical-cavity surface-emitting laser (VCSEL) transmitter is proposed. A LiFi transmission system with a data rate of 25 Gbps is experimentally demonstrated over a 10 m free-space link. To the best of our knowledge, this is the first time a two-stage injection-locked 680 nm VCSEL transmitter has been employed in a 10 m/25 Gbps LiFi transmission system. Impressive bit error rate performance and a clear eye diagram are achieved in the proposed system. Such a 10 m/25 Gbps LiFi transmission system provides the advantage of a communication link with higher data rates that could accelerate the deployment of visible laser light communication.
MobileASL: intelligibility of sign language video over mobile phones.
Cavender, Anna; Vanam, Rahul; Barney, Dane K; Ladner, Richard E; Riskin, Eve A
2008-01-01
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques can not yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results that show high resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better quality, frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
True random numbers from amplified quantum vacuum.
Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V
2011-10-10
Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components, we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.
A Wearable Healthcare System With a 13.7 μA Noise Tolerant ECG Processor.
Izumi, Shintaro; Yamashita, Ken; Nakano, Masanao; Kawaguchi, Hiroshi; Kimura, Hiromitsu; Marumoto, Kyoji; Fuchikami, Takaaki; Fujimori, Yoshikazu; Nakajima, Hiroshi; Shiga, Toshikazu; Yoshimoto, Masahiko
2015-10-01
To prevent lifestyle diseases, wearable bio-signal monitoring systems for daily life monitoring have attracted attention. Wearable systems have strict size and weight constraints, which impose significant limitations on the battery capacity and the signal-to-noise ratio of bio-signals. This report describes an electrocardiograph (ECG) processor for use with a wearable healthcare system. It comprises an analog front end, a 12-bit ADC, a robust Instantaneous Heart Rate (IHR) monitor, a 32-bit Cortex-M0 core, and 64 Kbyte Ferroelectric Random Access Memory (FeRAM). The IHR monitor uses a short-term autocorrelation (STAC) algorithm to improve heart-rate detection accuracy despite its use in noisy conditions. The ECG processor chip consumes 13.7 μA for the heart rate logging application.
Compensation for first-order polarization-mode dispersion by using a novel tunable compensator
NASA Astrophysics Data System (ADS)
Qiu, Feng; Ning, Tigang; Pei, Shanshan; Xing, Yujun; Jian, Shuisheng
2005-01-01
Polarization-related impairments have become a critical issue for high-data-rate optical systems, particularly when considering polarization-mode dispersion (PMD). Consequently, compensation of PMD, especially of first-order PMD, is necessary to maintain adequate performance in long-haul systems at high bit rates of 10 Gb/s or beyond. In this paper, we successfully demonstrated automatic and tunable compensation for first-order polarization-mode dispersion. Furthermore, we reported a statistical assessment of this tunable compensator at 10 Gbit/s. Experimental results, including bit error rate measurements, agree well with theory, thereby demonstrating the compensator's efficiency at 10 Gbit/s. The first-order PMD was at most 274 ps before compensation and lower than 7 ps after compensation.
Video on phone lines: technology and applications
NASA Astrophysics Data System (ADS)
Hsing, T. Russell
1996-03-01
Recent advances in communications signal processing and VLSI technology are fostering tremendous interest in transmitting high-speed digital data over ordinary telephone lines at bit rates substantially above the ISDN Basic Access rate (144 Kbit/s). Two new technologies, high-bit-rate digital subscriber lines and asymmetric digital subscriber lines, promise transmission over most of the embedded loop plant at 1.544 Mbit/s and beyond. Stimulated by these research promises and by rapid advances in video coding techniques and standards activity, information networks around the globe are now exploring possible business opportunities for offering quality video services (such as distance learning, telemedicine, and telecommuting) through this high-speed digital transport capability in the copper loop plant. Visual communications for residential customers have become more feasible than ever, both technically and economically.
Achieving the Holevo bound via a bisection decoding protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosati, Matteo; Giovannetti, Vittorio
2016-06-15
We present a new decoding protocol to realize transmission of classical information through a quantum channel at asymptotically maximum capacity, achieving the Holevo bound and thus the optimal communication rate. At variance with previous proposals, our scheme recovers the message bit by bit, making use of a series of "yes-no" measurements, organized in bisection fashion, thus determining which codeword was sent in log₂(N) steps, N being the number of codewords.
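To make the bisection idea concrete, here is a small classical sketch (an illustration only, not the quantum protocol itself): each "yes-no" question halves the set of candidate codewords, so the sent codeword among N candidates is identified in ⌈log₂(N)⌉ steps. The oracle function below stands in for the sequence of binary quantum measurements described in the paper.

```python
import math

def bisection_identify(n_codewords, oracle):
    """Identify the sent codeword index in ceil(log2 N) yes/no queries.

    `oracle(subset)` answers True if the sent codeword lies in `subset`;
    in the paper each such question is realized by a quantum measurement.
    """
    candidates = list(range(n_codewords))
    steps = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if oracle(half):           # "yes": the codeword is in the first half
            candidates = half
        else:                      # "no": keep the second half
            candidates = candidates[len(half):]
        steps += 1
    return candidates[0], steps

# Example: N = 16 codewords, codeword 11 was sent.
sent = 11
index, steps = bisection_identify(16, lambda subset: sent in subset)
print(index, steps, math.ceil(math.log2(16)))  # -> 11 4 4
```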
Practical quantum key distribution protocol without monitoring signal disturbance.
Sasaki, Toshihiko; Yamamoto, Yoshihisa; Koashi, Masato
2014-05-22
Quantum cryptography exploits the fundamental laws of quantum mechanics to provide a secure way to exchange private information. Such an exchange requires a common random bit sequence, called a key, to be shared secretly between the sender and the receiver. The basic idea behind quantum key distribution (QKD) has widely been understood as the property that any attempt to distinguish encoded quantum states causes a disturbance in the signal. As a result, implementation of a QKD protocol involves an estimation of the experimental parameters influenced by the eavesdropper's intervention, which is achieved by randomly sampling the signal. If the estimation of many parameters with high precision is required, the portion of the signal that is sacrificed increases, thus decreasing the efficiency of the protocol. Here we propose a QKD protocol based on an entirely different principle. The sender encodes a bit sequence onto non-orthogonal quantum states and the receiver randomly dictates how a single bit should be calculated from the sequence. The eavesdropper, who is unable to learn the whole of the sequence, cannot guess the bit value correctly. An achievable rate of secure key distribution is calculated by considering complementary choices between quantum measurements of two conjugate observables. We found that a practical implementation using a laser pulse train achieves a key rate comparable to a decoy-state QKD protocol, an often-used technique for lasers. It also has a better tolerance of bit errors and of finite-sized-key effects. We anticipate that this finding will give new insight into how the probabilistic nature of quantum mechanics can be related to secure communication, and will facilitate the simple and efficient use of conventional lasers for QKD.
Kiani, Mehdi; Ghovanloo, Maysam
2015-02-01
A fully-integrated near-field wireless transceiver has been presented for simultaneous data and power transmission across inductive links, which operates based on the pulse delay modulation (PDM) technique. PDM is a low-power carrier-less modulation scheme that offers wide bandwidth along with robustness against strong power carrier interference, which makes it suitable for implantable neuroprosthetic devices, such as retinal implants. To transmit each bit, a pattern of narrow pulses is generated at the same frequency as the power carrier across the transmitter (Tx) data coil, with specific time delays, to initiate decaying ringing across the tuned receiver (Rx) data coil. This ringing shifts the zero-crossing times of the undesired power carrier interference on the Rx data coil, resulting in a phase shift between the signals across the Rx power and data coils, from which the data bit stream can be recovered. A PDM transceiver prototype was fabricated in a 0.35-μm standard CMOS process, occupying 1.6 mm². The transceiver achieved a measured 13.56 Mbps data rate with a raw bit error rate (BER) of 4.3×10⁻⁷ at 10 mm distance between figure-8 data coils, despite a signal-to-interference ratio (SIR) of -18.5 dB across the Rx data coil. At the same time, a class-D power amplifier, operating at 13.56 MHz, delivered 42 mW of regulated power across a separate pair of high-Q power coils, aligned with the data coils. The PDM data Tx and Rx power consumptions were 960 pJ/bit and 162 pJ/bit, respectively, at a 1.8 V supply voltage.
Percussive Augmenter of Rotary Drills (PARoD)
NASA Technical Reports Server (NTRS)
Badescu, Mircea; Bar-Cohen, Yoseph; Sherrit, Stewart; Bao, Xiaoqi; Chang, Zensheu; Donnelly, Chris; Aldrich, Jack
2012-01-01
Increasingly, NASA exploration mission objectives include sample acquisition tasks for in-situ analysis or for potential sample return to Earth. To address the requirements for samplers that could be operated at the conditions of the various bodies in the solar system, a piezoelectric actuated percussive sampling device was developed that requires low preload (as low as 10 N), which is important for operation at low gravity. This device can be made as light as 400 g, can be operated using low average power, and can drill rocks as hard as basalt. Significant improvement of the penetration rate was achieved by augmenting the hammering action with rotation and by using a fluted bit to provide effective cuttings removal. Generally, hammering is effective in fracturing drilled media while rotation of fluted bits is effective in cuttings removal. To benefit from these two actions, a novel configuration of a percussive mechanism was developed to produce an augmenter of rotary drills. The device was called the Percussive Augmenter of Rotary Drills (PARoD). A breadboard PARoD was developed with a 6.4 mm (0.25 in) diameter bit and was demonstrated to increase the drilling rate, relative to rotation alone, by a factor of 1.5 to over 10. Further, a large PARoD breadboard with a 50.8 mm diameter bit was developed and its tests are currently underway. This paper presents the design, analysis, and preliminary test results of the percussive augmenter.
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream could be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method has also been proposed for the identification of the temporal identifier for legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views could be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time per view component of only 0.38 ms.
An optical disk archive for a data base management system
NASA Technical Reports Server (NTRS)
Thomas, Douglas T.
1985-01-01
An overview is given of a data base management system that can catalog and archive data at rates up to 50M bits/sec. Emphasis is on the laser disk system that is used for the archive. All key components in the system (3 Vax 11/780s, a SEL 32/2750, a high speed communication interface, and the optical disk) are interfaced to a 100M bits/sec 16-port fiber optic bus to achieve the high data rates. The basic data unit is an autonomous data packet. Each packet contains a primary and secondary header and can be up to a million bits in length. The data packets are recorded on the optical disk at the same time the packet headers are being used by the relational data base management software ORACLE to create a directory independent of the packet recording process. The user then interfaces to the VAX that contains the directory for a quick-look scan or retrieval of the packet(s). The total system functions are distributed between the VAX and the SEL. The optical disk unit records the data with an argon laser at 100M bits/sec from its buffer, which is interfaced to the fiber optic bus. The same laser is used in the read cycle by reducing the laser power. Additional information is given in the form of outlines, charts, and diagrams.
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
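As a rough numerical illustration of the reported scaling (ADC bits growing with log₂ of the per-chain SNR to the 1/3 power), the toy routine below assigns bits under a total power budget; the power model (ADC power proportional to 2^bits), the rounding step, and the example SNR values are simplifying assumptions rather than the paper's exact BA algorithm.

```python
import numpy as np

def allocate_adc_bits(snr_linear, total_power_units, b_max=8):
    """Toy bit-allocation sketch: b_i grows like (1/3)*log2(SNR_i),
    with a common offset chosen so that sum(2**b_i) stays within a
    total ADC power budget (assuming power ~ 2**bits per ADC pair)."""
    base = np.log2(snr_linear) / 3.0
    best = np.zeros_like(base, dtype=int)
    # Search the largest common offset that still meets the power budget.
    for offset in np.arange(10.0, -10.0, -0.05):
        bits = np.clip(np.round(base + offset), 0, b_max).astype(int)
        if np.sum(2.0 ** bits) <= total_power_units:
            best = bits
            break
    return best

rf_chain_snr_db = np.array([20.0, 15.0, 10.0, 5.0])   # assumed per-chain SNRs
snr = 10 ** (rf_chain_snr_db / 10)
# Budget equivalent to four fixed 4-bit ADCs: 4 * 2**4 = 64 power units.
print(allocate_adc_bits(snr, total_power_units=64))    # e.g. [5 4 3 3]
```

The stronger RF chains end up with more quantization bits, which is the qualitative behavior the closed-form solutions describe.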
Hoenig, Helen M; Amis, Kristopher; Edmonds, Carol; Morgan, Michelle S; Landerman, Lawrence; Caves, Kevin
2017-01-01
Background: There is limited research about the effects of video quality on the accuracy of assessments of physical function. Methods: A repeated measures study design was used to assess reliability and validity of the finger-nose test (FNT) and the finger-tapping test (FTT) carried out with 50 veterans who had impairment in gross and/or fine motor coordination. Videos were scored by expert raters under eight differing conditions, including in-person, high-definition video with slow motion review, and standard-speed videos with varying bit rates and frame rates. Results: FTT inter-rater reliability was excellent with slow motion video (ICC 0.98-0.99) and good (ICC 0.59) under the normal speed conditions. Inter-rater reliability for FNT 'attempts' was excellent (ICC 0.97-0.99) for all viewing conditions; for FNT 'misses' it was good to excellent (ICC 0.89) with slow motion review but substantially worse (ICC 0.44) on the normal speed videos. FTT criterion validity (i.e. compared to slow motion review) was excellent (β = 0.94) for the in-person rater and good (β = 0.77) on normal speed videos. Criterion validity for FNT 'attempts' was excellent under all conditions (r ≥ 0.97) and for FNT 'misses' it was good to excellent under all conditions (β = 0.61-0.81). Conclusions: In general, the inter-rater reliability and validity of the FNT and FTT assessed via video technology are similar to standard clinical practices, but are enhanced with slow motion review and/or a higher bit rate.
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees (SPIHT).
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
NASA Astrophysics Data System (ADS)
Guesmi, Latifa; Menif, Mourad
2016-08-01
In the context of carrying a wide variety of modulation formats and data rates for home networks, this study covers radio-over-fiber (RoF) technology, where there is a need for alternative means of management, automated fault diagnosis, and format identification. In addition, RoF signals in an optical link are impaired by various linear and nonlinear effects, including chromatic dispersion, polarization mode dispersion, amplified spontaneous emission noise, and so on. Hence, for this purpose, we investigated a sampling method based on asynchronous delay-tap sampling, in conjunction with a cross-correlation function, for joint bit-rate/modulation-format identification and optical performance monitoring. Three modulation formats with different data rates are used to demonstrate the validity of this technique, where high identification accuracy and wide monitoring ranges are achieved.
Louri, A; Furlonge, S; Neocleous, C
1996-12-10
A prototype of a novel topology for scaleable optical interconnection networks called the optical multi-mesh hypercube (OMMH) is experimentally demonstrated at data rates as high as 150 Mbit/s (2⁷ − 1 non-return-to-zero pseudo-random data pattern) at a bit error rate of 10⁻¹³ per link by the use of commercially available devices. OMMH is a scaleable network [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] architecture that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation method is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbits/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, the misalignment sensitivity of the free-space links, the low power (1 mW) of the individual VCSELs, and noise.
Investigation of mode partition noise in Fabry-Perot laser diode
NASA Astrophysics Data System (ADS)
Guo, Qingyi; Deng, Lanxin; Mu, Jianwei; Li, Xun; Huang, Wei-Ping
2014-09-01
Passive optical network (PON) is considered as the most appealing access network architecture in terms of cost-effectiveness, bandwidth management flexibility, scalability and durability. And to further reduce the cost per subscriber, a Fabry-Perot (FP) laser diode is preferred as the transmitter at the optical network units (ONUs) because of its lower cost compared to distributed feedback (DFB) laser diode. However, the mode partition noise (MPN) associated with the multi-longitudinal-mode FP laser diode becomes the limiting factor in the network. This paper studies the MPN characteristics of the FP laser diode using the time-domain simulation of noise-driven multi-mode laser rate equation. The probability density functions are calculated for each longitudinal mode. The paper focuses on the investigation of the k-factor, which is a simple yet important measure of the noise power, but is usually taken as a fitted or assumed value in the penalty calculations. In this paper, the sources of the k-factor are studied with simulation, including the intrinsic source of the laser Langevin noise, and the extrinsic source of the bit pattern. The photon waveforms are shown under four simulation conditions for regular or random bit pattern, and with or without Langevin noise. The k-factors contributed by those sources are studied with a variety of bias current and modulation current. Simulation results are illustrated in figures, and show that the contribution of Langevin noise to the k-factor is larger than that of the random bit pattern, and is more dominant at lower bias current or higher modulation current.
A fully integrated mixed-signal neural processor for implantable multichannel cortical recording.
Sodagar, Amir M; Wise, Kensall D; Najafi, Khalil
2007-06-01
A 64-channel neural processor has been developed for use in an implantable neural recording microsystem. In the Scan Mode, the processor is capable of detecting neural spikes by programmable positive, negative, or window thresholding. Spikes are tagged with their associated channel addresses and formed into 18-bit data words that are sent serially to the external host. In the Monitor Mode, two channels can be selected and viewed at high resolution for studies where the entire signal is of interest. The processor runs from a 3-V supply and a 2-MHz clock, with a channel scan rate of 64 kS/s and an output bit rate of 2 Mbps.
NASA Astrophysics Data System (ADS)
Almalaq, Yasser; Matin, Mohammad A.
2014-09-01
The broadband passive optical network (BPON) has the ability to support high-speed data, voice, and video services to home and small-business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). An important benefit of BPON is reduced cost: because BPON uses a passive splitter, the cost of maintenance between the provider and the customer side is kept reasonable. In the proposed research, the BPON has been tested using a bit error rate (BER) analyzer, which reports the maximum Q factor, minimum bit error rate, and eye height.
Drilling plastic formations using highly polished PDC cutters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.H.; Lund, J.B.; Anderson, M.
1995-12-31
Highly plastic and over-pressured formations are troublesome for both roller cone and PDC bits. Thus far, attempts to increase penetration rates in these formations have centered around re-designing the bit or modifying the cutting structure. These efforts have produced only moderate improvements. This paper presents both laboratory and field data to illustrate the benefits of applying a mirror polished surface to the face of PDC cutters in drilling stressed formations. These cutters are similar to traditional PDC cutters, with the exception of the reflective mirror finish, applied to the diamond table surfaces prior to their installation in the bit. Results of tests conducted in a single point cutter apparatus and a full-scale drilling simulator will be presented and discussed. Field results will be presented that demonstrate the effectiveness of polished cutters, in both water and oil-based muds. Increases in penetration rates of 300-400% have been observed in the Wilcox formation and other highly pressured shales. Typically, the beneficial effects of polished cutters have been realized at depths greater than 7000 ft, and with mud weights exceeding 12 ppg.
Experimental research of adaptive OFDM and OCT precoding with a high SE for VLLC system
NASA Astrophysics Data System (ADS)
Liu, Shuang-ao; He, Jing; Chen, Qinghui; Deng, Rui; Zhou, Zhihua; Chen, Shenghai; Chen, Lin
2017-09-01
In this paper, an adaptive orthogonal frequency division multiplexing (OFDM) modulation scheme with 128/64/32/16-quadrature amplitude modulation (QAM) and orthogonal circulant matrix transform (OCT) precoding is proposed and experimentally demonstrated for a visible laser light communication (VLLC) system with a cost-effective 450-nm blue-light laser diode (LD). The performance is compared among the conventional adaptive Discrete Fourier Transform-spread (DFT-spread) OFDM scheme, the 32-QAM OCT-precoded OFDM scheme, the 64-QAM OCT-precoded OFDM scheme, and the adaptive OCT-precoded OFDM scheme. The experimental results show that OCT precoding can achieve a relatively flat signal-to-noise ratio (SNR) curve, and it can provide a performance improvement in bit error rate (BER). Furthermore, the BER of the proposed OFDM signal with a raw bit rate of 5.04 Gb/s after 5-m free-space transmission is below the 20% soft-decision forward error correction (SD-FEC) threshold of 2.4 × 10⁻², and a spectral efficiency (SE) of 4.2 bit/s/Hz can be successfully achieved.
Perceptually tuned low-bit-rate video codec for ATM networks
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien
1996-02-01
In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated with 2 × 64 kbps and the cells of the second layer are all lost.
Utilizing a language model to improve online dynamic data collection in P300 spellers.
Mainsah, Boyla O; Colwell, Kenneth A; Collins, Leslie M; Throckmorton, Chandra S
2014-07-01
P300 spellers provide a means of communication for individuals with severe physical limitations, especially those with locked-in syndrome, such as amyotrophic lateral sclerosis. However, P300 speller use is still limited by relatively low communication rates due to the multiple data measurements that are required to improve the signal-to-noise ratio of event-related potentials for increased accuracy. Therefore, the amount of data collection has competing effects on accuracy and spelling speed. Adaptively varying the amount of data collection prior to character selection has been shown to improve spelling accuracy and speed. The goal of this study was to optimize a previously developed dynamic stopping algorithm that uses a Bayesian approach to control data collection by incorporating a priori knowledge via a language model. Participants (n = 17) completed online spelling tasks using the dynamic stopping algorithm, with and without a language model. The addition of the language model resulted in improved participant performance from a mean theoretical bit rate of 46.12 bits/min at 88.89% accuracy to 54.42 bits/min at 90.36% accuracy.
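The theoretical bit rates quoted above are information transfer rates of the kind usually computed with the Wolpaw formula; a generic sketch follows. The 36-character speller matrix and the 11 selections-per-minute figure are illustrative assumptions, not parameters taken from this study.

```python
import math

def wolpaw_bits_per_selection(n_choices, accuracy):
    """Wolpaw information transfer rate in bits per selection."""
    p = accuracy
    if p <= 0.0 or p >= 1.0:
        return math.log2(n_choices) if p == 1.0 else 0.0
    return (math.log2(n_choices)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_choices - 1)))

# Hypothetical example: a 6x6 speller grid (36 characters),
# 11 selections per minute, at two accuracy levels.
for acc in (0.8889, 0.9036):
    bpm = wolpaw_bits_per_selection(36, acc) * 11
    print(f"accuracy {acc:.2%}: {bpm:.1f} bits/min")
```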
Koppa, Santosh; Mohandesi, Manouchehr; John, Eugene
2016-12-01
Power consumption is one of the key design constraints in biomedical devices such as pacemakers that are powered by small non-rechargeable batteries over their entire lifetime. In these systems, Analog-to-Digital Converters (ADCs) are used as the interface between the analog world and the digital domain and play a key role. In this paper we present the design of an 8-bit Charge Redistribution Successive Approximation Register (CR-SAR) analog-to-digital converter in standard TSMC 0.18 μm CMOS technology for low-power and low-data-rate devices such as pacemakers. The 8-bit optimized CR-SAR ADC achieves a low power of less than 250 nW at a conversion rate of 1 kS/s. The ADC achieves integral nonlinearity (INL) and differential nonlinearity (DNL) of less than 0.22 least significant bit (LSB) and less than 0.04 LSB, respectively, compared to the standard requirement that the INL and DNL errors be less than 0.5 LSB. The designed ADC operates from a 1 V supply voltage, converting inputs ranging from 0 V to 250 mV.
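For readers unfamiliar with the successive-approximation principle behind the CR-SAR architecture, here is a minimal behavioral model of an ideal N-bit SAR conversion; the actual charge-redistribution DAC, comparator non-idealities, and timing of the reported design are not modeled.

```python
def sar_convert(vin, vref, n_bits=8):
    """Idealized successive-approximation conversion: test bits from MSB
    to LSB, keeping each bit if the trial DAC voltage stays below vin."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        v_dac = vref * trial / (1 << n_bits)   # ideal binary-weighted DAC
        if v_dac <= vin:
            code = trial                        # keep the bit
    return code

# Example with a 250 mV full-scale range, as in the reported input span.
vref = 0.250
for vin in (0.000, 0.100, 0.249):
    print(vin, sar_convert(vin, vref))          # -> 0, 102, 254
```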
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, for example, uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission of 32 bits/cycle. In the new method, at every instant of 1/16 of a cycle, one of 124 wavelengths is transmitted, each with a duration of one cycle (so that at any time instant no more than 16 different wavelengths are present) and a capacity of 4 bits, giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at the different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing deletions and errors simultaneously.
Zhao, Anbang; Zeng, Caigao; Hui, Juan; Ma, Lin; Bi, Xuejie
2017-01-01
This paper proposes a composite channel virtual time reversal mirror (CCVTRM) for vertical sensor array (VSA) processing and applies it to long-range underwater acoustic (UWA) communication in shallow water. Because of the weak signal-to-noise ratio (SNR), the channel impulse response of each sensor of the VSA cannot be estimated accurately, thus the traditional passive time reversal mirror (PTRM) cannot perform well in long-range UWA communication in shallow water. However, CCVTRM only needs to estimate the composite channel of the VSA to accomplish the time reversal mirror (TRM) processing, which can effectively mitigate the inter-symbol interference (ISI) and reduce the bit error rate (BER). In addition, the calculation of CCVTRM is simpler than that of the traditional PTRM. A UWA communication experiment using a VSA of 12 sensors was conducted in the South China Sea. The experiment achieved very low-BER communication at a rate of 66.7 bit/s over an 80 km range. The results of the sea trial demonstrate that CCVTRM is feasible and can be applied to long-range UWA communication in shallow water.
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 × 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
NASA Technical Reports Server (NTRS)
1972-01-01
The conceptual design of a highly reliable 10⁸-bit bubble domain memory for the space program is described. The memory has random access to blocks of closed-loop shift registers, and utilizes self-contained bubble domain chips with on-chip decoding. Trade-off studies show that the highest reliability and lowest power dissipation are obtained when the memory is organized on a bit-per-chip basis. The final design has 800 bits/register, 128 registers/chip, 16 chips/plane, and 112 planes, of which only seven are activated at a time. A word has 64 data bits + 32 check bits, used in a 16-adjacent code to provide correction of any combination of errors in one plane. A 100 kHz maximum rotational frequency keeps power low (25 watts or less) and also allows asynchronous operation. The data rate is 6.4 megabits/sec; access time is 200 msec to an 800-word block and an additional 4 msec (average) to a word. The fabrication and operation are also described for a 64-bit bubble domain memory chip designed to test the concept of on-chip magnetic decoding. Access to one of the chip's four shift registers for the read, write, and clear functions is by means of bubble domain decoders utilizing the interaction between a conductor line and a bubble.
Real-time echocardiogram transmission protocol based on regions and visualization modes.
Cavero, Eva; Alesanco, Álvaro; García, José
2014-09-01
This paper proposes an Echocardiogram Transmission Protocol (ETP) for real-time end-to-end transmission of echocardiograms over IP networks. The ETP has been designed taking into account the echocardiogram characteristics of each visualized region, encoding each region according to its data type, visualization characteristics, and diagnostic importance in order to improve the coding and thus the transmission efficiency. Furthermore, each region is sent separately and different error protection techniques can be used for each region. This leads to an efficient use of resources and provides greater protection for those regions with more clinical information. Synchronization is implemented for regions that change over time. The echocardiogram composition is different for each device; the protocol is valid for all echocardiogram devices thanks to the incorporation of configuration information which includes the composition of the echocardiogram. The efficiency of the ETP has been demonstrated in terms of the number of bits sent with the proposed protocol. The codec and transmission rates used for the regions of interest have been set according to previous recommendations. Although the saving in coded bits depends on the video composition, a coding gain higher than 7% with respect to transmission without ETP has been achieved.
Optical digital to analog conversion performance analysis for indoor set-up conditions
NASA Astrophysics Data System (ADS)
Dobesch, Aleš; Alves, Luis Nero; Wilfert, Otakar; Ribeiro, Carlos Gaspar
2017-10-01
In visible light communication (VLC), the optical digital-to-analog conversion (ODAC) approach was proposed as a suitable driving technique able to overcome the light-emitting diode's (LED) non-linear characteristic. This concept is analogous to an electrical digital-to-analog converter (EDAC): digital bits are binary weighted to represent an analog signal. The method supports elementary on-off based modulations, able to exploit the essence of the LED's non-linear characteristic and allowing simultaneous lighting and communication. In the ODAC concept the reconstruction error does not simply depend upon the converter bit depth as in the case of an EDAC; rather, it also depends on the communication system set-up and the geometrical relation between emitter and receiver. The paper describes simulation results presenting the ODAC's error performance taking into account the optical channel, the LED's half-power angle (HPA), and the receiver field of view (FOV). The set-up under consideration examines indoor conditions for a square room 4 m in length and 3 m in height, operating with one dominant wavelength (blue) and having walls with a reflection coefficient of 0.8. The achieved results reveal that the reconstruction error increases for higher data rates as a result of interference due to multipath propagation.
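Since the ODAC concept represents each analog sample by binary-weighted on-off light contributions, a minimal numerical sketch of that reconstruction step is given below; the geometry, multipath propagation, and noise studied in the paper are deliberately left out, and the 8-bit depth is only an assumed example.

```python
import numpy as np

def odac_reconstruct(sample, n_bits=8):
    """Quantize a sample in [0, 1), drive one on-off 'LED' per bit, and
    reconstruct the analog value as the binary-weighted sum of the
    received on-off contributions (ideal, noiseless channel assumed)."""
    code = int(sample * (1 << n_bits))                 # digital word
    bits = [(code >> k) & 1 for k in range(n_bits)]    # one LED per bit
    return sum(b * (2 ** k) for k, b in enumerate(bits)) / (1 << n_bits)

t = np.linspace(0, 1, 8, endpoint=False)
signal = 0.5 + 0.45 * np.sin(2 * np.pi * t)            # test waveform in [0, 1)
recon = [odac_reconstruct(s) for s in signal]
print(np.max(np.abs(np.array(recon) - signal)))        # error < 1 LSB = 1/256
```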
NASA Technical Reports Server (NTRS)
Svoboda, James S.; Kachmar, Brian A.
1993-01-01
The design and performance of a rain fade simulation/counteraction system on a laboratory simulated 30/20 GHz, time division multiple access (TDMA) satellite communications testbed is evaluated. Severe rain attenuation of electromagnetic radiation at 30/20 GHz occurs due to the carrier wavelength approaching the water droplet size. Rain in the downlink path lowers the signal power present at the receiver, resulting in a higher number of bit errors induced in the digital ground terminal. The laboratory simulation performed at NASA Lewis Research Center uses a programmable PIN diode attenuator to simulate 20 GHz satellite downlink geographic rain fade profiles. A computer based network control system monitors the downlink power and informs the network of any power threshold violations, which then prompts the network to issue commands that temporarily increase the gain of the satellite based traveling wave tube (TWT) amplifier. After the rain subsides, the network returns the TWT to the normal energy conserving power mode. Bit error rate (BER) data taken at the receiving ground terminal serves as a measure of the severity of rain degradation, and also evaluates the extent to which the network can improve the faded channel.
Carty, Paul; Cooper, Michael R; Barr, Alan; Neitzel, Richard L; Balmes, John; Rempel, David
2017-07-01
Hammer drills are used extensively in commercial construction for drilling into concrete for tasks including rebar installation for structural upgrades and anchor bolt installation. This drilling task can expose workers to respirable silica dust and noise. The aim of this pilot study was to evaluate the effects of bit wear on respirable silica dust, noise, and drilling productivity. Test bits were worn to three states by drilling consecutive holes to different cumulative drilling depths: 0, 780, and 1560 cm. Each state of bit wear was evaluated by three trials (nine trials total). For each trial, an automated laboratory test bench system drilled 41 holes, 1.3 cm in diameter and 10 cm deep, into concrete block at a rate of one hole per minute using a commercially available hammer drill and masonry bits. During each trial, dust was continuously captured by two respirable and one inhalable sampling trains and noise was sampled with a noise dosimeter. The room was thoroughly cleaned between trials. When comparing results for the sharp (0 cm) versus dull (1560 cm) bit, the mean respirable silica increased from 0.41 to 0.74 mg m⁻³ in sampler 1 (P = 0.012) and from 0.41 to 0.89 mg m⁻³ in sampler 2 (P = 0.024); these levels are above the NIOSH recommended exposure limit of 0.05 mg m⁻³. Likewise, mean noise levels increased from 112.8 to 114.4 dBA (P < 0.00001). Drilling productivity declined with increasing wear from 10.16 to 7.76 mm s⁻¹ (P < 0.00001). Increasing bit wear was associated with increasing respirable silica dust and noise and reduced drilling productivity. The levels of dust and noise produced by these experimental conditions would require dust capture, hearing protection, and possibly respiratory protection. The findings support the adoption of a bit replacement program by construction contractors.
1975-01-01
in the computer in 16-bit parallel computer DIO transfers at the maximum computer I/O speed. It then transmits this data in a bit-serial echo ... maximum DIO rate under computer interrupt control. The LCI also provides station interrupt information for transfer to the computer under computer ... been in daily operation since 1973. The SAM-D Missile system is currently in the Engineering Development phase which precedes the Production and
VINSON/AUTOVON Interface Applique for the Modem, Digital Data, AN/GSC-38
1980-11-01
Measurement Indication Result Before Step 6 None Noise and beeping are heard in handset After Step 7 None Noise and beeping disappear Condition Measurement ... linear range due to the compression used. Lowering the levels below the compression range may give increased linearity, but may cause signal-to-noise ... are encountered where the bit error rate at 16 KB/S results in objectionable audio noise or causes the KY-58 to squelch. On these channels the bit
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation, and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2⁸) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst-error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
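As a small illustration of the interleaving step described above, the sketch below performs symbol-level block interleaving: codewords are written row-wise and the channel stream is read column-wise, so a burst of channel errors is spread across several RS codewords. The toy codeword length and burst position are illustrative, not the simulator's CCSDS parameters.

```python
import numpy as np

def interleave(codewords):
    """Write RS codewords as rows, transmit the symbols column by column."""
    return np.asarray(codewords).T.flatten()

def deinterleave(stream, depth, n):
    """Invert the interleaver at the receiver."""
    return stream.reshape(n, depth).T

depth, n = 5, 12                      # 5 codewords of 12 symbols (toy sizes)
cw = np.arange(depth * n).reshape(depth, n)
tx = interleave(cw)

# A burst of 5 consecutive channel symbols hits the interleaved stream...
rx = tx.copy()
rx[20:25] = -1
# ...but after deinterleaving each codeword sees at most one bad symbol
# instead of a 5-symbol burst, which a symbol-correcting RS code handles easily.
errors_per_codeword = (deinterleave(rx, depth, n) == -1).sum(axis=1)
print(errors_per_codeword)            # -> [1 1 1 1 1]
```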
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of the HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provide consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
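For reference, the Bjøntegaard-delta bit-rate metric used above is commonly computed by fitting a cubic polynomial to each rate-distortion curve in the log-rate domain and integrating the gap over the overlapping PSNR interval. The sketch below follows that common recipe; the example rate-distortion points are made up and do not come from the verification test.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjøntegaard-delta bit rate: average % rate difference of the test
    curve vs. the reference curve at equal PSNR (negative = savings)."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)      # log-rate as cubic in PSNR
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))      # overlapping PSNR interval
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)  # mean log10 rate difference
    return (10 ** avg_diff - 1) * 100            # percent

# Made-up RD points (kbps, dB) for an anchor codec and a test codec.
anchor = ([1000, 2000, 4000, 8000], [34.0, 37.0, 40.0, 43.0])
test   = ([400,  800, 1600, 3200], [34.5, 37.5, 40.5, 43.5])
print(f"BD-rate: {bd_rate(*anchor, *test):.1f} %")   # large negative = savings
```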
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because of the high efficiency that the OCDMA technique can achieve; hence the fibre bandwidth is fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN), and orthogonality between users in the system. The SeQ code shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes. Thus, the SeQ code enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, the SeQ code achieves a good BER performance of 10⁻²⁵ at 155 Mbps, in comparison with the 622 Mbps, 1 Gbps, and 2 Gbps bit rates. From the plotted graph, a bit rate of 155 Mbps is a suitable speed for FTTH and LAN networks. This conclusion is based on the superior performance of the SeQ code. Thus, these codes offer an opportunity for better quality of service in OCDMA-based optical access networks for future generations.
Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.
Li, Mengfan; Li, Wei; Zhou, Huihui
2016-02-01
Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the visual stimuli meanings and help them more effectively concentrate on their mental activities.
Miniaturized module for the wireless transmission of measurements with Bluetooth.
Roth, H; Schwaibold, M; Moor, C; Schöchlin, J; Bolz, A
2002-01-01
The wiring of patients for obtaining medical measurements has many disadvantages. In order to limit these, a miniaturized module was developed which digitizes analog signals and sends them wirelessly to the receiver using Bluetooth. Bluetooth is especially suitable for this application because distances of up to 10 m are possible with low power consumption and robust, encrypted transmission. The module consists of a Bluetooth chip, which is initialized by a microcontroller in such a way that connections from other Bluetooth receivers can be accepted. The signals are then transmitted to the distant end. The maximum bit rate of the 23 mm x 30 mm module is 73.5 kBit/s. At 4.7 kBit/s, the current consumption is 12 mA.
NASA Astrophysics Data System (ADS)
Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro
2012-09-01
It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.
Performance of convolutionally encoded noncoherent MFSK modem in fading channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1976-01-01
The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combatting channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
Self-recovery fragile watermarking algorithm based on SPIHT
NASA Astrophysics Data System (ADS)
Xin, Li Ping
2015-12-01
A fragile watermarking algorithm based on SPIHT coding is proposed, which can recover the primary image itself. The novelty of the algorithm is that it can perform tamper localization and self-restoration, with a very good recovery effect. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, so as to greatly reduce the quantity of embedded watermark data. Then the watermark data is encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit planes of the gray-level image, the watermarked image retains a good visual appearance. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover the tampered image and realize blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of other similar algorithms.
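The step of hiding recovery data in the two lowest bit planes can be sketched as a generic two-LSB substitution (shown below in NumPy); the SPIHT compression, error-correction coding, and scrambling stages of the actual algorithm are omitted, so this is only an illustration of the embedding idea.

```python
import numpy as np

def embed_two_lsb(image, payload_bits):
    """Replace the two least significant bit planes of an 8-bit image
    with payload bits (2 bits per pixel, row-major order)."""
    flat = image.flatten()
    bits = np.resize(np.asarray(payload_bits, dtype=np.uint8), 2 * flat.size)
    two_bit = bits[0::2] * 2 + bits[1::2]          # pack bit pairs into 0..3
    return ((flat & 0xFC) | two_bit).reshape(image.shape)

def extract_two_lsb(stego):
    """Recover the embedded bit stream from the two LSB planes."""
    two_bit = stego.flatten() & 0x03
    return np.column_stack((two_bit >> 1, two_bit & 1)).flatten()

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
payload = np.random.randint(0, 2, 32)              # 2 bits/pixel * 16 pixels
stego = embed_two_lsb(img, payload)
assert np.array_equal(extract_two_lsb(stego), payload)
print(np.max(np.abs(stego.astype(int) - img.astype(int))))  # distortion <= 3
```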
Motion-Compensated Compression of Dynamic Voxelized Point Clouds.
De Queiroz, Ricardo L; Chou, Philip A
2017-05-24
Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
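The per-block choice between intra coding and motion compensation described above is a classic Lagrangian rate-distortion decision, J = D + λR; the toy sketch below illustrates the selection rule, with placeholder distortion and rate estimates rather than the coder's actual measurements.

```python
def choose_block_mode(d_intra, r_intra, d_mc, r_mc, lam):
    """Pick the mode with the smaller Lagrangian cost J = D + lambda * R."""
    j_intra = d_intra + lam * r_intra
    j_mc = d_mc + lam * r_mc
    return ("intra", j_intra) if j_intra <= j_mc else ("motion-compensated", j_mc)

# Hypothetical per-block estimates: (distortion, rate in bits) per mode.
blocks = [
    {"intra": (120.0, 900), "mc": (150.0, 40)},   # static block: MC wins
    {"intra": (110.0, 850), "mc": (900.0, 60)},   # new content: intra wins
]
lam = 0.85   # Lagrange multiplier, normally derived from the target bit rate
for i, b in enumerate(blocks):
    mode, cost = choose_block_mode(*b["intra"], *b["mc"], lam)
    print(f"block {i}: {mode} (J = {cost:.1f})")
```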
FPGA based digital phase-coding quantum key distribution system
NASA Astrophysics Data System (ADS)
Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu
2015-12-01
Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems in fiber channels. In order to improve the phase-coding modulation rate, we propose a new digital-modulation method in this paper and have constructed a compact and robust prototype QKD system, using currently available components in our lab, to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and continuously operated for 87 h without manual interaction. The quantum bit error rate (QBER) of the system was stable with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, which was limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.
NASA Technical Reports Server (NTRS)
Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel
2012-01-01
Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with a photon counting detector has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provides no essential information-theoretic gains over single-pulse PPM.
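The throughput argument can be seen from a simple bits-per-symbol count: single-pulse M-ary PPM carries log₂(M) bits per symbol, while placing two pulses in M slots carries log₂(M choose 2) bits. The sketch below computes both for an assumed 16-slot symbol with 1 ns slots; the parameter values are illustrative and not taken from the paper.

```python
import math

def ppm_bits_per_symbol(num_slots, pulses=1):
    """Bits per PPM symbol: log2 of the number of distinct pulse patterns."""
    return math.log2(math.comb(num_slots, pulses))

def ppm_throughput_bps(num_slots, slot_width_s, pulses=1):
    """Throughput when one symbol occupies num_slots consecutive slots."""
    symbol_rate = 1.0 / (num_slots * slot_width_s)
    return ppm_bits_per_symbol(num_slots, pulses) * symbol_rate

slots, slot_width = 16, 1e-9           # 16-ary PPM, 1 ns slots (assumed)
for pulses in (1, 2):
    bps = ppm_bits_per_symbol(slots, pulses)
    thr = ppm_throughput_bps(slots, slot_width, pulses) / 1e6
    print(f"{pulses}-pulse 16-PPM: {bps:.2f} bits/symbol, {thr:.1f} Mbit/s")
```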
TANDIR: projectile warning system using uncooled bolometric technology
NASA Astrophysics Data System (ADS)
Horovitz-Limor, Z.; Zahler, M.
2007-04-01
Following the demand for affordable, light-weight protection against ATGMs at various ranges, Elisra is developing a cost-effective passive IR system for ground vehicles. The system is based on wide-FOV uncooled bolometric sensors with full azimuth coverage and a lightweight processing and control unit. The system is designed for harsh environmental conditions. The basic algorithm discriminates the target from its clutter and predicts the time to impact (TTI) and the target aiming direction relative to the vehicle. The current detector format is 320×240 pixels, the frame rate is 60 Hz, and the spectral response lies in the far infrared (8-14 μm). The digital video output has 14-bit resolution and wide dynamic range. A future goal is to enhance detection performance by using a large-format uncooled detector (640×480) with improved sensitivity and higher frame rates (up to 120 Hz).
NASA Astrophysics Data System (ADS)
Schudlo, Larissa C.; Chau, Tom
2015-12-01
Objective. The majority of near-infrared spectroscopy (NIRS) brain-computer interface (BCI) studies have investigated binary classification problems. Limited work has considered differentiation of more than two mental states, or multi-class differentiation of higher-level cognitive tasks using measurements outside of the anterior prefrontal cortex. Improvements in accuracies are needed to deliver effective communication with a multi-class NIRS system. We investigated the feasibility of a ternary NIRS-BCI that supports mental states corresponding to verbal fluency task (VFT) performance, Stroop task performance, and unconstrained rest using prefrontal and parietal measurements. Approach. Prefrontal and parietal NIRS signals were acquired from 11 able-bodied adults during rest and performance of the VFT or Stroop task. Classification was performed offline using bagging with a linear discriminant base classifier trained on a 10 dimensional feature set. Main results. VFT, Stroop task and rest were classified at an average accuracy of 71.7% ± 7.9%. The ternary classification system provided a statistically significant improvement in information transfer rate relative to a binary system controlled by either mental task (0.87 ± 0.35 bits/min versus 0.73 ± 0.24 bits/min). Significance. These results suggest that effective communication can be achieved with a ternary NIRS-BCI that supports VFT, Stroop task and rest via measurements from the frontal and parietal cortices. Further development of such a system is warranted. Accurate ternary classification can enhance communication rates offered by NIRS-BCIs, improving the practicality of this technology.
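The information transfer rate comparison above can be sanity-checked with the standard Wolpaw formula for an N-class selection at accuracy P; the sketch below is a generic calculation, and the 20-second trial duration in the example is a hypothetical value rather than the study's actual timing.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Bits per minute for an N-class BCI at the given accuracy (standard Wolpaw formula)."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_seconds

# Example: ternary system at 71.7% accuracy; the 20 s trial length is an assumed value.
print(wolpaw_itr(3, 0.717, 20.0))
```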
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
Effect of crosstalk on QBER in QKD in urban telecommunication fiber lines
NASA Astrophysics Data System (ADS)
Kurochkin, Vladimir L.; Kurochkin, Yuriy V.; Miller, Alexander V.; Sokolov, Alexander S.; Kanapin, Alan A.
2016-12-01
Quantum key distribution (QKD) as a technology is being actively implemented into existing urban telecommunication networks. QKD devices are commercially available products. While sending single photons through optical fiber, adjacent fibers, which are used to transfer classical information, might influence the number of registrations of the single photon detectors. This influence is registered, since it directly introduces a higher quantum bit error rate (QBER) into the final key [1-3]. Our report presents the results of the first tests of the QKD device developed in the Russian Quantum Center. These tests, conducted in Moscow, are the first tests of such a device in Russian urban optical fiber telecommunication networks. The device in question is based on a two-pass auto-compensating optical scheme, which provides stable single photon transfer through urban optical fiber telecommunication networks [4,5]. The single photon detectors ID230 by ID Quantique were used. They operate in free-running mode and, with a quantum efficiency of 10%, have a dark count rate of 10 Hz. The background signal level in the dedicated fiber was no less than 5.6×10^-14 W, which corresponds to 4.4×10^4 detector clicks per second. The single-mode fiber length in Moscow was 30.6 km, with a total attenuation of 11.7 dB. The sifted quantum key bit rate reached values of 1.9 kbit/s with a QBER of 5.1%. Methods of lowering the influence of crosstalk on the QBER are considered.
NASA Technical Reports Server (NTRS)
Murray, G. W.; Bohning, O. D.; Kinoshita, R. Y.; Becker, F. J.
1979-01-01
The results of a program to demonstrate the feasibility of bubble domain memory technology as a mass memory medium for spacecraft applications are summarized. The design, fabrication and test of a partially populated 10^8-bit data recorder using 100 Kbit serial bubble memory chips is described. Design tradeoffs, design approach and performance are discussed. This effort resulted in a 10^8-bit recorder with a volume of 858.6 cu in and a weight of 47.2 pounds. The recorder is plug reconfigurable, having the capability of operating as one, two or four independent serial channel recorders or as a single sixteen-bit-byte parallel input recorder. Data rates up to 1.2 Mb/s in the serial mode and 2.4 Mb/s in the parallel mode may be supported. Fabrication and test of the recorder demonstrated the basic feasibility of bubble domain memory technology for such applications. Test results indicate the need for improvement in memory element operating temperature range and detector performance.
A high SFDR 6-bit 20-MS/s SAR ADC based on time-domain comparator
NASA Astrophysics Data System (ADS)
Xue, Han; Hua, Fan; Qi, Wei; Huazhong, Yang
2013-08-01
This paper presents a 6-bit 20-MS/s high spurious-free dynamic range (SFDR) and low power successive approximation register analog-to-digital converter (SAR ADC) for radio-frequency (RF) transceiver front-ends, especially for wireless sensor network (WSN) applications. The ADC adopts a modified common-centroid symmetry layout and a successive approximation register reset circuit to improve the linearity and dynamic range. Prototyped in a 0.18-μm 1P6M CMOS technology, the ADC achieves a peak SFDR of 55.32 dB and an effective number of bits (ENOB) of 5.1 bits at 10 MS/s. At a sample rate of 20 MS/s and the Nyquist input frequency, an SFDR of 47.39 dB and an ENOB of 4.6 bits are achieved. The differential nonlinearity (DNL) is less than 0.83 LSB and the integral nonlinearity (INL) is less than 0.82 LSB. The experimental results indicate that this SAR ADC consumes a total of 522 μW and occupies 0.98 mm2.
VLSI design of an RSA encryption/decryption chip using systolic array based architecture
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi
2016-09-01
This article presents the VLSI design of a configurable RSA public-key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys based on the Montgomery algorithm, achieving clock-cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify the computational complexity, which, together with the systolic array concept for the circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely the input/output modules, the registers module, the arithmetic module and the control module. We applied the systolic array concept to design the RSA encryption/decryption chip using the VHDL hardware description language and verified it in the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without DFT is 3.9 × 3.9 mm2 (4.58 × 4.58 mm2 with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
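As background on the arithmetic core named above, here is a minimal software sketch of Montgomery modular multiplication and the binary exponentiation method built on it; this is a textbook full-precision version for illustration, not the paper's word-serial systolic hardware design.

```python
def montgomery_multiply(a, b, n, k):
    """Return a*b*R^-1 mod n, where R = 2^k > n, n is odd, and a, b < n."""
    R = 1 << k
    n_prime = (-pow(n, -1, R)) % R     # n * n_prime = -1 (mod R); Python 3.8+ modular inverse
    t = a * b
    m = (t * n_prime) & (R - 1)        # m = t * n_prime mod R
    u = (t + m * n) >> k               # t + m*n is exactly divisible by R
    return u - n if u >= n else u

def montgomery_pow(base, exp, n, k):
    """Binary (square-and-multiply) modular exponentiation in the Montgomery domain."""
    R = 1 << k
    result = R % n                     # Montgomery form of 1
    x = (base * R) % n                 # Montgomery form of base
    for bit in bin(exp)[2:]:
        result = montgomery_multiply(result, result, n, k)
        if bit == "1":
            result = montgomery_multiply(result, x, n, k)
    return montgomery_multiply(result, 1, n, k)   # convert back out of Montgomery form
```

For example, montgomery_pow(7, 560, 561, 10) equals pow(7, 560, 561); in the chip, the same multiplication is carried out word-serially by the systolic array rather than on full-width integers.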
NASA Astrophysics Data System (ADS)
Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng
2018-01-01
In this paper, a multi-bit wavelength-coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selection switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength-coding PSK system is simulated to show the efficiency of the proposed method. The simulation results demonstrate that the stealth signal, after encoding and modulation, is well hidden in both the time and spectral domains in the presence of the public channel and system noise. Moreover, even if the principle of this scheme and the existence of the stealth channel are known to the eavesdropper, the probability of recovering the stealth data is less than 0.02 when the key is unknown. Thus the scheme can protect the security of the stealth channel more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10^-9 bit error rate, and the public channel has no influence on the reception of the stealth channel.
Compact FPGA-based beamformer using oversampled 1-bit A/D converters.
Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt
2005-05-01
A compact medical ultrasound beamformer architecture that uses oversampled 1-bit analog-to-digital (A/D) converters is presented. Sparse sample processing is used, as the echo signal for the image lines is reconstructed in 512 equidistant focal points along the line through its in-phase and quadrature components. That information is sufficient for presenting a B-mode image and creating a color flow map. The high sampling rate provides the necessary delay resolution for the focusing. The low channel data width (1-bit) makes it possible to construct a compact beamformer logic. The signal reconstruction is done using finite impulse response (FIR) filters, applied on selected bit sequences of the delta-sigma modulator output stream. The approach allows for a multichannel beamformer to fit in a single field programmable gate array (FPGA) device. A 32-channel beamformer is estimated to occupy 50% of the available logic resources in a commercially available mid-range FPGA, and to be able to operate at 129 MHz. Simulation of the architecture at 140 MHz provides images with a dynamic range approaching 60 dB for an excitation frequency of 3 MHz.
Areal density optimizations for heat-assisted magnetic recording of high-density media
NASA Astrophysics Data System (ADS)
Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk
2016-06-01
Heat-assisted magnetic recording (HAMR) is hoped to be the future recording technique for high-density storage devices. Nevertheless, there exist several realization strategies. With a coarse-grained Landau-Lifshitz-Bloch model, we investigate in detail the benefits and disadvantages of a continuous and pulsed laser spot recording of shingled and conventional bit-patterned media. Additionally, we compare single-phase grains and bits having a bilayer structure with graded Curie temperature, consisting of a hard magnetic layer with high TC and a soft magnetic one with low TC, respectively. To describe the whole write process as realistically as possible, a distribution of the grain sizes and Curie temperatures, a displacement jitter of the head, and the bit positions are considered. For all these cases, we calculate bit error rates of various grain patterns, temperatures, and write head positions to optimize the achievable areal storage density. Within our analysis, shingled HAMR with a continuous laser pulse moving over the medium reaches the best results and thus has the highest potential to become the next-generation storage device.
How the optic nerve allocates space, energy capacity, and information
Perge, Janos A.; Koch, Kristin; Miller, Robert; Sterling, Peter; Balasubramanian, Vijay
2009-01-01
Fiber tracts should use space and energy efficiently because both resources constrain neural computation. We found for a myelinated tract (optic nerve) that astrocytes use nearly 30% of the space and more than 70% of the mitochondria, establishing the significance of astrocytes for the brain’s space and energy budgets. Axons are mostly thin with a skewed distribution peaking at 0.7µm, near the lower limit set by channel noise. This distribution is matched closely by the distribution of mean firing rates measured under naturalistic conditions, suggesting that firing rate increases proportionally with axon diameter. In axons thicker than 0.7µm mitochondria occupy a constant fraction of axonal volume -- thus, mitochondrial volumes rise as the diameter squared. These results imply a law of diminishing returns: twice the information rate requires more than twice the space and energy capacity. We conclude that the optic nerve conserves space and energy by sending most information at low rates over fine axons with small terminal arbors, and sending some information at higher rates over thicker axons with larger terminal arbors – but only where more bits/s are needed for a specific purpose. Thicker axons seem to be needed, not for their greater conduction velocity (nor other intrinsic electrophysiological purpose), but instead to support larger terminal arbors and more active zones that transfer information synaptically at higher rates. PMID:19535603
Yeh, C H; Chow, C W; Chen, H Y; Chen, J; Liu, Y L
2014-04-21
We propose and experimentally demonstrate a white-light phosphor-LED visible light communication (VLC) system with an adaptive 84.44 to 190 Mbit/s 16-quadrature-amplitude-modulation (QAM) orthogonal-frequency-division-multiplexing (OFDM) signal utilizing a bit-loading method. Here, the optimal analog pre-equalization design is applied at the LED transmitter (Tx) side, and no blue filter is used at the receiver (Rx) side. Hence, the ~1 MHz modulation bandwidth of the phosphor-LED could be extended to 30 MHz. In addition, measured bit error rates (BERs) below the 3.8 × 10^-3 forward error correction (FEC) threshold are achieved at the different measured data rates over practical transmission distances of 0.75 to 2 m.
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
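To illustrate the channel model mentioned above: a Gamma-Gamma distributed irradiance can be drawn as the product of two independent unit-mean gamma variates, one for large-scale and one for small-scale turbulence. The sketch below is the kind of Monte Carlo building block one might use to evaluate uncoded error rates; the parameter values are placeholders, and this is not the authors' code.

```python
import numpy as np

def gamma_gamma_irradiance(alpha, beta, n, seed=None):
    """Draw n unit-mean irradiance samples from the Gamma-Gamma turbulence model."""
    rng = np.random.default_rng(seed)
    large_scale = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)  # large-scale eddies
    small_scale = rng.gamma(shape=beta, scale=1.0 / beta, size=n)    # small-scale eddies
    return large_scale * small_scale

# Example with hypothetical strong-turbulence parameters alpha = 4.2, beta = 1.4
samples = gamma_gamma_irradiance(4.2, 1.4, 100_000, seed=1)
print(samples.mean())   # close to 1.0 by construction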
Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films
NASA Technical Reports Server (NTRS)
Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)
1998-01-01
The optical data storage capacity and raw bit-error-rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and read- out of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error-rate of a single hologram, and of multiple holograms stored within the film.
Scaffardi, Mirco; Malik, Muhammad N; Lazzeri, Emma; Klitis, Charalambos; Meriggi, Laura; Zhang, Ning; Sorel, Marc; Bogoni, Antonella
2017-10-01
A silicon-on-insulator microring with three superimposed gratings is proposed and characterized as a device enabling 3×3 optical switching based on orbital angular momentum and wavelength as switching domains. Measurements show penalties with respect to the back-to-back of <1 dB at a bit error rate of 10^-9 for OOK traffic up to 20 Gbaud. Different switch configuration cases are implemented, with measured power penalty variations of less than 0.5 dB at bit error rates of 10^-9. An analysis is also carried out to highlight the dependence of the number of switch ports on the design parameters of the multigrating microring.
NASA Astrophysics Data System (ADS)
Diamanti, Eleni; Takesue, Hiroki; Langrock, Carsten; Fejer, M. M.; Yamamoto, Yoshihisa
2006-12-01
We present a quantum key distribution experiment in which keys that were secure against all individual eavesdropping attacks allowed by quantum mechanics were distributed over 100 km of optical fiber. We implemented the differential phase shift quantum key distribution protocol and used low timing jitter 1.55 µm single-photon detectors based on frequency up-conversion in periodically poled lithium niobate waveguides and silicon avalanche photodiodes. Based on the security analysis of the protocol against general individual attacks, we generated secure keys at a practical rate of 166 bit/s over 100 km of fiber. The use of the low jitter detectors also increased the sifted key generation rate to 2 Mbit/s over 10 km of fiber.
NASA Astrophysics Data System (ADS)
Gao, Shanghua; Xue, Bing
2017-04-01
The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, and this can affect the completeness of seismic waveform recordings under certain conditions. This problem is not easy to solve, however, because analog-to-digital converter (ADC) chips with more than 24 bits are not available on the market. The key difficulty for higher-resolution data acquisition devices therefore lies in building an analog-to-digital converting circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital-to-analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog-to-digital converting circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly face during strong earthquakes.
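For readers unfamiliar with the oversampling principle behind the proposed circuit, the sketch below simulates a plain first-order 1-bit delta-sigma modulator followed by block-averaging decimation. It is a deliberately simplified stand-in for the paper's third-order 16-bit modulator and digital decimation filter; only the general idea of trading sample rate for resolution is shared.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order 1-bit delta-sigma modulation of samples x in [-1, 1]."""
    integrator, y_prev = 0.0, 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - y_prev        # accumulate the quantization error
        y_prev = 1.0 if integrator >= 0 else -1.0
        bits[i] = y_prev
    return bits

def decimate(bits, osr):
    """Crude decimation: average blocks of `osr` one-bit samples."""
    n = len(bits) // osr
    return bits[:n * osr].reshape(n, osr).mean(axis=1)

# Example: a slow sine oversampled 64x is recovered with far more than 1 bit of resolution.
osr, n = 64, 64 * 256
t = np.arange(n) / n
x = 0.5 * np.sin(2 * np.pi * 3 * t)
recovered = decimate(delta_sigma_1bit(x), osr)
```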
2011-05-01
Unequal error protection (UEP) can be achieved with rate-compatible punctured convolutional (RCPC) codes, which puncture off different amounts of coded bits of the parent code.
Fast Face-Recognition Optical Parallel Correlator Using High Accuracy Correlation Filter
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Kodate, Kashiko
2005-11-01
We designed and fabricated a fully automatic fast face recognition optical parallel correlator [E. Watanabe and K. Kodate: Appl. Opt. 44 (2005) 5666] based on the VanderLugt principle. The implementation of an as-yet unattained ultra-high-speed system was aided by reconfiguring the system to make it suitable for easier parallel processing, as well as by composing a higher-accuracy correlation filter and a high-speed ferroelectric liquid crystal spatial light modulator (FLC-SLM). In running trial experiments using this system (dubbed FARCO), we succeeded in acquiring remarkably low error rates of 1.3% for the false match rate (FMR) and 2.6% for the false non-match rate (FNMR). Given the results of our experiments, the aim of this paper is to examine methods of designing correlation filters and arranging database image arrays for even faster parallel correlation, underlining the issues of calculation technique, quantization bit rate, pixel size and shift from the optical axis. The correlation filter proved to offer excellent performance and higher precision than classical correlation and the joint transform correlator (JTC). Moreover, the arrangement of multi-object reference images leads to 10-channel correlation signals as sharply marked as those of a single channel. This experimental result demonstrates great potential for achieving a processing speed of 10,000 faces/s.
A 128K-bit CCD buffer memory system
NASA Technical Reports Server (NTRS)
Siemens, K. H.; Wallace, R. W.; Robinson, C. R.
1976-01-01
A prototype system was implemented to demonstrate that CCD's can be applied advantageously to the problem of low power digital storage and particularly to the problem of interfacing widely varying data rates. 8K-bit CCD shift register memories were used to construct a feasibility-model 128K-bit buffer memory system. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. Descriptions are provided of both the buffer memory system and a custom tester that was used to exercise the memory. The testing procedures and testing results are discussed. Suggestions are provided for further development with regard to the utilization of advanced versions of CCD memory devices in both simplified and expanded memory system applications.
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav
2013-12-01
Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events concerning the most popular of the digital currencies - BitCoin - have raised crucial questions about the behavior of its exchange rate, and they offer a field for studying the dynamics of a market which consists practically only of speculative traders, with no fundamentalists, as there is no fundamental value to the currency. In this paper, we connect two phenomena of the latest years - digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia - and study their relationship. We show that not only are the search queries and the prices connected, but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while it is above or below its trend value.
AESA diagnostics in operational environments
NASA Astrophysics Data System (ADS)
Hull, W. P.
The author discusses some possible solutions to AESA (active electronically scanned array) diagnostics in the operational environment using built-in testing (BIT), which can play a key role in reducing life-cycle cost if accurately implemented. He notes that it is highly desirable to detect and correct in the operational environment all degradation that impairs mission performance. This degradation must be detected with a low false alarm rate and the appropriate action initiated, consistent with low life-cycle cost. Mutual coupling is considered as a BIT signal injection method and is shown to have potential. However, the limits of the diagnostic capability using this method clearly depend on its stability and on the level of multipath for a specific application. BIT using mutual coupling may need to be supplemented on the ground by an externally mounted passive antenna that interfaces with onboard avionics.
Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.
Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu
2017-03-15
The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk incurred when switching between the transmission and calibration modes. Here, we propose a single-photon-level continuously working PBTS using only sifted key bits revealed during the error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system over a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% with a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.
Wireless visual sensor network resource allocation using cross-layer optimization
NASA Astrophysics Data System (ADS)
Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.
2009-01-01
In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
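The quadrature decomposition that makes this work is sin(kx + ωt) = sin(kx)cos(ωt) + cos(kx)sin(ωt): a drifting grating is a weighted sum of fixed spatial sine- and cosine-phase images with per-frame weights. The sketch below demonstrates that decomposition with floating-point images; for clarity it omits the 1-bit halftoned planes and the color-lookup-table animation used in the paper.

```python
import numpy as np

def drifting_grating_frames(size=256, k=4, f=0.05, n_frames=60, contrast=0.5):
    """Build frames of a drifting sine grating from fixed sine/cosine spatial components.

    k: spatial frequency in cycles per image, f: temporal frequency in cycles per frame.
    """
    x = np.linspace(0, 1, size, endpoint=False)
    xx, _ = np.meshgrid(x, x)
    sin_comp = np.sin(2 * np.pi * k * xx)   # spatial sine-phase component (fixed)
    cos_comp = np.cos(2 * np.pi * k * xx)   # spatial cosine-phase component (fixed)
    frames = []
    for t in range(n_frames):
        w_sin = np.cos(2 * np.pi * f * t)   # temporal quadrature weights
        w_cos = np.sin(2 * np.pi * f * t)
        # sin(kx + wt) = sin(kx)cos(wt) + cos(kx)sin(wt): counterphase components add to a drift
        frames.append(contrast * (w_sin * sin_comp + w_cos * cos_comp))
    return frames
```

Summing the frames of two such gratings with different orientations and independent temporal frequencies gives the plaid stimuli described above.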
NASA Astrophysics Data System (ADS)
Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi
2018-02-01
The Nyquist spectral shaping techniques facilitate a promising solution to enhance spectral efficiency (SE) and further reduce the cost per bit in high-speed wavelength-division multiplexing (WDM) transmission systems. In principle, Nyquist WDM signals of arbitrary shape can be generated with digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, both performance and DSP complexity are increasingly constrained by cost and power consumption, so the DSP must be optimized to reach the desired performance at the lowest complexity. In this paper, we systematically investigate the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats, including 16 quadrature amplitude modulation (QAM) and 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements on the digital-to-analog converter (DAC) resolution and the transmitter E-filter bandwidth. The impact of spectral pre-emphasis is enhanced by at least 4% via the proposed interleaved DAC architecture, reducing the required optical signal-to-noise ratio (OSNR) at a bit error rate (BER) of 10^-3 by over 0.45 dB at a channel spacing of 1.05 times the symbol rate and an optimized roll-off factor of 0.1. Furthermore, the sampling rate requirements for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty-cycle error between sub-DACs on the quality of the signals generated with the interleaved DAC structure is analyzed.
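As a reference for the roll-off parameter discussed above, the generic raised-cosine impulse response below (a textbook form, not the authors' E-filter) shows how β trades excess bandwidth for time-domain ringing; the occupied bandwidth is (1 + β)/T, so β = 0.1 leaves only a small guard band at a channel spacing of 1.05 times the symbol rate.

```python
import numpy as np

def raised_cosine_impulse(t, T, beta):
    """Raised-cosine impulse response with symbol period T and roll-off factor beta."""
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    if beta == 0.0:
        return num                        # denom is identically 1 for beta = 0
    singular = np.isclose(denom, 0.0)     # the points t = +/- T / (2*beta)
    h = num / np.where(singular, 1.0, denom)
    # L'Hopital limit at the singular points: (pi/4) * sinc(1/(2*beta))
    return np.where(singular, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta)), h)

# Example: compare a tight and a relaxed roll-off over +/- 8 symbol periods
t = np.linspace(-8, 8, 1601)
h_tight = raised_cosine_impulse(t, 1.0, 0.1)
h_relaxed = raised_cosine_impulse(t, 1.0, 0.5)
```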
Traffic Management in ATM Networks Over Satellite Links
NASA Technical Reports Server (NTRS)
Goyal, Rohit; Jain, Raj; Goyal, Mukul; Fahmy, Sonia; Vandalore, Bobby; vonDeak, Thomas
1999-01-01
This report presents a survey of the traffic management issues in the design and implementation of satellite Asynchronous Transfer Mode (ATM) networks. The report focuses on the efficient transport of Transmission Control Protocol (TCP) traffic over satellite ATM. First, a reference satellite ATM network architecture is presented along with an overview of the service categories available in ATM networks. A delay model for satellite networks and the major components of delay and delay variation are described. A survey of design options for TCP over the Unspecified Bit Rate (UBR), Guaranteed Frame Rate (GFR) and Available Bit Rate (ABR) services in ATM is presented. The main focus is on traffic management issues. Several recommendations on the design options for efficiently carrying data services over satellite ATM networks are presented. Most of the results are based on experiments performed at Geosynchronous (GEO) latencies. Some results for Low Earth Orbit (LEO) and Medium Earth Orbit (MEO) latencies are also provided.
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
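The reason Gray mapping helps here is that adjacent quantization indices differ in exactly one bit, so each extracted bit plane stays more strongly correlated with the side information. The conversion itself is a one-line XOR, as in the sketch below (illustrative, not the authors' codec).

```python
def binary_to_gray(index):
    """Gray-code a quantization index: adjacent indices differ in exactly one bit."""
    return index ^ (index >> 1)

def gray_to_binary(gray):
    """Invert the Gray mapping by cumulative XOR of the shifted code."""
    index = 0
    while gray:
        index ^= gray
        gray >>= 1
    return index

# Adjacent indices 11 (1011) and 12 (1100) differ in 3 bits in natural binary,
# but their Gray codes 1110 and 1010 differ in just one bit.
assert bin(binary_to_gray(11) ^ binary_to_gray(12)).count("1") == 1
assert gray_to_binary(binary_to_gray(11)) == 11
```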
NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid
NASA Astrophysics Data System (ADS)
Thomas, Togis; Gupta, K. K.
2016-03-01
Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently being used for communication. In the smart grid, PLC is used to support low-rate communication over the low-voltage (LV) distribution network. In this paper, we propose a channel model of narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and the load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
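A minimal sketch of the ABCD-parameter approach mentioned above: each line section or shunt branch contributes a 2×2 matrix, the cascade is their product, and the source-to-load transfer function follows from the overall A, B, C, D entries. The component values and helper names here are illustrative assumptions, not the authors' channel parameters.

```python
import numpy as np
from functools import reduce

def line_section(gamma, z0, length):
    """ABCD matrix of a line section with propagation constant gamma and characteristic impedance z0."""
    gl = gamma * length
    return np.array([[np.cosh(gl), z0 * np.sinh(gl)],
                     [np.sinh(gl) / z0, np.cosh(gl)]])

def shunt_branch(z_branch):
    """ABCD matrix of a bridged tap or load connected in shunt."""
    return np.array([[1.0, 0.0], [1.0 / z_branch, 1.0]])

def transfer_function(sections, z_source, z_load):
    """Voltage transfer H = V_load / V_source of the cascaded two-ports."""
    a, b, c, d = reduce(np.matmul, sections).ravel()
    return z_load / (a * z_load + b + c * z_source * z_load + d * z_source)

# Example at a single frequency (all values hypothetical): two 200 m sections with a shunt load between them.
gamma, z0 = 1e-3 + 1e-2j, 50.0
sections = [line_section(gamma, z0, 200.0),
            shunt_branch(30.0 + 5.0j),
            line_section(gamma, z0, 200.0)]
h = transfer_function(sections, z_source=50.0, z_load=50.0)
```

Evaluating this over a grid of frequencies, with gamma and the branch impedances taken frequency-dependent, yields the channel frequency response used for the OFDM simulation.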
Link performance optimization for digital satellite broadcasting systems
NASA Astrophysics Data System (ADS)
de Gaudenzi, R.; Elia, C.; Viola, R.
The authors introduce the concept of digital direct satellite broadcasting (D-DBS), which allows unprecedented flexibility by providing a large number of audiovisual services. The concept assumes an information rate of 40 Mb/s, which is compatible with practically all present-day transponders. After discussion of the general system concept, the results of transmission system optimization are presented. Channel and interference effects are taken into account. Numerical results show that the scheme with the best performance is trellis-coded 8-PSK (phase shift keying) modulation concatenated with a Reed-Solomon block code. For a net data rate of 40 Mb/s, a bit error rate of 10^-10 can be achieved with an equivalent bit-energy-to-noise-density ratio of 9.5 dB, including channel, interference, and demodulator impairments. A link budget analysis shows how a medium-power direct-to-home TV satellite can provide multimedia services to users equipped with small (60-cm) dish antennas.
Self-optimization and auto-stabilization of receiver in DPSK transmission system.
Jang, Y S
2008-03-17
We propose a self-optimization and auto-stabilization method for a 1-bit DMZI in DPSK transmission. Using the characteristics of eye patterns, the optical frequency transmittance of a 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
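To make the branch-metric / add-compare-select / traceback pipeline concrete, here is a plain software Viterbi decoder for a small rate-1/2, K = 3 convolutional code with hard-decision Hamming metrics. It is a textbook illustration of the algorithm only, not the K = 15 VLSI architecture described above, and it stores survivors register-exchange style rather than in a traceback memory.

```python
K = 3                          # constraint length
GENS = (0b111, 0b101)          # rate-1/2 generator polynomials
N_STATES = 1 << (K - 1)

def conv_encode(bits):
    """Encode a bit list with the (7,5) convolutional code, starting from the zero state."""
    state, out = 0, []
    for u in bits:
        reg = (u << (K - 1)) | state
        out.extend(bin(reg & g).count("1") % 2 for g in GENS)
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with Hamming branch metrics."""
    inf = float("inf")
    path_metric = [0.0] + [inf] * (N_STATES - 1)   # start in state 0
    survivors = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), len(GENS)):
        r = received[i:i + len(GENS)]
        new_metric = [inf] * N_STATES
        new_survivors = [None] * N_STATES
        for state in range(N_STATES):
            if path_metric[state] == inf:
                continue
            for u in (0, 1):
                reg = (u << (K - 1)) | state
                out = [bin(reg & g).count("1") % 2 for g in GENS]   # branch labels
                nxt = reg >> 1
                metric = path_metric[state] + sum(a != b for a, b in zip(out, r))
                if metric < new_metric[nxt]:                        # add-compare-select
                    new_metric[nxt] = metric
                    new_survivors[nxt] = survivors[state] + [u]
        path_metric, survivors = new_metric, new_survivors
    best = min(range(N_STATES), key=path_metric.__getitem__)
    return survivors[best]

# Example: an error-free codeword decodes back to the original message.
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(conv_encode(msg)) == msg
```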
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved to degree m1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
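As a small worked example of the kind of computation involved: for an inner block code of length n that corrects up to t bit errors, a standard lower bound on the probability of correct inner decoding over a memoryless channel with bit-error rate p is the binomial tail below. This is a generic bound for orientation only; the paper's analysis additionally accounts for error detection and the erasures passed to the outer decoder.

```python
from math import comb

def block_correct_prob(n, t, p):
    """P(at most t bit errors in an n-bit block) for bit-error rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

# Example with illustrative numbers: a length-63, triple-error-correcting inner code at p = 1e-2
print(block_correct_prob(63, 3, 1e-2))
```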
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on the fabrication of composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi’s L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the most effect on material removal rate and surface roughness, followed by feed rate.
Chaos-on-a-chip secures data transmission in optical fiber links.
Argyris, Apostolos; Grivas, Evangellos; Hamacher, Michael; Bogris, Adonis; Syvridis, Dimitris
2010-03-01
Security in information exchange plays a central role in the deployment of modern communication systems. Besides algorithms, chaos is exploited as a real-time high-speed data encryption technique which enhances security at the hardware level of optical networks. In this work, compact, fully controllable and stably operating monolithic photonic integrated circuits (PICs) that generate broadband chaotic optical signals are incorporated in chaos-encoded optical transmission systems. Data sequences with rates up to 2.5 Gb/s and small amplitudes are completely encrypted within these chaotic carriers. Only authorized counterparts, supplied with identical chaos-generating PICs that are able to synchronize and reproduce the same carriers, can benefit from data exchange at bit rates up to 2.5 Gb/s with error rates below 10^-12. Eavesdroppers with access to the communication link have a 0.5 probability of detecting each bit correctly by direct signal detection, while eavesdroppers supplied with even slightly unmatched hardware receivers are restricted to data-extraction error rates well above 10^-3.
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves the decoding accuracy by passing soft information between the different layers, which enhances the performance. Simulations are carried out in an intensity-modulation direct-detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.
Performance Measurement of a Multi-Level/Analog Ferroelectric Memory Device Design
NASA Technical Reports Server (NTRS)
MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.
2007-01-01
Increasing the memory density and utilizing the unique characteristics of ferroelectric devices is important in making ferroelectric memory devices more desirable to the consumer. This paper describes the characterization of a design that allows multiple levels to be stored in a ferroelectric-based memory cell. It can be used to store multiple bits or analog values in a high-speed nonvolatile memory. The design utilizes the hysteresis characteristic of ferroelectric transistors to store an analog value in the memory cell. The design also compensates for the decay of the polarization of the ferroelectric material over time. This is done by utilizing a pair of ferroelectric transistors to store the data. One transistor is used as a reference to determine the amount of decay that has occurred since the pair was programmed. The second transistor stores the analog value as a polarization value between zero and saturated. The design allows digital data to be stored as multiple bits in each memory cell. The number of bits per cell that can be stored will vary with the decay rate of the ferroelectric transistors and the repeatability of polarization between transistors. This paper presents measurements of an actual prototype memory cell. This prototype is not a complete implementation of a device, but instead a prototype of the storage and retrieval portion of an actual device. The performance of this prototype is presented along with the projected performance of the overall device. This memory design will be useful because it allows higher memory density, compensates for the environmental and ferroelectric aging processes, allows analog values to be directly stored in memory, compensates for the thermal and radiation environments associated with space operations, and relies only on existing technologies.
Magnetic printing characteristics using master disk with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Fujiwara, Naoto; Nishida, Yoichi; Ishioka, Toshihide; Sugita, Ryuji; Yasunaga, Tadashi
With the increase in the recording density and capacity of hard-disk drives (HDD), a high-speed, high-precision and low-cost servo writing method has become an issue in the HDD industry. Magnetic printing was proposed as the ultimate solution to this issue [1-3]. There are two types of magnetic printing methods, 'Bit Printing (BP)' and 'Edge Printing (EP)'. The BP method is conducted by applying an external field perpendicular to the plane of both the master disk (master) and the perpendicular magnetic recording (PMR) medium (slave). The EP method, on the other hand, applies the external field along the down-track direction of both master and slave. In BP, for bit lengths shorter than 100 nm, the SNR of the perpendicular anisotropic master was higher than that of the isotropic master. The SNR of EP was also demonstrated for bit lengths shorter than 50 nm.
A microprocessor based on a two-dimensional semiconductor.
Wachter, Stefan; Polyushkin, Dmitry K; Bethge, Ole; Mueller, Thomas
2017-04-11
The advent of microcomputers in the 1970s has dramatically changed our society. Since then, microprocessors have been made almost exclusively from silicon, but the ever-increasing demand for higher integration density and speed, lower power consumption and better integrability with everyday goods has prompted the search for alternatives. Germanium and III-V compound semiconductors are being considered promising candidates for future high-performance processor generations and chips based on thin-film plastic technology or carbon nanotubes could allow for embedding electronic intelligence into arbitrary objects for the Internet-of-Things. Here, we present a 1-bit implementation of a microprocessor using a two-dimensional semiconductor-molybdenum disulfide. The device can execute user-defined programs stored in an external memory, perform logical operations and communicate with its periphery. Our 1-bit design is readily scalable to multi-bit data. The device consists of 115 transistors and constitutes the most complex circuitry so far made from a two-dimensional material.
The Buried in Treasures Workshop: waitlist control trial of facilitated support groups for hoarding.
Frost, Randy O; Ruby, Dylan; Shuer, Lee J
2012-11-01
Hoarding is a serious form of psychopathology that has been associated with significant health and safety concerns, as well as a source of social and economic burden (Tolin, Frost, Steketee, & Fitch, 2008; Tolin, Frost, Steketee, Gray, & Fitch, 2008). Recent developments in the treatment of hoarding have met with some success for both individual and group treatments. Nevertheless, the cost and limited accessibility of these treatments leave many hoarding sufferers without options for help. One alternative is support groups that require relatively few resources. Frost, Pekareva-Kochergina, and Maxner (2011) reported significant declines in hoarding symptoms following a non-professionally run 13-week support group (the Buried in Treasures [BIT] Workshop). The BIT Workshop is a highly structured and short-term support group. The present study extended these findings by reporting the results of a waitlist control trial of the BIT Workshop. Significant declines in all hoarding symptom measures were observed compared to a waitlist control. The treatment response rate for the BIT Workshop was similar to that obtained in previous individual and group treatment studies, despite its shorter length and lack of a trained therapist. The BIT Workshop may be an effective adjunct to cognitive behavior therapy for hoarding disorder, or an alternative when cognitive behavior therapy is inaccessible.
NASA Astrophysics Data System (ADS)
Ullah, Rahat; Liu, Bo; Zhang, Qi; Saad Khan, Muhammad; Ahmad, Ibrar; Ali, Amjad; Khan, Razaullah; Tian, Qinghua; Yan, Cheng; Xin, Xiangjun
2016-09-01
An architecture for flattened, broad-spectrum multicarrier generation is presented, producing 60 comb lines from a pulsed laser driven by a user-defined bit stream in cascade with three modulators. The proposed scheme is a cost-effective architecture for the optical line terminal (OLT) in a wavelength division multiplexed passive optical network (WDM-PON) system. The optical frequency comb generator consists of a pulsed laser in cascade with a phase modulator and two Mach-Zehnder modulators driven by an RF source, incorporating no phase shifter, filter, or electrical amplifier. Optical frequency comb generation is deployed in a simulated OLT of a WDM-PON system supporting a 1.2-Tbps data rate. With 10-GHz frequency spacing, each frequency tone carries a 20-Gbps differential quadrature phase shift keying (DQPSK) data signal in the downlink transmission. DQPSK-based modulation is adopted for the downlink because it carries 2 bits per symbol, which increases the data rate of the WDM-PON system. Furthermore, the DQPSK format is tolerant to different types of dispersion and has a high spectral efficiency with a less complex configuration. Part of the downlink power is reused for the uplink transmission, which is based on intensity-modulated on-off keying. Minimal power penalties are observed, with excellent eye diagrams and good transmission performance at the specified bit error rates.
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.
2016-01-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287
Optical Fiber Transmission In A Picture Archiving And Communication System For Medical Applications
NASA Astrophysics Data System (ADS)
Aaron, Gilles; Bonnard, Rene
1984-03-01
In a hospital, the need for an electronic communication network is increasing along with the digitization of pictures. This local area network is intended to link picture sources such as digital radiography, computed tomography, nuclear magnetic resonance, ultrasound, etc., with an archiving system. Interactive displays can be used in examination rooms, physicians' offices and clinics. In such a system, three major requirements must be considered: bit rate, cable length, and number of devices. - The bit rate is very important because a maximum response time of a few seconds must be guaranteed for pictures of several megabits. - The distance between nodes may be a few kilometers in some large hospitals. - The number of devices connected to the network is never greater than a few tens, because picture sources and computers represent substantial hardware, and simple displays can be concentrated. All these conditions are fulfilled by optical fiber transmission. Depending on the topology and the access protocol, two solutions are to be considered: an active ring, or an active or passive star. Finally, Thomson-CSF developments of optical transmission devices for large TV-distribution networks provide technological support and mass production which will cut down hardware costs.
A four-dimensional virtual hand brain-machine interface using active dimension selection.
Rouse, Adam G
2016-06-01
Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder, using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
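As a quick numerical check of the figures quoted above, the standard Wolpaw formula for bits per selection gives roughly 2.4 bits/s for eight targets at 93% accuracy if one assumes about one selection per second; the selection rate is an assumption for this sketch, not a figure taken from the abstract.

```python
# Minimal sketch: reproducing the order of the quoted bit rate with the
# standard Wolpaw formula; the selections-per-second figure is an assumption.
import math

def wolpaw_bits_per_selection(n_targets, accuracy):
    p, n = accuracy, n_targets
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = wolpaw_bits_per_selection(8, 0.93)   # ~2.44 bits per selection
rate = bits * 1.0                           # assume ~1 selection/s -> ~2.4 bits/s
print(round(bits, 2), round(rate, 2))
```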
Adaptive limited feedback for interference alignment in MIMO interference channels.
Zhang, Yang; Zhao, Chenglin; Meng, Juan; Li, Shibao; Li, Li
2016-01-01
It is very important that a radar sensor network have autonomous capabilities such as self-management. Quite often, MIMO interference channels are applied to radar sensor networks, and for self-management purposes, interference management in MIMO interference channels is critical. Interference alignment (IA) has the potential to dramatically improve system throughput by effectively mitigating interference in multi-user networks at high signal-to-noise ratio (SNR). However, the implementation of IA predominantly relies on perfect and global channel state information (CSI) at all transceivers. A large amount of CSI has to be fed back to all transmitters, resulting in a proliferation of feedback bits. Thus, IA with limited feedback has been introduced to reduce the sum feedback overhead. In this paper, by exploiting the advantage of heterogeneous path loss, we first investigate the throughput of IA with limited feedback in interference channels when each user transmits multiple streams simultaneously, and then we derive an upper bound on the sum rate in terms of the transmit power and feedback bits. Moreover, we propose a dynamic feedback scheme based on bit allocation to reduce the throughput loss due to limited feedback. Simulation results demonstrate that the dynamic feedback scheme achieves better performance in terms of sum rate.
NASA Technical Reports Server (NTRS)
Dohi, Tomohiro; Nitta, Kazumasa; Ueda, Takashi
1993-01-01
This paper proposes a new type of coherent demodulator, the unique-word (UW)-reverse-modulation type demodulator, for burst signals controlled by a voice-operated transmitter (VOX) in mobile satellite communication channels. The demodulator has three individual circuits: a pre-detection signal combiner, a pre-detection UW detector, and a UW-reverse-modulation type demodulator. The pre-detection signal combiner combines signal sequences received by two antennas and improves the bit energy-to-noise power density ratio (Eb/N0) by 2.5 dB at an average bit error rate (BER) of 10^-3 when the carrier power-to-multipath power ratio (CMR) is 15 dB. The pre-detection UW detector improves the UW detection probability when the frequency offset is large. The UW-reverse-modulation type demodulator realizes a maximum pull-in frequency of 3.9 kHz, a pull-in time of 2.4 seconds, and a frequency error of less than 20 Hz. The performance of this demodulator is confirmed through computer simulations, and its effectiveness is demonstrated in real-time experiments at a bit rate of 16.8 kbps using a digital signal processor (DSP).
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast-growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data. Efficient transmission and/or storage of S-EMG signals is an active research issue and is the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform for spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematical decreasing spectral shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shape methods were implemented and compared: decreasing exponential spectral shape, decreasing linear spectral shape, decreasing square-root spectral shape and rotated hyperbolic tangent spectral shape. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in the scientific literature are shown. Conclusions The decreasing bit allocation shapes applied to the quantized wavelet coefficients, combined with arithmetic coding, result in an efficient procedure. The performance comparisons of the proposed S-EMG data compression algorithm with established techniques found in the scientific literature have shown promising results. PMID:24571620
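The following is an illustrative sketch, not the paper's implementation: it allocates quantization bits to wavelet subbands according to a decreasing exponential spectral shape, so that higher-frequency subbands receive shorter code words. The word-length range and decay constant are assumptions chosen for the example.

```python
# Illustrative sketch: decreasing-exponential bit allocation over subbands,
# followed by a uniform quantizer. A real codec would apply this to DWT
# coefficients and follow it with entropy coding.
import numpy as np

def exponential_bit_allocation(n_subbands, max_bits=12, min_bits=2, decay=0.35):
    k = np.arange(n_subbands)
    shape = np.exp(-decay * k)                       # decreasing spectral model
    bits = min_bits + (max_bits - min_bits) * shape  # scale into [min, max]
    return np.round(bits).astype(int)

def quantize_subband(coeffs, n_bits):
    """Uniform quantizer with n_bits per coefficient; returns codes and scale."""
    levels = 2 ** n_bits
    c_max = np.max(np.abs(coeffs)) or 1.0
    q = np.round((coeffs / c_max) * (levels / 2 - 1))
    return q.astype(int), c_max

# Toy usage on synthetic subbands (assumed data, not real S-EMG).
rng = np.random.default_rng(1)
subbands = [rng.normal(scale=1.0 / (i + 1), size=64) for i in range(6)]
allocation = exponential_bit_allocation(len(subbands))
quantized = [quantize_subband(c, b) for c, b in zip(subbands, allocation)]
```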
DOE Office of Scientific and Technical Information (OSTI.GOV)
TerraTek, A Schlumberger Company
2008-12-31
The two-phase program addresses long-term developments in deep-well and hard-rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill 'faster and deeper', possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the 'ultra-high rotary speed drilling system' is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oilfield and mining drilling and coring today run less than 10,000 rpm - usually well below 5,000 rpm. This document provides the progress through two phases of the program entitled 'Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling' for the period starting 30 June 2003 and concluding 31 March 2009. The accomplishments of Phases 1 and 2 are summarized as follows: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance (see Black and Judzis); (2) TerraTek designed and planned Phase 1 bench-scale experiments (see Black and Judzis). Improvements were made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs were developed to provide a more consistent product with consistent performance. A test matrix for the final core bit testing program was completed; (3) TerraTek concluded small-scale cutting performance tests; (4) Analysis of Phase 1 data indicated that there is decreased specific energy as the rotational speed increases; (5) Technology transfer, as part of Phase 1, was accomplished with technical presentations to the industry (see Judzis, Boucher, McCammon, and Black); (6) TerraTek prepared a design concept for the high-speed drilling test stand, which was planned around the proposed high-speed mud motor concept. Alternative drives for the test stand were explored; a high-speed hydraulic motor concept was finally used; (7) The high-speed system was modified to accommodate larger drill bits than originally planned; (8) Prototype mud turbine motors and the high-speed test stand were used to drive the drill bits at high speed; (9) Three different rock types were used during the testing: Sierra White granite, Crab Orchard sandstone, and Colton sandstone. The drill bits used included diamond-impregnated bits, a polycrystalline diamond compact (PDC) bit, a thermally stable PDC (TSP) bit, and a hybrid TSP and natural diamond bit; and (10) The drill bits were run at rotary speeds up to 5500 rpm and weight on bit (WOB) up to 8000 lbf. During Phase 2, the ROP as measured in depth of cut per bit revolution generally increased with increased WOB. The performance was mixed with increased rotary speed, with the depth of cut of the impregnated drill bit generally increasing and that of the TSP and hybrid TSP drill bits generally decreasing. The ROP in ft/hr generally increased with all bits with increased WOB and rotary speed.
The mechanical specific energy generally improved (decreased) with increased WOB and was mixed with increased rotary speed.
Frame synchronization methods based on channel symbol measurements
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.
1989-01-01
The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
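A minimal sketch of the channel-symbol sync idea discussed above: correlate the soft received symbols against a known marker and declare sync at the correlation peak. The marker pattern, frame length, noise level and absence of frame-to-frame verification are illustrative assumptions, not the DSN procedure.

```python
# Minimal sketch of marker-based frame sync on soft channel symbols:
# slide the known sync marker over the noisy antipodal symbol stream and
# take the position of maximum correlation as the frame start estimate.
import numpy as np

def find_sync(symbols, marker):
    marker = np.asarray(marker, dtype=float)
    corr = np.correlate(symbols, marker, mode="valid")  # sliding dot product
    return int(np.argmax(corr)), corr

rng = np.random.default_rng(2)
marker_bits = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0,
                        0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1])
marker_sym = 1.0 - 2.0 * marker_bits                 # map bits {0,1} -> {+1,-1}
frame = 1.0 - 2.0 * rng.integers(0, 2, 200)          # random data symbols
frame[57:57 + marker_sym.size] = marker_sym          # embed marker at offset 57
received = frame + rng.normal(scale=0.5, size=frame.size)  # channel noise
offset, _ = find_sync(received, marker_sym)          # should recover offset 57
```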
Improved classical and quantum random access codes
NASA Astrophysics Data System (ADS)
Liabøtrø, O.
2017-05-01
A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst-case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^(2m) − 1)/(d − 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measure operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m − 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)^m − 1]^(−1)} is possible, and show that it is an upper bound for the measure operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high-certainty information about several bits can be used to calculate probabilities for parities of subsets.
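As a quick numerical illustration of the conjectured success probability quoted above, the following sketch evaluates p for small m; for m = 1 it reproduces the familiar (3,1) QRAC value (1/2)(1 + 1/√3). This is only a check of the stated formula, not part of the paper's construction.

```python
# Worked check of the quoted conjecture p = (1/2){1 + [(sqrt(3)+1)^m - 1]^(-1)}
# for maximal (n = 4^m - 1, m, p) QRACs.
import math

def conjectured_p(m):
    return 0.5 * (1.0 + 1.0 / ((math.sqrt(3) + 1.0) ** m - 1.0))

for m in (1, 2, 3):
    n = 4 ** m - 1
    print(f"m={m}, n={n}, p={conjectured_p(m):.4f}")
# m=1 -> p ~ 0.7887, matching (1/2)(1 + 1/sqrt(3))
```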
Kuribayashi, Ryuma; Nittono, Hiroshi
2017-01-01
High-resolution audio has a higher sampling frequency and a greater bit depth than conventional low-resolution audio such as compact disks. The higher sampling frequency enables inaudible sound components (above 20 kHz) that are cut off in low-resolution audio to be reproduced. Previous studies of high-resolution audio have mainly focused on the effect of such high-frequency components. It is known that alpha-band power in a human electroencephalogram (EEG) is larger when the inaudible high-frequency components are present than when they are absent. Traditionally, alpha-band EEG activity has been associated with arousal level. However, no previous studies have explored whether sound sources with high-frequency components affect the arousal level of listeners. The present study examined this possibility by having 22 participants listen to two types of a 400-s musical excerpt of French Suite No. 5 by J. S. Bach (on cembalo, 24-bit quantization, 192 kHz A/D sampling), with or without inaudible high-frequency components, while performing a visual vigilance task. High-alpha (10.5-13 Hz) and low-beta (13-20 Hz) EEG powers were larger for the excerpt with high-frequency components than for the excerpt without them. Reaction times and error rates did not change during the task and were not different between the excerpts. The amplitude of the P3 component elicited by target stimuli in the vigilance task increased in the second half of the listening period for the excerpt with high-frequency components, whereas no such P3 amplitude change was observed for the other excerpt without them. The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt. The present study shows that high-resolution audio that retains high-frequency components has an advantage over similar and indistinguishable digital sound sources in which such components are artificially cut off, suggesting that high-resolution audio with inaudible high-frequency components induces a relaxed attentional state without conscious awareness.
Laser Linewidth Requirements for Optical Bpsk and Qpsk Heterodyne Lightwave Systems.
NASA Astrophysics Data System (ADS)
Boukli-Hacene, Mokhtar
In this dissertation, optical Binary Phase-Shift Keying (BPSK) and Quadrature Phase-Shift Keying (QPSK) heterodyne communication receivers are investigated. The main objective of this research work is to analyze the performance of these receivers in the presence of laser phase noise and shot noise. The heterodyne optical BPSK receiver is based on the square-law carrier recovery (SLCR) scheme for phase detection. The BPSK heterodyne receiver is analyzed assuming a second-order linear phase-locked loop (PLL) subsystem and a small phase error. The noise properties are analyzed and the problem of minimizing the effect of noise is addressed. The performance of the receiver is evaluated in terms of the bit error rate (BER), which leads to the analysis of the BER versus the laser linewidth and the number of photons/bit required to achieve good performance. Since the pure carrier component cannot be tracked in the presence of noise, a non-linear model is used to solve the problem of carrier recovery. The non-linear system is analyzed in the presence of a low signal-to-noise ratio (SNR). The non-Gaussian noise model, represented by its probability density function (PDF), is used to analyze the performance of the receiver, especially the phase error. In addition, the effect of the PLL is analyzed by studying cycle slippage (cs). Finally, the research effort is expanded from BPSK to QPSK systems. The heterodyne optical QPSK receiver, based on the fourth-power multiplier scheme (FPMS) in conjunction with linear and non-linear PLL models, is investigated. The optimum loop and the power penalty in the presence of phase noise and shot noise are analyzed. It is shown that the QPSK system yields a high-speed and high-sensitivity coherent means for transmission of information, accompanied by a small degradation due to the laser linewidth. Comparative analysis of the BPSK and QPSK systems leads us to conclude that, in terms of laser linewidth, bit rate, phase error and power penalty, the QPSK system is more sensitive than the BPSK system and suffers less from power penalty. The BPSK and QPSK heterodyne receivers used in the uncoded scheme demand a realistic laser linewidth. Since the laser linewidth is the critical measure of the performance of a receiver, a convolutional code applied to the QPSK system is used to improve the sensitivity of the system. The effect of coding is particularly important as a means of relaxing the laser linewidth requirement. The validity and usefulness of the analysis presented in the dissertation is supported by computer simulations.
Context dependent prediction and category encoding for DPCM image compression
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.
1989-01-01
Efficient compression of image data requires an understanding of the noise characteristics of sensors as well as of the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal-variance space; context-dependent one- and two-dimensional predictors; a rationale for nonlinear DPCM encoding based upon an image quality model; context-dependent variable-length encoding of 2x2 data blocks; and feedback control for constant-output-rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context-dependent predictors, and the prospect of sub-bit-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
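The following is a minimal DPCM sketch of the general technique reviewed above, assuming a simple causal-average predictor and a uniform quantizer; the paper's context-dependent predictors, equal-variance mapping and variable-length coding are not reproduced here.

```python
# Illustrative DPCM sketch: predict each pixel from its causal neighbours,
# quantize the prediction error, and track the decoder's reconstruction so
# encoder and decoder stay in step.
import numpy as np

def dpcm_encode(image, step=4):
    img = image.astype(int)
    recon = np.zeros_like(img)   # decoder-side reconstruction
    codes = np.zeros_like(img)   # quantized residuals to be entropy coded
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            left = recon[y, x - 1] if x > 0 else 128
            up = recon[y - 1, x] if y > 0 else 128
            pred = (left + up) // 2                      # simple causal predictor
            err = img[y, x] - pred
            codes[y, x] = int(round(err / step))         # uniform quantizer
            recon[y, x] = np.clip(pred + codes[y, x] * step, 0, 255)
    return codes, recon

rng = np.random.default_rng(3)
tile = rng.integers(0, 256, size=(16, 16))
codes, recon = dpcm_encode(tile)
```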
New LWD tools are just in time to probe for baby elephants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghiselin, D.
Development of sophisticated formation evaluation instrumentation for use while drilling has led to a stratification of while-drilling services. Measurement while drilling (MWD) comprises measurements of mechanical parameters such as weight-on-bit, mud pressure, torque, vibration, hole angle and direction. Logging while drilling (LWD) describes resistivity, sonic, and radiation logging which rival wireline measurements in accuracy. A critical feature of LWD is the rate at which data can be telemetered to the surface. Early tools could only transmit 3 bits per second one way. In the last decade, the data rate has more than tripled. Despite these improvements, LWD tools have the ability to make many more measurements than can be telemetered in real time. The paper discusses the development of this technology and its applications.
Security of counterfactual quantum cryptography
NASA Astrophysics Data System (ADS)
Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Han, Zheng-Fu; Guo, Guang-Can
2010-10-01
Recently, a “counterfactual” quantum-key-distribution scheme was proposed by T.-G. Noh [Phys. Rev. Lett. 103, 230501 (2009)]. In this scheme, two legitimate distant peers may share secret keys even when the information carriers do not travel through the quantum channel. We find that this protocol is equivalent to an entanglement distillation protocol. Based on this equivalence, a strict security proof and the asymptotic key bit rate are both obtained when a perfect single-photon source is used and a Trojan horse attack can be detected. We also find that the security of this scheme is strongly related not only to the bit error rate but also to the yields of photons. Our security proof may shed light on the security of other two-way protocols.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.;
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Fronthaul evolution: From CPRI to Ethernet
NASA Astrophysics Data System (ADS)
Gomes, Nathan J.; Chanclou, Philippe; Turnbull, Peter; Magee, Anthony; Jungnickel, Volker
2015-12-01
It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from use of lower-cost equipment, shared use of infrastructure with fixed access networks, to obtaining statistical multiplexing and optimised performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high-bit-rate requirements from the transport of increased bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transport of baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronisation issues remain to be solved.
Goto, Nobuo; Miyazaki, Yasumitsu
2014-06-01
Optical switching of high-bit-rate quadrature-phase-shift-keying (QPSK) pulse trains using collinear acousto-optic (AO) devices is theoretically discussed. Since the collinear AO devices have wavelength selectivity, the switched optical pulse trains suffer from distortion when the bandwidth of the pulse train is comparable to the pass bandwidth of the AO device. As the AO device, a sidelobe-suppressed device with a tapered surface-acoustic-wave (SAW) waveguide and a Butterworth-type filter device with a lossy SAW directional coupler are considered. Phase distortion of optical pulse trains at 40 to 100 Gsymbols/s in QPSK format is numerically analyzed. Bit-error-rate performance with additive Gaussian noise is also evaluated by the Monte Carlo method.
Measurements of aperture averaging on bit-error-rate
NASA Astrophysics Data System (ADS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-08-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
The NEEDS Data Base Management and Archival Mass Memory System
NASA Technical Reports Server (NTRS)
Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.
1980-01-01
A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.
Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G
2008-11-24
An eigenvector-analysis-based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal-to-noise ratio than the standard 1-D weighted centroid dip-finding algorithm.
Chen, Ming; He, Jing; Tang, Jin; Wu, Xian; Chen, Lin
2014-07-28
In this paper, an FPGA-based real-time adaptively modulated 256/64/16QAM-encoded baseband OFDM transceiver with a high spectral efficiency of up to 5.76 bit/s/Hz is successfully developed and experimentally demonstrated in a simple intensity-modulated direct-detection optical communication system. Experimental results show that it is feasible to transmit an adaptively modulated real-time optical OFDM signal with a raw bit rate of 7.19 Gbps over 20 km and 50 km single-mode fibers (SMFs). A performance comparison between real-time and off-line digital signal processing is carried out, and the results show that there is a negligible power penalty. In addition, to obtain the best transmission performance, the direct-current (DC) bias voltage of the MZM and the launch power into the optical fiber links are explored in the real-time optical OFDM systems.
High range free space optic transmission using new dual diffuser modulation technique
NASA Astrophysics Data System (ADS)
Rahman, A. K.; Julai, N.; Jusoh, M.; Rashidi, C. B. M.; Aljunid, S. A.; Anuar, M. S.; Talib, M. F.; Zamhari, Nurdiani; Sahari, S. k.; Tamrin, K. F.; Jong, Rudiyanto P.; Zaidel, D. N. A.; Mohtadzar, N. A. A.; Sharip, M. R. M.; Samat, Y. S.
2017-11-01
Free space optical communication (FSOC) is vulnerable to atmospheric fluctuations. This paper analyzes a new dual diffuser modulation (DDM) technique for mitigating atmospheric turbulence effects. Under atmospheric turbulence, the laser beam in an FSOC link suffers from (a) beam wander, (b) beam spreading and (c) scintillation. Scintillation degrades the link most severely: it distorts the wavefront, causing the received signal to fluctuate and ultimately driving the receiver into saturation or loss of signal. The DDM approach enhances the detection of bit '1' and bit '0' and improves the received power to combat the turbulence effect. The performance analysis focuses on signal-to-noise ratio (SNR) and bit error rate (BER), and the numerical results show that the DDM technique improves the range by approximately 40% under weak turbulence and 80% under strong turbulence.
Kristoufek, Ladislav
2013-01-01
Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events concerning the most popular of the digital currencies, BitCoin, have raised crucial questions about the behavior of its exchange rates, and they offer a field in which to study the dynamics of a market that consists practically only of speculative traders, with no fundamentalists, as there is no fundamental value to the currency. In this paper, we connect two phenomena of recent years, digital currencies (namely BitCoin) and search queries on Google Trends and Wikipedia, and study their relationship. We show that not only are the search queries and the prices connected, but there also exists a pronounced asymmetry in the effect of increased interest in the currency depending on whether the price is above or below its trend value. PMID:24301322
Physical layer one-time-pad data encryption through synchronized semiconductor laser networks
NASA Astrophysics Data System (ADS)
Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris
2016-02-01
Semiconductor lasers (SL) have been proven to be a key device in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series into digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that are the seed of true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber optic network, random bit streams that are generated in real time and synchronized with the rest of the nodes allows a one-time-pad encryption protocol to be implemented that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error-correction methods are used to reduce the errors in the TRBS and the final error rate at the data decoding level. An appropriate selection of the sampling methodology and parameters, as well as of the physical properties of the chaotic seed signal through which the network locks into synchronization, allows error-free performance.
NASA Astrophysics Data System (ADS)
Frye, G. E.; Hauser, C. K.; Townsend, G.; Sellers, E. W.
2011-04-01
Since the introduction of the P300 brain-computer interface (BCI) speller by Farwell and Donchin in 1988, the speed and accuracy of the system have been significantly improved. Larger electrode montages and various signal processing techniques are responsible for most of the improvement in performance. New presentation paradigms have also led to improvements in bit rate and accuracy (e.g. Townsend et al (2010 Clin. Neurophysiol. 121 1109-20)). In particular, the checkerboard paradigm for online P300 BCI-based spelling performs well, has started to document what makes for a successful paradigm, and is a good platform for further experimentation. The current paper further examines the checkerboard paradigm by suppressing items which surround the target from flashing during calibration (i.e. the suppression condition). In the online feedback mode the standard checkerboard paradigm is used with a stepwise linear discriminant classifier derived from the suppression condition and one classifier derived from the standard checkerboard condition, counter-balanced. The results of this research demonstrate that using suppression during calibration produces significantly more character selections/min (6.46, time between selections included) than the standard checkerboard condition (5.55), and significantly fewer target flashes are needed per selection in the SUP condition (5.28) as compared to the RCP condition (6.17). Moreover, accuracy in the SUP and RCP conditions remained equivalent (~90%). Mean theoretical bit rate was 53.62 bits/min in the suppression condition and 46.36 bits/min in the standard checkerboard condition (ns). Waveform morphology also showed significant differences in amplitude and latency.
Measurements of radioactivity and dose assessments in some building materials in Bitlis, Turkey.
Kayakökü, Halime; Karatepe, Şule; Doğru, Mahmut
2016-09-01
In this study, samples of perlite, pumice and Ahlat stones (ignimbrite) extracted from mines in Bitlis and samples of other building materials produced in facilities in Bitlis were collected and analyzed. Activity concentrations of (226)Ra, (232)Th and (40)K in the samples of building materials were measured using a NaI(Tl) detector with an efficiency of 24%. The radon measurements of the building material samples were made using CR-39 nuclear track detectors. (226)Ra, (232)Th and (40)K radioactivity concentrations ranged from 29.6±5.9 to 228.2±38.1 Bq/kg, 10.8±5.4 to 95.5±26.1 Bq/kg and 249.3±124.7 to 2580.1±266.9 Bq/kg, respectively. Radon concentration, radium equivalent activities, absorbed dose rate, excess lifetime cancer risk and the values of hazard indices were calculated for the measured samples to assess the radiation hazards arising from using these materials in the construction of dwellings. Radon concentration ranged between 89.2±12.0 Bq/m^3 and 1141.0±225.0 Bq/m^3. It was determined that the Raeq values of the samples conformed to world standards except for perlite and single samples of brick and Ahlat stone. Calculated values of absorbed dose rate ranged from 81.3±20.5 to 420.6±42.8 nGy/h. ELCR values ranged from (1.8±0.3)×10^-3 to (9.0±1.0)×10^-3. All samples had ELCR values higher than the world average. The values of Hin and Hex varied from 0.35±0.11 to 1.78±0.18 and from 0.37±0.09 to 1.17±0.13, respectively. The results were compared with standard radioactivity values determined by international organizations and with similar studies. There would be a radiation risk for people living in buildings made of perlite, Ahlat-1 and Brick-3. Copyright © 2016 Elsevier Ltd. All rights reserved.
High-Density, High-Bandwidth, Multilevel Holographic Memory
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
2008-01-01
A proposed holographic memory system would be capable of storing data at unprecedentedly high density, and its data transfer performance in both reading and writing would be characterized by exceptionally high bandwidth. The capabilities of the proposed system would greatly exceed even those of a state-of-the-art memory system, based on binary holograms (in which each pixel value represents 0 or 1), that can hold 1 terabyte of data and can support a reading or writing rate as high as 1 Gb/s. The storage capacity of the state-of-the-art system cannot be increased without also increasing the volume and mass of the system. However, in principle, the storage capacity could be increased greatly, without significantly increasing the volume and mass, if multilevel holograms were used instead of binary holograms. For example, a 3-bit (8-level) hologram could store 8 terabytes, or an 8-bit (256-level) hologram could store 256 terabytes, in a system having little or no more size and mass than does the state-of-the-art 1-terabyte binary holographic memory. The proposed system would utilize multilevel holograms. The system would include lasers, imaging lenses and other beam-forming optics, a block photorefractive crystal wherein the holograms would be formed, and two multilevel spatial light modulators in the form of commercially available deformable-mirror-device spatial light modulators (DMDSLMs) made for high-speed input conversion of data of up to 12 bits. For readout, the system would also include two arrays of complementary metal oxide/semiconductor (CMOS) photodetectors matching the spatial light modulators. The system would further include a reference-beam steering device (the equivalent of a scanning mirror), containing no sliding parts, that could be either a liquid-crystal phased-array device or a microscopic mirror actuated by a high-speed microelectromechanical system. Time-multiplexing and the multilevel nature of the DMDSLM would be exploited to enable writing and reading of multilevel holograms. The DMDSLM would also enable transfer of data at a rate of 7.6 Gb/s or perhaps somewhat higher.
Photonic ADC: overcoming the bottleneck of electronic jitter.
Khilo, Anatol; Spector, Steven J; Grein, Matthew E; Nejadmalayeri, Amir H; Holzwarth, Charles W; Sander, Michelle Y; Dahlem, Marcus S; Peng, Michael Y; Geis, Michael W; DiLello, Nicole A; Yoon, Jung U; Motamedi, Ali; Orcutt, Jason S; Wang, Jade P; Sorace-Agaskar, Cheryl M; Popović, Miloš A; Sun, Jie; Zhou, Gui-Rong; Byun, Hyunil; Chen, Jian; Hoyt, Judy L; Smith, Henry I; Ram, Rajeev J; Perrott, Michael; Lyszczarz, Theodore M; Ippen, Erich P; Kärtner, Franz X
2012-02-13
Accurate conversion of wideband multi-GHz analog signals into the digital domain has long been a target of analog-to-digital converter (ADC) developers, driven by applications in radar systems, software radio, medical imaging, and communication systems. Aperture jitter has been a major bottleneck on the way towards higher speeds and better accuracy. Photonic ADCs, which perform sampling using ultra-stable optical pulse trains generated by mode-locked lasers, have been investigated for many years as a promising approach to overcome the jitter problem and bring ADC performance to new levels. This work demonstrates that the photonic approach can deliver on its promise by digitizing a 41 GHz signal with 7.0 effective bits using a photonic ADC built from discrete components. This accuracy corresponds to a timing jitter of 15 fs - a 4-5 times improvement over the performance of the best electronic ADCs which exist today. On the way towards an integrated photonic ADC, a silicon photonic chip with core photonic components was fabricated and used to digitize a 10 GHz signal with 3.5 effective bits. In these experiments, two wavelength channels were implemented, providing the overall sampling rate of 2.1 GSa/s. To show that photonic ADCs with larger channel counts are possible, a dual 20-channel silicon filter bank has been demonstrated.
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.
Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing
2016-09-23
In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Because the quantization codes vary only slightly among different pixel exposures of the same object, the pixel array is divided into two groups: one performs coarse quantization of the high bits only, and the other performs fine quantization of the low bits. The complete quantization codes are then composed from the results of both the coarse and the fine quantization. This equivalent operation comparably reduces the total number of bits required for quantization. In a 0.18 µm CMOS process, two versions of a 16-stage digital-domain CMOS TDI image sensor chain based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumptions per line of the two versions are 6.47 × 10^-8 J/line and 7.4 × 10^-8 J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
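A minimal numerical sketch of the coarse/fine code composition described above; the 10-bit word and the 5/5 bit split are illustrative assumptions, not the sensor's actual partition.

```python
# Minimal sketch: one pixel group supplies only the high (coarse) bits, the
# other only the low (fine) bits, and the full code is recomposed afterwards.
def compose_code(coarse_high, fine_low, low_bits=5):
    """Recombine a coarse (high-bit) and a fine (low-bit) quantization."""
    return (coarse_high << low_bits) | fine_low

sample = 0b1011010110                 # a 10-bit "true" code for the same object
coarse = sample >> 5                  # high 5 bits from the coarse group
fine = sample & 0b11111               # low 5 bits from the fine group
assert compose_code(coarse, fine) == sample
```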
New scene change control scheme based on pseudoskipped picture
NASA Astrophysics Data System (ADS)
Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.
1997-01-01
A new scene change control scheme which improves video coding performance for sequences that contain many scene-changed pictures is proposed in this paper. Scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The major idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode the B picture located immediately before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, and they are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB in PSNR compared to the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with the MPEG-2 video syntax, and the picture repetition is not recognizable.
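A toy sketch of the bit reallocation idea above: when the next picture is a scene change, the preceding B picture is coded as a pseudo-skipped picture with a tiny budget and the saved bits are handed to the scene-changed picture's target. The budget numbers are made-up placeholders, not TM5 values.

```python
# Toy rate-control sketch: free most of the B picture's budget by coding it as
# a pseudo-skipped picture, then enlarge the scene-changed picture's target.
def allocate_bits(target_b, target_scene_change, skipped_cost=200):
    saved = max(target_b - skipped_cost, 0)       # bits freed by pseudo-skipping
    return skipped_cost, target_scene_change + saved

b_bits, scene_bits = allocate_bits(target_b=60_000, target_scene_change=180_000)
# b_bits -> 200 for the pseudo-skipped B, scene_bits -> 239_800 for the scene change
```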
Detection of LSB+/-1 steganography based on co-occurrence matrix and bit plane clipping
NASA Astrophysics Data System (ADS)
Abolghasemi, Mojtaba; Aghaeinia, Hassan; Faez, Karim; Mehrabi, Mohammad Ali
2010-01-01
Spatial LSB+/-1 steganography changes the smooth characteristics between adjoining pixels of the raw image. We present a novel steganalysis method for LSB+/-1 steganography based on feature vectors derived from the co-occurrence matrix in the spatial domain. We investigate how LSB+/-1 steganography affects the bit planes of an image and show that it mostly changes the least significant bit (LSB) planes. The co-occurrence matrix is derived from an image in which some of its most significant bit planes are clipped. By this preprocessing, in addition to reducing the dimensions of the feature vector, the effects of embedding are also preserved. We compute the co-occurrence matrix in different directions and with different dependencies and use the elements of the resulting co-occurrence matrix as features. This method is sensitive to the data embedding process. We use a Fisher linear discriminant (FLD) classifier and test our algorithm on different databases and embedding rates. We compare our scheme with current LSB+/-1 steganalysis methods. It is shown that the proposed scheme outperforms the state-of-the-art methods in detecting the LSB+/-1 steganographic method for grayscale images.
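An illustrative sketch covering only one element of the paper's feature set, with an assumed number of clipped planes and pixel offset: clip the most significant bit planes and build a normalized horizontal co-occurrence matrix to use as a steganalysis feature vector.

```python
# Illustrative sketch: bit-plane clipping followed by a horizontal
# co-occurrence matrix of the remaining low bit planes.
import numpy as np

def clipped_cooccurrence(image, keep_low_bits=4, dx=1):
    clipped = image & ((1 << keep_low_bits) - 1)     # drop the MSB planes
    levels = 1 << keep_low_bits
    a = clipped[:, :-dx].ravel()
    b = clipped[:, dx:].ravel()
    cm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(cm, (a, b), 1)                         # accumulate pair counts
    return cm / cm.sum()                             # normalized features

rng = np.random.default_rng(4)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
features = clipped_cooccurrence(cover).ravel()       # 16x16 -> 256-D feature vector
```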
32-Bit-Wide Memory Tolerates Failures
NASA Technical Reports Server (NTRS)
Buskirk, Glenn A.
1990-01-01
Electronic memory system of 32-bit words corrects bit errors caused by some common types of failure - even failure of an entire 4-bit-wide random-access-memory (RAM) chip. Detects failure of two such chips, so the user is warned that the output of the memory may contain errors. Includes eight 4-bit-wide DRAM's configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM contributes only 1 bit to each 8-bit word.
Eliminating ambiguity in digital signals
NASA Technical Reports Server (NTRS)
Weber, W. J., III
1979-01-01
A multiamplitude minimum shift keying (MAMSK) transmission system with a method of differential encoding overcomes the problem of ambiguity associated with advanced digital-transmission techniques, with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if the signal points are properly encoded and decoded, the bits are detected correctly regardless of phase ambiguities.
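A binary sketch of the general differential-encoding principle invoked above (not the MAMSK signal-point mapping itself): decoding by XOR of adjacent received bits is invariant under a global inversion, so a 180-degree phase ambiguity does not corrupt the detected data.

```python
# Minimal sketch of differential encoding/decoding for phase-ambiguity immunity.
def diff_encode(bits, ref=0):
    out = [ref]                       # first element is the reference bit
    for b in bits:
        out.append(out[-1] ^ b)
    return out

def diff_decode(rx):
    return [rx[i] ^ rx[i + 1] for i in range(len(rx) - 1)]

data = [1, 0, 1, 1, 0, 0, 1]
tx = diff_encode(data)
flipped = [b ^ 1 for b in tx]         # 180-degree ambiguity inverts every bit
assert diff_decode(tx) == data
assert diff_decode(flipped) == data   # inversion cancels in adjacent-bit XOR
```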
The CO2 laser frequency stability measurements
NASA Technical Reports Server (NTRS)
Johnson, E. H., Jr.
1973-01-01
Carbon dioxide laser frequency stability data are considered for a receiver design that relates to maximum Doppler frequency and its rate of change. Results show that an adequate margin exists in terms of data acquisition, Doppler tracking, and bit error rate as they relate to laser stability and transmitter power.
Pattern transfer from nanoparticle arrays
NASA Astrophysics Data System (ADS)
Hogg, Charles R., III
This project contributes to the long-term extensibility of bit-patterned media (BPM), by removing obstacles to using a new and smaller class of self-assembling materials: surfactant-coated nanoparticles. Self-assembly rapidly produces regular patterns of small features over large areas. If these patterns can be used as templates for magnetic bits, the resulting media would have both high capacity and high bit density. The data storage industry has identified block copolymers (BCP) as the self-assembling technology for the first generation of BPM. Arrays of surfactant-coated nanoparticles have long shown higher feature densities than BCP, but their patterns could not previously be transferred into underlying substrates. I identify one key obstacle that has prevented this pattern transfer: the particles undergo a disordering transition during etching which I have called "cracking". I compare several approaches to measuring the degree of cracking, and I develop two novel techniques for preventing it and allowing pattern transfer. I demonstrate two different kinds of pattern transfer: positive (dots) and negative (antidots). To make dots, I etch the substrate between the particles with a directional CF4-based reactive ion etch (RIE). I find the ultrasmall gaps (just 2 nm) cause a tremendous slowdown in the etch rate, by a factor of 10 or more---an observation of fundamental significance for any pattern transfer at ultrahigh bit densities. Antidots are made by depositing material in the interstices, then removing the particles to leave behind a contiguous inorganic lattice. This lattice can itself be used as an etch mask for CF4-based RIE, in order to increase the height contrast. The antidot process promises great generality in choice of materials, both for the antidot lattice and the particles themselves; here, I present lattices of Al and Cr, templated from arrays of 13.7 nm-diameter Fe3O4 or 30 nm-diameter MnO nanoparticles. The fidelity of transfer is also noticeably better for antidots than for dots, making antidots the more promising technique for industrial applications. The smallest period for which I have shown pattern transfer (15.7 nm) is comparable to (but slightly smaller than) the smallest period currently shown for pattern transfer from block copolymers (17 nm); hence, my results compare favorably with the state of the art. Ultimately, by demonstrating that surfactant-coated nanoparticles can be used as pattern masks, this work increases their viability as an option to continue the exponential growth of bit density in magnetic storage media.
Development of a compact and cost effective multi-input digital signal processing system
NASA Astrophysics Data System (ADS)
Darvish-Molla, Sahar; Chin, Kenrick; Prestwich, William V.; Byun, Soo Hyun
2018-01-01
A prototype digital signal processing system (DSP) was developed using a microcontroller interfaced with a 12-bit sampling ADC, which offers a considerably less expensive solution for processing multiple detectors with high throughput. After digitization of the incoming pulses, a simple algorithm was employed for pulse height analysis in order to maximize the output counting rate. Moreover, an algorithm aimed at real-time pulse pile-up deconvolution was implemented. The system was tested using a NaI(Tl) detector in comparison with a traditional analogue system and a commercial digital system for a variety of count rates. The performance of the prototype system was consistently superior to the analogue and commercial digital systems up to an input count rate of 61 kcps, while it was slightly inferior to the commercial digital system but still superior to the analogue system at higher input rates. Considering overall cost, size and flexibility, this custom-made multi-input digital signal processing system (MMI-DSP) was the most suitable choice for 2D microdosimetric data collection, or for any measurement in which simultaneous collection of data from multiple inputs is required.
Directly connecting the Very Long Baseline Array
NASA Astrophysics Data System (ADS)
Hunt, Gareth; Romney, Jonathan D.; Walker, R. Craig
2002-11-01
At present, the signals received by the 10 antennas of the Very Long Baseline Array (VLBA) are recorded on instrumentation tapes. These tapes are then shipped from the antenna locations - distributed across the mainland USA, the US Virgin Islands, and Hawaii - to the processing center in Socorro, New Mexico. The Array operates today at a mean sustained data rate of 128 Mbps per antenna, but peak rates of 256 Mbps and 512 Mbps are also used. Transported tapes provide the cheapest method of attaining these bit rates. The present tape system derives from wideband recording techniques dating back to the late 1960s, and has been in use since the commissioning of the VLBA in 1993. It is in need of replacement on a time scale of a few years. Further, plans are being developed which would increase the required data rates to 1 Gbps in 5 years and 100 Gbps in 10 years. With the advent of higher performance networks, it should be possible to transmit the data directly to the processing center. However, achieving this connectivity is complicated by the remoteness of the antennas -
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hariharan, P.R.; Azar, J.J.
1996-09-01
A good majority of all oilwell drilling occurs in shale and other clay-bearing rocks. In light of the relatively few studies conducted, the problem of bit-balling of PDC bits while drilling shale has been addressed, with the primary intention of attempting to quantify the degree of balling as well as to investigate the influence of bit design and confining pressure. A series of full-scale laboratory drilling tests under simulated downhole conditions was conducted utilizing seven different PDC bits in Catoosa shale. Test results indicate that the non-dimensional parameter R_d [(bit torque).(weight-on-bit)/(bit diameter)] is a good indicator of the degree of bit-balling and that it correlates well with specific energy. Furthermore, test results have shown bit profile and bit hydraulic design to be key parameters of bit design that dictate the tendency toward balling in shales under a given set of operating conditions. A bladed bit was noticed to ball less compared to a ribbed or open-faced bit. Likewise, related to bit profile, test results indicate that the parabolic profile has a lesser tendency to ball compared to round and flat profiles. The tendency of PDC bits to ball was noticed to increase with increasing confining pressure for the set of drilling conditions used.
NASA Astrophysics Data System (ADS)
Wang, Max L.; Arbabian, Amin
2017-09-01
We propose and demonstrate an ultrasonic communication link using spatial degrees of freedom to increase data rates for deeply implantable medical devices. Low attenuation and millimeter wavelengths make ultrasound an ideal communication medium for miniaturized low-power implants. While a small spectral bandwidth has drastically limited achievable data rates in conventional ultrasonic implants, a large spatial bandwidth can be exploited by using multiple transducers in a multiple-input/multiple-output system to provide spatial multiplexing gain without additional power, larger bandwidth, or complicated packaging. We experimentally verify the communication link in mineral oil with a transmitter and a receiver 5 cm apart, each housing two custom-designed mm-sized piezoelectric transducers operating at the same frequency. Two streams of data modulated with quadrature phase-shift keying at 125 kbps are simultaneously transmitted and received on both channels, effectively doubling the data rate to 250 kbps with a measured bit error rate below 10^-4. We also evaluate the performance and robustness of the channel separation network by testing the communication link after introducing position offsets. These results demonstrate the potential of spatial multiplexing to enable more complex implant applications requiring higher data rates.
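An illustrative sketch of the spatial-multiplexing idea (not the paper's channel-separation network): two QPSK streams pass through an assumed 2x2 channel and are separated by zero-forcing, i.e. inversion of the estimated channel matrix. The channel coefficients and noise level are made-up values.

```python
# Minimal 2x2 spatial-multiplexing sketch with QPSK and zero-forcing separation.
import numpy as np

rng = np.random.default_rng(5)
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))       # QPSK alphabet

def transmit(n_symbols):
    return qpsk[rng.integers(0, 4, size=(2, n_symbols))]      # 2 parallel streams

H = np.array([[1.0, 0.35], [0.4, 0.9]]) * np.exp(1j * rng.uniform(0, 2 * np.pi, (2, 2)))
x = transmit(1000)
noise = 0.05 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))
y = H @ x + noise                       # signals at the two receive transducers
x_hat = np.linalg.inv(H) @ y            # zero-forcing channel separation
hard = qpsk[np.argmin(np.abs(x_hat[..., None] - qpsk), axis=-1)]
symbol_errors = np.count_nonzero(hard != x)
```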
Variable-rate optical communication through the turbulent atmosphere. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Levitt, B. K.
1971-01-01
It was demonstrated that the data transmitter can extract real-time channel state information by processing the field received when a pilot tone is sent from the data receiver to the data transmitter. Based on these channel measurements, optimal variable-rate techniques were derived and significant improvements in system performance were obtained, particularly at low bit error rates.
VCSEL-based fiber optic link for avionics: implementation and performance analyses
NASA Astrophysics Data System (ADS)
Shi, Jieqin; Zhang, Chunxi; Duan, Jingyuan; Wen, Huaitao
2006-11-01
A Gb/s fiber optic link with built-in test capability (BIT) based on vertical-cavity surface-emitting laser (VCSEL) sources for a next-generation military avionics bus is presented in this paper. To accurately predict link performance, statistical methods and bit error rate (BER) measurements have been examined. The results show that the 1 Gb/s fiber optic link meets the BER requirement and that the link margin can reach up to 13 dB. Analysis shows that the suggested photonic network may provide a high-performance, low-cost interconnection alternative for future military avionics.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for a polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme, i.e., the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
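A minimal sketch of max-log LLR computation for one received sample of a generic 8-point constellation under AWGN; the two-ring constellation, bit labeling and noise variance are assumptions for illustration and are not the paper's PDM-8QAM mapping or its improved posterior formulation.

```python
# Minimal max-log LLR sketch for a generic 8-point constellation.
import numpy as np

# Example 8QAM: two amplitude rings of 4 phases each, with assumed 3-bit labels.
points = np.concatenate([np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4))),
                         2 * np.exp(1j * (np.pi / 2 * np.arange(4)))])
labels = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 1], [0, 1, 0],
                   [1, 0, 0], [1, 0, 1], [1, 1, 1], [1, 1, 0]])

def max_log_llr(y, noise_var):
    """LLRs for the 3 bits of one received sample y (positive favours bit 0)."""
    metrics = -np.abs(y - points) ** 2 / noise_var        # log-likelihood per point
    llrs = []
    for b in range(3):
        llrs.append(metrics[labels[:, b] == 0].max() - metrics[labels[:, b] == 1].max())
    return np.array(llrs)

rx = points[5] + 0.1 * (np.random.randn() + 1j * np.random.randn())
print(max_log_llr(rx, noise_var=0.02))   # negative LLRs where the true bit is 1 (label [1, 0, 1])
```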
Electromagnetic Effects on System Reliability
2000-02-01
NASA Astrophysics Data System (ADS)
Granot, Er'el; Zaibel, Reuven; Narkiss, Niv; Ben-Ezra, Shalva; Chayet, Haim; Shahar, Nir; Sternklar, Shmuel; Tsadka, Sagie; Prucnal, Paul R.
2005-12-01
In this paper we investigate the wavelength conversion and regeneration properties of a tunable all-optical signal regenerator (TASR). In the TASR, the wavelength conversion is done by a semiconductor optical amplifier, which is incorporated in an asymmetric Sagnac loop (ASL). We demonstrate both theoretically and experimentally that the ASL regenerates the incident signal's bit pattern, reduces its noise, increases the extinction ratio (which in many aspects is equivalent to noise reduction) and improves its bit-error rate. We also demonstrate the general behavior of the TASR with a numerical simulation.
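To see why a higher extinction ratio behaves like noise reduction, the toy calculation below converts an assumed extinction ratio, average power, and noise level into a Q factor and BER. None of these values come from the TASR experiment; the scaling is the point.

```python
# Illustration (not from the paper): for fixed average power and fixed noise,
# raising the extinction ratio P1/P0 improves the Q factor and hence the BER.
import math

def q_factor(p1, p0, sigma):
    return (p1 - p0) / (2 * sigma)          # equal noise on both rails (assumed)

def ber_from_q(q):
    return 0.5 * math.erfc(q / math.sqrt(2))

p_avg, sigma = 1.0, 0.12                    # assumed average power and noise
for er_db in (6, 10, 14):                   # extinction ratio P1/P0 in dB
    er = 10 ** (er_db / 10)
    p0 = 2 * p_avg / (er + 1)               # from (p1 + p0)/2 = p_avg and p1 = er*p0
    p1 = er * p0
    print(er_db, "dB ER -> BER ~", ber_from_q(q_factor(p1, p0, sigma)))
```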
The Venus Balloon Project telemetry processing
NASA Technical Reports Server (NTRS)
Urech, J. M.; Chamarro, A.; Morales, J. L.; Urech, M. A.
1986-01-01
The peculiarities of the Venus Balloon telemetry system required the development of a new methodology for telemetry processing, since the capabilities of the Deep Space Network (DSN) telemetry system do not include burst processing of short frames with two different bit rates and first-bit acquisition. A software package was produced for the non-real-time detection, demodulation, and decoding of the telemetry streams obtained from an open-loop recording made with the DSN spectrum processing subsystem-radio science (DSP-RS). A general description of the resulting software package (DMO-5539-SP) is given, along with its adaptability to variations encountered during the actual mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, J.M.; Sheppard, M.C.; Houwen, O.H.
Previous work on shale mechanical properties has focused on the slow deformation rates appropriate to wellbore deformation. Deformation of shale under a drill bit occurs at a very high rate, and the failure properties of the rock under these conditions are crucial in determining bit performance and in extracting lithology and pore-pressure information from drilling parameters. Triaxial tests were performed on two nonswelling shales under a wide range of strain rates and confining and pore pressures. At low strain rates, when fluid is relatively free to move within the shale, shale deformation and failure are governed by the effective stress, or effective pressure (i.e., total confining pressure minus pore pressure), as is the case for ordinary rock. If the pore pressure in the shale is high, increasing the strain rate beyond about 0.1%/sec causes large increases in the strength and ductility of the shale, and total pressure begins to influence the strength. At high strain rates, the influence of effective pressure decreases, except when it is very low (i.e., when pore pressure is very high); ductility then rises rapidly. This behavior is opposite to that expected in ordinary rocks. This paper briefly discusses the reasons for these phenomena and their impact on wellbore and drilling problems.
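The effective-pressure rule quoted in the abstract, written as a one-line helper with assumed downhole values in MPa (the numbers are illustrative, not from the tests):

```python
# Effective pressure = total confining pressure minus pore pressure.
def effective_pressure(total_confining_mpa, pore_pressure_mpa):
    return total_confining_mpa - pore_pressure_mpa

print(effective_pressure(total_confining_mpa=60.0, pore_pressure_mpa=45.0))  # -> 15.0 MPa
```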
High Data Rate Quantum Cryptography
NASA Astrophysics Data System (ADS)
Kwiat, Paul; Christensen, Bradley; McCusker, Kevin; Kumor, Daniel; Gauthier, Daniel
2015-05-01
While quantum key distribution (QKD) systems are now commercially available, the data rate is a limiting factor for some desired applications (e.g., secure video transmission). Most QKD systems receive at most a single random bit per detection event, causing the data rate to be limited by the saturation of the single-photon detectors. Recent experiments have begun to explore the use of larger degrees of freedom, i.e., temporal or spatial qubits, to optimize the data rate. Here, we continue this exploration using entanglement in multiple degrees of freedom. That is, we use simultaneous temporal and polarization entanglement to reach up to 8.3 bits of randomness per coincident detection. With current technology, we are unable to fully secure the temporal degree of freedom against all possible future attacks; however, by assuming a technologically limited eavesdropper, we are able to obtain a 23.4 MB/s secure key rate across an optical table, after error reconciliation and privacy amplification. In this talk, we will describe our high-rate QKD experiment, with a short discussion of our work towards extending this system to ship-to-ship and ship-to-shore communication, aiming to secure the temporal degree of freedom and to implement a 30-km free-space link over a marine environment.
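The "8.3 bits per coincident detection" figure can be reproduced arithmetically if one assumes a particular number of resolvable time bins; the bin count below is chosen so the total lands near 8.3 and is not a parameter reported in the abstract.

```python
# Bits per coincident detection from combined temporal and polarization
# entanglement: log2(time bins) + log2(polarization outcomes).
import math

time_bins = 158          # assumed number of resolvable time bins per frame
polarization_states = 2  # one qubit from polarization entanglement

bits_per_detection = math.log2(time_bins) + math.log2(polarization_states)
print(round(bits_per_detection, 2))  # ~ 8.3 bits
```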